Abstract: European options on German electricity futures are
examined using Lévy processes for the underlying asset. The
evolution of implied volatility under each of the considered
models is discussed after calibrating the Merton jump diffusion
(MJD), variance gamma (VG), normal inverse Gaussian (NIG), Carr,
Geman, Madan and Yor (CGMY) and Black and Scholes (B&S) models.
Implied volatility is examined for the entire sample period,
revealing some curious features about market evolution, and the
data-fitting performances of the five models are compared. It is
shown that variance gamma processes provide relatively better
results and that implied volatility shows significant differences
through time, having evolved upward. Volatility changes with
changing uncertainty or, alternatively, with increasing futures
prices, and there is evidence of the need to account for
seasonality when modelling both electricity spot/futures prices
and volatility.
Abstract: We previously proposed a new class of asymmetric turbo encoder for 3G systems that performs well in both the "waterfall" and "error floor" regions [7]. In this paper, a modified (optimal) power allocation scheme for the different bits of this new class of asymmetric turbo encoder is investigated to enhance performance. Simulation results and a performance bound for the proposed asymmetric turbo code with the modified Unequal Power Allocation (UPA) scheme, for frame length N = 400 and code rate r = 1/3 with a Log-MAP decoder over an Additive White Gaussian Noise (AWGN) channel, are obtained and compared with the system with typical UPA and without UPA. The performance tests are extended over the AWGN channel for different frame sizes to verify the feasibility of implementing the modified UPA scheme for the proposed asymmetric turbo code. The performance results show that the proposed asymmetric turbo code with modified UPA outperforms the system without UPA and with typical UPA, providing a coding gain of 0.4 to 0.52 dB.
Abstract: In this paper, a fast high-resolution range profile (HRRP) synthesis algorithm called orthogonal matching pursuit with sensing dictionary (OMP-SD) is proposed. It formulates traditional HRRP synthesis as a sparse approximation problem over a redundant dictionary. Since it exploits the prior that the synthetic range profiles (SRPs) of targets are sparse, an SRP can be recovered even in the presence of data loss. Moreover, the computational complexity decreases from O(MNDK) flops for OMP to O(M(N + D)K) flops for OMP-SD by introducing the sensing dictionary (SD). Simulation experiments illustrate its advantages in both additive white Gaussian noise (AWGN) and noiseless situations.
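The abstract compares OMP-SD against plain OMP. As a reference point, the greedy OMP loop itself can be sketched in a few lines of NumPy; the dictionary size (64×128) and sparsity level below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to approximate y."""
    residual = y.astype(float)
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Select the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected atoms by least squares, then update the residual
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Recover a 2-sparse vector from noiseless measurements (illustrative sizes)
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40]] = [1.0, -2.0]
x_hat = omp(D, D @ x_true, k=2)
print(np.allclose(x_hat, x_true))
```

The per-iteration cost is dominated by the correlation step `D.T @ residual`, which is what the sensing dictionary in OMP-SD is designed to cheapen.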
Abstract: The link between Gröbner bases and linear algebra was
described by Lazard [4,5], who realized that Gröbner basis
computation could be achieved by applying Gaussian elimination to
Macaulay's matrix.
In this paper, we indicate how the same technique may be applied
to SAGBI-Gröbner basis computations in invariant rings.
Abstract: Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, achieved by identifying the type of noise, followed by edge detection, which promises to be a boon in medical image diagnosis. The main objective of this algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. The third part is the segmentation of the edge-detected medical image by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images. The proposed algorithm has been simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images that have high spatial redundancy, low contrast and substantial noise, and it has potential for practical use in medical image diagnosis.
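The noise-dependent filtering step followed by edge detection can be sketched as below. This is a simplified illustration only: it substitutes a Sobel gradient magnitude for the full Canny detector and omits the Frost filter and the Normalized Cut segmentation stage:

```python
import numpy as np
from scipy import ndimage

def denoise(img, noise_type):
    """Pick a filter by the detected noise type (the Frost filter for
    multiplicative noise is omitted from this sketch)."""
    if noise_type == "impulsive":
        return ndimage.median_filter(img, size=3)
    if noise_type == "additive":
        return ndimage.gaussian_filter(img, sigma=1.0)
    raise NotImplementedError("multiplicative noise would use a Frost filter")

def edge_magnitude(img):
    """Sobel gradient magnitude, standing in for the full Canny detector."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

# Synthetic test image: a vertical step edge plus additive Gaussian noise
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0)

edges = edge_magnitude(denoise(noisy, "additive"))
print(int(np.argmax(edges.sum(axis=0))))  # strongest response near column 16
```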
Abstract: In this paper, we use Radial Basis Function Networks
(RBFN) for solving the problem of environmental interference
cancellation of speech signals. We show that the Second Order
Thin-Plate Spline (SOTPS) kernel cancels the interferences
effectively. For comparison, we run our experiments on two
commonly used RBFN kernels: the Gaussian and First Order TPS
(FOTPS) basis functions. The speech signals used here were taken
from the OGI Multi-Language Telephone Speech Corpus database and
were corrupted with six types of environmental noise from the
NOISEX-92 database. Experimental results show that the SOTPS
kernel can considerably outperform the Gaussian and FOTPS
functions on the speech interference cancellation problem.
Abstract: When the failure function is monotone, monotonic reliability methods can be used to greatly simplify and facilitate the reliability computations. However, these methods often work in a transformed iso-probabilistic space. To this end, a monotonic simulator or transformation is needed so that the transformed failure function is still monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents some conditions under which the transformed function is still monotone in the newly obtained space; these concern copulas and dependence concepts. In many engineering applications, Gaussian copulas are used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with a Gaussian copula.
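The transformation in question, from an independent standard-normal vector to a dependent vector with a Gaussian copula, can be sketched as follows; the correlation matrix and the marginals below are illustrative choices, and each output component is an increasing function of one normal variate, which is the componentwise monotonicity the note is concerned with:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, corr, marginal_ppfs, seed=0):
    """Map an independent standard-normal vector to a dependent vector with
    a Gaussian copula and the given marginal quantile functions."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                  # corr = L @ L.T
    z = rng.standard_normal((n, corr.shape[0])) @ L.T
    u = stats.norm.cdf(z)                         # uniforms with Gaussian copula
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

# Illustrative choices: exponential and normal marginals, copula rho = 0.8
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
x = gaussian_copula_sample(50_000, corr, [stats.expon.ppf, stats.norm.ppf])
# The Pearson correlation comes out below the copula's 0.8: the dependence
# structure survives the monotone marginal maps, but the covariance does not.
print(float(np.corrcoef(x.T)[0, 1]))
```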
Abstract: In this paper we propose and examine an Adaptive
Neuro-Fuzzy Inference System (ANFIS) for Smooth Transition
Autoregressive (STAR) modeling. Because STAR models follow a
fuzzy-logic approach, fuzzy rules can be incorporated in the
non-linear part, or other training or computational methods, such
as the error backpropagation algorithm, can be applied instead of
nonlinear least squares. Furthermore, additional fuzzy membership
functions can be examined besides the logistic and exponential,
such as the triangular, Gaussian and Generalized Bell functions,
among others. We examine two macroeconomic variables of the US
economy: the inflation rate and the six-month Treasury bill
interest rate.
Abstract: Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, there is no proper study reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. The knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
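A goodness-of-fit check of the kind described, fitting a Generalized Gaussian and applying the KS test, can be sketched with SciPy's `gennorm` distribution; the synthetic coefficients below stand in for actual LOT/LBT sub-band data:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for one sub-band of significant AC coefficients;
# SciPy's gennorm is the Generalized Gaussian (beta=2 gives the normal).
rng = np.random.default_rng(2)
coeffs = stats.gennorm.rvs(beta=0.8, scale=1.0, size=2000, random_state=rng)

# Fit the Generalized Gaussian (location fixed at 0) and run the KS test
beta, loc, scale = stats.gennorm.fit(coeffs, floc=0)
stat, pvalue = stats.kstest(coeffs, "gennorm", args=(beta, loc, scale))
print(pvalue > 0.05)  # large p-value: no evidence against the GGD fit
```

A χ² test over binned coefficients would follow the same pattern, comparing observed bin counts against counts implied by the fitted CDF.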
Abstract: Financial forecasting using machine learning techniques has received considerable attention in the last decade. In this ongoing work, we show how machine learning of graphical models can infer visualized causal interactions between different banks in the Saudi equities market. One important discovery from such learned causal graphs is how companies influence each other and to what extent. In this work, we present a set of Gaussian graphical models with newly developed ensemble penalized feature selection methods that combine a filtering method, a wrapper method and a regularizer. A comparison between these different ensemble combinations is also presented. The best ensemble method is used to infer the causal relationships between banks in the Saudi equities market.
Abstract: In this paper, an Arabic letter recognition system based on Artificial Neural Networks (ANNs) and statistical analysis for feature extraction is presented. The ANN is trained using the Least Mean Squares (LMS) algorithm. In the proposed system, each typed Arabic letter is represented by a matrix of binary numbers that is used as input to a simple feature extraction system whose output, together with the input matrix, is fed to an ANN. Simulation results are provided and show that the proposed system consistently produces a lower Mean Squared Error (MSE) and higher success rates than current ANN solutions.
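The LMS rule used for training is the standard stochastic update w ← w + μ·e·x. A minimal sketch for a single linear unit on hypothetical binary-feature data (not the paper's Arabic-letter matrices) might look like:

```python
import numpy as np

def lms_train(X, d, mu=0.01, epochs=50):
    """LMS / delta rule for a single linear unit: w <- w + mu * e * x."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x      # instantaneous error
            w += mu * e * x         # stochastic-gradient step on e**2
    return w

# Hypothetical binary feature vectors with noiseless linear targets
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(200, 3)).astype(float)
w_true = np.array([0.5, -1.0, 2.0])
w = lms_train(X, X @ w_true)
print(np.round(w, 2))  # converges to w_true
```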
Abstract: In this paper, frequency offset (FO) estimation schemes
robust to the non-Gaussian noise environments are proposed for
orthogonal frequency division multiplexing (OFDM) systems. First,
a maximum-likelihood (ML) estimation scheme in non-Gaussian
noise environments is proposed, and then, the complexity of the
ML estimation scheme is reduced by employing a reduced set of
candidate values. Numerical results demonstrate that the proposed
schemes provide a significant performance improvement over the
conventional estimation scheme in non-Gaussian noise environments
while maintaining performance similar to that achieved in
Gaussian noise environments.
Abstract: This paper presents an evaluation for a wavelet-based
digital watermarking technique used in estimating the quality of
video sequences transmitted over Additive White Gaussian Noise
(AWGN) channel, in terms of a classical objective metric such as
Peak Signal-to-Noise Ratio (PSNR), without the need for the
original video. In this method, a watermark is embedded into the
Discrete
Wavelet Transform (DWT) domain of the original video frames
using a quantization method. The degradation of the extracted
watermark can be used to estimate the video quality in terms of
PSNR with good accuracy. We calculated PSNR for video frames
contaminated with AWGN and compared the values with those
estimated using the Watermarking-DWT based approach. It is found
that the calculated and estimated quality measures of the video
frames are highly correlated, suggesting that this method can
provide a good quality measure for video frames transmitted over
an AWGN channel without the need for the original video.
Abstract: Segmentation, filtering out of measurement errors and
identification of breakpoints are integral parts of any analysis of
microarray data for the detection of copy number variation (CNV).
Existing algorithms designed for these tasks have had some successes
in the past, but they tend to be O(N²) in either computation time or
memory requirement, or both, and the rapid advance of microarray
resolution has practically rendered such algorithms useless. Here we
propose an algorithm, SAD, that is much faster and far less
memory-hungry, O(N) in both computation time and memory
requirement, and offers higher accuracy. The two key ingredients
of SAD are the
fundamental assumption in statistics that measurement errors are
normally distributed and the mathematical relation that the product of
two Gaussians is another Gaussian (function). We have produced a
computer program for analyzing CNV based on SAD. In addition to
being fast and small it offers two important features: quantitative
statistics for predictions and, with only two user-decided parameters,
ease of use. Its speed shows little dependence on genomic profile.
Running on an average modern computer, it completes CNV analyses
for a 262 thousand-probe array in ~1 second and a 1.8
million-probe array in 9 seconds.
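The second ingredient, that the pointwise product of two Gaussian density functions is (up to a constant factor) another Gaussian, has a simple closed form: the product's precision is the sum of the two precisions, and its mean is the precision-weighted average of the two means. A quick numerical check:

```python
import numpy as np

def gaussian_product(mu1, var1, mu2, var2):
    """Parameters of the Gaussian proportional to N(mu1, var1) * N(mu2, var2):
    precisions add, and the mean is the precision-weighted average."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

def pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

mu, var = gaussian_product(0.0, 1.0, 2.0, 4.0)
print(mu, var)  # 0.4 0.8

# The pointwise product differs from N(mu, var) only by a constant factor
x = np.linspace(-5.0, 5.0, 101)
ratio = (pdf(x, 0.0, 1.0) * pdf(x, 2.0, 4.0)) / pdf(x, mu, var)
print(np.allclose(ratio, ratio[0]))  # True
```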
Abstract: Polystyrene particles of different sizes are optically
trapped with a Gaussian beam from a He-Cd laser operating at 442
nm. The particles are observed to exhibit luminescence after a certain
trapping time followed by an escape from the optical trap. The
observed luminescence is explained in terms of the photodegradation
of the polystyrene backbone. It is speculated that these chemical
modifications also play a role in the escape of the particles
from the trap. Variations of the particle size and the laser
power show that
these parameters have a great influence on the observed phenomena.
Abstract: The aim of this research is to develop a fast and
reliable surveillance system based on a personal digital
assistant (PDA) device, extending to the PDA the moving-object
detection capability already available on personal computers. A
second aim is to compare the performance of the Background
Subtraction (BS) and Temporal Frame Differencing (TFD) techniques
to determine which is more suitable for the PDA platform. In
order to reduce noise and to prepare frames for the moving-object
detection part, each frame is first converted to a gray-scale
representation and then smoothed using a Gaussian low-pass
filter. Two moving-object detection schemes, BS and TFD, have
been analyzed. The background frame is updated using an Infinite
Impulse Response (IIR) filter so that it adapts to varying
illumination conditions and geometry settings. In order to reduce
the effect of noise pixels resulting from frame differencing,
morphological erosion and dilation filters are applied. In this
research, it has been found that the TFD technique is more
suitable for motion detection than BS in terms of speed: on
average, TFD is approximately 170 ms faster than the BS
technique.
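Under stated assumptions (an illustrative learning rate and threshold, and synthetic frames instead of PDA camera input), the two detection schemes, the Gaussian pre-smoothing, and the IIR background update can be sketched as:

```python
import numpy as np
from scipy import ndimage

ALPHA = 0.05    # IIR background learning rate (assumed, not from the paper)
THRESH = 0.1    # motion threshold (assumed)

def preprocess(frame):
    """Gray-scale frame smoothed with a Gaussian low-pass filter."""
    return ndimage.gaussian_filter(frame.astype(float), sigma=1.0)

def temporal_frame_diff(prev, curr):
    return np.abs(curr - prev) > THRESH

def background_subtract(background, curr):
    mask = np.abs(curr - background) > THRESH
    background = (1 - ALPHA) * background + ALPHA * curr   # IIR update
    return mask, background

# Synthetic sequence: a bright 4x4 block moving one pixel per frame
frames = []
for t in range(5):
    f = np.zeros((24, 24))
    f[10:14, 4 + t:8 + t] = 1.0
    frames.append(preprocess(f))

bg = frames[0]
for prev, curr in zip(frames, frames[1:]):
    tfd_mask = temporal_frame_diff(prev, curr)
    bs_mask, bg = background_subtract(bg, curr)

# Morphological opening (erosion then dilation) suppresses isolated noise pixels
clean = ndimage.binary_dilation(ndimage.binary_erosion(tfd_mask))
print(bool(tfd_mask.any()), bool(bs_mask.any()))
```

TFD needs only the previous frame, while BS must maintain and update a background model, which is consistent with the speed difference the abstract reports.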
Abstract: Based on the one-bit-matching principle and by turning the de-mixing matrix into an orthogonal matrix via a certain normalization, Ma et al. proposed a one-bit-matching learning algorithm on the Stiefel manifold for independent component analysis [8]. However, this algorithm is not adaptive. In this paper, an algorithm that can extract the kurtosis and its sign for each independent source component directly from the observation data is first introduced. With this algorithm, the one-bit-matching learning algorithm is revised so that blind separation on the Stiefel manifold can be implemented completely in adaptive mode within the framework of natural gradient.
Abstract: This paper presents a new growing neural network for
cluster analysis and market segmentation, which optimizes the size
and structure of clusters by iteratively checking them for multivariate
normality. We combine the recently published SGNN approach [8]
with the basic principle underlying the Gaussian-means algorithm
[13] and the Mardia test for multivariate normality [18, 19]. The
new approach is distinguished from existing ones by its holistic
design and its great autonomy regarding the clustering process as
a whole. Its
performance is demonstrated by means of synthetic 2D data and by
real lifestyle survey data usable for market segmentation.
Abstract: The passive electrical properties of a tissue depend on
its intrinsic constituents and structure; therefore, by measuring
the complex electrical impedance of the tissue it might be
possible to obtain indicators of the tissue state or
physiological activity [1]. Complete bio-impedance information
relative to the physiology and pathology of a human body and to
the functional states of body tissues or organs can be extracted
using a four-electrode measurement setup. This work presents an
estimation study based on the four-electrode measurement
technique. First, the complex impedance is estimated by three
different estimation techniques: Fourier, Sine Correlation and
Digital De-convolution. Then the estimation errors for the
magnitude, phase, reactance and resistance are calculated and
analyzed for different levels of disturbance in the observations.
The absolute values of the relative errors are plotted and the
graphical performance of each technique is compared.
Abstract: In this paper, we propose a new robust and secure
watermarking system based on the combination of two different
transforms: the Discrete Wavelet Transform (DWT) and the
Contourlet Transform (CT). The combined transforms compensate for
the drawbacks of using each transform separately. The proposed
algorithm has been designed, implemented and tested successfully.
The experimental results showed that selecting the best sub-band
for embedding from both transforms improves the imperceptibility
and robustness of the new combined algorithm. The combined
DWT-CT algorithm achieved good imperceptibility, giving a PSNR
value of 88.11, and improved robustness, proving more robust
against Gaussian noise attacks. In addition, the implemented
system showed a successful extraction method that extracts the
watermark efficiently.