Abstract: This paper considers the problem of FIR system parameter estimation. A new robust recursive algorithm is proposed for the simultaneous estimation of the parameters and the scale factor of the prediction residuals in a non-stationary environment corrupted by impulsive noise. The performance of the derived algorithm has been tested by simulations.
Abstract: The symmetric solution set Σ_sym is the set of all solutions to the linear systems Ax = b, where A is symmetric and lies between given lower and upper bounds, and b likewise lies between given bounds. We present a contractor for Σ_sym: an iterative method that starts with an initial enclosure of Σ_sym (a Cartesian product of intervals) and sequentially tightens it. Our contractor is based on polyhedral approximation and on solving a series of linear programs. Even though it does not converge to the optimal bounds in general, it may significantly reduce the overestimation. Its efficiency is demonstrated in a number of numerical experiments.
Abstract: In this paper, we apply and compare two generalized estimating equation approaches in the analysis of car breakdown data from Mauritius. The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of explanatory factors has been a challenging problem. We aim to estimate the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over one year. We observe that the number of passenger car breakdowns is highly under-dispersed, so these data are modelled and analyzed using a Com-Poisson regression model. We use two types of quasi-likelihood estimation to fit the model: the marginal and the joint generalized quasi-likelihood estimating equation approaches. The under-dispersion parameter is estimated to be around 2.14, justifying the appropriateness of the Com-Poisson distribution for modelling the under-dispersed count responses recorded in this study.
Abstract: The major problem that wireless communication systems suffer from is multipath fading caused by scattering of the transmitted signal. However, multipath propagation can be treated as multiple channels between the transmitter and receiver and exploited to improve the signal-to-scattering-noise ratio. In Single Input Multiple Output (SIMO) systems, diversity receivers extract multiple branches, or copies, of the same signal received over different channels and apply gain-combining schemes such as Root Mean Square Gain Combining (RMSGC). RMSGC asymptotically yields performance identical to that of the theoretically optimal Maximum Ratio Combining (MRC) for mean Signal-to-Noise Ratio (SNR) values above a certain threshold, without the need for SNR estimation. This paper introduces two improvements to RMSGC: we find that post-detection combining and de-noising of the received signals improve the performance of RMSGC and lower the threshold SNR.
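The combining idea described above can be sketched numerically. Weighting each branch by its RMS amplitude is an assumption used for illustration only; the paper's exact RMSGC weighting rule and its post-detection and de-noising refinements are not reproduced here.

```python
import numpy as np

def rms_gain_combine(branches):
    """Combine diversity branches, weighting each by its RMS amplitude.

    branches: (n_branches, n_samples) array of copies of the same signal
    received over different channels.  Weighting by per-branch RMS is an
    assumed illustration of the RMS-based weighting idea; the paper's
    post-detection and de-noising steps are not modeled.
    """
    branches = np.asarray(branches, dtype=float)
    rms = np.sqrt(np.mean(branches ** 2, axis=1))  # per-branch RMS amplitude
    weights = rms / rms.sum()                      # normalized combining weights
    return weights @ branches                      # weighted sum across branches
```

With two branches carrying the same symbol at different strengths, the stronger branch dominates the combined output, which is the qualitative behavior a gain combiner should show.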
Abstract: The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data corrupt the results of state estimation by the popular weighted least squares method. This is, however, a difficult problem to handle, especially when dealing with multiple interacting errors of the conforming type. In this paper, a self-adaptive genetic-algorithm-based approach is proposed. The algorithm uses the results of the classical linearized normal residuals approach to tune the genetic operators; instead of a randomized search throughout the whole search space, the search becomes directed, so the optimum solution is obtained at very early stages (at most five generations). The algorithm also uses accumulating databases of already computed cases to keep the computational burden to a minimum. Tests are conducted on the standard IEEE test systems, and the test results are very promising.
Abstract: The objective of this study is to introduce estimators for the parameters and the survival function of the Weibull distribution using three different methods: maximum likelihood estimation, standard Bayes estimation, and modified Bayes estimation. We then compare the three methods in a simulation study to find the best one based on MPE and MSE.
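For the first of the three methods, the maximum likelihood estimates of the Weibull shape and scale have a standard form that can be sketched directly: the profile-likelihood equation for the shape parameter is solved by bisection. This is a generic, uncensored-sample sketch, not the paper's simulation code.

```python
import numpy as np

def weibull_mle(x, tol=1e-10):
    """Maximum likelihood estimates (shape k, scale lam) for a Weibull sample.

    Solves the profile likelihood equation for k by bisection,
        sum(x^k ln x) / sum(x^k) - 1/k - mean(ln x) = 0,
    then sets lam = (mean(x^k))^(1/k).  Uncensored data only; the
    bracket [1e-3, 1e3] is assumed to contain the root.
    """
    x = np.asarray(x, dtype=float)
    c = x.max()
    y = x / c                      # rescale for numerical stability;
    logy = np.log(y)               # the equation for k is scale-invariant

    def g(k):
        yk = y ** k
        return np.sum(yk * logy) / np.sum(yk) - 1.0 / k - logy.mean()

    lo, hi = 1e-3, 1e3
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:           # g is increasing in k
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = c * np.mean(y ** k) ** (1.0 / k)   # undo the rescaling for the scale
    return k, lam

def weibull_survival(t, k, lam):
    """Survival function S(t) = exp(-(t / lam)^k)."""
    return np.exp(-(np.asarray(t, dtype=float) / lam) ** k)
```

On a large simulated sample the estimates land close to the true parameters, which is the kind of check an MPE/MSE comparison would repeat across many replications.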
Abstract: A novel algorithm is presented for constructing a seamless video mosaic of an entire panorama by automatically analyzing and managing feature points, including the management of their quantity and quality, across the sequence. Since a video contains significant redundancy, not all consecutive video images are required to create a mosaic; only some key images need to be selected. Meanwhile, feature-based mosaicing methods rely heavily on correct feature-point correspondences, and if the key images are separated by a large frame interval, the mosaic is often interrupted by a scarcity of corresponding feature points. A unique characteristic of the method is its ability to handle all of the above problems in video mosaicing. Experiments have been performed under various conditions, and the results show that our method achieves fast and accurate video mosaic construction. Keywords: video mosaic, feature point management, homography estimation.
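Feature-based mosaicing of the kind described rests on estimating a homography from feature-point correspondences. Below is a minimal sketch of the standard Direct Linear Transform step, without the paper's key-frame selection, feature-point management, or outlier rejection.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: homography H with dst ~ H @ src (homogeneous).

    src, dst: (N, 2) arrays of matched feature points, N >= 4 with no three
    points collinear.  Point normalization and RANSAC are omitted here.
    """
    rows = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        # Each correspondence contributes two linear constraints on vec(H).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)          # null vector of A is vec(H)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # fix the projective scale

def apply_homography(H, pts):
    """Map (N, 2) points through H, returning inhomogeneous coordinates."""
    pts = np.asarray(pts, float)
    ph = np.column_stack([pts, np.ones(len(pts))])
    q = ph @ np.asarray(H).T
    return q[:, :2] / q[:, 2:3]
```

Given exact correspondences generated by a known homography, the DLT recovers it up to numerical precision; in a real mosaic the correspondences come from matched feature points between key frames.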
Abstract: Loop detectors report traffic characteristics in real time and are at the core of the traffic control process. Intuitively,
one would expect that as density of detection increases, so would
the quality of estimates derived from detector data. However, as
detector deployment increases, the associated operating and
maintenance cost increases. Thus, traffic agencies often need to
decide where to add new detectors and which detectors should
continue receiving maintenance, given their resource constraints.
This paper evaluates the effect of detector spacing on freeway
travel time estimation. A freeway section (Interstate-15) in the Salt
Lake City metropolitan region is examined. The research reveals
that travel time accuracy does not necessarily deteriorate with
increased detector spacing. Rather, the actual location of detectors
has far greater influence on the quality of travel time estimates.
The study presents an innovative computational approach that
delivers optimal detector locations through a process that relies on
Genetic Algorithm formulation.
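A Genetic Algorithm formulation for selecting detector locations can be sketched generically. The fitness function used in the usage example below is a hypothetical stand-in for the travel-time-accuracy measure the study derives from field data, and all operator choices (tournament pool, elitism, swap mutation) are illustrative assumptions rather than the authors' formulation.

```python
import random

def ga_select(candidates, k, fitness, pop_size=30, generations=40, seed=0):
    """Tiny genetic algorithm choosing k detector locations from candidates.

    `fitness` scores a tuple of chosen locations (higher is better).  An
    individual is a set of k distinct candidate sites; crossover mixes the
    parents' sites, mutation swaps one chosen site for an unused one.
    """
    rng = random.Random(seed)

    def random_ind():
        return tuple(sorted(rng.sample(candidates, k)))

    def crossover(a, b):
        pool = list(set(a) | set(b))         # child draws from both parents
        return rng.sample(pool, k)

    def mutate(ind):
        out = list(ind)
        unused = [c for c in candidates if c not in out]
        if unused and rng.random() < 0.5:    # occasionally swap in a new site
            out[rng.randrange(k)] = rng.choice(unused)
        return tuple(sorted(out))

    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 5]      # elitism keeps the best layouts
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(scored[: pop_size // 2], 2)
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return max(pop, key=fitness)
```

A toy objective (place detectors near hypothetical "important" mileposts) shows the search converging to a good layout within a few dozen generations.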
Abstract: This paper deals with the extraction of information from experts to automatically identify and recognize Ganoderma infection in oil palm stems using tomography images. The experts' knowledge is used as rules in a Fuzzy Inference System to classify each individual pattern observed in the tomography image. Classification is done by defining membership functions that assign one of three possible hypotheses, Ganoderma infection (G), non-Ganoderma infection (N), or intact stem tissue (I), to every abnormality pattern found in the tomography image. A complete comparison between Mamdani and Sugeno styles, triangular, trapezoidal, and mixed triangular-trapezoidal membership functions, and different methods of aggregation and defuzzification is also presented and analyzed in order to select suitable Fuzzy Inference System methods for this task. The results showed that seven out of 30 possible combinations of the Fuzzy Inference methods available in the MATLAB Fuzzy Toolbox gave results close to the experts' estimates.
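The membership-function shapes compared in the study (triangular and trapezoidal) are standard and can be sketched directly. The class boundaries and the single normalized "abnormality intensity" feature below are hypothetical illustrations; the paper's actual features, rule base, and tuned parameters are not reproduced.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function: rises on [a, b], falls on [b, c]."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Hypothetical memberships over a single normalized feature; illustrative only.
CLASSES = {
    "I": lambda x: trapmf(x, -0.1, 0.0, 0.2, 0.4),  # intact stem tissue
    "N": lambda x: trimf(x, 0.2, 0.5, 0.8),         # non-Ganoderma infection
    "G": lambda x: trapmf(x, 0.6, 0.8, 1.0, 1.1),   # Ganoderma infection
}

def classify(x):
    """Assign the hypothesis whose membership degree is highest at x."""
    return max(CLASSES, key=lambda name: float(CLASSES[name](x)))
```

A full Mamdani or Sugeno system would additionally aggregate rule outputs and defuzzify; the max-membership assignment above is the simplest version of the G/N/I labeling step.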
Abstract: Active Power Filters (APFs) are today the most widely used systems to eliminate harmonics, compensate the power factor, and correct unbalance problems in industrial power plants. We propose to improve the performance of conventional APFs by using artificial neural networks (ANNs) for harmonics estimation. This new method combines the strategy for extracting the three-phase reference currents for active power filters with a DC link voltage control method. The learning capabilities of the ANNs are used to adaptively choose the power system parameters, both to compute the reference currents and to recharge the capacitor to the value requested by the VDC voltage, in order to ensure a suitable transfer of power to supply the inverter. To investigate the performance of this identification method, the study has been carried out through simulation with the MATLAB Simulink Power System Toolbox. The simulation results of the new SAPF identification technique, compared with similar methods, are quite satisfactory, assuring good filtering characteristics and high system stability.
Abstract: In the MPEG and H.26x standards, motion estimation is used to eliminate temporal redundancy. Given that the motion estimation stage is very demanding in terms of computational effort, a hardware implementation on a reconfigurable circuit is crucial for the requirements of many real-time multimedia applications. In this paper, we present a hardware architecture for motion estimation based on the "Full Search Block Matching" (FSBM) algorithm. The architecture achieves minimum latency, maximum throughput, and full utilization of hardware resources such as embedded memory blocks by combining pipelining and parallel processing techniques. Our design is described in VHDL, verified by simulation, and implemented on a Stratix II EP2S130F1020C4 FPGA. The experimental results show that the optimum operating clock frequency of the proposed design is 89 MHz, which achieves 160 Mpixels/s.
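The FSBM algorithm that the architecture implements can be stated compactly in software. Below is a reference sketch of the exhaustive SAD search for one block; the hardware parallelizes and pipelines exactly this loop nest.

```python
import numpy as np

def full_search_bm(ref, cur, bx, by, bsize, srange):
    """Full-Search Block Matching: best motion vector for one block.

    Exhaustively compares the current-frame block at (by, bx) against every
    candidate position within +/-srange in the reference frame, using the
    sum of absolute differences (SAD) as the matching criterion.
    """
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int64)
    best = None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = int(np.abs(block - cand).sum())
            if best is None or sad < best[0]:
                best = (sad, dy, dx)
    return best  # (minimum SAD, dy, dx)
```

For a frame that is a pure translation of the reference, the search returns SAD 0 at the true displacement, which makes the sketch easy to verify.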
Abstract: This paper addresses the parameter and state estimation problem in the presence of observer gain perturbations and bounded input disturbances for Lipschitz systems that are linear in the unknown parameters and nonlinear in the states. A new nonlinear adaptive resilient observer is designed, and its stability conditions are derived using the Lyapunov technique. The observer gain is derived systematically using a linear matrix inequality approach. A numerical example is provided in which the nonlinear terms depend on unmeasured states, and simulation results are presented to show the effectiveness of the proposed method.
Abstract: In this paper, we propose a novel frequency offset
estimation scheme for orthogonal frequency division multiplexing
(OFDM) systems. By correlating the OFDM signals within the coherence
phase bandwidth and employing a threshold in the frequency
offset estimation process, the proposed scheme is not only robust to
the timing offset but also has a reduced complexity compared with
that of the conventional scheme. Moreover, a timing offset estimation
scheme is also proposed as the next stage of the proposed frequency
offset estimation. Numerical results show that the proposed scheme
can estimate frequency offset with lower computational complexity
and does not require additional memory while maintaining the same
level of estimation performance.
Abstract: This paper develops a quality estimation method based on fuzzy hierarchical clustering. Quality estimation is essential to quality control and quality improvement, as a precise estimate supports sound decision-making and thus better quality control. Normally, the quality of finished products in a manufacturing system can be differentiated by quality standards. In real-life situations, however, the collected data may be vague, not easy to classify, and usually represented as fuzzy numbers, and estimating the quality of a product represented by a fuzzy number is not easy. In this research, trapezoidal fuzzy numbers are collected from the manufacturing process and classified into clusters to obtain the estimate. Since conventional hierarchical clustering methods can only be applied to real numbers, fuzzy hierarchical clustering is selected to handle this problem, based on the quality standards.
Abstract: This paper presents a methodology based on machine learning approaches for a short-term rain forecasting system. Decision trees, Artificial Neural Networks (ANN), and Support Vector Machines (SVM) were applied to develop classification and prediction models for rainfall forecasts. The goals of this presentation are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions, and (2) what models can be developed and deployed for predicting accurate rainfall estimates to support decisions to launch cloud seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts to this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status as either rain or no-rain; the overall accuracy of the classification tree reaches 94.41% with five-fold cross validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm), with an overall accuracy of 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. The ANN yields a lower RMSE, 0.171, for daily rainfall estimates than for next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the three classes above, achieving 68.15% and 69.10% overall accuracy of same-day prediction for the ANN and SVM models, respectively. The results illustrate the comparative predictive power of the different methods for rainfall estimation.
Abstract: Speed estimation is one of the important practical tasks in machine vision, robotics, and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms. Numerous approaches to speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and assess these methods. We then propose a novel algorithm that estimates the speed of a moving object using fuzzy concepts. There is a direct relation between motion blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blurred image and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision on noisy images. Our method was tested on many images over a wide range of SNR values, with satisfactory results.
Abstract: This paper considers the problem of null-steering beamforming for antenna array systems using a Neural Network (NN) approach. Two cases are presented. First, unlike other authors, we use the estimated Directions Of Arrival (DOAs) for NN-based determination of the antenna array weights, taking imprecise DOA estimates into account. Second, blind null-steering beamforming is presented; in this case the antenna array outputs are fed to the input of the NN without DOA estimation. Computer simulation results show much better relative mean error performance for the first NN approach compared to the NN-based blind beamforming.
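As a point of reference for what the NN is trained to approximate, classical null-steering weights can be computed in closed form from the (possibly imprecise) DOA estimates. Below is a minimal sketch for a uniform linear array; the array geometry and half-wavelength element spacing are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def steering_vector(theta_deg, n_elems, spacing=0.5):
    """Uniform linear array steering vector (element spacing in wavelengths)."""
    theta = np.deg2rad(theta_deg)
    k = np.arange(n_elems)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

def null_steering_weights(theta_desired, theta_nulls, n_elems):
    """Weights with unit gain toward the desired DOA and nulls at interferers.

    Builds the constraint matrix C whose columns are the steering vectors of
    the constrained directions and solves C^H w = f (f = [1, 0, ..., 0]) for
    the minimum-norm weight vector.
    """
    dirs = [theta_desired] + list(theta_nulls)
    C = np.column_stack([steering_vector(t, n_elems) for t in dirs])
    f = np.zeros(len(dirs), dtype=complex)
    f[0] = 1.0
    # Underdetermined consistent system: lstsq returns the minimum-norm w.
    w, *_ = np.linalg.lstsq(C.conj().T, f, rcond=None)
    return w
```

Checking the array response `w^H a(θ)` at the constrained directions confirms unit gain toward the desired DOA and (numerical) nulls toward the interferers.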
Abstract: Speed sensorless systems have been intensively studied in recent years, mainly because of their economic benefits, the fragility of mechanical sensors, and the difficulty of installing this type of sensor in many applications. These systems suffer from instability problems and sensitivity to parameter mismatch at low-speed operation. In this paper, an analysis of adaptive observer stability with stator resistance estimation is given.
Abstract: Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow revisiting each data element. Furthermore, data processing must be fast enough to produce timely analysis results. These requirements impose constraints on the design of the algorithms, which must balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented: the data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to the data stream analysis problem, in which the data stream is both statistically transformed to a smaller size and its characteristics computationally approximated. We adopt a Monte Carlo method in the approximation step. Data reduction is performed both horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments, applying our algorithm to clustering and classification tasks to evaluate the utility of the approach.
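The paper's EMR sampling method is not reproduced here, but the one-pass, no-revisit constraint it must respect can be illustrated with classic reservoir sampling (Algorithm R), a generic form of horizontal data reduction over a stream of unknown length.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """One-pass uniform sample of k items from a stream of unknown length.

    Classic Algorithm R: every element of the stream ends up in the sample
    with probability k/n without any element being revisited, which matches
    the single-pass constraint of data stream processing.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # uniform index in [0, i]
            if j < k:
                reservoir[j] = item      # replace with probability k/(i+1)
    return reservoir
```

The reduced sample can then feed the downstream clustering or classification step in place of the full stream.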
Abstract: The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet their effectiveness criteria, which leads to considerable financial losses. One of the fundamental reasons for the exceptionally low success rate of such projects is improperly derived estimates of their costs and time. In the case of BSS D&EP these attributes are determined by the work effort, yet reliable and objective effort estimation still appears to be a great challenge to software engineering. This paper therefore presents the most important synthetic conclusions from the author's own studies concerning the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce the losses caused not only by abandoned projects but also by large overruns of the time and cost of BSS D&EP execution.