Abstract: A new algorithm called Character-Comparison to Character-Access (CCCA) is developed to test the effect of two factors on the performance of the checking operation in string searching: 1) converting character comparisons and number comparisons into character accesses, and 2) the starting point of checking. An experiment is performed using both English text and DNA text of different sizes. The results are compared with five algorithms, namely Naive, BM, Inf_Suf_Pref, Raita, and Cycle. With the CCCA algorithm, the results suggest that the average number of total comparisons is improved by up to 35%. Furthermore, the results suggest that the clock time required by the other algorithms is improved by between 22.13% and 42.33% with the new CCCA algorithm.
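The effect of the checking order on the comparison count can be illustrated with a toy counter; the sketch below (plain Python, not the paper's CCCA implementation) counts character comparisons for the naive left-to-right order and for a Raita-style order that checks the last pattern character first.

```python
def naive_count(text, pat):
    """Naive search: return (match positions, number of character comparisons)."""
    comparisons, hits = 0, []
    m, n = len(pat), len(text)
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1
            if text[i + j] != pat[j]:
                break
            j += 1
        if j == m:
            hits.append(i)
    return hits, comparisons

def last_char_first_count(text, pat):
    """Variant: check the last pattern character first (Raita-style order)."""
    comparisons, hits = 0, []
    m, n = len(pat), len(text)
    order = [m - 1] + list(range(m - 1))   # last position, then left to right
    for i in range(n - m + 1):
        ok = True
        for j in order:
            comparisons += 1
            if text[i + j] != pat[j]:
                ok = False
                break
        if ok:
            hits.append(i)
    return hits, comparisons
```

Both orders find the same occurrences; only the number of comparisons performed before a mismatch differs, which is the quantity the experiment measures.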
Abstract: Clustering is a well-known technique in data mining, and one of the most widely used clustering techniques is the K-means algorithm. Solutions obtained with this technique depend on the initialization of the cluster centers, and the final solution may converge to a local minimum. To overcome these shortcomings of the K-means algorithm, this paper proposes a hybrid evolutionary algorithm based on the combination of PSO, SA and K-means, called PSO-SA-K, which can find better cluster partitions. The performance is evaluated on several benchmark data sets. The simulation results show that the proposed algorithm outperforms previous approaches, such as PSO, SA and K-means, for the partitional clustering problem.
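The initialization sensitivity that motivates the hybrid can be seen in a minimal 1-D Lloyd iteration (our plain-Python illustration; the PSO and SA components are not sketched here): a poor choice of initial centers leaves K-means stuck in a local minimum.

```python
def kmeans(points, centers, iters=50):
    """Plain Lloyd iteration for 1-D data: assign each point to its
    nearest center, then move each center to its cluster mean.
    The result depends on the initial centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # an empty cluster keeps its old center (one common convention)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

With centers started at 0 and 5 the two natural groups {1,2,3} and {10,11,12} are found; started at 0 and 100, one center never captures any points and the run converges to a worse partition.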
Abstract: This paper presents an effective traffic light
recognition method for the daytime. First, a Potential Traffic Lights
Detector (PTLD) uses the full color information of the YCbCr channel
image and produces a binary image for each of the green and red traffic
lights. After the PTLD step, a Shape Filter (SF) is used to remove noise
such as traffic signs, street trees, vehicles, and buildings. The
noise-removal criteria are based on blob properties of the binary
image: length, area, bounding-box area, etc. Finally, after an
intermediate association step whose goal is to define relevant candidate
regions from the previously detected traffic lights, an Adaptive
Multi-class Classifier (AMC) is executed. The classification method
uses Haar-like features and the AdaBoost algorithm. The method was
implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested
on urban and rural roads. In these tests, our method was compared with
standard object-recognition learning processes and reached a detection
rate of up to 94%, better than the results achieved with cascade
classifiers. The computation time of the proposed method is 15 ms.
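As a rough illustration of why the chroma channels separate red from green lights, here is a BT.601 RGB-to-YCbCr conversion followed by simple Cr thresholding (the threshold values are our assumptions for illustration, not those of the PTLD step):

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def classify_light(r, g, b, cr_hi=160, cr_lo=100):
    """Illustrative chroma thresholds: high Cr suggests a red light,
    low Cr a green one, anything between is treated as background."""
    _, _, cr = rgb_to_ycbcr(r, g, b)
    if cr > cr_hi:
        return "red"
    if cr < cr_lo:
        return "green"
    return "none"
```

Applying such a per-pixel test over the whole frame yields the green and red binary images the subsequent shape filtering operates on.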
Abstract: The increasing importance of data streams arising in a
wide range of advanced applications has led to the extensive study of
mining frequent patterns. Mining data streams poses many new
challenges, among which are the one-scan nature, the unbounded
memory requirement and the high arrival rate of data streams. In this
paper, we propose a new approach for mining frequent itemsets over
data streams. Our approach, SFIDS, has been developed based on the
FIDS algorithm. The main aims were to keep some advantages of the
previous approach, resolve some of its drawbacks, and
consequently improve run time and memory consumption. Our
approach has the following advantages: it uses a lattice-like data
structure for keeping frequent itemsets; it separates regions from each
other by deleting common nodes, which results in a decrease in search
space, memory consumption and run time; and finally, regarding the
CPU constraint, when an increasing data arrival rate
overloads the system, SFIDS automatically detects this situation and
discards some of the unprocessed data. We guarantee that the error of
the results is bounded by a user-specified threshold, based on a
probabilistic technique. Final results show that the SFIDS algorithm
attains about a 50% run-time improvement over the FIDS approach.
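SFIDS itself uses a lattice-like structure; the bounded-error discarding idea can, however, be illustrated with the classic lossy counting sketch of Manku and Motwani (our stand-in, not the paper's method), where any discarded item's undercount is provably at most epsilon times the stream length:

```python
import math

def lossy_count(stream, epsilon):
    """Lossy counting over a stream of single items: keep (count, delta)
    per item, evict at bucket boundaries; any stored count undercounts
    the true frequency by at most epsilon * N."""
    width = math.ceil(1 / epsilon)
    table = {}                       # item -> (count, max possible undercount)
    for n, item in enumerate(stream, start=1):
        bucket = math.ceil(n / width)
        if item in table:
            f, d = table[item]
            table[item] = (f + 1, d)
        else:
            table[item] = (1, bucket - 1)
        if n % width == 0:           # bucket boundary: shed infrequent items
            for k in [k for k, (f, d) in table.items() if f + d <= bucket]:
                del table[k]
    return table
```

Frequent items survive with near-exact counts while rare ones are shed, which is the same trade-off a load-shedding stream miner makes under CPU pressure.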
Abstract: The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and numbers of iterations of up to 2000, the results demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
Abstract: This paper presents the convergence analysis
of a prediction based blind equalizer for IIR channels.
Predictor parameters are estimated by using the recursive
least squares algorithm. It is shown that the prediction
error converges almost surely (a.s.) toward a scalar
multiple of the unknown input symbol sequence. It is
also proved that the convergence rate of the parameter
estimation error is of the same order as that in the iterated
logarithm law.
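A scalar version of the RLS predictor conveys the idea (our simplification to a first-order predictor; the paper treats general IIR channels): on a noiseless AR(1) sequence the prediction error vanishes and the coefficient converges.

```python
def rls_predictor(x, lam=0.99, p0=1e6):
    """First-order recursive least squares: fit w so that x[t] ~ w * x[t-1].
    lam is the forgetting factor, p0 the initial inverse-covariance value.
    Returns the final coefficient and the a priori prediction errors."""
    w, P, errors = 0.0, p0, []
    for t in range(1, len(x)):
        u = x[t - 1]
        e = x[t] - w * u                    # a priori prediction error
        k = P * u / (lam + u * u * P)       # RLS gain
        w += k * e
        P = (P - k * u * P) / lam           # update inverse covariance
        errors.append(e)
    return w, errors
```

On the sequence x[t] = 0.8^t the estimate settles at the true coefficient 0.8 after essentially one update, and the subsequent prediction errors are negligible.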
Abstract: The complex hybrid and nonlinear nature of many processes that are met in practice causes problems with both structure modelling and parameter identification; therefore, obtaining a model that is suitable for MPC is often a difficult task. The basic idea of this paper is to present an identification method for a piecewise affine (PWA) model based on a fuzzy clustering algorithm. First we introduce the PWA model. Next, we tackle the identification method: we treat the fuzzy clustering algorithm, deal with the projections of the fuzzy clusters onto the input space of the PWA model, and explain the estimation of the parameters of the PWA model by means of a modified least-squares method. Furthermore, we verify the usability of the proposed identification approach on a hybrid nonlinear batch reactor example. The results suggest that the batch reactor can be efficiently identified and thus formulated as a PWA model, which can eventually be used for model predictive control purposes.
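A drastically simplified sketch of the final estimation step (assuming the partition of the input space is already known; the paper obtains it by fuzzy clustering and uses a modified least-squares method): fit one affine model per region by ordinary least squares.

```python
def fit_pwa(xs, ys, split):
    """Fit y = a*x + b separately on each side of a given split point,
    a crude stand-in for a two-region piecewise affine (PWA) model."""
    def lsq(px, py):
        # closed-form 1-D least squares for slope a and intercept b
        n = len(px)
        sx, sy = sum(px), sum(py)
        sxx = sum(x * x for x in px)
        sxy = sum(x * y for x, y in zip(px, py))
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return a, (sy - a * sx) / n
    left = [(x, y) for x, y in zip(xs, ys) if x < split]
    right = [(x, y) for x, y in zip(xs, ys) if x >= split]
    return lsq(*zip(*left)), lsq(*zip(*right))
```

On data from y = |x|, splitting at 0 recovers the two affine pieces with slopes -1 and +1 exactly.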
Abstract: The back-propagation algorithm calculates the weight
changes of artificial neural networks, and a common approach is to
use a training algorithm consisting of a learning rate and a
momentum factor. The major drawbacks of this learning algorithm
are the problems of local minima and slow convergence speed. The
addition of an extra term, called a proportional factor, speeds up the
convergence of the back-propagation algorithm. We have applied
three-term back propagation to multiplicative neural network
learning. The algorithm is tested on the XOR and parity problems and
compared with the standard back-propagation training algorithm.
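A minimal single-neuron sketch of the three-term update (the coefficient values and the per-weight form of the proportional term are our illustrative assumptions, and a simple AND gate stands in for the XOR/parity benchmarks):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train_and_gate(epochs=5000, lr=0.5, momentum=0.5, prop=0.05):
    """Single sigmoid neuron trained on AND with a three-term rule:
    delta_w = -lr*grad + momentum*prev_delta + prop*error*input."""
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, prev = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]    # two weights + bias
    for _ in range(epochs):
        for (x1, x2), t in data:
            xs = (x1, x2, 1.0)
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, xs)))
            e = t - y
            for i in range(3):
                grad = -e * y * (1 - y) * xs[i]   # dE/dw_i for squared error
                prev[i] = -lr * grad + momentum * prev[i] + prop * e * xs[i]
                w[i] += prev[i]
    return w
```

The proportional term pushes the weights in proportion to the raw output error itself, not only to the sigmoid-attenuated gradient, which is what accelerates convergence when the gradient is small.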
Abstract: The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the implemented algorithms, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for the automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
Abstract: Predicting short-term wind speed is essential in order
to protect systems in operation from the effects of strong winds. It
also helps in using wind energy as an alternative source of energy,
mainly for electrical power generation. Wind speed prediction has
applications in military and civilian fields, for air traffic control,
rocket launches, ship navigation, etc. The wind speed in the near
future depends on the values of other meteorological variables, such as
atmospheric pressure, moisture content, humidity, rainfall, etc. The
values of these parameters are obtained from the nearest weather
station and are used to train various forms of neural networks. The
trained neural network model is validated using a similar set of
data and is then used to predict the wind speed from the
same meteorological information. This paper reports an artificial
neural network model for short-term wind speed prediction that
uses the back-propagation algorithm.
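The core data preparation for such a predictor, turning past observations into training pairs, can be sketched as follows (a generic lag-window construction; the paper's actual feature set also includes pressure, humidity and other meteorological variables):

```python
def make_training_pairs(series, lags=3):
    """Turn a time series into (features, target) pairs: the previous
    `lags` observations predict the next value."""
    pairs = []
    for t in range(lags, len(series)):
        pairs.append((series[t - lags:t], series[t]))
    return pairs
```

Each pair would then be fed to the network during back-propagation training, with a held-out portion of the pairs reserved for validation.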
Abstract: An optimal power flow (OPF) based on particle swarm
optimization (PSO) was developed with a more realistic generator
security constraint, using the capability curve instead of only the
Pmin/Pmax and Qmin/Qmax limits. A neural network (NN) was used to
design the digital capability curve and the security-check algorithm.
The algorithm is very simple and flexible, especially for representing
the nonlinear generation operating limits near the steady-state
stability limit and in the under-excitation operating area. In an
effort to avoid a local optimal power flow solution, particle swarm
optimization was implemented with a sufficiently widespread initial
population. The objective function used in the optimization process is
the electricity production cost, which is dominated by fuel cost. The
proposed method was applied to the Java-Bali 500 kV power system,
consisting of 7 generators and 20 buses. The simulation results show
that the combination of generator power outputs obtained by the
proposed method was more economical than the result obtained with the
conventional constraints, although it operated at a more marginal
operating point.
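A minimal PSO with box constraints conveys the search mechanism (our sketch with a stand-in quadratic cost; the paper's objective is the fuel-dominated production cost and its constraints come from the NN capability curve):

```python
import random

def pso_min(cost, bounds, n_particles=20, iters=200, seed=1):
    """Basic particle swarm minimization with box (Pmin/Pmax-style)
    constraints enforced by clamping positions to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

The widespread random initialization over the whole feasible box is exactly the mechanism the abstract relies on to avoid local optima.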
Abstract: Inverse kinematics analysis plays an important role in developing a robot manipulator, but it is not easy to derive the inverse kinematic equations of a robot manipulator, especially one with many degrees of freedom. This paper describes an application of an artificial neural network for modeling the inverse kinematics of a robot manipulator. In this case, the robot has three degrees of freedom and was used for drilling printed circuit boards. The artificial neural network architecture used for the modeling is a multilayer perceptron network trained with the steepest-descent backpropagation algorithm. The designed network has 2 inputs, 2 outputs and a varying number of hidden layers. Experiments varied the number of hidden layers and the learning rate. The experimental results show that the best architecture for modeling the inverse kinematics is a multilayer perceptron with 1 hidden layer of 38 neurons, which yielded an RMSE of 0.01474.
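For comparison with the learned mapping, a planar 2-link arm admits the closed-form inverse kinematics below (our illustrative 2-DOF example with unit link lengths; the paper's manipulator has three degrees of freedom and its mapping is learned by an MLP rather than derived):

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics of a planar 2-link arm, the kind
    of mapping the paper approximates with a 2-input/2-output MLP."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(c2)                      # elbow-down branch
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def fk_2link(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics: joint angles -> end-effector position."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))
```

Composing the two functions returns the original target point, which is also the natural way to validate a trained inverse-kinematics network.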
Abstract: Load forecasting has always been the essential part of
an efficient power system operation and planning. A novel approach
based on support vector machines is proposed in this paper for annual
power load forecasting. Different kernel functions are selected to
construct a combinatorial algorithm. The performance of the new
model is evaluated with a real-world dataset, and compared with two
neural networks and some traditional forecasting techniques. The
results show that the proposed method exhibits superior performance.
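One simple way to build such a combinatorial kernel is a convex combination of standard kernels, which remains a valid (positive semi-definite) kernel (a generic construction for illustration; the paper's exact combination scheme is not specified here):

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly(x, y, degree=2, c=1.0):
    """Polynomial kernel."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def combined(x, y, w=0.5):
    """Convex combination of two positive semi-definite kernels is
    again a valid SVM kernel; w trades local vs. global behavior."""
    return w * rbf(x, y) + (1 - w) * poly(x, y)
```

The weight w would be selected on validation data alongside the usual SVM hyperparameters.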
Abstract: Current image-based individual human recognition
methods, such as fingerprints, face, or iris biometric modalities
generally require a cooperative subject, views from certain aspects,
and physical contact or close proximity. These methods cannot
reliably recognize non-cooperating individuals at a distance in the
real world under changing environmental conditions. Gait, which
concerns recognizing individuals by the way they walk, is a relatively
new biometric without these disadvantages. The inherent gait
characteristic of an individual makes it irreplaceable and useful in
visual surveillance.
In this paper, an efficient gait recognition system for human
identification is proposed, based on two features: the width vector of
the binary silhouette and MPEG-7 region-based shape
descriptors. In the proposed method, foreground objects
(humans and other moving objects) are extracted by estimating the
background with a Gaussian Mixture Model (GMM), and
subsequently a median filtering operation is performed to remove
noise from the background-subtracted image. A moving-target
classification algorithm, based on shape and boundary information,
is used to separate pedestrians from other foreground objects
(e.g., vehicles).
Subsequently, the width vector of the outer contour of the binary
silhouette and the MPEG-7 Angular Radial Transform coefficients are
taken as the feature vector. Next, Principal Component Analysis (PCA)
is applied to the selected feature vector to reduce its dimensionality.
The extracted feature vectors are used to train a Hidden Markov
Model (HMM) for identifying individuals. The proposed
system is evaluated on several gait sequences, and the experimental
results show the efficacy of the proposed algorithm.
Abstract: Present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous, well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work a mathematical treatment of diffusion that can be incorporated into a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of the space is represented as a first-order reaction Ai -k-> Aj, where the rate constant k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and the diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in the cytoplasm are shown.
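The event-selection loop can be sketched with a two-region Gillespie simulation in which diffusion is the first-order jump A1 -> A2 (propensity k*A1) and its reverse (a bare-bones illustration; concentration-dependent diffusion coefficients and chemical reaction channels are omitted):

```python
import random

def ssa_diffusion(n_a=100, k=1.0, t_end=5.0, seed=7):
    """Gillespie SSA for pure diffusion between two regions:
    A1 -> A2 and A2 -> A1, each a first-order event with rate constant k."""
    rng = random.Random(seed)
    a1, a2, t = n_a, 0, 0.0
    while t < t_end:
        r1, r2 = k * a1, k * a2            # propensities of the two jumps
        total = r1 + r2
        if total == 0:
            break
        t += rng.expovariate(total)        # exponential waiting time
        if rng.random() * total < r1:      # pick an event proportionally
            a1, a2 = a1 - 1, a2 + 1
        else:
            a1, a2 = a1 + 1, a2 - 1
    return a1, a2
```

Molecules started entirely in region 1 spread out until the two regions hold comparable counts, with the total conserved; reaction channels would simply add further propensities to the same selection step.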
Abstract: The conjugate gradient optimization algorithm
usually used for nonlinear least squares is presented and
combined with the modified back-propagation algorithm, yielding
a new fast-training multilayer perceptron (MLP) algorithm
(CGFR/AG). The approach presented in the paper consists of
three steps: (1) modifying the standard back-propagation
algorithm by introducing a gain-variation term in the activation
function, (2) calculating the gradient of the error with
respect to the weight and gain values, and (3) determining
the new search direction by exploiting the information
calculated in step (2) as well as the previous
search direction. The proposed method improves the training
efficiency of the back-propagation algorithm by adaptively modifying
the initial search direction. The performance of the proposed method
is demonstrated by comparison with the conjugate gradient algorithm
from a neural network toolbox on the chosen benchmark. The
results show that the number of iterations required by the
proposed method to converge is less than 20% of that required
by the standard conjugate gradient and neural network toolbox
algorithms.
Abstract: In this paper, we present an efficient numerical algorithm, namely the block homotopy perturbation method, for solving fuzzy linear systems; it is based on the homotopy perturbation method. Some numerical examples are given to show the efficiency of the algorithm.
Abstract: This paper presents a novel two-phase hybrid optimization algorithm with hybrid genetic operators to solve the optimal control problem of a single-stage hybrid manufacturing system. The proposed hybrid real-coded genetic algorithm (HRCGA) is developed in such a way that a simple real-coded GA acts as a base-level search, which makes a quick decision to direct the search towards the optimal region, and a local search method is then employed for fine tuning. The hybrid genetic operators involved in the proposed algorithm improve both the quality of the solution and the convergence speed. Phase 1 uses a conventional real-coded genetic algorithm (RCGA), while phase 2 employs optimization by direct search and systematic reduction of the size of the search region. A typical numerical example of an optimal control problem, with the number of jobs varying from 10 to 50, is included to illustrate the efficacy of the proposed algorithm. Several statistical analyses are carried out to compare the proposed algorithm with the conventional RCGA and PSO techniques; a hypothesis t-test and an analysis of variance (ANOVA) test are also performed to validate its effectiveness. The results clearly demonstrate that the proposed algorithm not only improves solution quality but also converges to the optimal value faster. It outperforms the conventional real-coded GA (RCGA) and the efficient particle swarm optimization (PSO) algorithm both in the quality of the optimal solution and in convergence to the actual optimum.
Abstract: This paper evaluates the performance of an adaptive noise
cancelling (ANC) based target detection algorithm on a set of real test
data supplied by the Defence Evaluation and Research Agency (DERA,
UK) for a multi-target wideband active sonar echolocation system. The
proposed hybrid algorithm combines an adaptive ANC
neuro-fuzzy scheme in the first instance with a subsequent iterative
optimum target motion estimation (TME) scheme. The neuro-fuzzy
scheme is based on the adaptive noise cancelling concept, with an
ANFIS (adaptive neuro-fuzzy inference system) core processor to
provide an effectively fine-tuned signal. The resultant output is then
sent as input to the optimum TME scheme, composed of two-gauge
trimmed-mean (TM) levelization, discrete wavelet denoising
(WDeN), and an optimal continuous wavelet transform (CWT) for
further denoising and target identification. Its aim is to recover the
contact signals in an effective and efficient manner and then determine
the Doppler motion (radial range, velocity and acceleration) at very
low signal-to-noise ratio (SNR). Quantitative results show that
the hybrid algorithm has excellent performance in predicting targets'
Doppler motion across various target strengths, with a maximum
false-detection rate of 1.5%.
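The adaptive-noise-cancelling front end can be illustrated by a basic LMS canceller (our stand-in: LMS rather than the paper's ANFIS core, with synthetic single-tone noise): the filter learns to reproduce the noise from the reference input, so subtracting its output leaves the residual of interest.

```python
import math

def lms_anc(primary, reference, taps=4, mu=0.05):
    """LMS adaptive noise canceller: filter the reference (noise-correlated)
    input, subtract it from the primary; the residual e is the output."""
    w = [0.0] * taps
    buf = [0.0] * taps
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                        # shift in new sample
        y = sum(wi * bi for wi, bi in zip(w, buf))  # noise estimate
        e = d - y                                   # cancelled output
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        out.append(e)
    return out

# Pure-noise check: the primary is just a scaled copy of the reference,
# so the canceller output should decay towards zero as the filter adapts.
ref = [math.cos(0.3 * i) for i in range(2000)]
res = lms_anc([0.9 * r for r in ref], ref)
```

When a genuine contact signal uncorrelated with the reference is added to the primary, it survives in the residual while the correlated noise is removed.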
Abstract: Fuzzy logic control (FLC) systems have been tested in
many technical and industrial applications as a useful modeling tool
that can handle the uncertainties and nonlinearities of modern control
systems. The main drawback of FLC methodologies in the
industrial environment is the challenge of selecting the
optimum tuning parameters.
In this paper, a method is proposed for finding the optimum
membership functions of a fuzzy system using the particle swarm
optimization (PSO) algorithm. A synthetic algorithm combining
fuzzy logic control and the PSO algorithm is used to design a
controller for a continuous stirred tank reactor (CSTR), with the aim
of achieving accurate and acceptable results. To exhibit the
effectiveness of the proposed algorithm, it is used to optimize the
Gaussian membership functions of the fuzzy model of a nonlinear
CSTR system as a case study. The optimized
membership functions (MFs) provide better performance than a
fuzzy model of the same system whose MFs were heuristically
defined.
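The decision variables in such a scheme are simply the centers and widths of the Gaussian MFs; each PSO particle encodes a vector of (c, sigma) pairs (a generic zero-order Sugeno formulation for illustration; the CSTR model itself is not reproduced here):

```python
import math

def gauss_mf(x, c, sigma):
    """Gaussian membership: degree to which x belongs to a fuzzy set
    centered at c with width sigma -- the two parameters PSO would tune."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def fire_rules(x, mfs, outputs):
    """Zero-order Sugeno inference: the crisp output is the weighted
    average of the rule outputs, weighted by membership degrees."""
    degrees = [gauss_mf(x, c, s) for c, s in mfs]
    return sum(d * o for d, o in zip(degrees, outputs)) / sum(degrees)
```

PSO would evaluate each candidate (c, sigma) vector by simulating the closed loop with this inference step and scoring the resulting control error.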