Abstract: The particular interest of this paper is to explore whether a simple Genetic Algorithm (GA) that starts with a population of only two individuals and applies different crossover techniques to these parents to produce 104 children, each with different attributes inherited from the parents, performs better than a GA that starts with a population of 100 individuals and uses only one type of crossover (order crossover, OX). To this end, we implement a GA with 52 different crossover techniques, each producing two children, so that 104 different children are produced; this may explore more of the search space. We also implement the classic GA with order crossover, and many experiments were run on three Travelling Salesman Problem (TSP) instances to find out which method is better. According to the results, the GA with multiple crossovers is much better.
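As a point of reference for the baseline, a minimal sketch of one common variant of the order crossover (OX) on TSP tours might look as follows; the permutation representation and the left-to-right fill of the free positions are illustrative assumptions, not details taken from the paper.

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX): copy a random slice from parent 1, then fill
    the remaining positions with the missing cities in parent 2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]               # slice inherited from parent 1
    fill = [c for c in p2 if c not in child]   # remaining cities, p2's order
    for i, c in zip((i for i in range(n) if child[i] is None), fill):
        child[i] = c
    return child

parent1 = [0, 1, 2, 3, 4, 5, 6, 7]
parent2 = [3, 7, 0, 6, 2, 5, 1, 4]
print(order_crossover(parent1, parent2))   # a valid tour mixing both parents
```

The multi-crossover variant of the paper would swap this operator for each of its 52 techniques while keeping the same two parents.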
Abstract: In this article, we propose a methodology for the characterization of the suspended matter along the Bay of Algiers. An approach based on a multilayer perceptron (MLP) trained by back-propagation of the gradient, optimized by the Levenberg-Marquardt (LM) algorithm, is used. The emphasis is placed on the choice of the components of the training base, for which a comparative study was made of four methods: random selection and three variants of classification by K-Means. The samples are taken from a suspended-matter image obtained by an analytical model based on polynomial regression that takes in situ measurements into account. The mask that selects the zone of interest (water, in our case) was produced using a multispectral classification by the ISODATA algorithm. To improve the classification result, this mask was cleaned using the tools of mathematical morphology. The results of this study, presented in the form of curves, tables and images, show the soundness of our methodology.
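A minimal sketch of how a K-Means-based choice of the training base compares with random selection (the variant shown keeps the sample nearest each centroid; dimensions, sizes and names are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((1000, 3))   # stand-in for pixel feature vectors from the image

# Method 1: random selection of the training base.
random_base = X[rng.choice(len(X), size=50, replace=False)]

# Method 2: K-Means selection, one representative sample per cluster
# (here, the sample closest to each centroid).
km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
idx = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in km.cluster_centers_]
kmeans_base = X[idx]
```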
Abstract: The electrical potentials generated during eye movements and blinks are one of the main sources of artifacts in electroencephalogram (EEG) recording and can propagate across much of the scalp, masking and distorting brain signals. In recent times, signal separation algorithms have been widely used to remove artifacts from observed EEG data. In this paper, a recently introduced signal separation algorithm, Mutual Information based Least dependent Component Analysis (MILCA), is employed to separate ocular artifacts from EEG. The aim of MILCA is to minimize the Mutual Information (MI) between the independent components (estimated sources) under a pure rotation. The performance of this algorithm is compared with that of eleven popular algorithms (Infomax, Extended Infomax, FastICA, SOBI, TDSEP, JADE, OGWE, MS-ICA, SHIBBS, Kernel-ICA, and RADICAL) in terms of the actual independence and uniqueness of the estimated source components obtained for different sets of EEG data with ocular artifacts, using a reliable MI estimator. Results show that MILCA is the best at separating ocular artifacts from EEG and is recommended for further analysis.
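The central idea, minimizing MI under a pure rotation of whitened signals, can be illustrated in two dimensions with a crude histogram MI estimate; MILCA itself relies on a k-nearest-neighbour MI estimator, so the grid search below is only a toy stand-in under stated assumptions:

```python
import numpy as np

def mutual_info(x, y, bins=32):
    """Crude histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(1)
s = rng.laplace(size=(2, 5000))                  # independent toy sources
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s       # mixed observations

# Whiten, then search for the rotation angle that minimizes MI.
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = np.diag(d ** -0.5) @ E.T @ x
best = min(np.linspace(0, np.pi / 2, 90), key=lambda a: mutual_info(
    np.cos(a) * z[0] + np.sin(a) * z[1],
    -np.sin(a) * z[0] + np.cos(a) * z[1]))
```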
Abstract: We present a hybrid architecture of recurrent neural
networks (RNNs) inspired by hidden Markov models (HMMs). We
train the hybrid architecture using genetic algorithms to learn and
represent dynamical systems. We train the hybrid architecture on a
set of deterministic finite-state automata strings and observe the
generalization performance of the hybrid architecture when presented
with a new set of strings which were not present in the training data
set. In this way, we show that the hybrid system of HMM and RNN
can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm, and we also ran experiments to find out which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can learn and represent dynamical systems. The best training and generalization performance is achieved when the hybrid architecture is initialized with random real-valued weights in the range -15 to 15.
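A minimal sketch of the training scheme, a genetic algorithm evolving recurrent-network weights on strings from a simple deterministic finite-state automaton, is given below; the automaton (even number of 1s), the network size and the GA operators are assumptions, with only the -15 to 15 initialization range taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 4                                                # hidden units
strings = [rng.integers(0, 2, 8) for _ in range(40)]
labels = [int(s.sum() % 2 == 0) for s in strings]    # accept: even number of 1s
n_w = H + H * H + H                                  # input, recurrent, output

def predict(w, s):
    Wx, Wh, Wo = w[:H], w[H:H + H * H].reshape(H, H), w[H + H * H:]
    h = np.zeros(H)
    for bit in s:                                    # simple Elman-style RNN
        h = np.tanh(Wx * bit + Wh @ h)
    return int(Wo @ h > 0)

def fitness(w):
    return sum(predict(w, s) == y for s, y in zip(strings, labels))

pop = rng.uniform(-15, 15, (60, n_w))   # random real weights in [-15, 15]
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]              # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 20, 2)]
        cut = rng.integers(1, n_w)
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        child += rng.normal(0, 0.5, n_w) * (rng.random(n_w) < 0.1)  # mutation
        children.append(child)
    pop = np.array(children)
```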
Abstract: In this paper, we study the knapsack sharing problem, a variant of the well-known NP-hard single knapsack problem. We investigate the use of a tree search for optimally solving the problem. The proposed method combines two complementary phases: an interval reduction search phase and a branch-and-bound procedure. First, the reduction phase applies a polynomial reduction strategy that decomposes the problem into a series of knapsack problems. Second, the tree search procedure is applied in order to attain the set of optimal capacities characterizing the knapsack problems. Finally, the performance of the proposed optimal algorithm is evaluated on a set of instances from the literature, and its runtime is compared to that of the best exact algorithm in the literature.
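For intuition, the knapsack sharing objective, the minimum class profit under a shared capacity, can be evaluated for one candidate capacity split with an ordinary 0/1 knapsack solver; this simplified decomposition is only an illustration, not the paper's reduction or branch-and-bound scheme:

```python
def knapsack(profits, weights, cap):
    """Classic 0/1 knapsack by dynamic programming."""
    best = [0] * (cap + 1)
    for p, w in zip(profits, weights):
        for c in range(cap, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[cap]

def sharing_value(classes, caps):
    """Objective of the knapsack sharing problem: the minimum class
    profit when class i is granted capacity caps[i]."""
    return min(knapsack(p, w, c) for (p, w), c in zip(classes, caps))

classes = [([6, 5, 4], [3, 2, 2]), ([7, 3], [4, 1])]  # (profits, weights)
print(sharing_value(classes, caps=[4, 3]))            # -> 3
```

The tree search of the paper can be thought of as looking for the capacity split that maximizes this value.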
Abstract: In this paper a gradient-based iterative algorithm is presented to solve the linear matrix equation AXB + CX^TD = E, where X is the unknown matrix and A, B, C, D, E are given constant matrices. It is proved that if the equation has a solution, then the unique minimum-norm solution can be obtained by choosing a special kind of initial matrix. Two numerical examples show that the introduced iterative algorithm is quite efficient.
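A minimal sketch of a gradient iteration of this type, with the update obtained by differentiating the squared residual norm ||E - AXB - CX^T D||_F^2 (the fixed step size, zero start and stopping rule are assumptions, not the paper's choices):

```python
import numpy as np

def solve_axb_cxtd(A, B, C, D, E, mu=1e-3, tol=1e-10, iters=200000):
    """Iterate X <- X + mu (A'RB' + DR'C) with R = E - AXB - CX'D,
    starting from X = 0, a natural choice when targeting the
    minimum-norm solution."""
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        R = E - A @ X @ B - C @ X.T @ D
        if np.linalg.norm(R) < tol:
            break
        X += mu * (A.T @ R @ B.T + D @ R.T @ C)
    return X

rng = np.random.default_rng(0)
A, B, C, D = (rng.random((3, 3)) for _ in range(4))
X_true = rng.random((3, 3))
E = A @ X_true @ B + C @ X_true.T @ D
print(np.linalg.norm(solve_axb_cxtd(A, B, C, D, E) - X_true))
```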
Abstract: Clustering unstructured text documents is an important issue in the data mining community and has a number of applications, such as document archive filtering, document organization, topic detection, and subject tracing. In the real world, some of the already clustered documents may lose importance while new documents of greater significance emerge. Most of the work done so far on clustering unstructured text documents overlooks this aspect of clustering. This paper addresses this issue by using a Fading Function. The unstructured text documents are clustered, and for each cluster a statistics structure called a Cluster Profile (CP) is maintained. The cluster profile incorporates the Fading Function, which keeps account of the time-dependent importance of the cluster. The work proposes a novel algorithm, the Clustering n-ary Merge Algorithm (CnMA), for unstructured text documents, which uses the Cluster Profile and the Fading Function. Experimental results illustrating the effectiveness of the proposed technique are also included.
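A minimal sketch of a cluster profile with a fading function: the exponential form f(Δt) = 2^(-λΔt), common in stream clustering, is an assumption here, as are the class and field names:

```python
import time

LAMBDA = 0.01   # decay rate: larger values forget old documents faster

class ClusterProfile:
    """Statistics structure whose importance fades over time."""
    def __init__(self):
        self.weight = 0.0
        self.last_update = time.time()

    def _fade(self, now):
        self.weight *= 2 ** (-LAMBDA * (now - self.last_update))
        self.last_update = now

    def add_document(self, doc_weight=1.0):
        now = time.time()
        self._fade(now)             # decay old contributions first
        self.weight += doc_weight   # then absorb the new document

    def is_stale(self, threshold=0.1):
        self._fade(time.time())
        return self.weight < threshold   # candidate for removal/merging
```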
Abstract: Moral decisions are considered an intuitive process, while conscious reasoning is mostly used only to justify those intuitions. This view is described in several dual-process theories of mind, which are being developed, e.g., by Frederick and Kahneman, Stanovich, and Evans. Those theories have recently evolved into tri-process theories, with a proposed process that makes the ultimate decision or allows paraformal processing with focal bias. The presented experiment compares decision patterns with the implications of those models.
In the presented study, participants (n=179) considered different aspects of the trolley dilemma or its footbridge version and then made their decision.
Results show that in the control group 70% of people decided to use the lever to switch tracks for the runaway trolley, and 20% chose to push the fat man onto the tracks. In contrast, after the experimental manipulation almost no one decided to act. The difference in decision time between the dilemmas also disappeared after the experimental manipulation.
The results support the idea of three co-working processes: the intuitive (TASS), the paraformal (reflective mind), and the algorithmic process.
Abstract: Image registration plays an important role in the diagnosis of dental pathologies such as dental caries, alveolar bone loss, and periapical lesions. This paper presents a new wavelet-based algorithm for registering noisy, poor-contrast dental x-rays. The proposed algorithm has two stages. The first is a preprocessing stage that removes noise from the x-ray images; a Gaussian filter is used for this purpose. The second is a geometric transformation stage: the proposed work uses two levels of affine transformation, and wavelet coefficients are correlated instead of gray values. The algorithm has been applied to a number of pre- and post-RCT (root canal treatment) periapical radiographs. Root Mean Square Error (RMSE) and correlation coefficients (CC) are used for quantitative evaluation. The proposed technique outperforms both the conventional multiresolution-strategy-based image registration technique and manual registration.
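A minimal sketch of the key idea, scoring a candidate geometric transformation by correlating wavelet coefficients rather than gray values, using PyWavelets and SciPy; the Haar wavelet, the Gaussian preprocessing and the rotation-only search are illustrative assumptions:

```python
import numpy as np
import pywt
from scipy import ndimage

def wavelet_similarity(fixed, moving, angle):
    """Correlate level-1 approximation coefficients of the fixed image
    and a rotated version of the moving image."""
    rotated = ndimage.rotate(moving, angle, reshape=False)
    cA_f, _ = pywt.dwt2(fixed, 'haar')
    cA_m, _ = pywt.dwt2(rotated, 'haar')
    return np.corrcoef(cA_f.ravel(), cA_m.ravel())[0, 1]

def register(fixed, moving):
    fixed = ndimage.gaussian_filter(fixed, sigma=1.0)    # denoising stage
    moving = ndimage.gaussian_filter(moving, sigma=1.0)
    angles = np.arange(-15.0, 15.5, 0.5)
    return max(angles, key=lambda a: wavelet_similarity(fixed, moving, a))
```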
Abstract: This paper presents a possibilistic (fuzzy) model for optimal siting and sizing of Distributed Generation (DG) for loss reduction and voltage profile improvement in power distribution systems. The multi-objective problem is solved in two phases. In the first, the set of non-dominated planning solutions is obtained (with respect to the objective functions of fuzzy economic cost and exposure) using a genetic algorithm. In the second phase, one solution from the set of non-dominated solutions is selected as the optimal solution using a suitable max-min approach. The method can also determine the operating mode (PV or PQ) of the DG. Because load uncertainty is considered, realistic results can be obtained. The whole process has been implemented in the MATLAB 7 environment with technical and economic considerations for loss reduction and voltage profile improvement. The validity of the proposed method is verified through a numerical example.
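The max-min selection of the second phase fits in a few lines: each non-dominated solution carries a fuzzy membership value per objective, and the solution whose worst membership is largest is chosen (the membership numbers below are made up):

```python
import numpy as np

# Rows: non-dominated solutions; columns: fuzzy membership values (0..1)
# for the objectives, e.g. economic cost and exposure.
membership = np.array([
    [0.80, 0.55],
    [0.60, 0.70],
    [0.90, 0.40],
])

best = int(np.argmax(membership.min(axis=1)))  # max-min: best worst objective
print("selected solution:", best)              # -> 1
```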
Abstract: Many matching algorithms with different characteristics have been introduced in recent years. For real-time systems these algorithms are usually based on minutiae features. In this paper we introduce a novel approach to feature extraction in which the extracted features are independent of shift and rotation of the fingerprint, while at the same time the matching operation is performed more easily and with higher speed and accuracy. In this new approach, a reference point and a reference orientation are first determined for each fingerprint, and based on this information the features are converted into polar coordinates. Owing to the high speed and accuracy of this approach, the small volume of extracted features, and the easy execution of the matching operation, it is highly appropriate for real-time applications.
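A minimal sketch of the conversion step: once a reference point (xr, yr) and reference orientation are fixed, each minutia (x, y, theta) becomes a triple (radius, angle relative to the reference orientation, minutia orientation relative to the reference orientation), which is invariant to shift and rotation; all names are illustrative:

```python
import math

def to_polar(minutiae, ref_xy, ref_theta):
    """Convert minutiae (x, y, theta) into shift/rotation-invariant
    polar features relative to the reference point and orientation."""
    xr, yr = ref_xy
    features = []
    for x, y, theta in minutiae:
        r = math.hypot(x - xr, y - yr)
        phi = (math.atan2(y - yr, x - xr) - ref_theta) % (2 * math.pi)
        orient = (theta - ref_theta) % (2 * math.pi)
        features.append((r, phi, orient))
    return features

print(to_polar([(120, 85, 1.2)], ref_xy=(100, 100), ref_theta=0.3))
```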
Abstract: This paper evaluates the performance of a novel algorithm for tracking a mobile node, in terms of execution time and root mean square error (RMSE). A particle filter algorithm is used to track the mobile node, and a new technique within the particle filter algorithm is also proposed to reduce the execution time. Stationary points were calculated through trilateration and finally by averaging the points collected over a specific time, whereas tracking is done through trilateration as well as the particle filter algorithm. The Wi-Fi signal is used to get an initial guess of the position of the mobile node in the x-y coordinate system. The commercially available software "Wireless Mon" was used to read the Wi-Fi signal strength from the Wi-Fi card. Visual C++ version 6 was used to interact with this software and read only the required data from the log file generated by the "Wireless Mon" software. Results are evaluated through mathematical modeling and MATLAB simulation.
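A minimal sketch of the trilateration step: given at least three anchor positions and range estimates (e.g. derived from Wi-Fi signal strength), subtracting the first circle equation from the others yields a linear least-squares system for (x, y); the anchor layout below is made up:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 anchors and distance estimates."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = np.array([4.0, 3.0])
dists = [float(np.linalg.norm(true - np.array(a))) for a in anchors]
print(trilaterate(anchors, dists))   # ~ [4. 3.]
```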
Abstract: We have applied a new accelerated algorithm for linear discriminant analysis (LDA) to face recognition with a support vector machine. The new algorithm has the advantage of optimal selection of the step size. The gradient descent method and the new algorithm have been implemented in software and evaluated on the Yale Face Database B. The eigenfaces from these approaches have been used to train a KNN classifier. The recognition rate of the new algorithm is compared with that of gradient descent.
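For a quadratic objective, the optimal step size along the negative gradient has a closed form, alpha = g'g / g'Hg; the sketch below contrasts it with a fixed step (the quadratic is a stand-in, not the paper's LDA criterion):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((5, 5))
H = M @ M.T + 5 * np.eye(5)          # symmetric positive definite Hessian
b = rng.random(5)

def grad(x):                         # gradient of f(x) = 0.5 x'Hx - b'x
    return H @ x - b

x_fixed, x_opt = np.zeros(5), np.zeros(5)
for _ in range(100):
    g = grad(x_fixed)
    x_fixed -= 0.01 * g                      # fixed step size
    g = grad(x_opt)
    x_opt -= (g @ g) / (g @ H @ g) * g       # optimal (exact line search) step
print(np.linalg.norm(grad(x_fixed)), np.linalg.norm(grad(x_opt)))
```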
Abstract: Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest-neighbour search problem in high-dimensional spaces. Euclidean LSH is the most popular variant of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH has limitations that affect index structure and query performance; its main limitation is its large memory consumption, since a large number of hash tables is required to achieve good accuracy. In this paper, we propose a new hashing algorithm that overcomes the storage space problem and improves query time, while keeping accuracy similar to that achieved by the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
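For reference, the Euclidean LSH scheme being improved upon hashes a vector v to floor((a.v + b) / w) with Gaussian a and uniform b, and needs many such tables; a minimal sketch (table count, bit count and bucket width are illustrative):

```python
import numpy as np

class EuclideanLSH:
    """Classic p-stable (Euclidean) LSH with one dict per hash table.
    Good recall needs many tables, hence the memory consumption."""
    def __init__(self, dim, n_tables=10, n_bits=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = w
        self.A = rng.normal(size=(n_tables, n_bits, dim))  # Gaussian projections
        self.b = rng.uniform(0, w, size=(n_tables, n_bits))
        self.tables = [dict() for _ in range(n_tables)]

    def _key(self, t, v):
        return tuple(np.floor((self.A[t] @ v + self.b[t]) / self.w).astype(int))

    def insert(self, v, label):
        for t, table in enumerate(self.tables):
            table.setdefault(self._key(t, v), []).append(label)

    def query(self, v):
        candidates = set()
        for t, table in enumerate(self.tables):
            candidates.update(table.get(self._key(t, v), []))
        return candidates   # to be re-ranked by exact Euclidean distance
```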
Abstract: This paper proposes an efficient finite-precision block floating point (BFP) treatment of the fixed-coefficient finite impulse response (FIR) digital filter. The treatment includes effective implementation of all three forms of the conventional FIR filter, namely direct form, cascaded and parallel, and a roundoff error analysis of them in the BFP format. An effective block-formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from a hardware point of view. To this end, a generic relation between the tap-weight vector length and the input block length is deduced. The implementation scheme also emphasises a simple block-exponent update technique to prevent overflow even during the block-to-block transition phase. The roundoff noise is also investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating-point roundoff noise, resulting in an approximately constant signal-to-noise ratio over a relatively large dynamic range.
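The basic block-formatting step of BFP, one shared exponent per block with the samples stored as fixed-point mantissas, can be sketched as follows; the word length and block size are illustrative, and the paper's adaptive scaling factor and block-exponent update rules are not reproduced:

```python
import numpy as np

def block_format(x, mant_bits=12):
    """One shared exponent per block; samples become quantized mantissas."""
    exp = int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-30)))  # block exponent
    q = 2 ** (mant_bits - 1)
    mant = np.round(x / 2.0 ** exp * q) / q                 # |mantissa| <= 1
    return mant, exp                                        # x ~ mant * 2**exp

rng = np.random.default_rng(0)
h = np.array([0.25, 0.5, 0.25])        # fixed FIR coefficients (direct form)
x = 0.003 * rng.standard_normal(32)    # one low-level input block
mant, exp = block_format(x)
y = np.convolve(mant, h) * 2.0 ** exp  # filter mantissas, restore the level
print(np.max(np.abs(y - np.convolve(x, h))))   # quantization error only
```

Because the quantization step scales with the block exponent, the roundoff error tracks the signal level, which is consistent with the roughly constant signal-to-noise ratio reported in the abstract.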
Abstract: Human pose estimation can be performed using Active Shape Models. Existing techniques that apply Active Shape Models to human-body research, such as human detection, primarily use the silhouette of the human body. Such a technique cannot accurately estimate poses involving the two arms and legs, as the silhouette represents the body shape only as a rough outline. To solve this problem, we model the human body as a stick figure, a "skeleton". The skeleton model of the human body can accommodate various human poses. To obtain an effective estimation result, we applied background subtraction and a modified matching algorithm of the original Active Shape Models in the fitting process. The model was built from images of 600 human bodies and has 17 landmark points which indicate body junctions and key features of the human pose. The maximum number of iterations for the fitting process was 30, and the execution time was less than 0.03 s.
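The heart of an Active Shape Model fit, projecting an observed landmark set onto the trained shape subspace and clamping the shape parameters to plausible values, can be sketched as follows; with 17 landmarks the shape vectors are 34-dimensional, and the random training data here are stand-ins for the 600 aligned skeletons:

```python
import numpy as np

rng = np.random.default_rng(0)
n_landmarks, n_modes = 17, 8
shapes = rng.random((600, 2 * n_landmarks))     # aligned (x, y) landmark sets

# Point distribution model: mean shape plus principal variation modes.
mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
P = Vt[:n_modes].T                              # modes as columns
var = S[:n_modes] ** 2 / (len(shapes) - 1)      # variance of each mode

def fit_shape(observed):
    """One ASM update: project onto the model, clamp b to +/- 3 sqrt(var)."""
    b = P.T @ (observed - mean)
    b = np.clip(b, -3 * np.sqrt(var), 3 * np.sqrt(var))
    return mean + P @ b                         # nearest plausible skeleton

print(fit_shape(rng.random(2 * n_landmarks)).shape)   # -> (34,)
```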
Abstract: A Fuzzy Cognitive Map (FCM) is a causal graph showing the relations between the essential components of a complex system. Experts who are familiar with the system components and their relations can generate a corresponding FCM. A serious gap arises when human experts cannot produce an FCM, or when there is no expert to produce one; a new mechanism must therefore be used to bridge this gap. In this paper, a novel learning method is proposed to construct the causal graph from historical data using a metaheuristic, Tabu Search (TS). The efficiency of the proposed method is shown by comparing its results on some numerical examples with those of other methods.
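A minimal sketch of the idea: candidate FCM weight matrices are scored by how well the simulated map reproduces historical concept activations, and Tabu Search forbids recently changed weights for a few iterations; the sigmoid dynamics, neighborhood and tabu tenure are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((20, 4))               # historical concept activations

def simulate(W, a):
    return 1 / (1 + np.exp(-(a @ W)))    # one sigmoid FCM update step

def error(W):
    return np.mean((simulate(W, data[:-1]) - data[1:]) ** 2)

n = data.shape[1]
W = rng.uniform(-1, 1, (n, n))
best_W, best_err = W.copy(), error(W)
tabu = {}                                # (i, j) -> iteration it stays tabu until
for it in range(500):
    moves = []                           # neighborhood: perturb one weight
    for i in range(n):
        for j in range(n):
            if tabu.get((i, j), 0) > it:
                continue                 # this weight was changed recently
            cand = W.copy()
            cand[i, j] = np.clip(cand[i, j] + rng.normal(0, 0.2), -1, 1)
            moves.append((error(cand), (i, j), cand))
    err, ij, W = min(moves, key=lambda m: m[0])
    tabu[ij] = it + 7                    # tabu tenure of 7 iterations
    if err < best_err:
        best_W, best_err = W.copy(), err
```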
Abstract: With the exponentially increasing demand for wireless communications, the capacity of current cellular systems will soon become incapable of handling the growing traffic. Since radio frequencies are a diminishing natural resource, there seems to be a fundamental barrier to further capacity increases. A solution can be found in smart antenna systems.
Smart or adaptive antenna arrays consist of an array of antenna elements with signal processing capability that dynamically optimizes the radiation and reception of a desired signal. Smart antennas can place nulls in the direction of interferers via adaptive updating of the weights linked to each antenna element. They thus cancel out most of the co-channel interference, resulting in better reception quality and fewer dropped calls. Smart antennas can also track the user within a cell via direction-of-arrival algorithms, which makes them more advantageous than other antenna systems. This paper focuses on a few issues concerning smart antennas in mobile radio networks.
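A minimal sketch of the null-placement mechanism for a uniform linear array: starting from the steering vector of the desired direction and projecting out the interferer's steering vector forces a null toward the interferer; half-wavelength spacing and the angles are illustrative:

```python
import numpy as np

N = 8   # antenna elements with half-wavelength spacing

def steer(theta_deg):
    """Steering vector of a uniform linear array toward theta."""
    theta = np.radians(theta_deg)
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

d, i = steer(0), steer(40)        # desired user at 0 deg, interferer at 40 deg

# Project the desired steering vector orthogonal to the interferer's.
w = d - (i.conj() @ d) / (i.conj() @ i) * i

gain = lambda th: abs(w.conj() @ steer(th))
print(gain(0), gain(40))          # strong response at 0 deg, null at 40 deg
```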
Abstract: Traffic incidents have a bad effect on all parts of society, so monitoring road networks with enough traffic devices could help to decrease the number of accidents, and using the best method for the optimum site selection of these devices could help to implement a good monitoring system. This paper considers important criteria for the optimum site selection of traffic cameras based on aggregation methods such as Bagging and Dempster-Shafer theory. In the first step, important criteria, such as annual traffic flow and distance from critical places (e.g., parks) that need more traffic control, were identified for the selection of important road links for traffic camera installation. Then, classification methods such as artificial neural network and decision tree algorithms were employed to classify road links according to their importance for camera installation. Finally, to improve the classifiers' results, aggregation methods such as Bagging and Dempster-Shafer theory were applied.
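A minimal sketch of the bagging step with scikit-learn: bootstrap-aggregated decision trees over a road-link feature table (features and labels are random placeholders, and the Dempster-Shafer combination is not shown):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder road-link features: annual traffic flow, distance from
# critical places, etc.; label 1 = link important for camera installation.
X = rng.random((300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)

bagging = BaggingClassifier(n_estimators=50, random_state=0)  # tree base model
print(cross_val_score(bagging, X, y, cv=5).mean())
```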
Abstract: Adapting various sensor devices to communicate within sensor networks empowers us by providing a range of possibilities. The sensors in a sensor network need a measurable belief of trust for efficient and safe communication. In this paper, we suggest a trust model using fuzzy logic in sensor networks. Trust is an aggregation of consensus given a set of past interactions among sensors. We applied the suggested model to sensor networks in order to show how trust mechanisms are involved in the communication algorithm for choosing the proper path from source to destination.
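A minimal sketch of such a fuzzy trust evaluation: past interaction outcomes are averaged into a success rate, fuzzified with triangular membership functions for low/medium/high trust, and defuzzified into a single score; the breakpoints and representative values are assumptions:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

def trust(interactions):
    """interactions: list of 1 (success) / 0 (failure) with a neighbor."""
    rate = float(np.mean(interactions))
    mu = {"low":    triangular(rate, -0.01, 0.0, 0.5),
          "medium": triangular(rate, 0.0, 0.5, 1.0),
          "high":   triangular(rate, 0.5, 1.0, 1.01)}
    rep = {"low": 0.1, "medium": 0.5, "high": 0.9}   # representative values
    return sum(mu[k] * rep[k] for k in mu) / (sum(mu.values()) + 1e-12)

print(trust([1, 1, 0, 1, 1]))   # mostly successful history -> higher trust
```

A routing algorithm would then prefer next hops whose trust score exceeds a chosen threshold.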