Abstract: Face recognition is a technique for automatically identifying or verifying individuals. It has received great attention in identification, authentication, security and many other applications. Diverse methods have been proposed for this purpose, and many comparative studies have been performed; however, researchers have not reached a unified conclusion. In this paper, we report an extensive quantitative accuracy analysis of four of the most widely used face recognition algorithms: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), using the AT&T, Sheffield and Bangladeshi face databases under diverse conditions such as illumination, alignment and pose variations.
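A minimal sketch of the PCA (eigenfaces) baseline evaluated in such comparisons, using scikit-learn; the Olivetti loader corresponds to the AT&T (ORL) database, while the split and the 1-nearest-neighbour matching rule are illustrative assumptions, not the authors' exact protocol:

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces  # AT&T (ORL) face database
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                      # 400 images, 40 subjects
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

pca = PCA(n_components=50, whiten=True).fit(X_train)  # eigenfaces from training set
clf = KNeighborsClassifier(n_neighbors=1)             # nearest-neighbour matching
clf.fit(pca.transform(X_train), y_train)
print("PCA accuracy:", clf.score(pca.transform(X_test), y_test))
```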
Abstract: In this paper we investigate a number of the Internet congestion control algorithms developed in the last few years. We found that many of these algorithms were designed to treat Internet traffic merely as a stream of consecutive packets. A few other algorithms were specifically tailored to handle congestion caused by media traffic carrying audiovisual content; this latter set of algorithms can be considered aware of the nature of the media content. In this context we briefly explain a number of congestion control algorithms and categorize them into the two following categories: i) media congestion control algorithms; ii) common congestion control algorithms. We recommend the use of media congestion control algorithms because they are media content-aware, rather than the common type of algorithms that manipulate such traffic blindly. We show that the spread of such media content-aware algorithms over the Internet will lead to better congestion control in the coming years. This is due to the observed emergence of the era of digital convergence, in which media traffic will form the majority of Internet traffic.
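As an illustration of a media-aware approach, the sketch below computes the TCP-friendly sending rate used by TFRC (RFC 5348), a rate-based congestion control designed for streaming media; the parameter values at the end are illustrative assumptions:

```python
import math

def tfrc_rate(s, rtt, p, t_rto=None, b=1):
    """TCP-friendly sending rate (bytes/s) from the TFRC throughput equation
    of RFC 5348: s = packet size (bytes), rtt = round-trip time (s),
    p = loss event rate, b = packets acknowledged per ACK."""
    t_rto = t_rto if t_rto is not None else 4 * rtt   # RFC-recommended default
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t_rto * (3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# e.g. 1200-byte packets, 100 ms RTT, 1% loss event rate
print(tfrc_rate(s=1200, rtt=0.1, p=0.01))
```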
Abstract: In this paper, a reliable cooperative multipath routing algorithm is proposed for data forwarding in wireless sensor networks (WSNs). In this algorithm, data packets are forwarded towards the base station (BS) through a number of paths, using a set of relay nodes. In addition, the Rayleigh fading model is used to calculate the evaluation metric of the links. Reliability is guaranteed by selecting an optimal relay set with which the probability of correct packet reception at the BS exceeds a predefined threshold; the proposed scheme thus ensures reliable packet transmission to the BS. Furthermore, the proposed algorithm achieves energy efficiency at the same time through energy balancing, i.e. minimizing the energy consumption of the bottleneck node of the routing path. This work also demonstrates that the proposed algorithm outperforms existing algorithms in extending the longevity of the network while preserving reliability. The obtained results therefore enable reliable path selection with minimum energy consumption in real time.
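A minimal sketch of the kind of relay selection the abstract describes: under Rayleigh fading, the per-link probability of correct reception can be modelled as exp(-gamma_th / mean SNR), and relays are added until the end-to-end success probability over the independent paths exceeds the threshold. The greedy rule and all parameter values are illustrative assumptions, not the authors' algorithm:

```python
import math

def link_success(avg_snr, snr_threshold):
    # Rayleigh fading: P(SNR > threshold) = exp(-threshold / mean SNR)
    return math.exp(-snr_threshold / avg_snr)

def select_relays(candidate_snrs, snr_threshold, target=0.99):
    """Greedily add relay paths (best average SNR first) until the probability
    that at least one copy reaches the BS exceeds the reliability target."""
    relays, p_all_fail = [], 1.0
    for i, snr in sorted(enumerate(candidate_snrs), key=lambda t: -t[1]):
        relays.append(i)
        p_all_fail *= 1.0 - link_success(snr, snr_threshold)
        if 1.0 - p_all_fail >= target:
            break
    return relays, 1.0 - p_all_fail

print(select_relays([8.0, 5.0, 12.0, 3.0], snr_threshold=2.0))
```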
Abstract: This paper aims to develop a NOx emission model of an acid gas incinerator using Nelder-Mead least squares support vector regression (LS-SVR). The Malaysian DOE is actively imposing the Clean Air Regulation to mandate the installation of analytical instrumentation, known as a Continuous Emission Monitoring System (CEMS), to report emission levels online to the DOE. As a hardware-based analyzer, a CEMS is expensive, maintenance-intensive and often unreliable. Therefore, software-based predictive techniques are often preferred and considered a feasible alternative to the CEMS for regulatory compliance. The LS-SVR model is built from the emissions of an acid gas incinerator operating in an LNG complex. Simulated Annealing (SA) is first used to determine the initial hyperparameters, which are then further optimized, based on the performance of the model, using the Nelder-Mead simplex algorithm. The LS-SVR model is shown to outperform a benchmark model based on backpropagation neural networks (BPNN) on both training and testing data.
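A minimal sketch of the second-stage tuning idea, using kernel ridge regression as a stand-in for LS-SVR (the two are closely related) and scipy's Nelder-Mead simplex to refine hyperparameters; the synthetic data are illustrative, and the initial point plays the role of the SA-found hyperparameters, which are omitted here:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 4))                    # stand-in process variables
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # stand-in NOx target

def neg_cv_score(log_params):
    gamma, alpha = np.exp(log_params)               # RBF width, regularization
    model = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

# the initial point stands in for the SA result
res = minimize(neg_cv_score, x0=np.log([0.5, 0.1]), method="Nelder-Mead")
print("tuned gamma, alpha:", np.exp(res.x))
```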
Abstract: With the fast evolution of digital data exchange, information security is becoming increasingly important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. In this paper, we analyze the Advanced Encryption Standard (AES) and add a key stream generator (A5/1, W7) to AES to improve the encryption performance, mainly for images characterised by reduced entropy. Both techniques have been implemented for experimental purposes. Detailed results in terms of security analysis and implementation are given. A comparative study with traditional encryption algorithms shows the superiority of the modified algorithm.
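A minimal sketch of the combination the abstract describes: the image bytes are first whitened by XOR with a keystream, then encrypted with AES. The keystream here is a toy LFSR standing in for A5/1 or W7, and ECB mode is used only to keep the sketch short; both are illustrative simplifications, not the paper's construction:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def lfsr_keystream(seed, n_bytes, taps=(0, 13, 16, 18), width=19):
    """Toy 19-bit LFSR keystream; a simplified stand-in for A5/1 or W7."""
    state, out = seed & ((1 << width) - 1), bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            bit = 0
            for t in taps:                       # XOR of the tap positions
                bit ^= (state >> t) & 1
            state = ((state << 1) | bit) & ((1 << width) - 1)
            byte = (byte << 1) | (state & 1)
        out.append(byte)
    return bytes(out)

def encrypt_image_bytes(data, aes_key, seed):
    data = data.ljust(-(-len(data) // 16) * 16, b"\0")  # pad to AES block size
    ks = lfsr_keystream(seed, len(data))
    whitened = bytes(a ^ b for a, b in zip(data, ks))   # keystream stage
    enc = Cipher(algorithms.AES(aes_key), modes.ECB()).encryptor()
    return enc.update(whitened) + enc.finalize()        # AES stage

ct = encrypt_image_bytes(b"low-entropy image rows...", b"\x01" * 16, seed=0x5A5A5)
print(ct.hex()[:32])
```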
Abstract: We address the problem of creating a seismic alert system based on artificial neural networks, trained using the well-known back-propagation and genetic algorithms, in order to issue an alarm to the population of a specific city about an imminent earthquake of magnitude greater than 4.5 on the Richter scale, thereby helping to avoid disasters and human losses. Instead of using the propagation wave, we employed the magnitude of the earthquake to establish a correlation between the magnitudes recorded in a monitored area and those in the city where the alarm is to be issued. To measure the accuracy of the proposed method, we used a database provided by CIRES, which contains the records of 2500 quakes originating in the State of Guerrero and Mexico City. In particular, we applied the proposed method to generate warnings in Mexico City using the magnitudes recorded in the State of Guerrero.
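A minimal sketch of the back-propagation half of such a system, using scikit-learn's MLP to map magnitudes recorded at monitoring stations to the expected magnitude at the target city; the synthetic data, network size and alert threshold applied below are illustrative assumptions, not the CIRES dataset or the authors' architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
station_mags = rng.uniform(3.0, 7.5, (2500, 3))            # 3 stand-in stations
city_mag = station_mags.mean(axis=1) - 0.8 + 0.1 * rng.normal(size=2500)

X_tr, X_te, y_tr, y_te = train_test_split(station_mags, city_mag, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X_tr, y_tr)

pred = net.predict(X_te)
alarm = pred > 4.5                                         # alert threshold
print("R^2:", net.score(X_te, y_te), "alarms issued:", alarm.sum())
```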
Abstract: Fast depth estimation from binocular vision is often desired for autonomous vehicles, but most algorithms cannot easily be put into practice because of their high computational cost. We present an image-processing technique that can quickly estimate a depth image from binocular vision images. The depth is estimated by finding the lines that represent the best-matched areas in the disparity space image; an edge-emphasizing filter is used when detecting these lines. The final depth estimate is produced after a smoothing filter. Our method is a compromise between local methods and global optimization.
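For reference, a minimal block-matching stereo sketch with OpenCV; this is the standard local baseline such methods are compared against, not the authors' line-detection technique, and the file names and camera parameters are placeholder assumptions:

```python
import cv2
import numpy as np

# rectified left/right grayscale images (file names are placeholders)
imgL = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(imgL, imgR).astype(np.float32) / 16.0  # fixed-point output

# depth = f * B / disparity, given focal length f (px) and baseline B (m)
f, B = 700.0, 0.12                                   # illustrative camera parameters
depth = (f * B) / np.maximum(disparity, 0.1)         # clamp to avoid divide-by-zero
out = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth.png", out)
```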
Abstract: Most image fusion algorithms separate the relationships between pixels in the image and treat the pixels more or less independently. In addition, their parameters have to be re-adjusted for different times of day or weather conditions. In this paper, we propose a region-based image fusion method that combines aspects of feature-level and pixel-level fusion, instead of operating only at the pixel level. The basic idea is to segment only the far-infrared image and to add the information of each region of the segmented image to the visible image. We then determine different fusion parameters for each region. Finally, we adopt an artificial neural network to deal with varying times and weather conditions, because the relationship between the fusion parameters and the image features is nonlinear; this allows the fusion parameters to be produced automatically according to the current conditions. The experimental results show that the proposed method indeed has good adaptive capacity, with automatically determined fusion parameters, and that the architecture can be used in many applications.
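A minimal sketch of the per-region fusion idea: the infrared image is segmented (here by simple thresholding, an illustrative stand-in) and each region is blended into the visible image with its own weight; in the paper these weights come from a neural network, which is omitted here:

```python
import numpy as np

def region_fusion(visible, infrared, weights=(0.2, 0.7)):
    """Blend an IR image into a visible image with per-region weights.
    Regions come from a toy threshold segmentation of the IR image."""
    labels = (infrared > infrared.mean()).astype(int)   # 0: background, 1: hot
    fused = visible.astype(float).copy()
    for region_id, w in enumerate(weights):
        mask = labels == region_id
        fused[mask] = (1 - w) * visible[mask] + w * infrared[mask]
    return np.clip(fused, 0, 255).astype(np.uint8)

vis = np.random.default_rng(0).integers(0, 256, (120, 160), dtype=np.uint8)
ir = np.random.default_rng(1).integers(0, 256, (120, 160), dtype=np.uint8)
print(region_fusion(vis, ir).shape)
```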
Abstract: An array antenna system with innovative signal processing can improve the resolution of source direction-of-arrival (DoA) estimation. High-resolution techniques take advantage of array antenna structures to better process the incoming waves, and they also have the capability to identify the directions of multiple targets. This paper investigates the performance of two DoA estimation algorithms, Capon and MUSIC, on a uniform linear array (ULA). The simulation results show that, for both the Capon and MUSIC algorithms, the resolution of the DoA estimate improves as the number of snapshots, the number of array elements, the signal-to-noise ratio and the separation angle θ between the two sources increase.
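A minimal MUSIC sketch for a ULA, assuming half-wavelength element spacing, two equal-power uncorrelated sources and a known source count; the scenario parameters are illustrative:

```python
import numpy as np
from scipy.signal import find_peaks

M, d, N = 8, 0.5, 200                    # elements, spacing (wavelengths), snapshots
angles_true = np.deg2rad([-10.0, 15.0])  # two sources
rng = np.random.default_rng(0)

def steering(theta):                     # ULA steering vector
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t) for t in angles_true])
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

R = X @ X.conj().T / N                   # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
En = eigvecs[:, : M - 2]                 # noise subspace (2 sources assumed known)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
peaks, _ = find_peaks(p)                 # MUSIC pseudospectrum peaks
top = peaks[np.argsort(p[peaks])[-2:]]
print("estimated DoAs (deg):", np.sort(np.rad2deg(grid[top])))
```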
Abstract: Timetabling problems are often hard and time-consuming to solve, and most methods for solving them address only one problem instance or class. This paper describes a universal method for solving large, highly constrained timetabling problems from different domains. The solution is based on an evolutionary algorithm framework and operates on two levels: a first-level evolutionary algorithm tries to find a solution based on a given set of operating parameters, while a second-level algorithm is used to establish those parameters. Tabu search is employed to speed up the solution-finding process on the first level. The method has been used to solve three different timetabling problems with promising results.
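A minimal sketch of the two-level structure: an inner evolutionary loop solves a toy constraint-violation objective under given parameters, while an outer loop searches over those parameters; the bit-string encoding, the parameter grid and the objective are all illustrative assumptions, and the tabu-search acceleration is omitted:

```python
import random

def inner_ea(params, evaluate, n_iter=200, size=40):
    """First-level EA: mutate a bit-string timetable under given parameters."""
    mut = params["mutation_rate"]
    best = [random.randint(0, 1) for _ in range(size)]
    for _ in range(n_iter):
        cand = [b ^ (random.random() < mut) for b in best]
        if evaluate(cand) <= evaluate(best):
            best = cand
    return evaluate(best)

def outer_search(evaluate, trials=10):
    """Second-level loop: pick the mutation rate giving the best inner run."""
    rates = [0.01 * (i + 1) for i in range(trials)]
    return min(rates, key=lambda r: inner_ea({"mutation_rate": r}, evaluate))

# toy objective: number of constraint violations = number of ones in the string
print("best mutation rate:", outer_search(lambda s: sum(s)))
```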
Abstract: This paper deals with the tuning of parameters for Automatic Generation Control (AGC). A two-area interconnected hydrothermal system with a PI controller is considered. Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms have been applied to optimize the controller parameters. Two objective functions, namely the Integral Square Error (ISE) and the Integral of Time-multiplied Absolute value of the Error (ITAE), are considered for optimization. The effectiveness of an objective function is assessed based on the variation in tie-line power and the change in frequency in both areas. MATLAB/SIMULINK was used as the simulation tool. Simulation results reveal that ITAE is a better objective function than ISE. The performance of the optimization algorithms is also compared, and the genetic algorithm was found to give better results than the particle swarm optimization algorithm for the AGC problem.
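For reference, the two objective functions reduce to simple integrals of the error signal e(t): ISE = ∫ e(t)² dt and ITAE = ∫ t·|e(t)| dt. A minimal sketch on a sampled error signal follows; the signal itself is an illustrative stand-in for the simulated frequency and tie-line deviations:

```python
import numpy as np

def ise(t, e):
    return np.trapz(e ** 2, t)            # Integral Square Error

def itae(t, e):
    return np.trapz(t * np.abs(e), t)     # Integral of Time-multiplied Absolute Error

t = np.linspace(0, 10, 1000)
e = np.exp(-0.5 * t) * np.sin(3 * t)      # stand-in error response
print("ISE:", ise(t, e), "ITAE:", itae(t, e))
```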
Abstract: Obtaining labeled data in supervised learning is often difficult and expensive, and a learning algorithm trained on a small amount of data tends to overfit. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features that improve performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different sets of unlabeled data, with varying degrees of relation to the labeled data, to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between the unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a generative distribution completely different from that of the labeled data yields the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms through the use of unsupervised learning.
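A minimal sketch of the general setup: an unsupervised feature extractor (PCA here, as a simple stand-in for the paper's high-level features) is fitted on unlabeled data only, and the resulting features feed a supervised classifier trained on a small labeled set; the digit data and split sizes are illustrative:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # small MNIST-like digit set
X_lab, X_unlab, y_lab, y_unlab = train_test_split(
    X, y, train_size=200, random_state=0)

# unsupervised stage: learn features from the unlabeled pool only
feats = PCA(n_components=30).fit(X_unlab)

# supervised stage: train on the small labeled set in feature space
clf = LogisticRegression(max_iter=1000).fit(feats.transform(X_lab), y_lab)
print("accuracy:", clf.score(feats.transform(X_unlab), y_unlab))
```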
Abstract: Higher-order statistics (HOS), such as cumulants and cross moments, and their frequency-domain counterparts, known as polyspectra, have emerged as a powerful signal processing tool for the synthesis and analysis of signals and systems. Algorithms used for the computation of cross moments are computationally intensive and require high computational speed for real-time applications. For efficiency and high speed, it is often advantageous to realize computation-intensive algorithms in hardware. A promising solution that combines high flexibility with the speed of traditional hardware is the Field Programmable Gate Array (FPGA). In this paper, we present an FPGA-based parallel architecture for the computation of third-order cross moments. The proposed design is coded in Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) and functionally verified by implementing it on a Xilinx Spartan-3 XC3S2000FG900-4 FPGA. Implementation results are presented, showing that the proposed design can operate at a maximum frequency of 86.618 MHz.
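For reference, a software sketch of the quantity such an architecture computes: the third-order cross moment c(τ1, τ2) = E[x(n)·y(n+τ1)·z(n+τ2)], estimated here by a sample average over the valid index range; this is a numerical reference in Python, not the VHDL design:

```python
import numpy as np

def third_order_cross_moment(x, y, z, tau1, tau2):
    """Sample estimate of c(tau1, tau2) = E[x(n) y(n+tau1) z(n+tau2)]."""
    n0 = max(0, -tau1, -tau2)                          # valid range for negative lags
    n1 = min(len(x), len(y) - tau1, len(z) - tau2)     # valid range for positive lags
    n = np.arange(n0, n1)
    return np.mean(x[n] * y[n + tau1] * z[n + tau2])

rng = np.random.default_rng(0)
x = rng.normal(size=1024)
print(third_order_cross_moment(x, x, x, 2, 5))         # third-order auto-moment
```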
Abstract: This research paper presents methods to assess the performance of the Wigner-Ville Distribution (WVD) for time-frequency representation of non-stationary signals, in comparison with other representations such as the STFT and the spectrogram. The simultaneous time-frequency resolution of the WVD is one of the important properties that make it preferable for the analysis and detection of linear FM and transient signals. Two algorithms are proposed here to assess the resolution and to compare signal detection performance. The first method is based on measuring the area under the time-frequency plot and is applied to the analysis of a linear FM signal. The second method is based on the instantaneous power calculation and is used for transient, non-stationary signals. The implementation of both methods is explained briefly with suitable diagrams. The accuracy of the measurements is validated, showing the better performance of the WVD representation in comparison with the STFT and spectrograms.
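A minimal discrete Wigner-Ville sketch for reference: for each time index the lag product x(n+τ)·x*(n−τ) is formed and Fourier-transformed over τ. The analytic signal is used to reduce aliasing, and the chirp test signal is an illustrative choice:

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a real signal (rows: frequency)."""
    z = hilbert(x)                                   # analytic signal reduces aliasing
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)                   # lags that stay inside the signal
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[tau % N] = z[n + tau] * np.conj(z[n - tau])
        W[:, n] = np.real(np.fft.fft(kernel))        # FFT over the lag variable
    return W

t = np.linspace(0, 1, 256, endpoint=False)
chirp = np.cos(2 * np.pi * (10 * t + 40 * t ** 2))   # linear FM test signal
print(wigner_ville(chirp).shape)
```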
Abstract: Term extraction, a key data preparation step in text mining, extracts terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic-algorithms and decision-trees are terms associated with the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting/not interesting. From these examples, the ROGER algorithm learns a numerical function inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (the Area Under the ROC Curve). The approach uses a particular representation for the word collocations, namely the vector of values of the standard statistical interestingness measures attached to each collocation. As this representation is general (across corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing good robustness of the approach compared to the state-of-the-art Support Vector Machine (SVM).
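A minimal sketch of the objective being optimized: a linear scoring function over the vector of interestingness measures is evaluated by the AUC of the ranking it induces over labelled collocations. The feature values and weights below are illustrative, and in the paper a genetic algorithm searches over such weight vectors:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 5))      # 5 statistical interestingness measures
labels = (features[:, 0] + 0.5 * features[:, 2]
          + 0.3 * rng.normal(size=100) > 0).astype(int)

def auc_fitness(weights):
    """Fitness of one GA individual: AUC of the linear ranking it induces."""
    return roc_auc_score(labels, features @ weights)

print(auc_fitness(np.array([1.0, 0.0, 0.5, 0.0, 0.0])))   # near-optimal weights
print(auc_fitness(rng.normal(size=5)))                     # random individual
```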
Abstract: The Aggregate Production Plan (APP) is a schedule of an organization's overall operations over a planning horizon that satisfies demand while minimizing costs. It is the baseline for any further planning and for formulating the master production schedule and the resource, capacity and raw-material plans. This paper presents a methodology for modelling the Aggregate Production Planning problem, which is combinatorial in nature, and optimizing it with Genetic Algorithms. This is done considering a multitude of constraints of a contradictory nature, with the optimization criterion being overall cost, made up of production, workforce, inventory and subcontracting costs. A case study of substantial size, used to develop the model, is presented, along with the genetic operators.
Abstract: The present work addresses the solution of the defect identification problem using an evolutionary algorithm combined with a simplex method. In more detail, a Matlab implementation of genetic algorithms is combined with a simplex method in order to achieve successful identification of the defect. The influence of the location and orientation of the depressed ellipsoidal flaw was investigated, as was the use of different amounts of static data in the cost function. The results were evaluated according to the ability of the simplex method to locate the global optimum in each test case. In this way, a clear impression was obtained of the performance of the novel combination of optimization algorithms and of the influence of the geometrical parameters of the flaw in defect identification problems.
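A minimal sketch of the hybrid idea in Python (the paper's implementation is in Matlab): a population-based global stage, here scipy's differential evolution as a stand-in for the genetic algorithm, seeds a Nelder-Mead simplex refinement; the cost function is an illustrative multimodal stand-in for the flaw-identification misfit:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def misfit(p):
    """Illustrative multimodal cost: distance to a 'true' flaw (x, y, angle)."""
    true = np.array([1.2, -0.7, 0.4])
    return np.sum((p - true) ** 2) + 0.3 * np.sum(np.sin(5 * p) ** 2)

bounds = [(-2, 2), (-2, 2), (-np.pi, np.pi)]
global_stage = differential_evolution(misfit, bounds, seed=0)   # GA stand-in
refined = minimize(misfit, global_stage.x, method="Nelder-Mead")  # simplex stage

print("global stage:", global_stage.x)
print("simplex-refined:", refined.x)
```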
Abstract: The objective of this research is to calculate the optimal inventory lot-sizes for each supplier and to minimize the total inventory cost, which includes the joint purchase cost of the products, the transaction cost for the suppliers, and the holding cost for remaining inventory. Genetic algorithms (GAs) are applied to multi-product, multi-period inventory lot-sizing problems with supplier selection under storage space constraints: a maximum storage space for the decision maker in each period is considered. The decision maker needs to determine which products to order, in what quantities, from which suppliers, and in which periods. It is assumed that the demand for the multiple products is known over the planning horizon. The problem is formulated as a mixed integer program and solved with the GAs. Detailed computational results are presented.
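A minimal sketch of how such a problem can be encoded for a GA: a chromosome holds order quantities per (period, product, supplier), and the fitness combines purchase, transaction and holding costs with penalties for unmet demand and for violating the per-period storage limit. All cost figures and problem sizes are illustrative assumptions:

```python
import numpy as np

T, P, S = 4, 2, 3                        # periods, products, suppliers
rng = np.random.default_rng(0)
demand = rng.integers(5, 15, (T, P))     # known demand over the horizon
price = rng.uniform(1, 3, (S, P))        # unit purchase cost per supplier/product
trans_cost, hold_cost, capacity = 10.0, 0.5, 60.0

def total_cost(chromosome):
    """Fitness of one GA individual: order quantities of shape (T, P, S)."""
    q = chromosome.reshape(T, P, S)
    purchase = np.sum(q * price.T)                          # joint purchase cost
    transactions = trans_cost * np.sum(q.sum(axis=1) > 0)   # supplier orders placed
    inv, holding, penalty = np.zeros(P), 0.0, 0.0
    for t in range(T):
        inv = inv + q[t].sum(axis=1) - demand[t]
        penalty += 1e4 * (np.sum(np.maximum(-inv, 0))       # unmet demand
                          + max(inv.sum() - capacity, 0))   # storage overflow
        holding += hold_cost * np.sum(np.maximum(inv, 0))
    return purchase + transactions + holding + penalty

print(total_cost(rng.integers(0, 10, T * P * S)))
```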
Abstract: The truss optimization problem has been extensively studied during the past 30 years, and many different methods have been proposed for it. Even though most of these methods assume that the design variables are continuously valued, in reality design variables such as cross-sectional areas are discretely valued. In this paper, an improved hill climbing algorithm and an improved simulated annealing algorithm are proposed to solve the truss optimization problem with discrete values for the cross-sectional areas. The obtained results have been compared with other methods in the literature, and the comparison shows that the proposed methods can be applied more efficiently than previously proposed methods.
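A minimal simulated-annealing sketch over the kind of discrete design space the abstract describes: each member's cross-sectional area is picked from a catalogue, the objective is truss weight plus a penalty on a simple stress surrogate, and a move resizes one member to a neighbouring catalogue size. The catalogue, member lengths, forces and stress limit are all illustrative assumptions, not the paper's test problems:

```python
import math, random

areas = [1.0, 1.5, 2.2, 3.0, 4.4, 6.0]       # catalogue of discrete sections (cm^2)
lengths = [2.0, 2.0, 2.8, 2.8, 2.0]           # member lengths (m)
loads = [30.0, 25.0, 40.0, 35.0, 20.0]        # member forces (kN), illustrative

def cost(idx):
    weight = sum(areas[i] * L for i, L in zip(idx, lengths))
    stress_violation = sum(max(f / areas[i] - 16.0, 0) for i, f in zip(idx, loads))
    return weight + 100.0 * stress_violation  # penalty formulation

def anneal(n_iter=5000, T0=10.0):
    x = [random.randrange(len(areas)) for _ in lengths]
    best = list(x)
    for k in range(n_iter):
        T = T0 * (1 - k / n_iter) + 1e-6      # linear cooling schedule
        y = list(x)
        m = random.randrange(len(y))          # move: resize one member
        y[m] = min(max(y[m] + random.choice((-1, 1)), 0), len(areas) - 1)
        if cost(y) < cost(x) or random.random() < math.exp((cost(x) - cost(y)) / T):
            x = y                             # Metropolis acceptance
        if cost(x) < cost(best):
            best = list(x)
    return [areas[i] for i in best], cost(best)

print(anneal())
```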
Abstract: The wavelet transform, or wavelet analysis, is a recently developed mathematical tool in applied mathematics. In numerical analysis, wavelets also serve as a Galerkin basis for solving partial differential equations. The Haar transform, or Haar wavelet transform, is the simplest and earliest example of an orthonormal wavelet transform. Owing to its popularity in wavelet analysis, there are several definitions of, and various generalizations and algorithms for calculating, the Haar transform. The Fast Haar Transform (FHT) is one of the algorithms that reduce the tedious calculations involved in the Haar transform. In this paper, we present a modified, fast and exact algorithm for the FHT, namely the Modified Fast Haar Transform (MFHT). The proposed algorithm allows certain calculations in the decomposition process to be skipped without affecting the results.
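For reference, a minimal sketch of the classical FHT by averaging and differencing, on which such modifications build; the division by 2 follows one common convention (others divide by the square root of 2), and the input length is assumed to be a power of two:

```python
def fast_haar_transform(x):
    """Classical FHT: averages (x[2i]+x[2i+1])/2 and details (x[2i]-x[2i+1])/2,
    applied recursively to the averages at each level."""
    x = list(x)
    out, n = [], len(x)
    while n > 1:
        avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(n // 2)]
        det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(n // 2)]
        out = det + out          # coarser detail levels go in front of finer ones
        x, n = avg, n // 2
    return x + out               # overall average followed by all detail levels

print(fast_haar_transform([9, 7, 3, 5, 6, 10, 2, 6]))
# -> [6.0, 0.0, 2.0, 2.0, 1.0, -1.0, -2.0, -2.0]
```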