Abstract: This paper presents a new method to design a nonlinear
feedback-linearization controller for Polymer Electrolyte Membrane
Fuel Cells (PEMFCs). A nonlinear controller is designed based on a
nonlinear model to prolong the stack life of PEMFCs. Since it is
known that large deviations between hydrogen and oxygen partial
pressures can cause severe membrane damage in the fuel cell,
feedback linearization is applied to the PEMFC system so that the
deviation can be kept as small as possible during disturbances or load
variations. To obtain an accurate feedback-linearization controller,
tuning the linear parameters is important. In the proposed study, the
Non-Dominated Sorting Genetic Algorithm II (NSGA-II) was therefore
used to tune the designed controller so as to reduce its tracking
error. Simulation results show that the proposed method tunes the
controller efficiently.
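The selection step at the heart of NSGA-II is non-dominated sorting, which ranks candidate parameter sets into Pareto fronts. A minimal sketch (illustrative only; the paper's objective functions and parameter encoding are not specified here), assuming all objectives are minimized:

```python
def non_dominated_sort(points):
    """Rank objective vectors (minimization) into Pareto fronts --
    the sorting step at the heart of NSGA-II."""
    def dominates(p, q):
        # p dominates q if it is no worse in every objective and
        # strictly better in at least one.
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # The next front is everything not dominated by a remaining point.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

NSGA-II then prefers solutions from lower-ranked fronts when selecting parents, breaking ties within a front by crowding distance.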
Abstract: WiMAX is a telecommunications technology specified by the
Institute of Electrical and Electronics Engineers (IEEE) as the IEEE
802.16 standard. The goal of this technology is to provide wireless
data transmission over long distances in a variety of ways, and IEEE
802.16 is a recent standard for mobile communication. In this paper,
we provide an overview of various key management algorithms that
provide security for WiMAX.
Abstract: The aim of this paper was to develop a novel calculator, BasWilCalc, that estimates the actual amount of biomass on basket willow plantations. The proposed method is based on the results of a field experiment conducted during 2011-2013 on a basket willow plantation in south-western Poland. The input data consisted of destructive measurements of the diameter, length and weight of willow stems, together with non-destructive biometric measurements of stem mid-length diameter and stem length performed at weekly intervals during the growing season. The analysis yielded an algorithm which, because energy plantations have a known and constant planting structure, estimates the actual amount of basket willow biomass on the plantation, with the probability and accuracy specified by the model, from the number of stems measured and the age of the plantation.
Abstract: The success of any retail business is determined by its
swift response and its ability to understand the constraints and
requirements of its customers. In this paper, a conceptual design
model of an automated, customer-friendly supermarket is proposed. In
this model, 10-sided, space-efficient, regular-polygon gravity
shelves are designed for goods storage, and effective
customer-specific algorithms are built in for quick automatic
delivery of the randomly listed goods. The algorithm is developed
with two main objectives, viz., delivery time and priority. To meet
these objectives, the randomly listed items are reorganized
according to the critical-path of the robotic arm specific to the
identified shop and its layout and the items are categorized according
to the demand, shape, size, similarity and nature of the product for an
efficient pick-up, packing and delivery process. We conjectured that
the proposed automated supermarket model reduces business operating
costs while greatly improving customer satisfaction, warranting a
win-win situation.
Abstract: Economic Dispatch (ED) is one of the most challenging
problems in power systems, since it is difficult to determine the
optimum generation schedule that meets a particular load demand at
minimum fuel cost while satisfying all constraints. The objective of
the Economic Dispatch Problem (EDP) of electric power generation is
to schedule the outputs of the committed generating units so as to
meet the required load demand at minimum operating cost while
satisfying all unit and system equality and inequality constraints.
In this paper, an efficient and practical Steady-State Genetic
Algorithm (SSGA) is proposed for solving the economic dispatch
problem. The objective is to minimize the total generation fuel cost
while keeping the power flows within security limits. To achieve
this, the present work also determines the optimal location and size
of capacitors in the transmission system: a Participation Factor
Algorithm is used to select the best capacitor locations, and the
SSGA determines their optimal sizes.
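For context, the classical baseline that GA-based dispatch methods are conceptually measured against is equal-incremental-cost dispatch. A minimal lambda-iteration sketch for quadratic cost units (illustrative only; this is not the paper's SSGA, and the unit data in the test are hypothetical):

```python
def economic_dispatch(units, demand, tol=1e-6):
    """Lambda-iteration dispatch for quadratic cost units
    C_i(P) = a_i + b_i*P + c_i*P^2. At the optimum, every
    unconstrained unit runs at equal incremental cost lambda.
    Each unit is (a, b, c, Pmin, Pmax)."""
    lo, hi = 0.0, 1000.0  # assumed bracket for lambda
    while hi - lo > tol:
        lam = (lo + hi) / 2
        total = 0.0
        for a, b, c, pmin, pmax in units:
            p = (lam - b) / (2 * c)           # solve dC/dP = b + 2cP = lambda
            total += min(max(p, pmin), pmax)  # clamp to unit limits
        if total < demand:
            lo = lam  # need a higher incremental cost
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [min(max((lam - b) / (2 * c), pmin), pmax)
            for a, b, c, pmin, pmax in units]
```

The bisection converges because total clamped output is monotonically non-decreasing in lambda.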
Abstract: The assessment of the risk posed by a borrower to a
lender is one of the common problems that financial institutions have
to deal with. Consumers vying for a mortgage are generally
compared to each other by the use of a number called the Credit
Score, which is generated by applying a mathematical algorithm to
information in the applicant’s credit report. The higher the credit
score, the lower the risk posed by the candidate, and the more
willing the lender is to take the applicant on. The objective of the present work is to
use fuzzy logic and linguistic rules to create a model that generates
Credit Scores.
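A minimal sketch of such a fuzzy scorer, using triangular membership functions and Sugeno-style weighted-average defuzzification (the inputs, rules, and score anchors below are hypothetical illustrations, not the paper's rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def credit_score(payment_history, debt_ratio):
    """Hypothetical two-input fuzzy scorer; both inputs on a 0-1 scale."""
    good_history = tri(payment_history, 0.4, 1.0, 1.6)  # degree "good"
    low_debt = tri(debt_ratio, -0.6, 0.0, 0.6)          # degree "low"
    # Rule 1: IF history is good AND debt is low THEN score is high (800).
    # Rule 2: IF history is poor OR debt is high THEN score is low (500).
    w_high = min(good_history, low_debt)
    w_low = max(1 - good_history, 1 - low_debt)
    # Weighted-average (Sugeno-style) defuzzification.
    return (w_high * 800 + w_low * 500) / (w_high + w_low)
```

Linguistic rules of this kind let domain experts encode scoring policy without a closed-form formula; intermediate inputs yield scores between the two anchors.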
Abstract: This paper presents general results on the Java source
code snippet detection problem. We propose a tool that uses graph
and subgraph isomorphism detection. A number of solutions for all of
these tasks have been proposed in the literature. However, although
these solutions are fast, they compare only constant static trees.
Our solution allows an input sample to be entered dynamically with
the Scripthon language while preserving an acceptable speed. We used
several optimizations to achieve a very low number of comparisons
during the matching algorithm.
Abstract: This paper aims at introducing finite automata theory and
the different ways to describe regular languages, and at creating a
program that implements the subset construction algorithm to convert
nondeterministic finite automata (NFA) to deterministic finite
automata (DFA). The program is written in the C++ programming
language. It reads an FA 5-tuple from a text file and classifies it
as either a DFA or an NFA. For a DFA, the program reads a string w
and decides whether it is accepted; if so, the program saves the
tracking path and reports it. When the automaton is an NFA, the
program converts it to a DFA so that it is easy to track and can
decide whether w belongs to the regular language.
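The subset construction at the core of this conversion can be sketched as follows (in Python rather than the paper's C++, and without epsilon-closure handling, for brevity):

```python
from collections import deque

def nfa_to_dfa(alphabet, delta, start, accept):
    """Subset construction: each DFA state is a frozenset of NFA
    states. delta maps (state, symbol) to a set of successors."""
    start_set = frozenset([start])
    dfa_delta, dfa_accept = {}, set()
    queue, seen = deque([start_set]), {start_set}
    while queue:
        current = queue.popleft()
        if current & accept:           # any NFA accept state inside?
            dfa_accept.add(current)
        for sym in alphabet:
            nxt = frozenset(s for q in current
                            for s in delta.get((q, sym), set()))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:        # new subset => new DFA state
                seen.add(nxt)
                queue.append(nxt)
    return dfa_delta, start_set, dfa_accept

def dfa_accepts(dfa_delta, start_set, dfa_accept, w):
    """Run the constructed DFA on string w."""
    state = start_set
    for sym in w:
        state = dfa_delta[(state, sym)]
    return state in dfa_accept
```

For example, for an NFA over {a, b} accepting strings ending in "ab", the construction reaches only the subsets {0}, {0,1} and {0,2}, giving a three-state DFA.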
Abstract: This paper aims at finding a suitable neural network
for monitoring congestion level in electrical power systems. In this
paper, the input data has been framed properly to meet the target
objective through supervised learning mechanism by defining normal
and abnormal operating conditions for the system under study. The
congestion level, expressed as line congestion index (LCI), is
evaluated for each operating condition and is presented to the NN
along with the bus voltages to represent the input and target data.
Once training is successful, the NN learns how to deal with a set of
newly presented data through validation and testing mechanisms. The
crux of the results presented in this paper rests on
performance comparison of a multi-layered feed forward neural
network with eleven types of back propagation techniques so as to
evolve the best training criteria. The proposed methodology has been
tested on the standard IEEE-14 bus test system with the support of
MATLAB based NN toolbox. The results presented in this paper
signify that the Levenberg-Marquardt backpropagation algorithm gives
the best training performance of all eleven cases considered in this
paper, thus validating the proposed methodology.
Abstract: The system is designed to show images which are
related to the query image. Extracting color, texture, and shape
features from an image plays a vital role in content-based image
retrieval (CBIR). Initially, the RGB image is converted into the HSV
color space owing to its perceptual uniformity. From the HSV image,
color features are extracted using a block color histogram, texture
features using the Haar transform, and shape features using the Fuzzy
C-means algorithm. Then, the characteristics of the global and local
color histogram, texture features obtained through the co-occurrence
matrix and the Haar wavelet transform, and shape are compared and
analyzed for CBIR. Finally, the best method for each feature is fused
during similarity measurement to improve image retrieval
effectiveness and accuracy.
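To illustrate the first of these features: a block color histogram simply partitions the image into a grid and histograms each channel per block. A pure-Python sketch operating on an already-converted HSV image (the 2x2 grid and 4-bin choices are hypothetical, not the paper's settings):

```python
def block_color_histogram(image, blocks=2, bins=4):
    """Split the image into a blocks x blocks grid and build a
    per-channel histogram for each block. `image` is a list of rows
    of (h, s, v) tuples, each channel scaled to [0, 1)."""
    h, w = len(image), len(image[0])
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            hist = [0] * (3 * bins)
            count = 0
            for y in range(by * h // blocks, (by + 1) * h // blocks):
                for x in range(bx * w // blocks, (bx + 1) * w // blocks):
                    for c in range(3):
                        b = min(int(image[y][x][c] * bins), bins - 1)
                        hist[c * bins + b] += 1
                    count += 1
            # Normalize by block size so feature vectors are comparable.
            feats.extend(v / count for v in hist)
    return feats
```

The resulting feature vector (blocks * blocks * 3 * bins values) can then be compared between query and database images with any distance measure.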
Abstract: This research proposes a novel reconstruction protocol
for restoring missing surfaces and low-quality edges and shapes in
photos of artifacts at historical sites. The protocol starts with the
extraction of a cloud of points. This extraction process is based on
four subordinate algorithms, which differ in their robustness and in
the amount of resultant data. Moreover, they offer different, but
complementary, accuracy with respect to related features and to the
way they build a quality mesh. The performance of our proposed protocol
is compared with other state-of-the-art algorithms and toolkits. The
statistical analysis shows that our algorithm significantly outperforms
its rivals in the resultant quality of its object files used to reconstruct
the desired model.
Abstract: In this study, a comparative analysis of the approaches
associated with the use of neural network algorithms for effective
solution of a complex inverse problem – the problem of identifying
and determining the individual concentrations of inorganic salts in
multicomponent aqueous solutions by the spectra of Raman
scattering of light – is performed. It is shown that the application
of artificial neural networks determines the concentration of each
salt with an average accuracy no worse than 0.025 M.
The results of comparative analysis of input data compression
methods are presented. It is demonstrated that the use of uniform
aggregation of input features decreases the error of determination of
the individual component concentrations by 16-18% on average.
Abstract: Frequent pattern mining is the process of finding a
pattern (a set of items, subsequences, substructures, etc.) that occurs
frequently in a data set. It was proposed in the context of frequent
itemsets and association rule mining. Frequent pattern mining is used
to find inherent regularities in data. What products were often
purchased together? Its applications include basket data analysis,
cross-marketing, catalog design, sale campaign analysis, Web log
(click stream) analysis, and DNA sequence analysis. However, one of
the bottlenecks of frequent itemset mining is that, as the data grow,
the time and resources required to mine them increase at an
exponential rate. In this investigation, a new algorithm is proposed
that can be used as a pre-processor for frequent itemset mining.
FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid
pre-processing algorithm that utilizes entropy and rough sets to
carry out record reduction and feature (attribute) selection,
respectively. FASTER can produce a speed-up of 3.1 times for frequent
itemset mining compared to the original algorithm while maintaining
an accuracy of 71%.
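The mining task that FASTER pre-processes can be illustrated with a naive level-wise frequent itemset miner in the spirit of Apriori (a sketch of the mining step itself, not of FASTER):

```python
def frequent_itemsets(transactions, min_support):
    """Naive level-wise (Apriori-style) miner: find all itemsets
    appearing in at least min_support transactions (sets of items)."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        # Count support for each candidate at this level.
        counts = {c: sum(1 for t in transactions if c <= t)
                  for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # Generate the next level only from frequent sets (any superset
        # of an infrequent set is itself infrequent).
        freq_list = list(frequent)
        k += 1
        candidates = list({a | b for a in freq_list for b in freq_list
                           if len(a | b) == k})
    return result
```

On basket data, the frequent pairs answer precisely the "what products were often purchased together?" question; the exponential candidate space is what motivates pre-processors such as FASTER.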
Abstract: This paper presents a new meta-heuristic bio-inspired
optimization algorithm which is called Cuttlefish Algorithm (CFA).
The algorithm mimics the mechanism of color changing behavior of
the cuttlefish to solve numerical global optimization problems. The
colors and patterns of the cuttlefish are produced by reflected light
from three different layers of cells. The proposed algorithm considers
mainly two processes: reflection and visibility. Reflection process
simulates light reflection mechanism used by these layers, while
visibility process simulates visibility of matching patterns of the
cuttlefish. To show the effectiveness of the algorithm, it is tested
against other popular bio-inspired optimization algorithms previously
proposed in the literature, such as Genetic Algorithms (GA), Particle
Swarm Optimization (PSO) and the Bees Algorithm (BA). Simulations and
the obtained results indicate that the proposed CFA is superior to
these algorithms.
Abstract: This paper presents the performance state analysis of
Self-Excited Induction Generator (SEIG) using Artificial Bee Colony
(ABC) optimization technique. The total admittance of the induction
machine is minimized to calculate the frequency and magnetizing
reactance corresponding to any rotor speed, load impedance and
excitation capacitance. The performance of the SEIG is then
calculated using the optimized parameters. The results obtained by
the ABC algorithm are compared with those of a numerical method and
are found to coincide with them. This
technique proves to be efficient in solving nonlinear constrained
optimization problems and analyzing the performance of SEIG.
Abstract: In this paper the issue of dimensionality reduction is
investigated in finger vein recognition systems using kernel Principal
Component Analysis (KPCA). One aspect of KPCA is to find the
most appropriate kernel function on finger vein recognition as there
are several kernel functions which can be used within PCA-based
algorithms. In this paper, however, another side of PCA-based
algorithms -particularly KPCA- is investigated. The aspect of
dimension of feature vector in PCA-based algorithms is of
importance especially when it comes to the real-world applications
and usage of such algorithms. It means that a fixed dimension of
feature vector has to be set to reduce the dimension of the input and
output data and extract the features from them. A classifier is then
applied to classify the data and make the final decision. We analyze
KPCA (with Polynomial, Gaussian, and Laplacian kernels) in detail in
this paper and investigate the optimal feature extraction dimension in
finger vein recognition using KPCA.
Abstract: Interest in Semantic Web technology grows day by day with
the rapid increase in the number of web pages.
Many standard formats are available to store the semantic web data.
The most popular format is the Resource Description Framework
(RDF). Querying large RDF graphs becomes a tedious procedure
with a vast increase in the amount of data. The problem of query
optimization becomes an issue in querying large RDF graphs.
Choosing the best query plan reduces the amount of query execution
time. To address this problem, nature inspired algorithms can be used
as an alternative to the traditional query optimization techniques. In
this research, the optimal query plan is generated by the proposed
SAPSO algorithm which is a hybrid of Simulated Annealing (SA)
and Particle Swarm Optimization (PSO) algorithms. The proposed SAPSO
algorithm is able to exploit promising local solutions while avoiding
entrapment in local minima. Experiments were
performed on different datasets by changing the number of predicates
and the amount of data. The proposed algorithm gives improved
results compared to existing algorithms in terms of query execution
time.
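The PSO half of such a hybrid can be sketched as follows (a minimal sketch of plain PSO minimizing a numeric objective; the SA acceptance step and the query-plan encoding used by SAPSO are omitted, and all coefficients are conventional defaults, not the paper's settings):

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its
    personal best; the swarm tracks a global best."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.4, 1.4  # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a SAPSO-style hybrid, an SA-style acceptance test can additionally allow occasional uphill moves so the swarm does not stagnate in a local minimum.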
Abstract: ‘Steganalysis’ is one of the challenging and attractive research interests arising from the development of information hiding techniques. It is the procedure of detecting hidden information in a stego object created by a known steganographic algorithm. In this paper, a novel feature-based image steganalysis technique is proposed. Various statistical moments are used along with some similarity metrics. The proposed steganalysis technique is designed based on transformation in four wavelet domains: Haar, Daubechies, Symlets and Biorthogonal. Each domain is subjected to various classifiers, namely K-nearest-neighbor, K* classifier, locally weighted learning, naive Bayes, neural networks, decision trees and support vector machines. The experiments are performed on a large set of pictures freely available in image databases. The system also makes predictions for different message length definitions.
Abstract: Recently, GPS data have been used in many studies to
automatically reconstruct travel patterns for trip surveys. The aim
is to minimize the use of questionnaire surveys and travel diaries so
as to reduce their negative effects. In this paper, data acquired
from the GPS and accelerometer embedded in smartphones are utilized
to predict the mode of transportation used by the phone carrier. For
prediction, Support Vector Machine (SVM) and Adaptive Boosting
(AdaBoost) are employed. Moreover, a unique method to improve the
prediction results from these algorithms is also proposed. Results
suggest that the prediction accuracy of AdaBoost after improvement is
better than that of the rest.
Abstract: Artificial Neural Networks (ANN) trained using the
backpropagation (BP) algorithm are commonly used for modeling
material behavior associated with non-linear, complex or unknown
interactions among the material constituents. Despite multidisciplinary
applications of back-propagation neural networks
(BPNN), the BP algorithm possesses the inherent drawback of
getting trapped in local minima and slowly converging to a global
optimum. This paper presents a hybrid artificial neural network and
genetic algorithm approach for modeling the slump of ready-mix
concrete based on its design mix constituents. The global search of
genetic algorithms (GA) is employed to evolve the initial weights and
biases for training the neural networks, which are further fine-tuned
using the BP algorithm. The study showed that the hybrid ANN-GA model
provided consistent predictions in comparison to the commonly used
BPNN model, and was able to reach the desired performance goal more
quickly. Apart from modeling the slump of ready-mix concrete, the
synaptic weights of the neural networks were harnessed to analyze the
relative importance of the concrete design mix constituents for the
slump value. The sand and water constituents of the design mix were
found to have the greatest influence on the concrete slump value.