Abstract: This article presents results obtained using a parametric approach and the Wavelet Transform to analyse signals emitted by the sperm whale. Extracting the intrinsic characteristics of these unique signals emitted by marine mammals remains a difficult exercise for various reasons: firstly, the signals are non-stationary, and secondly, they are obstructed by interfering background noise. In this article, we compare the advantages and disadvantages of two methods: Auto-Regressive models and the Wavelet Transform. These approaches serve as an alternative to the commonly used estimators based on the Fourier Transform, whose underlying hypotheses are in certain cases not sufficiently satisfied. These modern approaches provide effective results, particularly for the periodic tracking of the signal's characteristics, and notably when the signal-to-noise ratio negatively affects signal tracking. Our objectives are twofold. Our first goal is to identify the animal through its acoustic signature. This includes recognition of the marine mammal species and ultimately of the individual animal (within the species). The second is much more ambitious and directly involves cetologists studying the sounds emitted by marine mammals in an effort to characterize their behaviour. We are working on an approach based on recordings of marine mammal signals, with the findings derived from the Wavelet Transform. This article explores the reasons for using this approach. In addition, thanks to new processors, these algorithms, once heavy in calculation time, can now be integrated in a real-time system.
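To illustrate the kind of time-scale decomposition the abstract refers to, the following is a minimal one-level Haar discrete wavelet transform in pure Python. This is an illustrative sketch only, not the authors' implementation; real analyses of bioacoustic signals would use multi-level transforms and more suitable wavelet families.

```python
# One-level Haar discrete wavelet transform (illustrative sketch).
# The approximation coefficients capture the low-frequency trend of the
# signal; the detail coefficients capture the high-frequency content.
import math


def haar_dwt(signal):
    """One level of the Haar DWT; signal length must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail


def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level (perfect reconstruction)."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out
```

Because the transform is orthogonal, thresholding small detail coefficients before reconstruction is a common way to suppress the background noise mentioned above.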
Abstract: The controllable electrical loss, which consists of the copper loss and the iron loss, can be minimized by optimal control of the armature current vector. A control algorithm for the current vector that minimizes the electrical loss is proposed, and the optimal current vector can be determined according to the operating speed and the load conditions. The proposed control algorithm is applied to an experimental PM motor drive system, and this paper presents a modern approach to speed control for a permanent magnet synchronous motor (PMSM) applied to an Electric Vehicle using nonlinear control. The regulation algorithms are based on the feedback linearization technique. The direct component of the current is controlled to be zero, which ensures maximum torque operation. Near-unity power factor operation is also achieved. Moreover, among the features of EV motor electric propulsion, energy efficiency is a basic characteristic that is influenced by vehicle dynamics and system architecture. For this reason, the EV dynamics are taken into account.
Abstract: In this paper, a new robust audio fingerprinting
algorithm in MP3 compressed domain is proposed with high
robustness to time scale modification (TSM). Instead of simply
employing short-term information of the MP3 stream, the new
algorithm extracts the long-term features in MP3 compressed domain
by using the modulation frequency analysis. Our experiment has
demonstrated that the proposed method can achieve a hit rate of
above 95% in audio retrieval and resist an attack of 20% TSM. It
achieves a lower bit error rate (BER) than the other
algorithms. The proposed algorithm can also be used in other
compressed domains, such as AAC.
Abstract: In this paper, we propose a novel spatiotemporal fuzzy-based algorithm for noise filtering of image sequences. Our proposed algorithm uses adaptive weights based on a triangular membership function. In this algorithm a median filter is used to suppress noise. Experimental results show that when the images are corrupted by high-density salt-and-pepper noise, our fuzzy-based algorithm for noise filtering of image sequences is much more effective in suppressing noise and preserving edges than previously reported algorithms such as [1-7]. Indeed, the weights assigned to noisy pixels are highly adaptive, so that they make good use of the correlation between pixels. On the other hand, motion estimation methods are error-prone, and in high-density noise they may degrade filter performance. Therefore, our proposed fuzzy algorithm does not need any estimation of the motion trajectory. The proposed algorithm admissibly removes noise without any knowledge of the salt-and-pepper noise density.
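As background for the filtering step mentioned above, here is a plain 3x3 spatial median filter for salt-and-pepper noise, written in pure Python. This is only a baseline sketch; the paper's method adds fuzzy adaptive weights and motion-free temporal processing that this example does not attempt to reproduce.

```python
# Baseline 3x3 spatial median filter (sketch, not the paper's algorithm).
# Impulse ("salt-and-pepper") pixels are replaced by the median of their
# neighborhood, which preserves edges better than a mean filter.
from statistics import median


def median_filter_3x3(img):
    """img: 2D list of pixel values; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out
```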
Abstract: The main goal of this work is to propose a way of combining two nontraditional algorithms for solving topological problems on telecommunications concentrator networks. The algorithms suggested are the Simulated Annealing algorithm and the Genetic Algorithm. The Simulated Annealing algorithm unifies the well-known local search algorithms. In addition, Simulated Annealing allows the acceptance of moves in the search space which lead to solutions with higher cost, in order to attempt to escape local minima. The Genetic Algorithm is a heuristic approach which is used in a wide range of optimization work. In recent years this approach has also been widely applied in telecommunications network planning. In order to solve planning problems of varying complexity, it is important to find the most appropriate parameters for initializing the algorithm.
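The acceptance rule described above, taking uphill moves with a temperature-dependent probability, can be sketched as a generic simulated-annealing loop. The cost function, neighbor function, and parameter values below are illustrative assumptions, not the paper's network-specific formulation.

```python
# Generic simulated-annealing loop (hedged sketch). A worse candidate is
# accepted with probability exp(-delta / T), which lets the search escape
# local minima; the temperature T is reduced geometrically each step.
import math
import random


def simulated_annealing(cost, neighbor, x0,
                        t0=10.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept worse moves probabilistically.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling
    return best, best_cost
```

For example, minimizing the toy cost `(x - 3)**2` with a small random-step neighbor converges close to `x = 3`; a network planning problem would replace these with a topology cost and a topology-mutation move.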
Abstract: Nowadays, the rapid development of multimedia and the Internet allows for wide distribution of digital media data. It has become much easier to edit, modify, and duplicate digital information. Besides that, digital documents are also easy to copy and distribute, and therefore face many threats. With the large flood of information and the development of digital formats, security and privacy have become major issues, and it is necessary to find appropriate protection because of the significance, accuracy, and sensitivity of the information. Protection systems can be classified more specifically into information hiding, information encryption, and combinations of hiding and encryption that increase information security. The strength of the science of information hiding lies in the non-existence of standard algorithms for hiding secret messages. There is also randomness in hiding methods, such as combining several media (covers) with different methods to pass a secret message. In addition, there are no formal methods to be followed to discover hidden data. For these reasons, the task of this research is difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in any executable file (EXE) and to detect the hidden file; we present an implementation of a steganography system that embeds information in an executable file. (EXE) files have been investigated. The system tries to solve the cover-file size problem and to make the result undetectable by anti-virus software. The system includes two main functions: the first is hiding the information in a Portable Executable file (EXE) through the execution of four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); the second is extracting the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system has achieved its main goals, such as making the size of the cover file independent of the size of the information, while the resulting file does not cause any conflict with anti-virus software.
Abstract: This study analyzes the effect of discretization on the classification of datasets containing continuous-valued features. Six datasets from UCI that contain continuous-valued features are discretized with an entropy-based discretization method. The performance improvement between the dataset with the original features and the dataset with discretized features is compared using the k-nearest neighbors, Naive Bayes, C4.5, and CN2 data mining classification algorithms. As a result, the classification accuracies of the six datasets are improved by 1.71% to 12.31% on average.
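The core of entropy-based discretization, choosing a cut point on a continuous feature so that the class entropy of the resulting intervals is minimized, can be sketched as follows. This is a single-split illustration only; the method used in the paper recursively splits intervals and applies a stopping criterion that this sketch omits.

```python
# Single-split entropy-based discretization (illustrative sketch).
# The best cut point is the candidate boundary that minimizes the
# class-label entropy, weighted by interval size.
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())


def best_cut(values, labels):
    """Return the midpoint cut that minimizes weighted class entropy."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_e, best_c = float("inf"), None
    for i in range(1, n):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        if e < best_e:
            best_e = e
            best_c = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_c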
Abstract: In control theory, one attempts to find a controller that provides the best possible performance with respect to some given measures of performance. There are many kinds of controllers, e.g. the typical PID controller, the LQR controller, the fuzzy controller, etc. This paper introduces a polynomial controller with a novel tuning method based on a special pole-placement encoding scheme and optimization by Genetic Algorithms (GA). Examples show the performance of the proposed polynomial controller in comparison with a common PID controller.
Abstract: In this article two algorithms, one based on the variational iteration method and the other on Adomian's decomposition method, are developed to find the numerical solution of an initial value problem involving a nonlinear integro-differential equation, where R is a nonlinear operator that contains partial derivatives with respect to x. Special cases of the integro-differential equation are solved using the algorithms. The numerical solutions are compared with analytical solutions. The results show that these two methods are efficient and accurate, requiring only two or three iterations.
Abstract: Most neural network (NN) models of human category learning use a gradient-based learning method, which assumes that locally optimal changes are made to model parameters on each learning trial. This method tends to underpredict variability in individual-level cognitive processes. In addition, many recent models of human category learning have been criticized for not being able to replicate the rapid changes in categorization accuracy and attention processes observed in empirical studies. In this paper we introduce stochastic learning algorithms for NN models of human category learning and show that use of the algorithms can result in (a) rapid changes in accuracy and attention allocation, and (b) different learning trajectories and more realistic variability at the individual level.
Abstract: Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utility. AIMD (Additive Increase Multiplicative Decrease) is the best algorithm among the set of linear algorithms because it reflects good efficiency as well as good fairness. Our control model is based on the assumptions of the original AIMD algorithm, and we show that both the efficiency and the fairness of AIMD can be improved. We call our approach New AIMD. We present experimental results with TCP that match the expectations of our theoretical analysis.
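The classic AIMD rule the abstract builds on can be stated in a few lines: add a constant to the congestion window on each loss-free round trip, and multiply it by a factor below one on loss. The sketch below shows the baseline rule only, not the proposed "New AIMD" variant; the parameter values (a = 1, b = 0.5) are the textbook TCP defaults.

```python
# Baseline AIMD congestion-window update (sketch of the standard rule,
# not the paper's "New AIMD"): additive increase on success,
# multiplicative decrease on loss.
def aimd_step(cwnd, loss, a=1.0, b=0.5):
    """One round-trip update of the congestion window."""
    return cwnd * b if loss else cwnd + a


def simulate(losses, cwnd=1.0):
    """Trace the window over a sequence of per-RTT loss indicators."""
    trace = [cwnd]
    for loss in losses:
        cwnd = aimd_step(cwnd, loss)
        trace.append(cwnd)
    return trace
```

For example, two loss-free round trips followed by one loss produce the sawtooth trace 1.0 → 2.0 → 3.0 → 1.5, the pattern whose efficiency and fairness properties the abstract refers to.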
Abstract: Jayanti's algorithm is one of the best known abortable mutual exclusion algorithms. This work is an attempt to overcome an already known limitation of the algorithm while preserving all its important properties and elegance. The limitation is that the token number used to assign a process identification number to each new incoming process is unbounded. We have used a suitably adapted alternative data structure in order to completely eliminate the use of the token number in the algorithm.
Abstract: One of the major problems in the genomic field is performing sequence comparison on DNA and protein sequences. Executing sequence comparison on DNA and protein data is a computationally intensive task. Sequence comparison is the basic step for all algorithms in protein sequence similarity. Parallel computing is an attractive solution that provides the computational power needed to speed up the lengthy process of sequence comparison. Our main research is to enhance the protein sequence algorithm using the dynamic programming method. In our approach, we parallelize the dynamic programming algorithm using a multithreaded program to perform the sequence comparison, and we also developed a protein database distributed among many PCs using Remote Method Invocation (RMI). As a result, we show how different sizes of protein sequence data and the computation of the scoring matrix of these protein sequences on different numbers of processors affect the processing time and speed, as opposed to sequential processing.
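The scoring-matrix computation the abstract parallelizes is, in its sequential form, the standard global-alignment dynamic program (Needleman-Wunsch style). The sketch below shows that baseline; the scoring values and the simple linear gap penalty are illustrative assumptions, and the paper's multithreaded and RMI-distributed machinery is not reproduced here.

```python
# Sequential dynamic-programming scoring matrix for global sequence
# alignment (Needleman-Wunsch style sketch). Each cell depends on its
# left, upper, and upper-left neighbors, which is what makes the
# computation amenable to wavefront parallelization.
def score_matrix(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    h = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        h[i][0] = i * gap          # aligning a prefix of a against gaps
    for j in range(1, m + 1):
        h[0][j] = j * gap          # aligning a prefix of b against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
    return h
```

The alignment score of the two full sequences is the bottom-right cell of the matrix; identical sequences of length n score n under these parameters.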
Abstract: Several optimization algorithms specifically applied to the problem of Operation Planning of Hydrothermal Power Systems have been developed and are in use. Although providing solutions to various problems encountered, these algorithms have some weaknesses: difficulties in convergence, simplification of the original formulation of the problem, or issues owing to the complexity of the objective function. Thus, this paper presents the development of a computational tool for solving the identified optimization problem while providing the user with easy handling. Genetic Algorithms were adopted as the intelligent optimization technique, and Java as the programming language. First, the chromosomes were modeled; then the evaluation function of the problem and the operators involved were implemented; finally, the graphical interfaces for user access were drafted. The program achieved coherent performance in problem resolution without requiring simplification of the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
Abstract: With the widespread growth of applications of Wireless Sensor Networks (WSNs), the need for reliable security mechanisms in these networks has increased manifold. Many security solutions have been proposed in the domain of WSNs so far. These solutions are usually based on well-known cryptographic algorithms.
In this paper, we survey well-known security issues in WSNs and study the behavior of WSN nodes that perform public-key cryptographic operations. We evaluate the time and power consumption of public-key cryptographic algorithms for signature and key management by simulation.
Abstract: In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factor for the metric. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
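The Universal Image Quality Index underlying the metric has a closed form, combining correlation, luminance, and contrast terms. Below is a direct pure-Python computation of the index for two equally sized pixel blocks, as a sketch of the building block the fusion metric weights locally; the blockwise weighting scheme itself is not reproduced here.

```python
# Universal Image Quality Index (Wang & Bovik) for two flattened pixel
# blocks of equal length. Q = 4*cov*mx*my / ((vx+vy)*(mx^2+my^2)),
# with Q = 1 only when the blocks are identical.
def uiqi(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / (n - 1)
    vy = sum((v - my) ** 2 for v in y) / (n - 1)
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y)) / (n - 1)
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Identical blocks give an index of exactly 1, and any distortion, in luminance, contrast, or structure, pulls the value below 1.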
Abstract: Ant Colony Algorithms have been applied to difficult combinatorial optimization problems such as the travelling salesman problem and the quadratic assignment problem. In this paper, grid-based and random-based ant colony algorithms are proposed for automatic 3D hose routing, and their pros and cons are discussed. The algorithm uses the tessellated format for the obstacles and the generated hoses in order to detect collisions. Representing obstacles and hoses in the tessellated format greatly helps the algorithm handle free-form objects and speeds up computation. The performance of the algorithm has been tested on a number of 3D models.
Abstract: Designing, implementing, and debugging concurrency
control algorithms in a real system is a complex, tedious, and error-prone
process. Further, understanding concurrency control
algorithms and distributed computations is itself a difficult task.
Visualization can help with both of these problems. Thus, we have
developed an exploratory environment in which people can prototype
and test various versions of concurrency control algorithms, study
and debug distributed computations, and view performance statistics
of distributed systems. In this paper, we describe the exploratory
environment and show how it can be used to explore concurrency
control algorithms for the interactive steering of distributed
computations.
Abstract: A recent neurospiking coding scheme for feature extraction from biosonar echoes of various plants is examined with a variety of stochastic classifiers. The derived feature vectors are employed in well-known stochastic classifiers, including nearest-neighborhood, single Gaussian, and a Gaussian mixture with EM optimization. Classifier performance is evaluated using cross-validation and bootstrapping techniques. It is shown that the various classifiers perform equivalently and that the modified preprocessing configuration yields considerably improved results.
Abstract: Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential. They can help people understand the patterns in a certain chunk of information, so it is clear that data mining tools have a wide area of application. For example, in theoretical chemistry, data mining tools can be used to predict molecule properties or to improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of this contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression, and random forest models were built using the R software. A Bayesian logistic regression model in the Latent GOLD software was created as well. These classification methods belong to the supervised learning methods.
It was necessary to reduce the dimension of the data matrix before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules, potential new drug candidates.