Abstract: Market-based models are frequently used for resource
allocation on the computational grid. However, as the size of
the grid grows, it becomes difficult for the customer to negotiate
directly with all the providers. Middle agents are introduced to
mediate between providers and customers and to facilitate the
resource allocation process. The most frequently deployed middle
agents are matchmakers and brokers. A matchmaking agent
finds candidate providers who can satisfy the requirements
of the consumer, after which the customer negotiates directly with
the candidates. Broker agents mediate the negotiation with
the providers in real time.
In this paper we present a new type of middle agent, the marketmaker.
Its operation is based on two parallel processes: through
the investment process the marketmaker acquires resources and
resource reservations in large quantities, while through the resale process
it sells them to the customers. The marketmaker exploits the fact
that, through its global view of the grid, it can
perform a more efficient resource allocation than is possible in
one-to-one negotiations between customers and providers.
We present the operation and the algorithms governing the behavior
of the marketmaker agent, contrasting it with the matchmaker and
broker agents. Through a series of simulations in the task-oriented
domain we compare the operation of the three agent types. We find
that the use of the marketmaker agent leads to better performance in the
allocation of large tasks and a significant reduction of the messaging
overhead.
Abstract: This paper describes new computer vision algorithms
that have been developed to track moving objects as part of a
long-term study into the design of (semi-)autonomous vehicles. We
present the results of a study to exploit variable kernels for tracking in
video sequences. The basis of our work is the mean shift
object-tracking algorithm; for a moving target, it is usual to define a
rectangular target window in an initial frame, and then process the data
within that window to separate the tracked object from the background
by the mean shift segmentation algorithm. Rather than using the
standard Epanechnikov kernel, we have used a kernel weighted by the
Chamfer distance transform to improve the accuracy of target
representation and localization, minimising the distance between the
two distributions in RGB color space using the Bhattacharyya
coefficient. Experimental results show the improved tracking
capability and versatility of the algorithm in comparison with results
using the standard kernel. These algorithms are incorporated as part of
a robot test-bed architecture which has been used to demonstrate their
effectiveness.
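The abstract above matches target and candidate color distributions via the Bhattacharyya coefficient. A minimal sketch of that similarity measure (the histograms here are illustrative stand-ins for the RGB color models built from the target window):

```python
# Sketch: Bhattacharyya coefficient between two normalized color
# histograms, as used to compare the target model and a candidate
# region in mean shift tracking. Bin counts and values are illustrative.
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two discrete distributions.
    Both p and q must be non-negative and sum to 1; the coefficient
    lies in [0, 1], with 1 meaning identical distributions."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

p = [0.2, 0.3, 0.5]   # target color model
q = [0.2, 0.3, 0.5]   # candidate identical to the target
r = [0.5, 0.3, 0.2]   # candidate with a different color distribution
print(bhattacharyya(p, q))      # identical histograms -> 1.0
print(bhattacharyya(p, r) < 1.0)
```

Maximizing this coefficient over candidate window positions is equivalent to minimizing the distance between the two distributions, which is what drives the mean shift iterations.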
Abstract: Support vector machines (SVMs) have shown
superior performance compared to other machine learning techniques,
especially in classification problems. Yet one limitation of SVMs is
the lack of an explanation capability which is crucial in some
applications, e.g. in the medical and security domains. In this paper, a
novel approach for eclectic rule-extraction from support vector
machines is presented. This approach utilizes the knowledge acquired
by the SVM and represented in its support vectors as well as the
parameters associated with them. The approach includes three stages:
training, propositional rule-extraction and rule quality evaluation.
Results from four different experiments have demonstrated the value
of the approach for extracting comprehensible rules of high accuracy
and fidelity.
Abstract: Video Mosaicing is the stitching of selected frames of
a video by estimating the camera motion between the frames and
thereby registering successive frames of the video to arrive at the
mosaic. Different techniques have been proposed in the literature for
video mosaicing. Despite the large number of papers dealing with
techniques to generate mosaics, only a few authors have investigated
the conditions under which these techniques produce good estimates
of the motion parameters. In this paper, these techniques are
evaluated on different videos, and the reasons for their failures are
identified. We propose algorithms that incorporate outlier removal
for better estimation of the motion parameters.
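The abstract does not name a specific outlier removal scheme, but RANSAC is the standard choice for robust motion estimation between frames. A hedged sketch, assuming a pure-translation camera motion model and synthetic point matches:

```python
# Illustrative RANSAC sketch for inter-frame motion estimation.
# Assumptions: motion is a 2-D translation, and `matches` pairs a point
# in frame t with its detected correspondence in frame t+1.
import random

def estimate_translation_ransac(matches, iters=200, tol=2.0, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) point pairs. A translation
    needs one pair as the minimal sample; inliers are pairs whose
    displacement agrees with the hypothesis within tol pixels."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) <= tol
                   and abs((m[1][1] - m[0][1]) - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the inlier set: average displacement.
    dx = sum(b[0] - a[0] for a, b in best_inliers) / len(best_inliers)
    dy = sum(b[1] - a[1] for a, b in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers

# 8 true matches shifted by (5, 3) plus two gross outliers (mismatches).
good = [((x, y), (x + 5, y + 3))
        for x, y in [(0, 0), (1, 2), (3, 1), (4, 4), (2, 5), (6, 0), (7, 3), (5, 6)]]
bad = [((0, 0), (40, -20)), ((1, 1), (-30, 25))]
(dx, dy), inl = estimate_translation_ransac(good + bad)
print(dx, dy, len(inl))  # recovers (5.0, 3.0) from 8 inliers
```

Without the outlier rejection step, the two mismatches would bias a least-squares motion estimate, which is exactly the failure mode the abstract describes.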
Abstract: Feature and model selection are the focus of much
research because of their impact on classifier
performance. The two selections are usually performed separately, but
recent developments suggest using a combined GA-SVM approach to
perform them simultaneously. This approach improves the
performance of the classifier by identifying the best subset of
variables and the optimal parameter values. Although GA-SVM is an
effective method, it is computationally expensive, so a cheaper
approximate method is worth considering. This paper investigates a
combined approach of Genetic Algorithms and kernel matrix criteria
to perform feature and model selection simultaneously for the SVM
classification problem. The purpose of this research is to improve the
classification performance of SVM through an efficient approach, the
Kernel Matrix Genetic Algorithm (KMGA) method.
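The abstract does not specify which kernel matrix criterion is used; a common cheap surrogate for SVM accuracy is kernel-target alignment, which scores a candidate kernel (and hence a candidate feature subset and parameter setting) without training an SVM. A sketch under that assumption:

```python
# Sketch of a kernel matrix criterion: kernel-target alignment.
# The RBF kernel, gamma values, and toy data are illustrative; the
# paper's exact criterion may differ.
import math

def rbf_kernel_matrix(X, gamma):
    """Gram matrix of the RBF kernel over a list of feature vectors."""
    def k(a, b):
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return [[k(a, b) for b in X] for a in X]

def kernel_target_alignment(K, y):
    """Alignment between the kernel matrix K and the ideal kernel
    y * y^T (labels in {-1, +1}). Higher alignment suggests the kernel
    separates the classes well, so it can serve as a GA fitness value
    that is far cheaper than a full SVM training run."""
    n = len(y)
    num = sum(K[i][j] * y[i] * y[j] for i in range(n) for j in range(n))
    frob = math.sqrt(sum(K[i][j] ** 2 for i in range(n) for j in range(n)))
    return num / (frob * n)

X = [[0.0], [0.1], [5.0], [5.1]]
y = [1, 1, -1, -1]
good = kernel_target_alignment(rbf_kernel_matrix(X, gamma=1.0), y)
poor = kernel_target_alignment(rbf_kernel_matrix(X, gamma=1e-6), y)
print(good > poor)  # a well-tuned gamma scores higher
```

In a KMGA-style loop, each chromosome would encode a feature mask plus kernel parameters, and this alignment score would replace cross-validated SVM accuracy as the fitness function.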
Abstract: This article presents results obtained using a parametric approach and the Wavelet Transform to analyse signals emitted by the sperm whale. The extraction of the intrinsic characteristics of these unique signals emitted by marine mammals remains a difficult exercise for various reasons: firstly, the signals are non-stationary, and secondly, they are obstructed by interfering background noise. In this article, we compare the advantages and disadvantages of two methods: Auto-Regressive models and the Wavelet Transform. These approaches serve as an alternative to the commonly used estimators based on the Fourier Transform, whose underlying hypotheses are in certain cases not sufficiently satisfied. The modern approaches provide effective results, particularly for periodic tracking of the signal's characteristics, and notably when the signal-to-noise ratio negatively affects signal tracking. Our objectives are twofold. The first goal is to identify the animal through its acoustic signature, including recognition of the marine mammal species and ultimately of the individual animal within the species. The second is much more ambitious and directly involves cetologists in studying the sounds emitted by marine mammals in an effort to characterize their behaviour. We are working on an approach based on recordings of marine mammal signals analysed with the Wavelet Transform, and this article explores the reasons for this choice. In addition, thanks to new processors, these algorithms, once heavy in calculation time, can now be integrated in a real-time system.
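The abstract does not say which wavelet family is used; the Haar wavelet gives the simplest possible illustration of why a wavelet transform suits non-stationary signals such as whale clicks, localizing a transient in time in a way a global Fourier estimate cannot:

```python
# Minimal sketch: one level of the Haar discrete wavelet transform.
# The wavelet choice and signal are illustrative assumptions, not the
# paper's actual analysis pipeline.
import math

def haar_dwt(signal):
    """One level of the Haar DWT. Returns (approximation, detail)
    coefficients; len(signal) must be even. The transform is
    orthonormal, so signal energy is preserved across the two bands."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# A transient "click" in a smooth background: the click shows up as a
# large detail coefficient at its time position, while the background
# stays in the approximation band.
x = [1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0, 1.0]
approx, detail = haar_dwt(x)
print(detail)
```

Iterating the same split on the approximation band yields the usual multiresolution decomposition used to track signal characteristics over time.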
Abstract: The controllable electrical loss which consists of the
copper loss and iron loss can be minimized by the optimal control of
the armature current vector. The control algorithm of current vector
minimizing the electrical loss is proposed and the optimal current
vector can be decided according to the operating speed and the load
conditions. The proposed control algorithm is applied to an
experimental PM motor drive system. This paper thus presents a
modern approach to speed control of a permanent magnet
synchronous motor (PMSM) applied to an Electric Vehicle using
nonlinear control. The regulation algorithms are based on the
feedback linearization technique. The direct component of the current
is controlled to be zero, which ensures maximum torque operation,
and near-unity power factor operation is also achieved. Moreover,
among the features of EV electric propulsion, energy efficiency
is a basic characteristic that is influenced by vehicle dynamics and
system architecture. For this reason, the EV dynamics are taken into
account.
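The loss-minimizing current vector described at the start of the abstract can be illustrated with a simple numeric search. All motor parameters, the torque equation, and the iron-loss model below are assumptions for demonstration, not the paper's experimental values:

```python
# Sketch: pick the armature current vector (i_d, i_q) that minimizes the
# controllable electrical loss (copper + iron) for a given torque demand
# and speed. Parameters and the iron-loss model are illustrative.
def optimal_current_vector(torque, speed, p=4, psi=0.1, Ld=0.003, Lq=0.005,
                           Rs=0.05, k_iron=1e-3):
    """Copper loss: Rs * (i_d^2 + i_q^2). Iron loss: modeled here as
    proportional to speed^2 times the squared stator flux magnitude.
    For each candidate i_d (scanned from -20 A to 0 A), i_q is fixed by
    the torque demand T = 1.5 * p * (psi + (Ld - Lq) * i_d) * i_q, and
    the candidate with the least total electrical loss is kept."""
    best = None
    for n in range(-200, 1):
        i_d = n * 0.1
        denom = psi + (Ld - Lq) * i_d
        if abs(denom) < 1e-9:
            continue
        i_q = torque / (1.5 * p * denom)
        flux2 = (psi + Ld * i_d) ** 2 + (Lq * i_q) ** 2
        loss = Rs * (i_d ** 2 + i_q ** 2) + k_iron * speed ** 2 * flux2
        if best is None or loss < best[0]:
            best = (loss, i_d, i_q)
    return best

loss, i_d, i_q = optimal_current_vector(torque=5.0, speed=1000.0)
print(round(loss, 2), round(i_d, 1), round(i_q, 2))
```

Consistent with the abstract, the optimal i_d depends on the operating speed and load: at high speed the iron-loss term pushes i_d more negative (flux weakening), while at low speed the copper-loss term dominates.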
Abstract: In this paper, a new robust audio fingerprinting
algorithm in MP3 compressed domain is proposed with high
robustness to time scale modification (TSM). Instead of simply
employing short-term information of the MP3 stream, the new
algorithm extracts the long-term features in MP3 compressed domain
by using the modulation frequency analysis. Our experiment has
demonstrated that the proposed method can achieve a hit rate of
above 95% in audio retrieval and resist an attack of 20% TSM. It also
achieves a lower bit error rate (BER) than the other
algorithms. The proposed algorithm can also be used in other
compressed domains, such as AAC.
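The BER figure of merit used above is simply the fraction of fingerprint bits that flip between the reference and the attacked query. A minimal sketch (the 8-bit fingerprints are toy values, not the paper's modulation-frequency features):

```python
# Sketch: bit error rate between two binary audio fingerprints.
def bit_error_rate(fp_a, fp_b):
    """Fraction of differing bits between two equal-length binary
    fingerprints; a lower BER between the query and the reference
    indicates a fingerprint more robust to attacks such as TSM."""
    assert len(fp_a) == len(fp_b)
    return sum(a != b for a, b in zip(fp_a, fp_b)) / len(fp_a)

ref   = [1, 0, 1, 1, 0, 0, 1, 0]
query = [1, 0, 1, 0, 0, 0, 1, 1]   # two bits flipped by an attack
print(bit_error_rate(ref, query))  # -> 0.25
```

In retrieval, the query fingerprint is matched to the database entry with the smallest BER, and a hit is declared when that BER falls below a threshold.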
Abstract: In this paper, we propose a novel spatiotemporal
fuzzy-based algorithm for noise filtering of image sequences. The
proposed algorithm uses adaptive weights based on triangular
membership functions, with a median filter used to suppress noise.
Experimental results show that when images are corrupted by
high-density Salt and Pepper noise, our fuzzy-based filtering
algorithm is much more effective in suppressing noise and preserving
edges than previously reported algorithms such as [1-7]. Indeed, the
weights assigned to noisy pixels are highly adaptive, so they make
good use of the correlation between pixels. On the other hand, motion
estimation methods are error-prone, and in high-density noise they
may degrade filter performance; therefore, our proposed fuzzy
algorithm does not need any estimation of the motion trajectory. The
proposed algorithm removes noise admissibly without any knowledge
of the Salt and Pepper noise density.
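How triangular membership weights and a median filter can combine on a single pixel window may be sketched as follows. This is an illustrative reconstruction under stated assumptions (window size, membership width, and the weighted-average combination are not specified in the abstract):

```python
# Sketch: fuzzy weighting of a 3x3 pixel window using a triangular
# membership function centered on the window median. Width and the
# weighted-average combination are illustrative assumptions.
def triangular_weight(diff, width):
    """Triangular membership: weight 1 at diff = 0, falling linearly
    to 0 at |diff| = width."""
    return max(0.0, 1.0 - abs(diff) / width)

def fuzzy_filter_window(window, width=80.0):
    """Weighted average of the window; pixels far from the window
    median (likely salt-and-pepper impulses) receive weight near zero,
    so they are excluded without knowing the noise density."""
    med = sorted(window)[len(window) // 2]
    weights = [triangular_weight(v - med, width) for v in window]
    total = sum(weights)
    if total == 0:
        return med   # degenerate case: fall back to the plain median
    return sum(w * v for w, v in zip(weights, window)) / total

# 3x3 window with two salt (255) and one pepper (0) impulses around a
# true intensity near 101.
window = [100, 255, 102, 0, 101, 103, 255, 99, 104]
print(round(fuzzy_filter_window(window), 1))
```

Because the impulses get zero weight, the output stays close to the uncorrupted neighborhood intensity, which is the edge-preserving behavior the abstract claims.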
Abstract: The main goal of this work is to propose a way of
using two nontraditional algorithms in combination for solving
topological problems on telecommunications concentrator networks.
The algorithms suggested are the Simulated Annealing algorithm and
the Genetic Algorithm. The Simulated Annealing algorithm
generalizes the well-known local search algorithms. In addition,
Simulated Annealing allows the acceptance of moves in the search
space which lead to higher-cost solutions, in an attempt to overcome
local minima. The Genetic Algorithm is a heuristic approach
which is used in a wide range of optimization work; in recent years
this approach has also been widely applied to Telecommunications
Network Planning. In order to solve more or less complex planning
problems it is important to find the most appropriate parameters for
initializing the algorithm.
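The acceptance rule that lets Simulated Annealing escape local minima can be sketched generically. The cost function, neighborhood move, and cooling parameters below are toy stand-ins, not the paper's concentrator-network formulation:

```python
# Sketch: generic Simulated Annealing loop with the Metropolis
# acceptance rule. Problem, schedule, and parameters are illustrative.
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=5.0, alpha=0.999,
                        iters=10000, seed=1):
    """Always accept improving moves; accept a worsening move with
    probability exp(-delta / T), so the search can climb out of local
    minima. T is cooled geometrically by the factor alpha."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(iters):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= alpha
    return best

def double_well(x):
    # Toy cost: local minimum near x = +1.4, global minimum near x = -1.4.
    return x ** 4 - 4 * x ** 2 + x

best = simulated_annealing(double_well,
                           lambda x, rng: x + rng.uniform(-1, 1),
                           x0=1.4)
print(double_well(best) < double_well(1.4))
```

Started inside the shallower well, the uphill-accepting phase at high temperature lets the search cross the barrier, after which cooling locks it into the deeper well; a pure local search from the same start would stay trapped.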
Abstract: Nowadays, the rapid development of multimedia
and the internet allows wide distribution of digital media data.
It has become much easier to edit, modify and duplicate digital
information. Besides that, digital documents are easy to
copy and distribute, and therefore face many
threats. This is a serious security and privacy issue: with the large
flood of information and the development of digital
formats, it has become necessary to find appropriate protection,
given the significance, accuracy and sensitivity of the
information. Protection systems can be classified more
specifically into information hiding, information encryption, and
combinations of hiding and encryption that increase information
security. The strength of the science of information hiding lies in the
non-existence of standard algorithms for hiding secret
messages, and in the randomness of hiding methods, such as
combining several media (covers) with different methods to pass a
secret message. In addition, there are no formal methods to be
followed to discover hidden data. For these reasons, the task of this
research is difficult. In this paper, a new system of information
hiding is presented. The proposed system aims to hide information
(a data file) in an executable file (EXE) and to detect the hidden
file; we describe the implementation of a steganography system
which embeds information in an execution file. (EXE) files have
been investigated, and the system tries to solve the problem of the
size of the cover file while remaining undetectable by anti-virus
software. The system includes two main functions: the first is hiding
information in a Portable Executable file (EXE), through the
execution of four processes (specifying the cover file, specifying the
information file, encrypting the information, and hiding the
information); the second is extracting the hidden
information through three processes (specifying the stego file,
extracting the information, and decrypting the information). The
system has achieved its main goals: the size of the
cover file and the size of the information are independent, and the
resulting file does not cause any conflict with anti-virus software.
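The hide/extract pipeline described above can be sketched in a simplified form. This is only an illustration of the general append-a-payload idea on in-memory bytes: the marker, the XOR "encryption", and all names are assumptions, and the paper's actual embedding inside the Portable Executable structure is necessarily more involved:

```python
# Illustrative sketch of a hide/extract pair. Marker, key, and the XOR
# obfuscation are hypothetical; they stand in for the paper's
# encryption and PE-embedding steps.
def hide(cover: bytes, secret: bytes, key: int = 0x5A) -> bytes:
    """Obfuscate the secret and append it after the cover file's bytes,
    behind a marker and a length field. Because the payload is appended,
    cover size and secret size are independent."""
    enc = bytes(b ^ key for b in secret)
    return cover + b"STEGO" + len(enc).to_bytes(4, "big") + enc

def extract(stego: bytes, key: int = 0x5A) -> bytes:
    """Locate the marker, read the length field, and de-obfuscate."""
    pos = stego.rindex(b"STEGO")
    n = int.from_bytes(stego[pos + 5:pos + 9], "big")
    enc = stego[pos + 9:pos + 9 + n]
    return bytes(b ^ key for b in enc)

cover = b"MZ...fake-exe-bytes..."   # placeholder for a real PE image
msg = b"secret data file"
stego = hide(cover, msg)
print(stego.startswith(cover), extract(stego) == msg)  # -> True True
```

Note how the cover bytes are left untouched, which is consistent with the abstract's claim that the stego file still runs and that cover size and payload size are independent.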
Abstract: This study analyzes the effect of discretization on the
classification of datasets containing continuous-valued features. Six
datasets from the UCI repository with continuous-valued features are
discretized using an entropy-based discretization method. The
classification performance on the original features and on the
discretized features is compared using the k-nearest neighbors,
Naive Bayes, C4.5 and CN2 data mining classification algorithms. As
a result, the classification accuracies on the six datasets improve by
1.71% to 12.31% on average.
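One step of entropy-based discretization picks the cut point on a continuous feature that minimizes the class entropy of the induced intervals; recursing on each interval with a stopping rule (e.g. MDL, as in Fayyad-Irani style methods) yields the full discretization. A sketch on toy Iris-like values (the data is illustrative):

```python
# Sketch: one binary split of entropy-based discretization.
import math

def entropy(labels):
    """Shannon entropy of a class-label list."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def best_entropy_cut(values, labels):
    """Choose the cut point on a continuous feature minimizing the
    weighted class entropy of the two induced intervals. Candidate cuts
    are midpoints between consecutive distinct sorted values."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = None
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        if best is None or e < best[0]:
            best = (e, cut)
    return best

values = [4.9, 5.0, 5.1, 6.6, 6.7, 6.9]          # toy sepal lengths
labels = ['setosa'] * 3 + ['virginica'] * 3
e, cut = best_entropy_cut(values, labels)
print(round(cut, 2), e)  # the cut near 5.85 separates the classes perfectly
```

Classifiers such as Naive Bayes and k-NN then operate on the resulting interval labels instead of the raw continuous values, which is the comparison the study performs.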
Abstract: In control theory one attempts to find a controller
that provides the best possible performance with respect to some
given measures of performance. There are many kinds of controllers,
e.g. the typical PID controller, the LQR controller, the fuzzy
controller, etc. This paper introduces a polynomial controller with a
novel tuning method based on a special pole-placement encoding
scheme and optimization by Genetic Algorithms (GA). Examples
show the performance of the newly designed polynomial controller
in comparison with a common PID controller.
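In a pole-placement encoding, a GA chromosome can hold candidate closed-loop pole locations, and decoding it means expanding those poles into the coefficients of a characteristic polynomial. A minimal sketch of that decoding step (the encoding details are an assumption; the abstract does not specify them):

```python
# Sketch: decode a list of desired poles into monic characteristic-
# polynomial coefficients by repeated multiplication by (s - p).
def poly_from_poles(poles):
    """Expand prod_i (s - p_i); coefficients are returned highest power
    first. In a pole-placement GA encoding, this is how a chromosome of
    pole locations becomes polynomial coefficients for the controller
    design equations."""
    coeffs = [1.0]
    for p in poles:
        n = len(coeffs)
        coeffs = [(coeffs[i] if i < n else 0.0)
                  - p * (coeffs[i - 1] if 0 <= i - 1 < n else 0.0)
                  for i in range(n + 1)]
    return coeffs

# Two stable real poles at -1 and -2: (s + 1)(s + 2) = s^2 + 3s + 2.
print(poly_from_poles([-1.0, -2.0]))  # -> [1.0, 3.0, 2.0]
```

Encoding poles rather than raw coefficients has the advantage that stability can be enforced directly on each gene (e.g. by constraining real parts to be negative) while the GA searches for poles optimizing the performance measure.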
Abstract: In this article two algorithms, one based on the variational iteration method and the other on Adomian's decomposition method, are developed to find the numerical solution of an initial value problem involving a nonlinear integro-differential equation, where R is a nonlinear operator that contains partial derivatives with respect to x. Special cases of the integro-differential equation are solved using the algorithms. The numerical solutions are compared with analytical solutions. The results show that these two methods are efficient and accurate, requiring only two or three iterations.
Abstract: Most neural network (NN) models of human category learning use a gradient-based learning method, which assumes that locally optimal changes are made to model parameters on each learning trial. This method tends to underpredict variability in individual-level cognitive processes. In addition, many recent models of human category learning have been criticized for not being able to replicate the rapid changes in categorization accuracy and attention processes observed in empirical studies. In this paper we introduce stochastic learning algorithms for NN models of human category learning and show that use of these algorithms can result in (a) rapid changes in accuracy and attention allocation, and (b) different learning trajectories and more realistic variability at the individual level.
Abstract: Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utility. AIMD (Additive Increase Multiplicative Decrease) is the best algorithm among the set of linear algorithms because it combines good efficiency with good fairness. Our control model is based on the assumptions of the original AIMD algorithm; we show that both the efficiency and the fairness of AIMD can be improved, and we call our approach New AIMD. We present experimental results with TCP that match the expectations of our theoretical analysis.
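The original AIMD rule that the abstract builds on can be sketched in a few lines; the toy link model below (fixed capacity, synchronized loss) is an illustration of why AIMD converges to fairness, not the paper's experimental setup:

```python
# Sketch: classic AIMD window update and a two-flow fairness toy model.
def aimd_step(cwnd, loss, alpha=1.0, beta=0.5):
    """One AIMD update: additive increase of alpha per round without
    loss, multiplicative decrease by factor beta on loss."""
    return cwnd * beta if loss else cwnd + alpha

def simulate(w1, w2, capacity=20.0, rounds=200):
    """Two flows sharing a link: both see loss whenever their combined
    window exceeds capacity. Additive increase preserves the gap between
    the flows, while each multiplicative decrease halves it, so the
    windows converge toward a fair share."""
    for _ in range(rounds):
        congested = w1 + w2 > capacity
        w1 = aimd_step(w1, congested)
        w2 = aimd_step(w2, congested)
    return w1, w2

w1, w2 = simulate(1.0, 15.0)
print(abs(w1 - w2) < 1.0)  # the initially unequal flows end up near parity
```

Efficiency and fairness trade off through alpha and beta, which is precisely the pair of properties the New AIMD modification aims to improve.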
Abstract: Jayanti's algorithm is one of the best known abortable mutual exclusion algorithms. This work is an attempt to overcome an already known limitation of the algorithm while preserving all its important properties and elegance. The limitation is that the token number used to assign process identification numbers to new incoming processes is unbounded. We have used a suitably adapted alternative data structure in order to completely eliminate the use of the token number in the algorithm.
Abstract: One of the major problems in the genomic field is performing sequence comparison on DNA and protein sequences. Executing sequence comparison on DNA and protein data is a computationally intensive task, and sequence comparison is the basic step of all algorithms for protein sequence similarity. Parallel computing is an attractive way to provide the computational power needed to speed up the lengthy sequence comparison process. Our main research goal is to enhance a protein sequence comparison algorithm based on dynamic programming. In our approach, we parallelize the dynamic programming algorithm using a multithreaded program to perform the sequence comparison, and we also distribute a protein database among many PCs using Remote Method Invocation (RMI). As a result, we show how different sizes of protein sequence data and the computation of the scoring matrix for these sequences on different numbers of processors affect the processing time and speed, as opposed to sequential processing.
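The scoring-matrix computation referred to above is the core of dynamic-programming sequence comparison. A sequential sketch in the Needleman-Wunsch style (the scoring parameters are illustrative; the paper does not state its scheme), whose rows or anti-diagonals are what a multithreaded version would distribute across processors:

```python
# Sketch: Needleman-Wunsch global alignment scoring matrix. Match,
# mismatch, and gap penalties are illustrative assumptions.
def alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Fill the (len(a)+1) x (len(b)+1) dynamic-programming scoring
    matrix H and return the global alignment score H[n][m]. Each cell
    depends only on its left, upper, and upper-left neighbors, which is
    why anti-diagonals of H can be computed in parallel."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        H[i][0] = i * gap          # aligning a prefix of a against gaps
    for j in range(1, m + 1):
        H[0][j] = j * gap          # aligning a prefix of b against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
    return H[n][m]

print(alignment_score("GATTACA", "GATTACA"))  # identical sequences -> 7
print(alignment_score("GATTACA", "GCATGCU"))
```

The data dependency noted in the comment is the key design constraint for the parallel version: cells on the same anti-diagonal are independent, so each thread can own a band of the matrix and synchronize once per diagonal.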
Abstract: Several optimization algorithms specifically applied to
the problem of Operation Planning of Hydrothermal Power Systems
have been developed and are in use. Although they provide solutions
to many of the problems encountered, these algorithms have some
weaknesses: difficulties in convergence, simplification of the original
formulation of the problem, or limitations owing to the complexity of
the objective function. Thus, this paper presents the development of a
computational tool for solving the identified optimization problem
while providing the user with easy handling. Genetic Algorithms
were adopted as the intelligent optimization technique and Java as
the programming language. First the chromosomes were modeled,
then the evaluation function of the problem and the operators
involved were implemented, and finally the graphical interfaces for
user access were designed. The program achieved coherent
performance in solving the problem without the need to simplify the
calculations, together with easy manipulation of the simulation
parameters and visualization of the output results.
Abstract: With the widespread growth of applications of
Wireless Sensor Networks (WSNs), the need for reliable security
mechanisms in these networks has increased manifold. Many security
solutions have been proposed in the domain of WSNs so far. These
solutions are usually based on well-known cryptographic
algorithms.
In this paper, we survey well-known security issues in WSNs
and study the behavior of WSN nodes that perform public key
cryptographic operations. We evaluate the time and power
consumption of public key cryptography algorithms for signature
and key management by simulation.