Abstract: In this paper, novel statistical-sampling-based equalization techniques and CNN-based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can give rise to interference which severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods to combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since calculating the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and performance superior to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is demonstrated by extensive simulations. The paper also develops a Cellular Neural Network (CNN) based approach to detection. In this case, fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of this method has also been analyzed by simulations.
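The stratified-sampling estimator and Li-Silvester bounds are developed in the paper itself; as background, the basic idea of estimating BER by statistical sampling can be sketched with plain Monte Carlo over a BPSK/AWGN channel (all names and parameters are illustrative, not the paper's channel model):

```python
import math
import random

def monte_carlo_ber(snr_db, n_samples=200_000, seed=0):
    """Estimate BPSK bit error rate over an AWGN channel by sampling."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)          # linear Eb/N0
    sigma = math.sqrt(1 / (2 * snr))   # noise std dev for unit-energy symbols
    errors = 0
    for _ in range(n_samples):
        bit = rng.choice((-1, 1))              # BPSK symbol
        received = bit + rng.gauss(0, sigma)   # AWGN channel
        if (received >= 0) != (bit == 1):      # hard decision
            errors += 1
    return errors / n_samples

def theoretical_ber(snr_db):
    """Closed-form BPSK BER: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(snr))
```

The closed form provides a sanity check; the paper's contribution is precisely to replace such brute-force sampling with a stratified scheme that concentrates effort on the dominant error events.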
Abstract: Signal processing applications which are iterative in
nature are best represented by data flow graphs (DFG). In these
applications, the maximum sampling frequency is dependent on the
topology of the DFG, the cyclic dependencies in particular. The
determination of the iteration bound, which is the reciprocal of the
maximum sampling frequency, is critical in the process of hardware
implementation of signal processing applications. In this paper, a
novel technique to compute the iteration bound is proposed. This
technique is different from all previously proposed techniques, in the
sense that it is based on the natural flow of tokens through the DFG
rather than on the topology of the graph. The proposed algorithm has
lower run-time complexity than all known algorithms. The
performance of the proposed algorithm is illustrated through an
analytical evaluation of its time complexity, as well as through
simulation of some benchmark problems.
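For reference, the iteration bound of a DFG is the maximum, over all cycles, of the cycle's total computation time divided by its delay (register) count. A brute-force sketch that enumerates simple cycles of a small DFG follows; this computes the quantity itself, not the paper's token-flow algorithm, and the names are illustrative:

```python
def iteration_bound(comp_time, edges):
    """Brute-force iteration bound: max over simple cycles of
    (total computation time on the cycle) / (total delays on the cycle).

    comp_time: {node: computation time}
    edges: list of (u, v, delays) arcs of the data flow graph
    """
    adj = {}
    for u, v, d in edges:
        adj.setdefault(u, []).append((v, d))

    best = 0.0

    def dfs(start, node, delays, time, visited):
        nonlocal best
        for nxt, d in adj.get(node, []):
            if nxt == start:                     # closed a simple cycle
                total_d = delays + d
                if total_d > 0:
                    best = max(best, time / total_d)
            elif nxt not in visited:
                dfs(start, nxt, delays + d, time + comp_time[nxt],
                    visited | {nxt})

    for s in comp_time:
        dfs(s, s, 0, comp_time[s], {s})
    return best
```

Enumerating cycles is exponential in general, which is why dedicated algorithms such as the one proposed here matter.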
Abstract: The purposes of this paper are to (1) promote
excellence in computer science by suggesting a cohesive innovative
approach to fill well documented deficiencies in current computer
science education, (2) justify (using the authors' and others' anecdotal
evidence from both the classroom and the real world) why this
approach holds great potential to successfully eliminate the
deficiencies, (3) invite other professionals to join the authors in proof
of concept research. The authors' experiences, though anecdotal,
strongly suggest that a new approach involving visual modeling
technologies should allow computer science programs to retain a
greater percentage of prospective and declared majors as students
become more engaged learners, more successful problem-solvers,
and better prepared as programmers. In addition, the graduates of
such computer science programs will make greater contributions to
the profession as skilled problem-solvers. Instead of wearily
re-memorizing code as they move to the next course, students will
have the problem-solving skills to think and work in more
sophisticated and creative ways.
Abstract: Given a large sparse signal, the goal is to reconstruct the
signal precisely and accurately from as few measurements as possible.
Although this is possible in theory, the difficulty lies in building an
algorithm that reconstructs the signal both accurately and efficiently.
This paper proposes a new, proven method for reconstructing sparse
signals, called Least Support Orthogonal Matching Pursuit (LS-OMP),
and merges it with the theory of Partially Known Support (PKS) to
obtain a method called Partially Known Least Support Orthogonal
Matching Pursuit (PKLS-OMP).
Both methods rely on a greedy algorithm whose cost grows with the
number of iterations needed to recover the support. To accelerate
recovery, PKLS-OMP incorporates partially known support into the
algorithm. The methods recover the original signal efficiently, simply,
and accurately provided the sampling matrix satisfies the Restricted
Isometry Property (RIP).
Simulation results also show that PKLS-OMP outperforms many existing
algorithms, especially for compressible signals.
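The LS-OMP and PKLS-OMP procedures are specified in the paper; as a minimal illustration of the underlying greedy principle, here is a plain OMP sketch restricted to an orthonormal dictionary (so the least-squares step collapses to inner products), with an optional partially known support in the spirit of PKLS-OMP. All names are illustrative:

```python
def omp_orthonormal(y, atoms, k, known_support=frozenset()):
    """Greedy support recovery for y = sum_j x_j * atoms[j], with the
    atoms orthonormal, so each least-squares coefficient is <atom, y>.
    known_support: indices assumed a priori (the 'partially known support').
    Returns (support, coefficients) after k greedy iterations.
    """
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    support = set(known_support)
    residual = list(y)
    # subtract the contribution of the known support first
    for j in support:
        c = dot(atoms[j], residual)
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    for _ in range(k):
        # pick the atom most correlated with the residual
        j = max((i for i in range(len(atoms)) if i not in support),
                key=lambda i: abs(dot(atoms[i], residual)))
        c = dot(atoms[j], residual)
        support.add(j)
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    coeffs = {j: dot(atoms[j], y) for j in support}
    return support, coeffs
```

With an orthonormal dictionary this sketch is exact; for a general RIP-satisfying sampling matrix the coefficient update requires a full least-squares solve over the current support.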
Abstract: This paper addresses the problem of source separation
in images. We propose a FastICA algorithm employing a modified
Gaussian contrast function for the Blind Source Separation.
Experimental results show that the proposed Modified Gaussian
FastICA effectively performs Blind Source Separation and obtains
better-quality images. In this paper, a comparative study has been
made with other popular existing algorithms. The peak
signal-to-noise ratio (PSNR) and improvement in signal-to-noise ratio (ISNR) are
used as metrics for evaluating the quality of images. The ICA metric
Amari error is also used to measure the quality of separation.
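For reference, the PSNR metric used above can be computed as follows (a generic sketch assuming 8-bit images with peak value 255):

```python
import math

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flat lists of pixel values."""
    mse = sum((o - r) ** 2 for o, r in zip(original, restored)) / len(original)
    if mse == 0:
        return float('inf')    # identical images
    return 10 * math.log10(peak ** 2 / mse)
```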
Abstract: K-Means (KM) is considered one of the major
algorithms widely used in clustering. However, it still has some
problems; one of them is its initialization step, which is normally
done randomly. Another problem for KM is that it converges to local
minima. Genetic algorithms are evolutionary algorithms inspired by
nature and utilized in the field of clustering. In this paper, we
propose two algorithms to solve the
initialization problem, Genetic Algorithm Initializes KM (GAIK) and
KM Initializes Genetic Algorithm (KIGA). To show the effectiveness
and efficiency of our algorithms, a comparative study was done
among GAIK, KIGA, Genetic-based Clustering Algorithm (GCA),
and FCM [19].
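GAIK and KIGA are defined in the paper; the flavor of GA-based initialization can be sketched on one-dimensional data as follows (a toy GA whose fitness is the negative clustering SSE; population size, rates and operators are illustrative, not the paper's settings):

```python
import random

def sse(centers, data):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def gaik_init(data, k, pop=20, gens=30, seed=0):
    """GA search for k initial centers (fitness = -SSE), sketching the
    GAIK idea: evolve centroid sets, then hand the best to K-Means."""
    rng = random.Random(seed)
    population = [sorted(rng.sample(data, k)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: sse(c, data))
        survivors = population[: pop // 2]      # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            child = sorted(rng.choice(pair) for pair in zip(a, b))  # crossover
            if rng.random() < 0.3:                                  # mutation
                i = rng.randrange(k)
                child[i] += rng.gauss(0, 0.5)
                child.sort()
            children.append(child)
        population = survivors + children
    return min(population, key=lambda c: sse(c, data))
```

Because the best individual always survives, the returned centers are at least as good (in SSE) as the best random initialization, which is the advantage GAIK seeks over plain random K-Means seeding.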
Abstract: In this paper, we propose improved versions of the DV-Hop
algorithm, the QDV-Hop and UDV-Hop algorithms, for better
localization without the need for additional range-measurement
hardware. The proposed algorithms focus on the third step of DV-Hop:
the error terms in the estimated distances between the unknown node
and the anchor nodes are first separated and then minimized. In the QDV-Hop
algorithm, quadratic programming is used to minimize the error to
obtain better localization. However, quadratic programming requires
a special optimization toolbox and increases computational
complexity. On the other hand, the UDV-Hop algorithm achieves
localization accuracy similar to that of QDV-Hop by solving an
unconstrained optimization problem that reduces to a system
of linear equations, without much increase in computational
complexity. Simulation results show that the performance of our
proposed schemes (QDV-Hop and UDV-Hop) is superior to DV-Hop
and DV-Hop based algorithms in all considered scenarios.
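The UDV-Hop step of reducing localization to a linear system can be illustrated with the standard linearization of the range equations (a generic trilateration sketch with three anchors, not the paper's exact formulation):

```python
def locate(anchors, dists):
    """Estimate a 2-D position from three anchors and estimated distances
    by linearizing the range equations (subtracting the last equation
    removes the quadratic terms) and solving the resulting 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # (x - xi)^2 + (y - yi)^2 = di^2 ; subtract the third equation:
    a11, a12 = 2 * (x3 - x1), 2 * (y3 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    b2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a11 * a22 - a12 * a21            # Cramer's rule for the 2x2 solve
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With more than three anchors the same linearization gives an overdetermined system solved by least squares, which is the unconstrained problem UDV-Hop exploits.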
Abstract: This paper describes an automatic speech recognition system
based on dynamic programming techniques. Dynamic time warping is used
to synchronize two patterns for comparison, and we show how this
technique is adapted to the field of automatic speech recognition. We
first present the theory of the warping function, which is used to
compare and align an unknown pattern with a set of reference patterns
constituting the vocabulary of the application. We then give the
various algorithms necessary for their implementation on a machine.
The algorithms we present were tested on part of the Arabic-language
word corpus Arabdic-10 [4] and gave fully satisfactory results. These
algorithms are effective insofar as they are applied to small or
medium vocabularies.
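The dynamic-programming alignment underlying such systems is the classic dynamic time warping recurrence, sketched below (the frame distance and sequences are illustrative; real systems compare feature vectors, not scalars):

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two sequences: the minimum
    cumulative frame distance over all monotonic alignments, computed
    by dynamic programming."""
    INF = float('inf')
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # insertion
                D[i][j - 1],      # deletion
                D[i - 1][j - 1])  # match
    return D[n][m]
```

Recognition then amounts to returning the reference pattern with the smallest warped distance to the unknown pattern.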
Abstract: Due to new distributed database applications such as
huge deductive database systems, search complexity is constantly
increasing and better algorithms are needed to speed up traditional
relational database queries. An optimal dynamic programming
method for such high-dimensional queries has the major disadvantage of
exponential complexity, and thus we are interested in semi-optimal but
faster approaches. In this work we present a multi-agent based
mechanism to meet this demand and also compare the result with
some commonly used query optimization algorithms.
Abstract: Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer to some extent from blurred edges and jagged artifacts in the image. This paper presents an adaptive feature-preserving bidirectional flow process, where an inverse diffusion is performed to enhance edges along the directions normal to the isophote lines (edges), while a normal diffusion is done to remove artifacts (''jaggies'') along the tangent directions. In order to preserve image features such as edges, angles and textures, the nonlinear diffusion coefficients are locally adjusted according to the first- and second-order directional derivatives of the image. Experimental results on synthetic and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolations.
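The paper's bidirectional flow couples inverse and normal diffusion; the edge-preserving behavior of the normal-diffusion half can be illustrated with a one-dimensional nonlinear diffusion step (a Perona-Malik-type sketch, not the paper's scheme; `dt` and `k` are illustrative):

```python
import math

def diffusion_step(u, dt=0.1, k=1.0):
    """One explicit step of 1-D nonlinear diffusion: the conductance
    g(|u'|) = exp(-(u'/k)^2) shrinks near strong gradients, so flat
    regions are smoothed while sharp edges are preserved."""
    n = len(u)
    out = list(u)
    for i in range(1, n - 1):
        gr = u[i + 1] - u[i]                     # right-hand gradient
        gl = u[i] - u[i - 1]                     # left-hand gradient
        flux_r = math.exp(-(gr / k) ** 2) * gr   # edge-stopping flux
        flux_l = math.exp(-(gl / k) ** 2) * gl
        out[i] = u[i] + dt * (flux_r - flux_l)
    return out
```

Small oscillations (gradients comparable to `k`) are damped, while a large step edge passes through almost unchanged; inverting the sign of the flux along the gradient direction gives the edge-sharpening inverse diffusion the paper uses.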
Abstract: The performance of sensor-less controlled induction
motor drive depends on the accuracy of the estimated speed.
Conventional estimation techniques are mathematically complex and
require long execution times, resulting in poor dynamic response. The
nonlinear mapping capability and powerful learning algorithms of
neural networks provide a promising alternative for on-line speed
estimation. The on-line speed estimator requires the NN model to be
accurate, simpler in design, structurally compact and computationally
less complex to ensure faster execution and effective control in real
time implementation. This in turn depends to a large extent on the
type of neural architecture. This paper investigates three types of
neural architectures for on-line speed estimation and compares
their performance in terms of accuracy, structural
compactness, computational complexity and execution time. The
suitable neural architecture for on-line speed estimation is identified
and the promising results obtained are presented.
Abstract: Edge detection is usually the first step in medical
image processing. However, the difficulty increases when a
conventional kernel-based edge detector is applied to ultrasonic
images with a textural pattern and speckle noise. We designed an
adaptive diffusion filter to remove speckle noise while preserving the
initial edges detected by using a Sobel edge detector. We also propose
a genetic algorithm for edge selection to form complete boundaries of
the detected entities. We designed two fitness functions to evaluate
whether a criterion with a complex edge configuration can render a
better result than a simple criterion such as the strength of gradient.
The edges obtained by using a complex fitness function are thicker and
more fragmented than those obtained by using a simple fitness
function, suggesting that a complex edge selecting scheme is not
necessary for good edge detection in medical ultrasonic images;
instead, a proper noise-smoothing filter is the key.
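The Sobel detector used for the initial edges is standard and can be sketched as follows (a pure-Python version; border pixels are left at zero):

```python
def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image (list of rows) using the
    3x3 Sobel kernels; border pixels are left at zero."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On speckled ultrasound images this raw gradient responds strongly to noise, which is why the paper applies an adaptive diffusion filter before thresholding.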
Abstract: In many applications there is a broad variety of
information relevant to a focal “object" of interest, and the fusion of such heterogeneous data types is desirable for classification and
categorization. While these various data types can sometimes be treated as orthogonal (such as the hull number, superstructure color,
and speed of an oil tanker), there are instances where the inference and the correlation between quantities can provide improved fusion
capabilities (such as the height, weight, and gender of a person). A
service-oriented architecture has been designed and prototyped to
support the fusion of information for such “object-centric" situations.
It is modular, scalable, and flexible, and designed to support new data sources, fusion algorithms, and computational resources without affecting existing services. The architecture is designed to simplify
the incorporation of legacy systems, support exact and probabilistic entity disambiguation, recognize and utilize multiple types of
uncertainties, and minimize network bandwidth requirements.
Abstract: This paper shows the possibility of extracting Social,
Group and Individual Mind from multiple agents' rule bases. Two types
of fuzzy rule bases are considered, namely Mamdani and Takagi-Sugeno
fuzzy systems. Their rule bases describe (model) agent behavior.
Agent behavior in a time-varying environment is modified by learning
fuzzy-neural networks and optimizing their parameters with genetic
algorithms in the FUZNET development system. Finally, Social, Group
and Individual Mind are extracted from the multiple agents' rule
bases by cognitive analysis and a matching criterion.
Abstract: Airbag deployment has been known to be responsible
for numerous deaths, incidental injuries and broken bones caused by
low-severity crashes and wrong deployment decisions. Therefore, the authorities
and industries have been looking for more innovative and intelligent
products to be realized for future enhancements in the vehicle safety
systems (VSSs). Although the VSSs technologies have advanced
considerably, they still face challenges such as how to avoid
unnecessary and untimely airbag deployments that can be hazardous
and fatal. Currently, most of the existing airbag systems deploy
without regard to occupant size and position. As such, this paper will
focus on the occupant and crash sensing performance in frontal
collisions for the new breed of so-called smart airbag systems. It
intends to provide a thorough discussion relating to the occupancy
detection, occupant size classification, occupant off-position
detection to determine safe distance zone for airbag deployment,
crash-severity analysis and airbag decision algorithms via computer
modeling. The proposed system model consists of three main
modules namely, occupant sensing, crash severity analysis and
decision fusion. The occupant sensing system module utilizes the
weight sensor to determine occupancy, classify the occupant size,
and determine occupant off-position condition to compute safe
distance for airbag deployment. The crash severity analysis module is
used to generate relevant information pertinent to airbag deployment
decision. Outputs from these two modules are fused to the decision
module for correct and efficient airbag deployment action. Computer
modeling work is carried out using Simulink, Stateflow,
SimMechanics and Virtual Reality toolboxes.
Abstract: Recently, much research has been conducted for
security for wireless sensor networks and ubiquitous computing.
Security issues such as authentication and data integrity are major
requirements to construct sensor network systems. Advanced
Encryption Standard (AES) is considered as one of candidate
algorithms for data encryption in wireless sensor networks. In this
paper, we present a hardware architecture for a low-power AES crypto
module. Our low-power AES crypto module has an optimized architecture
for its data encryption unit and key schedule unit, which makes it
applicable to wireless sensor networks. We also detail the low-power
design methods used to design our low-power AES crypto module.
Abstract: In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An Auto-Regressive (AR) linear model of texture is obtained by scanning the image row by row and modelling this data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the obtained model is compared with the model adapted with the Least Mean Square (LMS) 2-D adaptive algorithm and with the co-occurrence method features. The comparison criterion is based on the computation of a characterization degree using the ratio of "between-class" variances with respect to "within-class" variances of the estimated coefficients. Extensive experiments show that the coefficients estimated by the use of the Chandrasekhar adaptive filter give better results in texture discrimination than those estimated by other algorithms, even in a noisy context.
Abstract: In this paper, we propose an easily computable proximity index for predicting voltage collapse of a load bus using only measured values of the bus voltage and power. From these measurements, a fourth-order polynomial is obtained using LES estimation algorithms. The sum of the absolute values of the polynomial coefficients gives an indication of the critical bus. We demonstrate the applicability of our proposed method on a 6-bus test system. The results obtained verify its applicability, as well as its accuracy and simplicity. This indicator makes it possible to predict voltage instability or the proximity of a collapse. Results obtained by the PV curve are compared with corresponding values from QV curves and are observed to be in close agreement.
Abstract: Active vibration isolation systems are less commonly
used than passive systems due to their associated cost and power
requirements. In principle, semi-active isolation systems can deliver
the versatility, adaptability and higher performance of fully active
systems for a fraction of the power consumption. Various semi-active
control algorithms have been suggested in the past. This paper
studies the 4DOF model of semi-active suspension performance
controlled by on–off and continuous skyhook damping control
strategy. The frequency and transient responses of the model are
evaluated in terms of body acceleration, roll angle and tire deflection
and are compared with that of a passive damper. The results show
that the semi-active system controlled by skyhook strategy always
provides better isolation than a conventional passively damped
system except at tire natural frequencies.
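The on-off and continuous skyhook laws studied here follow a standard switching logic, sketched below (the damping coefficients are illustrative; the paper's 4DOF model is not reproduced):

```python
def skyhook_force(v_body, v_rel, c_high, c_low):
    """On-off skyhook damping: when the body velocity and the relative
    (damper) velocity have the same sign, the real damper can do the
    work an ideal 'sky-mounted' damper would, so switch to high
    damping; otherwise switch to low damping. Returns the damper force."""
    c = c_high if v_body * v_rel > 0 else c_low
    return -c * v_rel

def continuous_skyhook_force(v_body, v_rel, c_sky, c_min, c_max):
    """Continuous skyhook: track the ideal skyhook force -c_sky*v_body
    as closely as the semi-active damper allows, clipping the damping
    coefficient to the realizable range [c_min, c_max]."""
    if v_body * v_rel > 0:
        c = min(max(c_sky * v_body / v_rel, c_min), c_max)
    else:
        c = c_min
    return -c * v_rel
```

The continuous law avoids the force chattering of the on-off law at the cost of requiring a continuously adjustable damper.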
Abstract: The Minimum Vertex Cover (MVC) problem is a classic
NP-complete graph optimization problem. In this paper a competent
algorithm, called the Vertex Support Algorithm (VSA), is designed to
find the smallest vertex cover of a graph. The VSA is tested on a
large number of random graphs and DIMACS benchmark graphs.
Comparative study of this algorithm with the other existing methods
has been carried out. Extensive simulation results show that the VSA
can yield better solutions than other existing algorithms found in the
literature for solving the minimum vertex cover problem.
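The paper defines VSA precisely; a simplified greedy sketch of the vertex-support idea (support(v) = sum of the degrees of v's neighbors; repeatedly cover with a maximum-support vertex) is shown below. Tie-breaking and refinements may differ from the paper's algorithm:

```python
def vertex_support_cover(edges):
    """Greedy vertex cover sketch using the vertex-support heuristic:
    support(v) = sum of the degrees of v's neighbors in the remaining
    graph. Repeatedly add a maximum-support vertex to the cover and
    delete its incident edges."""
    remaining = set(map(frozenset, edges))
    cover = set()
    while remaining:
        deg = {}
        for e in remaining:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        support = {v: sum(deg[u] for e in remaining if v in e
                          for u in e if u != v) for v in deg}
        best = max(support, key=support.get)
        cover.add(best)
        remaining = {e for e in remaining if best not in e}
    return cover
```

Like any greedy heuristic, this gives a valid but not necessarily minimum cover; the paper's simulations measure how close VSA gets to optimal on random and DIMACS graphs.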