Abstract: Discrimination between different classes of environmental
sounds is the goal of our work. A sound recognition
system can offer concrete benefits for surveillance and
security applications. The paper's first contribution to this
research field is a thorough investigation of the applicability
of state-of-the-art audio features to the domain of
environmental sound recognition. Additionally, a set of novel
features obtained by combining the basic parameters is
introduced. The quality of the investigated features is
evaluated with an HMM-based classifier, to which particular
attention is devoted. Specifically, we propose to use a Multi-Style
training system based on HMMs: one recognizer is trained on a
database including different levels of background noise and is used
as a universal recognizer for every environment. In order to enhance
the system robustness by reducing the environmental variability, we
explore different adaptation algorithms including Maximum Likelihood
Linear Regression (MLLR), Maximum A Posteriori (MAP)
and the MAP/MLLR algorithm that combines MAP and MLLR.
Experimental evaluation shows that a rather good recognition
rate can be reached, even under severe noise degradation, when
the system is fed with an appropriate set of features.
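Among the adaptation methods named above, the MAP update of the Gaussian means has a standard closed form that can be sketched as follows; this is a minimal illustration with numpy, where the relevance factor `tau`, the function name, and the toy shapes are our assumptions, and the HMM training itself is omitted:

```python
import numpy as np

def map_adapt_means(prior_means, frames, posteriors, tau=10.0):
    """MAP re-estimation of Gaussian mean vectors.

    prior_means: (M, D) means of the multi-style-trained model
    frames:      (T, D) adaptation feature vectors (e.g. MFCCs)
    posteriors:  (T, M) occupation probabilities gamma_t(m)
    tau:         relevance factor controlling the prior weight
    """
    counts = posteriors.sum(axis=0)           # N_m, soft counts per Gaussian
    weighted = posteriors.T @ frames          # sum_t gamma_t(m) * x_t
    # MAP estimate: (tau * prior + N_m * sample mean) / (tau + N_m)
    return (tau * prior_means + weighted) / (tau + counts)[:, None]
```

With no adaptation data assigned to a Gaussian (zero occupation), its mean stays at the prior, which is the intended interpolation behaviour of MAP.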
Abstract: We report in this paper the model adopted by our
continuous speech recognition system for the Arabic language,
SySRA, and the results obtained so far. This system uses the
Arabdic-10 database, a word corpus for the Arabic language
which was manually segmented. Phonetic decoding is performed
by an expert system whose knowledge base is expressed in the
form of production rules. This expert system transforms a vocal
signal into a phonetic lattice. The higher level of the system
handles the recognition of the lattice thus obtained by
transcribing it into written sentences (orthographic form).
This level initially contains the lexical analyzer, which is
none other than the recognition module. We subjected this
analyzer to a set of spectrograms obtained by dictating twenty
sentences in the Arabic language. The recognition rate for
these sentences is about 70%, which is, to our knowledge, the
best result for the recognition of Arabic. The test set
consists of twenty sentences from four speakers who did not
take part in the training.
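The signal-to-lattice stage described above can be illustrated with a toy sketch: production rules map acoustic cues to candidate phones, and the output is a lattice of alternatives per segment. The cue labels and rules below are invented placeholders, not the SySRA knowledge base:

```python
# Toy production-rule phonetic decoding. Each rule maps a crude
# acoustic cue (hypothetical labels) to candidate phones; the result
# is a lattice: one list of alternatives per analyzed segment.
RULES = {
    "voiced+low_f1": ["a", "e"],
    "fricative":     ["s", "sh"],
    "silence":       [],
}

def decode_to_lattice(cues):
    """Apply production rules to a sequence of acoustic cues,
    returning a phonetic lattice (list of candidate phone sets)."""
    return [RULES.get(c, ["?"]) for c in cues]
```

A lexical analyzer would then search this lattice for word sequences, which is the recognition step the abstract describes at the higher level.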
Abstract: The wavelet transform is a very powerful tool for image compression. One of its advantages is that it provides both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that no compression gain can be obtained from coding the sign, and only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of a wavelet coefficient may be encoded with an estimated probability of 0.5; the same assumption is made for the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of the coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether a coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map; the lower/upper sub-interval refinement information is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five decomposition scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality.
It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature, and it is shown to be very successful in terms of PSNR.
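The claim that sign bits only approach a probability of 0.5 after several bit planes can be illustrated with a small numpy sketch that measures the empirical probability of a positive sign among coefficients becoming significant at each plane; this is a toy measurement, not the SSC codec itself:

```python
import numpy as np

def sign_prob_per_plane(coeffs, n_planes=8):
    """Empirical probability that a coefficient becoming significant
    at each bit plane is positive (toy illustration, not the SSC codec)."""
    mags = np.abs(coeffs)
    top = int(np.floor(np.log2(mags.max())))   # most significant bit plane
    probs = []
    for n in range(top, top - n_planes, -1):
        T = 2.0 ** n
        new = (mags >= T) & (mags < 2 * T)     # newly significant at plane n
        if new.sum() == 0:
            probs.append(0.5)                  # no evidence: assume 0.5
        else:
            probs.append(float((coeffs[new] > 0).mean()))
    return probs
```

On real wavelet subbands, entries of this list near the top planes tend to deviate from 0.5, which is where separate sign coding can gain over the uniform-probability assumption.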
Abstract: In this note, the robust static output feedback
stabilisation of an induction machine is addressed. The machine is
described by a non-homogeneous bilinear model with structural
uncertainties, and the feedback gain is computed via an iterative LMI
(ILMI) algorithm.
Abstract: Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods, in which a ball defined in the feature space is used to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach to selecting the kernel parameter based on maximizing the distance between the gravity centers of the normal and abnormal classes while at the same time minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks. The experimental results demonstrate the feasibility and effectiveness of the presented method.
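The selection criterion can be sketched with the kernel trick, since both the distance between feature-space gravity centers and the within-class variances are computable from kernel evaluations alone. The RBF kernel, the equal weighting of the two terms, and the grid search below are our assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def rbf(X, Y, gamma):
    """RBF (Gaussian) kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_criterion(Xn, Xa, gamma):
    """Distance between feature-space gravity centers of the normal (Xn)
    and abnormal (Xa) classes, minus the within-class variances."""
    Knn, Kaa, Kna = rbf(Xn, Xn, gamma), rbf(Xa, Xa, gamma), rbf(Xn, Xa, gamma)
    # ||m_n - m_a||^2 via the kernel trick
    dist = Knn.mean() + Kaa.mean() - 2 * Kna.mean()
    # mean ||phi(x) - m||^2 = 1 - mean(K) for a unit-diagonal RBF kernel
    var = (1 - Knn.mean()) + (1 - Kaa.mean())
    return dist - var

def select_gamma(Xn, Xa, grid):
    """Pick the kernel parameter maximizing the criterion over a grid."""
    return max(grid, key=lambda g: kernel_criterion(Xn, Xa, g))
```

Very small gamma collapses both centers together (small distance), while very large gamma inflates the within-class variances, so the criterion favors an intermediate value.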
Abstract: In this paper we consider the problem of change
detection and the tracking of non-stationary signals. Using
parametric estimation of signals based on least-squares lattice
adaptive filters, we consider statistical parametric methods
for change detection using likelihood ratios and hypothesis
tests. In order to track signal dynamics, we introduce a
compensation procedure in the adaptive estimation. This
improves the adaptive estimation performance and speeds up its
convergence after a change is detected.
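As a minimal stand-in for the likelihood-ratio tests mentioned above, a one-sided CUSUM detector on the lattice-filter prediction residuals can be sketched; the drift and threshold values are illustrative assumptions:

```python
def cusum_detect(residuals, drift=0.5, threshold=5.0):
    """One-sided CUSUM test on prediction residuals.
    Accumulates evidence whenever |residual| exceeds the drift term
    and signals a change when the statistic crosses the threshold.
    Returns the index of the first detected change, or None."""
    g = 0.0
    for t, e in enumerate(residuals):
        g = max(0.0, g + abs(e) - drift)   # decision statistic
        if g > threshold:
            return t
    return None
```

After a detection, the compensation procedure described in the abstract would reset or re-seed the adaptive estimator so that convergence restarts quickly on the new regime.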
Abstract: An electrocardiogram (ECG) data compression
algorithm is needed that reduces the amount of data to be
transmitted, stored and analyzed without losing the clinically
relevant information content. A
wavelet ECG data codec based on the Set Partitioning In Hierarchical
Trees (SPIHT) compression algorithm is proposed in this paper. The
SPIHT algorithm has achieved notable success in still image coding.
We modified the algorithm for the one-dimensional (1-D) case and
applied it to compression of ECG data.
With this compression method, a small percent root-mean-square
difference (PRD) and a high compression ratio are achieved with
low implementation complexity. Experiments on selected
records from the MIT-BIH arrhythmia database revealed that the
proposed codec is significantly more efficient in compression and in
computation than previously proposed ECG compression schemes.
Compression ratios of up to 48:1 for ECG signals yield results
acceptable for visual inspection.
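The two figures of merit used above can be sketched directly; note that some works subtract the signal mean in the PRD denominator, so the definition below is one common variant:

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between the original and
    reconstructed ECG signals (non-mean-subtracted variant)."""
    original = np.asarray(original, float)
    reconstructed = np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(((original - reconstructed) ** 2).sum()
                           / (original ** 2).sum())

def compression_ratio(original_bits, compressed_bits):
    """E.g. 48:1 means the coded stream is 48 times smaller."""
    return original_bits / compressed_bits
```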
Abstract: We study in this paper the effect of scene
changes on an image sequence coding system using the Embedded
Zerotree Wavelet (EZW). The scene change considered here is
full motion, which may occur at any time. A special image
sequence is generated in which scene changes occur randomly.
Two scenarios are considered: in the first scenario, the system
must keep the reconstruction quality as high as possible by
managing the bit rate (BR) when a scene change occurs. In the
second scenario, the system must keep the bit rate as constant
as possible by managing the reconstruction quality. The first
scenario may be motivated by the availability of a wideband
transmission channel, where the bit rate can be increased to
keep the reconstruction quality above a given threshold. The
second scenario applies to narrowband transmission channels,
where an increase of the bit rate is not possible. In this
latter case, applications for which the reconstruction quality
is not a hard constraint may be considered. The simulations are
performed with a five-scale wavelet decomposition using the
biorthogonal 9/7-tap filter bank. The entropy coding is
performed using a specifically defined binary codebook and the
EZW algorithm. Experimental results are presented and compared
to LEAD H263 EVAL. It is shown that if the reconstruction
quality is the constraint, the system increases the bit rate to
obtain the required quality. When the bit rate must be constant,
the system is unable to provide the required quality while a
scene change occurs; however, it is able to improve the quality
once the scene change has passed.
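The two scenarios can be contrasted with a toy rate-control loop. The quality model `psnr_at` passed in below is a hypothetical placeholder for the codec's rate-distortion behaviour, not the EZW coder itself:

```python
def quality_driven_rate(psnr_at, target_psnr, rates):
    """Scenario 1: raise the bit rate until the target PSNR is met.
    psnr_at: function rate -> PSNR (assumed monotone increasing)
    rates:   candidate bit rates in increasing order
    Returns the first rate meeting the target, or the largest rate."""
    for r in rates:
        if psnr_at(r) >= target_psnr:
            return r
    return rates[-1]

def rate_driven_quality(psnr_at, fixed_rate):
    """Scenario 2: the bit rate is fixed; quality is whatever results."""
    return psnr_at(fixed_rate)
```

A scene change shifts the rate-distortion curve downward, so scenario 1 responds by returning a higher rate, while scenario 2 simply delivers a lower PSNR until the curve recovers.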
Abstract: In this paper, we investigate techniques for
scheduling users for resource allocation in multiple-input
multiple-output (MIMO) packet transmission systems. In these
systems, transmit antennas are assigned to one user or
dynamically to different users using spatial multiplexing. The
allocation of all transmit antennas to one user cannot take
full advantage of multi-user diversity. Therefore, we consider
the case where resources are allocated dynamically. At each
time slot, users have to feed back their channel information on
an uplink feedback channel. The channel information assumed
available to the schedulers is the zero-forcing (ZF)
post-detection signal-to-interference-plus-noise ratio (SINR).
Our analysis concerns the round-robin and opportunistic
schemes.
We present an overview and a complete capacity analysis of
these schemes. The main result of our study is an analytical
expression for the system capacity using the ZF receiver at the
user terminal. Simulations have been carried out to validate
all proposed analytical solutions and to compare the
performance of these schemes.
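The quantity that drives the schedulers can be sketched with numpy: in the noise-limited form, stream k of a user with channel H sees post-detection SNR rho / [(H^H H)^{-1}]_{kk} after zero forcing, and the opportunistic scheduler picks, per stream, the user maximizing it. The function names and per-stream assignment are our illustrative choices:

```python
import numpy as np

def zf_post_snr(H, rho):
    """Post-detection SNR of each stream after zero forcing.
    H: (n_rx, n_tx) channel matrix, rho: transmit SNR per stream."""
    G = np.linalg.inv(H.conj().T @ H)
    return rho / np.real(np.diag(G))

def opportunistic_schedule(channels, rho):
    """Assign each transmit antenna (stream) to the user with the
    best ZF post-detection SNR on that stream (spatial multiplexing).
    channels: list of per-user channel matrices."""
    snrs = np.array([zf_post_snr(H, rho) for H in channels])  # (users, streams)
    return snrs.argmax(axis=0)
```

Round robin, by contrast, cycles through users regardless of these SNRs, which is exactly why it forfeits the multi-user diversity gain the abstract refers to.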
Abstract: Knowledge of node locations is an important
requirement for many applications in Wireless Sensor Networks.
In hop-based range-free localization methods, anchors broadcast
localization messages carrying a hop-count value to the whole
network. Each node receives these messages, calculates its
distance to each anchor in hops, and then approximates its own
position. However, the estimated distances can introduce large
errors and degrade the localization precision. To address this
problem, this paper proposes an algorithm in which each unknown
node takes the nearest anchor as a reference and selects the
two other most accurate anchors to compute its estimated
location. Compared to the DV-Hop algorithm, experimental
results show that the proposed algorithm has a lower average
localization error and is more effective.
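For reference, the baseline DV-Hop estimate that the proposed algorithm improves on (hop counts times an average hop distance, followed by least-squares lateration) can be sketched as below; the anchor-selection refinement itself is the paper's contribution and is not reproduced here:

```python
import numpy as np

def dv_hop_position(anchors, hops, avg_hop_dist):
    """Baseline DV-Hop position estimate for one unknown node.
    anchors:      (A, 2) anchor coordinates
    hops:         (A,) hop counts from the node to each anchor
    avg_hop_dist: estimated average distance per hop
    Linearizes d_i^2 = |p - a_i|^2 against the first anchor and
    solves for p by least squares."""
    d = avg_hop_dist * np.asarray(hops, float)   # hop-based range estimates
    a0, d0 = anchors[0], d[0]
    A = 2 * (anchors[1:] - a0)
    b = (d0 ** 2 - d[1:] ** 2
         + (anchors[1:] ** 2).sum(1) - (a0 ** 2).sum())
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

Because the hop-based ranges are only approximations, choosing which anchors enter this system (as the proposed algorithm does) directly affects the residual error of the solution.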
Abstract: In Multiple Sclerosis, pathological changes in the
brain result in deviations in signal intensity on Magnetic
Resonance Images (MRI). Quantitative analysis of these changes
and their correlation with clinical findings provides important
information for diagnosis; this constitutes the objective of
our work, for which a new approach is developed. After
enhancing the image contrast and extracting the brain with a
mathematical morphology algorithm, we proceed to brain
segmentation. Our approach is based on building a statistical
model from the data itself for normal brain MRI, including
clustering of tissue types. We then detect signal abnormalities
(MS lesions) as a rejection class containing the voxels that
are not explained by the built model. We validate the method on
MR images of Multiple Sclerosis patients by comparing its
results with those of human expert segmentation.
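The rejection-class idea can be sketched in one dimension: fit a Gaussian per normal tissue class and flag voxels far from every one of them. The single intensity feature and the z-score threshold are illustrative simplifications of the statistical model described above:

```python
import numpy as np

def reject_voxels(intensities, tissue_means, tissue_stds, z_max=3.0):
    """Flag voxels not explained by any normal-tissue Gaussian.
    A voxel is rejected (candidate MS lesion) if its intensity lies
    more than z_max standard deviations from every tissue mean."""
    x = np.asarray(intensities, float)[:, None]
    z = np.abs(x - tissue_means) / tissue_stds   # (voxels, tissues) z-scores
    return z.min(axis=1) > z_max
```

In the full method the tissue parameters would come from clustering the normal brain MRI data itself, and multi-channel intensities would replace the single scalar feature used here.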