Abstract: Multiple-input multiple-output (MIMO) systems are
widely used to improve the quality and reliability of wireless
transmission and to increase spectral efficiency. In MIMO
systems, however, multiple copies of the data are received after
experiencing various channel effects. We examine the complexity
limitations that the number of antennas imposes on conventional
decoding techniques. Accordingly, we propose a modified sphere
decoder (MSD-1) algorithm that has lower complexity and yields a
system with high spectral efficiency. To increase signal
diversity, we apply a rotated quadrature amplitude modulation
(QAM) constellation in multidimensional space. Finally, we
propose a new architecture in which a space-time trellis code
(STTC) is concatenated with a space-time block code (STBC), using
MSD-1 at the receiver to improve system performance. The system
gains have been verified in the presence of channel state
information (CSI) errors.
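The core of any sphere decoder is a depth-first search over the constellation with partial-distance pruning after a QR factorization of the channel. The sketch below illustrates only this generic idea on a small real-valued system; the paper's MSD-1 modifications are not reproduced, and all names are illustrative.

```python
# Minimal generic sphere-decoder sketch (not the paper's MSD-1).
import numpy as np

def sphere_decode(H, y, alphabet, radius=np.inf):
    """Find s in alphabet^n minimizing ||y - H s|| by depth-first
    search with partial-distance pruning (the sphere-decoding idea)."""
    Q, R = np.linalg.qr(H)           # H = Q R, R upper triangular
    z = Q.T @ y                      # rotated receive vector
    n = H.shape[1]
    best = {"dist": radius, "s": None}

    def search(level, s_partial, dist):
        if dist >= best["dist"]:
            return                   # prune: outside current sphere
        if level < 0:
            best["dist"], best["s"] = dist, list(s_partial)
            return
        for sym in alphabet:
            s_partial[level] = sym
            # partial distance contributed by row `level` of R
            e = z[level] - sum(R[level, j] * s_partial[j]
                               for j in range(level, n))
            search(level - 1, s_partial, dist + e * e)

    search(n - 1, [0.0] * n, 0.0)
    return np.array(best["s"])
```

Because R is upper triangular, the metric accumulates one antenna at a time, so whole subtrees of candidate symbol vectors can be discarded as soon as the partial distance exceeds the best full distance found so far.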
Abstract: This paper presents a novel method for inferring odors
from neural activity observed in rats' main olfactory bulbs.
Multi-channel extracellular single-unit recordings were made with
micro-wire electrodes (tungsten, 50μm, 32 channels) implanted in
the mitral/tufted cell layers of the main olfactory bulb of
anesthetized rats to obtain neural responses to various odors.
The neural response used as the key feature was measured by
subtracting the neural firing rate before the stimulus from the
rate after it. For odor inference, we have developed a decoding
method based on maximum likelihood (ML) estimation. The results
show average decoding accuracies of about 100.0%, 96.0%, 84.0%,
and 100.0% for the four rats, respectively. This work has
profound implications for a novel brain-machine interface system
for odor inference.
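The ML decoding idea can be sketched as follows: model each odor class by a Gaussian over the firing-rate-difference features and assign a trial to the most likely class. This is a minimal illustration on synthetic data; the Gaussian form and all names are assumptions, not the paper's exact estimator.

```python
# Hypothetical ML odor decoder over firing-rate-difference features.
import numpy as np

def fit_ml_decoder(X, labels):
    """Per-class Gaussian model: class means, shared diagonal variance."""
    classes = sorted(set(labels))
    means = {c: X[np.array(labels) == c].mean(axis=0) for c in classes}
    var = X.var(axis=0) + 1e-6       # shared variance, regularized
    return classes, means, var

def ml_decode(x, classes, means, var):
    """Pick the odor whose Gaussian model gives x the highest likelihood."""
    def log_lik(c):
        d = x - means[c]
        return -0.5 * np.sum(d * d / var)
    return max(classes, key=log_lik)
```

Each feature vector x here stands for one trial's per-channel rate differences (rate after stimulus minus rate before), matching the feature described in the abstract.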
Abstract: The job shop scheduling problem (JSSP) is a
notoriously difficult problem in combinatorial optimization. This
paper presents a hybrid artificial immune system for the JSSP with the
objective of minimizing makespan. The proposed approach combines
the artificial immune system, which has a powerful global exploration
capability, with a local search method, which can exploit the best
antibodies found so far. The antibody coding scheme is based on the operation-based
representation. The decoding procedure limits the search space to the
set of full active schedules. In each generation, a local search heuristic
based on the neighborhood structure proposed by Nowicki and
Smutnicki is applied to improve the solutions. The approach is tested
on 43 benchmark problems taken from the literature and compared
with other approaches. The computational results validate the
effectiveness of the proposed algorithm.
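In the operation-based representation, a chromosome lists job indices, and the k-th occurrence of job j denotes job j's k-th operation. A minimal decoder and its makespan computation can be sketched as below; it produces semi-active schedules, which is simpler than the full active schedules the paper's decoder is restricted to.

```python
# Sketch: decode an operation-based JSSP chromosome, compute makespan.
def makespan(chromosome, jobs):
    """jobs[j] = list of (machine, duration) in technological order.
    chromosome = job indices, one occurrence per operation of that job."""
    next_op = [0] * len(jobs)        # next operation index per job
    job_ready = [0] * len(jobs)      # completion time of job's last op
    mach_ready = {}                  # completion time per machine
    for j in chromosome:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = start + dur
        mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)
```

In a hybrid AIS of the kind described, this decoder would serve as the fitness evaluation, with the immune operators and the Nowicki-Smutnicki local search acting on the chromosome.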
Abstract: In this paper, encrypted audio communication based on the synchronization of coupled unified chaotic systems in a master-slave configuration is studied numerically. We transmit the encrypted audio messages over two insecure channels. The encoding, transmission, and decoding of audio messages in chaotic communication are presented.
Abstract: This paper introduces two decoders for binary linear
codes based on metaheuristics. The first uses a genetic
algorithm, and the second combines a genetic algorithm with a
feedforward neural network. The decoder based on genetic
algorithms (DAG), applied to BCH and convolutional codes, gives
good performance compared to the Chase-2 and Viterbi algorithms,
respectively, and reaches the performance of OSD-3 for some
Residue Quadratic (RQ) codes. This algorithm is less complex for
linear block codes of large block length; furthermore, its
performance can be improved by tuning the decoder's parameters,
in particular the number of individuals per population and the
number of generations. In the second algorithm, the search space,
in contrast to DAG, which was limited to the codeword space,
covers the whole binary vector space. It avoids a large number of
coding operations by using a neural network, which greatly
reduces the complexity of the decoder while maintaining
comparable performance.
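A genetic-algorithm decoder of the DAG flavor can be sketched as follows: individuals are information words, fitness is the Hamming distance between the re-encoded codeword and the received hard-decision word, and selection is elitist. This is an illustrative toy on a (7, 4) Hamming code, not the paper's exact DAG; seeding the population with the hard-decision information bits assumes a systematic generator matrix.

```python
# Toy GA decoder sketch for a systematic linear block code.
import random

def ga_decode(G, received, pop=30, gens=40, seed=1):
    """G: systematic generator matrix (list of rows), received: hard bits."""
    rng = random.Random(seed)
    k, n = len(G), len(G[0])
    def encode(info):
        return [sum(info[i] * G[i][j] for i in range(k)) % 2
                for j in range(n)]
    def fitness(info):                    # Hamming distance to received
        return sum(a != b for a, b in zip(encode(info), received))
    population = [[rng.randint(0, 1) for _ in range(k)] for _ in range(pop)]
    population[0] = list(received[:k])    # seed: hard-decision info bits
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]   # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, k)
            child = a[:cut] + b[cut:]     # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(k)] ^= 1  # bit-flip mutation
            children.append(child)
        population = parents + children
    return encode(min(population, key=fitness))
```

Because the best parent always survives, the returned codeword is never farther from the received word than the seeded hard-decision codeword.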
Abstract: In this paper, we present an innovative scheme for
blindly extracting message bits from an image distorted by an attack.
Support Vector Machine (SVM) is used to nonlinearly classify the
bits of the embedded message. Traditionally, a hard decoder is used
with the assumption that the underlying modeling of the Discrete
Cosine Transform (DCT) coefficients does not appreciably change.
In case of an attack, the distribution of the image coefficients is
heavily altered. The distribution of the sufficient statistics at the
receiving end corresponding to the antipodal signals overlap and a
simple hard decoder fails to classify them properly. We treat
message retrieval of the antipodal signal as a binary
classification problem, and machine learning techniques such as
the SVM are used to retrieve the message when a certain specific
class of attacks is most probable. To validate the SVM-based
decoding scheme, we take Gaussian noise as a test case. We
generate a data set using 125 images and 25 different keys. The
polynomial kernel of the SVM achieved 100 percent accuracy on the
test data.
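To keep the illustration dependency-free, the sketch below uses a kernel perceptron with a polynomial kernel as a stand-in for the SVM: it nonlinearly classifies antipodal statistics corrupted by Gaussian noise, the same binary-classification framing as above. All names and data are synthetic assumptions, not the paper's decoder.

```python
# Polynomial-kernel classifier sketch (kernel perceptron as SVM stand-in).
import numpy as np

def poly_kernel(A, B, degree=3):
    return (A @ B.T + 1.0) ** degree

def kernel_perceptron_fit(X, y, epochs=10):
    """y in {-1, +1}; returns dual coefficients alpha."""
    K = poly_kernel(X, X)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
                alpha[i] += 1.0       # mistake-driven dual update
    return alpha

def kernel_perceptron_predict(X_train, y, alpha, X_test):
    K = poly_kernel(X_train, X_test)
    return np.sign((alpha * y) @ K)
```

As in the abstract, the two classes play the role of the antipodal embedded bits whose sufficient statistics overlap after an attack; a nonlinear kernel boundary can separate what a hard threshold cannot.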
Abstract: Nowadays, multimedia data are transmitted and
processed in compressed form. Due to the decoding procedure and
filtering for edge detection, the feature extraction process of MPEG-7
Edge Histogram Descriptor is time-consuming as well as
computationally expensive. To improve efficiency of compressed
image retrieval, we propose a new edge histogram generation
algorithm in DCT domain in this paper. Using the edge information
provided by only two AC coefficients of the DCT, we can obtain
edge directions and strengths directly in DCT domain. The
experimental results demonstrate that our system has good
performance in terms of retrieval efficiency and effectiveness.
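The idea of reading edge orientation from two low-order AC coefficients can be sketched as below: for an 8x8 block, F[0,1] responds to horizontal intensity variation (a vertical edge) and F[1,0] to vertical variation (a horizontal edge). This mapping is a common simplification, not the paper's exact formula.

```python
# Sketch: per-block edge strength/direction from two DCT AC coefficients.
import numpy as np

def block_dct(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (no scipy needed)."""
    N = 8
    C = np.array([[np.cos(np.pi * (2 * x + 1) * u / (2 * N))
                   for x in range(N)] for u in range(N)])
    C *= np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def edge_feature(block):
    F = block_dct(np.asarray(block, dtype=float))
    h, v = F[0, 1], F[1, 0]          # horizontal / vertical variation
    strength = np.hypot(h, v)
    angle = np.degrees(np.arctan2(v, h))
    return strength, angle
```

Binning these per-block angles, weighted by strength, yields an edge histogram directly in the DCT domain, which is the efficiency gain the abstract describes.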
Abstract: This paper describes an automatic speech recognition
system based on dynamic programming techniques. Temporal
alignment (time warping) is a technique used to bring into
correspondence the two patterns being compared. We show how this
technique is adapted to the field of automatic speech
recognition. We first present the theory of the alignment
function, which is used to compare and adjust an unknown pattern
against a set of reference patterns constituting the
application's vocabulary. We then give the various algorithms
necessary for their implementation on a machine. The algorithms
presented were tested on part of the Arabic-language word corpus
Arabdic-10 [4] and gave fully satisfactory results. These
algorithms are effective when applied to small or medium-sized
vocabularies.
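The temporal alignment function described above is classically realized as dynamic time warping (DTW); a minimal sketch over 1-D feature sequences:

```python
# Minimal DTW sketch: minimal-cost alignment by dynamic programming.
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Recognition then reduces to computing the DTW distance between the unknown pattern and every reference pattern in the vocabulary and choosing the nearest reference.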
Abstract: Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of LDPC codes. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. This approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The (2640, 1320) Margulis code has been used for the simulation, and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.
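The bit-flipping technique mentioned above, in its simplest (Gallager-style) form, repeatedly flips the bit involved in the most unsatisfied parity checks; a toy sketch on a (7, 4) Hamming parity-check matrix, not the paper's hybrid BCJR/SPA scheme:

```python
# Toy hard-decision bit-flipping decoder sketch.
def bit_flip_decode(H, word, max_iters=10):
    """Flip the bit participating in the most unsatisfied checks."""
    word = list(word)
    n = len(word)
    for _ in range(max_iters):
        syndrome = [sum(H[r][c] * word[c] for c in range(n)) % 2
                    for r in range(len(H))]
        if not any(syndrome):
            return word                    # all parity checks satisfied
        # count unsatisfied checks per bit
        votes = [sum(H[r][c] for r in range(len(H)) if syndrome[r])
                 for c in range(n)]
        word[votes.index(max(votes))] ^= 1  # flip the worst bit
    return word
```

For sparse parity-check matrices this same loop scales to real LDPC codes, although soft-decision algorithms such as the SPA are normally preferred, as in the scheme above.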
Abstract: In this paper, we have compared the performance of Turbo- and Trellis-coded optical code division multiple access (OCDMA) systems. The comparison of the two codes has been accomplished by employing optical orthogonal codes (OOCs). The Bit Error Rate (BER) performances have been compared by varying the code weights of the address codes employed by the system. We have considered the effects of optical multiple access interference (OMAI), thermal noise, and avalanche photodiode (APD) detector noise. The analysis has been carried out for the system with and without a double optical hard limiter (DHL). From the simulation results, it is observed that a better and more distinct comparison can be drawn between the performance of the Trellis- and Turbo-coded systems at lower code weights of the optical orthogonal codes for a fixed number of users. The BER performance of the Turbo-coded system is found to be better than that of the Trellis-coded system for all code weights considered in the simulation. Nevertheless, the Trellis-coded OCDMA system is found to be better than the uncoded OCDMA system. Trellis-coded OCDMA can be used in systems where decoding time must be kept low, bandwidth is limited, and high reliability is not a crucial factor, as in local area networks. The system hardware is also less complex than that of the Turbo-coded system. The Trellis-coded OCDMA system can be used without significant modification of existing chipsets. Turbo-coded OCDMA can, however, be employed in systems where high reliability is needed and bandwidth is not a limiting factor.
Abstract: In this paper, we propose a novel receiver algorithm
for coherent underwater acoustic communications. The proposed
receiver is composed of three parts: (1) Doppler tracking and
correction, (2) time reversal channel estimation and combining,
and (3) joint iterative equalization and decoding (JIED). To
reduce computational complexity and optimize the equalization
algorithm, time reversal (TR) channel estimation and combining is
adopted to simplify the multi-channel adaptive decision feedback
equalizer (ADFE) into a single-channel ADFE without reducing
system performance. Simultaneously, turbo theory is adopted to
form a joint iterative ADFE and convolutional decoder (JIED). In
the JIED scheme, the ADFE and the decoder exchange soft
information in an iterative manner, which enhances equalizer
performance through decoding gain. Simulation results show that
the proposed algorithm reduces computational complexity and
improves equalizer performance. Therefore, the performance of
coherent underwater acoustic communications can be greatly
improved.
Abstract: A method is presented for obtaining the error probability of block codes. The method is based on the eigenvalue-eigenvector properties of the code correlation matrix. It is found that, under a unitary transformation and in an additive white Gaussian noise environment, the performance evaluation of a block code becomes a one-dimensional problem in which only one eigenvalue and its corresponding eigenvector are needed in the computation. The obtained error rate results show remarkable agreement between simulations and analysis.
Abstract: Decision feedback equalizers (DFEs) usually outperform linear equalizers for channels with intersymbol interference. However, DFE performance is highly dependent on the availability of reliable past decisions. Hence, in coded systems, where reliable decisions are available only after the full block is decoded, the performance of the DFE is affected. A symbol-based DFE is a DFE that uses decisions only after the block is decoded. In this paper, we derive the optimal settings of both the feedforward and feedback taps of the symbol-based equalizer. We present a novel symbol-based DFE filterbank and derive the optimal settings of its taps. We also show that it outperforms the classic DFE in terms of complexity and/or performance.
Abstract: Image coding based on clustering provides immediate
access to targeted features of interest in a high quality decoded
image. This approach is useful for intelligent devices, as well as for
multimedia content-based description standards. The result of
image clustering cannot be precise in some positions, especially
at pixels carrying edge information, which produces ambiguity
among the clusters. Even with a good enhancement operator based
on PDEs, the quality of the decoded image will depend strongly on
the clustering process. In this paper, we introduce an ambiguity
cluster in image coding to represent pixels with vagueness
properties. The presence of such a cluster allows preserving
details inherent to edges as well as uncertain pixels. It is also
very useful during the decoding phase, in which an anisotropic
diffusion operator, such as Perona-Malik, enhances the quality of
the restored image. This work also offers a comparative study to
demonstrate the effectiveness of a fuzzy clustering technique in
detecting the ambiguity cluster without losing much of the
essential image information. Several experiments have been
carried out to demonstrate the usefulness of the ambiguity
concept in image compression. The coding results and the
performance of the proposed algorithms are discussed in terms of
the peak signal-to-noise ratio and the quantity of ambiguous
pixels.
Abstract: In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the bits transmitted through a noisy channel. To ensure reliable transmission, we apply a map, called a code, to the bits. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms, such as Belief Propagation and Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we describe a particular construction for specific graph topologies that yields good GBP performance. Moreover, we investigate the behavior of the GBP considered as a dynamical system, in order to understand how it evolves as a function of time and of the channel noise power. To this end, we make use of classical measures and introduce a new measure, called the hyperspheres method, that enables estimating the size of the attractors.
Abstract: Visual secret sharing (VSS) was proposed by Naor and Shamir in 1995. Visual secret sharing schemes encode a secret image into two or more share images, and no single share image reveals any information about the secret image. When the shares are superimposed, the secret can be restored by the human visual system. Traditional VSS schemes suffer from problems such as pixel expansion and the cost of sophisticated encoding, and they can encode only one secret image. Schemes for encrypting multiple secret images into two shares by random grids were proposed by Chen et al. in 2008. However, when the restored secret images suffer from too much distortion, those schemes are of limited use for decoding; in other words, if there is too much distortion, we cannot encrypt much information. Thus, if we can make the distortion very small, we can encrypt more secret images. In this paper, four new algorithms based on the scheme of Chang et al. proposed in 2010 are presented. The first algorithm makes the distortion very small. The second distributes the distortion over the two restored secret images. The third achieves no distortion for special secret images. The fourth encrypts three secret images, which not only retains the advantages of VSS but also improves on the decoding problems.
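For reference, the basic (2, 2) random-grid idea that such schemes build on can be sketched as follows: one share is random, and the other copies it on white secret pixels and complements it on black ones, so superimposing the shares (pixelwise OR) reveals the secret. This is the classic construction, not one of the paper's four algorithms.

```python
# Classic (2, 2) random-grid visual cryptography sketch.
import random

def encode_rg(secret, seed=0):
    """secret: 2-D list of 0 (white) / 1 (black). Returns two shares."""
    rng = random.Random(seed)
    share1 = [[rng.randint(0, 1) for _ in row] for row in secret]
    share2 = [[s1 if p == 0 else 1 - s1     # white: copy, black: flip
               for p, s1 in zip(row, r1)]
              for row, r1 in zip(secret, share1)]
    return share1, share2

def stack(share1, share2):
    """Superimposing shares = pixelwise OR (black wins)."""
    return [[a | b for a, b in zip(r1, r2)]
            for r1, r2 in zip(share1, share2)]
```

Black secret pixels stack to black with certainty, while white secret pixels stack to black only half the time on average, which is the contrast that makes the secret visible to the human eye.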
Abstract: A new hybrid coding method for compressing
animated polygonal meshes is presented. This paper assumes
a simple representation of the geometric data: a temporal
sequence of polygonal meshes for each discrete frame of the
animated sequence. The method utilizes a delta coding and an
octree-based method. In this hybrid method, both the octree
approach and the delta coding approach are applied to each
single frame in the animation sequence in parallel. The
approach that generates the smaller encoded file size is chosen
to encode the current frame. Given the same quality
requirement, the hybrid coding method can achieve much
higher compression ratio than the octree-only method or the
delta-only method. The hybrid approach can represent 3D
animated sequences with higher compression factors while
maintaining reasonable quality. It is easy to implement and has a
low-cost encoding process and a fast decoding process, which make
it a good choice for real-time applications.
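The per-frame "keep the smaller encoding" idea can be sketched as below, with a simple delta coder competing against a raw coder that stands in for the octree branch (the real octree coder is much more involved). Vertex coordinates are assumed pre-quantized to integers; all names are illustrative.

```python
# Sketch: per-frame choice between raw and delta coding by encoded size.
import struct
import zlib

def pack(values):
    return struct.pack(f"{len(values)}i", *values)

def encode_raw(frame):
    return b"R" + zlib.compress(pack(frame))

def encode_delta(frame, prev):
    return b"D" + zlib.compress(pack([c - p for c, p in zip(frame, prev)]))

def encode_sequence(frames):
    """Per frame, keep whichever candidate encoding is smaller."""
    out = [encode_raw(frames[0])]        # first frame has no predecessor
    for prev, cur in zip(frames, frames[1:]):
        out.append(min(encode_raw(cur), encode_delta(cur, prev), key=len))
    return out
```

For slowly moving geometry the deltas are small and highly compressible, so the delta branch usually wins; for scene cuts or fast motion the raw (or, in the paper, octree) branch can win instead, which is exactly why coding both and keeping the smaller result helps.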
Abstract: This paper provides a flexible way of controlling
Variable-Bit-Rate (VBR) of compressed digital video, applicable to
the new H264 video compression standard. The entire video
sequence is assessed in advance and the quantisation level is then set
such that bit rate (and thus the frame rate) remains within
predetermined limits compatible with the bandwidth of the
transmission system and the capabilities of the remote end, while at
the same time providing constant quality similar to VBR encoding.
A process for avoiding buffer starvation by selectively eliminating
frames from the encoded output at times when the frame rate is slow
(large number of bits per frame) is also described. Finally, the
problem of buffer overflow is solved by selectively eliminating
frames from the input received by the decoder. The decoder detects
the omission of the frames and resynchronizes the transmission by
monitoring time stamps and repeating frames if necessary.
Abstract: This paper reports the model adopted by our continuous
speech recognition system for the Arabic language, SySRA, and the
results obtained so far. The system uses the Arabdic-10 database,
a manually segmented word corpus for Arabic. Phonetic decoding is
performed by an expert system whose knowledge base is expressed
as production rules. This expert system transforms a vocal signal
into a phonetic lattice. The higher level of the system handles
the recognition of the resulting lattice, rendering it as written
sentences (orthographic form). This level first contains the
lexical analyzer, which is none other than the recognition
module. We subjected this analyzer to a set of spectrograms
obtained by dictating a score of sentences in Arabic. The
recognition rate for these sentences is about 70%, which is, to
our knowledge, the best result reported for Arabic recognition.
The test set consists of twenty sentences from four speakers who
did not take part in the training.
Abstract: Encoded information based on the synchronization of coupled chaotic Nd:YAG lasers in a master-slave configuration is studied numerically. The encoding, transmission, and decoding of information in optical chaotic communication over a single channel are presented. We analyze the robustness of the encrypted audio transmission against channel noise. To illustrate this synchronization robustness, we present two case studies: synchronization and transmission over a single channel without and with noise in the channel.