Abstract: In this paper, a novel method for a biometric system based on the ECG signal is proposed, using spectral coefficients computed through linear predictive coding (LPC). ECG biometric systems have traditionally used characteristics of the fiducial points of the ECG signal as the feature set. Such systems have been shown to contain exploitable weaknesses, so a non-fiducial system allows for tighter security. In the proposed system, incorporating non-fiducial features from the LPC spectrum produced segment and subject recognition rates of 99.52% and 100%, respectively. The proposed system outperformed a biometric system based on the wavelet packet decomposition (WPD) algorithm in both recognition rate and computation time. This makes LPC suitable for a practical ECG biometric system that requires fast, stringent, and accurate recognition.
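The LPC feature extraction described above can be sketched with the standard autocorrelation method and the Levinson-Durbin recursion. This is a generic illustration, not the authors' implementation; the model order and signal used below are illustrative assumptions.

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Estimate LPC coefficients via the autocorrelation method
    (Levinson-Durbin recursion). Returns the prediction-error
    filter a = [1, a1, ..., a_order] and the final error power."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this order.
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)
    return a, err
```

The LPC spectrum (and hence the spectral coefficients used as features) follows from evaluating 1/|A(e^{jw})| on a frequency grid.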
Abstract: In this paper we propose a segmentation approach based on the vector quantization technique. We use Kekre's fast codebook generation algorithm for segmenting low-altitude aerial images. This is used as a preprocessing step to form segmented homogeneous regions. Adjacent regions are then merged using color similarity and volume difference criteria. Experiments performed with real aerial images of varied nature demonstrate that this approach results in neither over-segmentation nor under-segmentation. Vector quantization gives far better results than the conventional on-the-fly watershed algorithm.
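The idea of segmenting an image by vector-quantizing its pixel vectors can be sketched as follows. Kekre's fast codebook generation algorithm itself is not reproduced here; generic k-means codebook training stands in for it, so treat this purely as an illustration of the VQ-labeling step.

```python
import numpy as np

def vq_segment(image, n_codes=4, iters=10, seed=0):
    """Segment an H x W x 3 color image by vector-quantizing pixel colors.
    Generic k-means stands in for Kekre's fast codebook generation."""
    pixels = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen pixel vectors.
    codebook = pixels[rng.choice(len(pixels), n_codes, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest codevector.
        d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each codevector to the centroid of its cluster.
        for c in range(n_codes):
            if np.any(labels == c):
                codebook[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(image.shape[:2]), codebook
```

The label map partitions the image into homogeneous regions, which a later pass could merge using the color-similarity and volume-difference criteria mentioned above.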
Abstract: In this paper we present simulation results for the application of a bandwidth-efficient algorithm (mapping algorithm) to an image transmission system. The system considers three different real-valued transforms to generate energy-compact coefficients. Results are first presented for gray-scale and color image transmission in the absence of noise; the system performs best when the discrete cosine transform is used. The performance of the system is also dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise: varying the bit error rate has little or no impact on the performance of the algorithm. Optimum results are obtained with an 8x8 transform block, transmitting 15 coefficients from each block using 8 bits each.
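The "8x8 block, keep 15 coefficients" configuration can be illustrated with an orthonormal 2-D DCT. This is a sketch of the coefficient-selection step only, not the paper's full transmission system; the low-frequency-first ordering used here is a simple proxy for zigzag scanning, and quantization to 8 bits is omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def compress_block(block, keep=15):
    """2-D DCT of a square block; keep the `keep` lowest-frequency
    coefficients (ordered by u+v), zero the rest, inverse-transform."""
    n = block.shape[0]
    C = dct_matrix(n)
    coeffs = C @ block @ C.T
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda uv: (uv[0] + uv[1], uv[0]))
    mask = np.zeros_like(coeffs)
    for u, v in order[:keep]:
        mask[u, v] = 1.0
    return C.T @ (coeffs * mask) @ C
```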
Abstract: Many digital signal processing techniques have been used to automatically distinguish protein-coding regions (exons) from non-coding regions (introns) in DNA sequences. In this work, we characterize these sequences by their nonlinear dynamical features, such as moment invariants, correlation dimension, and largest Lyapunov exponent estimates. We apply our model to a number of real sequences encoded into time series using EIIP sequence indicators. To discriminate between coding and non-coding DNA regions, the phase-space trajectory is first reconstructed for each type of region; nonlinear dynamical features are then extracted from those regions and used to investigate differences between them. Our results indicate that the nonlinear dynamical characteristics yield significant differences between coding regions (CR) and non-coding regions (NCR) in DNA sequences. Finally, the classifier is tested on real genes whose coding and non-coding regions are well known.
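The two preprocessing steps named above, EIIP encoding and phase-space reconstruction, can be sketched as follows. The EIIP values are the standard published nucleotide indicators; the delay-embedding parameters are illustrative.

```python
import numpy as np

# EIIP (electron-ion interaction potential) indicator values per nucleotide.
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def eiip_series(seq):
    """Map a DNA string onto a numerical time series of EIIP values."""
    return [EIIP[b] for b in seq.upper()]

def delay_embed(series, dim=3, tau=1):
    """Reconstruct a phase-space trajectory by time-delay embedding:
    each row is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    x = np.asarray(series, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
```

Nonlinear features such as the correlation dimension or the largest Lyapunov exponent would then be estimated from the embedded trajectory.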
Abstract: The vertex connectivity of a graph is the smallest number of vertices whose deletion separates the graph or makes it trivial. This work addresses the problem of testing the vertex connectivity of graphs in a distributed environment, based on a general, constructive approach. The contribution of this paper is threefold. First, using a pre-constructed spanning tree of the considered graph, we present a protocol that tests whether a given graph is 2-connected using only local knowledge. Second, we present an encoding of this protocol using graph relabeling systems. The last contribution is the implementation of this protocol in the message-passing model. For a given graph G, where M is the number of its edges, N the number of its nodes, and Δ its degree, our algorithms have the following requirements: the first uses O(Δ×N²) steps and O(Δ×logΔ) bits per node; the second uses O(Δ×N²) messages, O(N²) time, and O(Δ×logΔ) bits per node. Furthermore, the studied network is semi-anonymous: only the root of the pre-constructed spanning tree needs to be identified.
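For reference, the property being tested (2-vertex-connectivity) can be checked centrally with a standard articulation-point DFS. This sketch is not the distributed protocol of the paper; it only makes the target property concrete: a graph is 2-connected iff it is connected, has at least three vertices, and has no articulation point.

```python
def is_biconnected(adj):
    """Check 2-vertex-connectivity of an undirected graph given as an
    adjacency dict {v: set of neighbours} via articulation points."""
    n = len(adj)
    if n < 3:
        return False
    disc, low = {}, {}
    timer = [0]
    found_cut = [False]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for w in adj[u]:
            if w == parent:
                continue
            if w in disc:            # back edge
                low[u] = min(low[u], disc[w])
            else:                    # tree edge
                children += 1
                dfs(w, u)
                low[u] = min(low[u], low[w])
                if parent is not None and low[w] >= disc[u]:
                    found_cut[0] = True
        if parent is None and children > 1:
            found_cut[0] = True

    root = next(iter(adj))
    dfs(root, None)
    # Connected (all vertices reached) and no cut vertex found.
    return len(disc) == n and not found_cut[0]
```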
Abstract: In this paper we present a novel approach for wavelet compression of electrocardiogram (ECG) signals based on the set partitioning in hierarchical trees (SPIHT) coding algorithm. The SPIHT algorithm has achieved prominent success in image compression; here we use a version modified for one-dimensional signals. We applied the wavelet transform with SPIHT coding to different records of the MIT-BIH database. The results show the high efficiency of this method for ECG compression.
Abstract: A number of competing methodologies have been developed to identify genes and classify DNA sequences into coding and non-coding sequences. This classification process is fundamental in gene-finding and gene-annotation tools and is one of the most challenging tasks in bioinformatics and computational biology. An information-theoretic measure based on mutual information has shown good accuracy in classifying DNA sequences as coding or non-coding. In this paper we describe a species-independent iterative approach that distinguishes coding from non-coding sequences using the mutual information measure (MIM). A set of sixty prokaryotes is used to extract universal training data. To facilitate comparison with the published results of other researchers, a test set of 51 bacterial and archaeal genomes was used to evaluate MIM. The results demonstrate that MIM produces superior results while remaining species-independent.
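A mutual-information measure over a DNA sequence can be sketched as the mutual information between nucleotides k positions apart; coding regions typically show a period-3 pattern in this quantity. This is a generic illustration of such a measure, not necessarily the exact MIM definition used in the paper.

```python
import math
from collections import Counter

def mutual_information(seq, k):
    """Mutual information (in bits) between nucleotides k positions
    apart in a DNA string, estimated from empirical pair frequencies."""
    pairs = [(seq[i], seq[i + k]) for i in range(len(seq) - k)]
    total = len(pairs)
    p_xy = Counter(pairs)             # joint counts
    p_x = Counter(a for a, _ in pairs)  # marginal counts, first base
    p_y = Counter(b for _, b in pairs)  # marginal counts, second base
    mi = 0.0
    for (a, b), c in p_xy.items():
        pxy = c / total
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), in count form.
        mi += pxy * math.log2(c * total / (p_x[a] * p_y[b]))
    return mi
```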
Abstract: In this paper we propose three-stage and two-stage still gray-scale image compressors based on block truncation coding (BTC). In our schemes, we employ a combination of four techniques to reduce the bit rate: quad-tree segmentation, bit-plane omission, bit-plane coding using 32 visual patterns, and interpolative bit-plane coding. The experimental results show that the proposed schemes achieve an average bit rate of 0.46 bits per pixel (bpp) for standard gray-scale images with an average PSNR value of 30.25 dB, which is better than the results of existing similar BTC-based methods.
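For context, the baseline BTC step that these schemes build on can be sketched as follows: each block is reduced to a binary bit plane plus two reconstruction levels. This uses the common absolute-moment (AMBTC) variant as an illustration; the paper's four bit-rate-reduction techniques are not reproduced here.

```python
import numpy as np

def btc_encode(block):
    """AMBTC-style encoding of one gray-scale block: a bit plane plus
    two reconstruction levels (means of the low and high groups)."""
    m = block.mean()
    plane = block >= m
    high = block[plane].mean() if plane.any() else m
    low = block[~plane].mean() if (~plane).any() else m
    return plane, low, high

def btc_decode(plane, low, high):
    """Rebuild the block from the bit plane and the two levels."""
    return np.where(plane, high, low)
```

Bit-plane omission, for instance, would drop `plane` entirely for near-uniform blocks and transmit only the mean.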
Abstract: A new hybrid coding method for compressing animated polygonal meshes is presented. This paper assumes a simple representation of the geometric data: a temporal sequence of polygonal meshes, one for each discrete frame of the animated sequence. The method combines delta coding with an octree-based method: both approaches are applied to each frame in the animation sequence in parallel, and the one that produces the smaller encoded file is chosen to encode the current frame. Given the same quality requirement, the hybrid coding method achieves a much higher compression ratio than the octree-only or delta-only methods, representing 3D animated sequences with higher compression factors while maintaining reasonable quality. It is easy to implement and has a low-cost encoding process and a fast decoding process, which make it a better choice for real-time applications.
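The delta-coding half of the hybrid can be sketched directly: the first frame's vertex positions are stored raw, and each later frame is stored as per-vertex differences from its predecessor. This is a lossless illustration only; the paper's octree branch and its per-frame selection logic are not shown.

```python
import numpy as np

def delta_encode(frames):
    """Delta-code a sequence of (V x 3) vertex arrays: first frame raw,
    each later frame as per-vertex differences from its predecessor."""
    deltas = [frames[0].copy()]
    for prev, cur in zip(frames, frames[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Invert delta_encode by cumulatively summing the differences."""
    frames = [deltas[0].copy()]
    for d in deltas[1:]:
        frames.append(frames[-1] + d)
    return frames
```

In slowly moving animations the deltas are small and compress well after quantization and entropy coding, which is what makes the per-frame choice between delta and octree coding worthwhile.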
Abstract: This paper provides a flexible way of controlling the variable bit rate (VBR) of compressed digital video, applicable to the new H.264 video compression standard. The entire video sequence is assessed in advance and the quantisation level is then set such that the bit rate (and thus the frame rate) remains within predetermined limits compatible with the bandwidth of the transmission system and the capabilities of the remote end, while at the same time providing constant quality similar to VBR encoding. A process for avoiding buffer starvation by selectively eliminating frames from the encoded output when the frame rate is low (large number of bits per frame) is also described. Finally, the problem of buffer overflow is solved by selectively eliminating frames from the input received by the decoder. The decoder detects the omission of frames and resynchronizes the transmission by monitoring time stamps and repeating frames if necessary.
Abstract: Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization, a hybrid of two product-code vector quantization techniques: multistage vector quantization and switched split vector quantization. The Multi Switched Split Vector Quantization technique quantizes the linear predictive coefficients in terms of line spectral frequencies. The results show that Multi Switched Split Vector Quantization provides a better trade-off between bit rate, spectral distortion performance, computational complexity, and memory requirements than switched split vector quantization, multistage vector quantization, and split vector quantization. By employing the switching technique at each stage of the vector quantizer, the spectral distortion, computational complexity, and memory requirements are greatly reduced. Spectral distortion is measured in dB, computational complexity in floating-point operations (flops), and memory requirements in floats.
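The "split" part of these product-code quantizers can be sketched as follows: the LSF vector is divided into sub-vectors, each quantized against its own small codebook, so codebook size grows with the sum rather than the product of the parts. The codebooks and split sizes below are illustrative, and the switching and multistage layers of the paper's scheme are not shown.

```python
import numpy as np

def split_vq_quantize(vec, codebooks, split_sizes):
    """Quantize a vector by splitting it into sub-vectors, each matched
    against its own codebook (nearest neighbour, Euclidean distance).
    Returns the quantized vector and the chosen codebook indices."""
    out, idx, start = [], [], 0
    for size, cb in zip(split_sizes, codebooks):
        part = vec[start:start + size]
        d = np.linalg.norm(cb - part, axis=1)
        best = int(d.argmin())
        idx.append(best)
        out.append(cb[best])
        start += size
    return np.concatenate(out), idx
```

A switched variant would first pick one of several codebook sets based on the input vector, and a multistage variant would quantize the residual `vec - q` again with further stages.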
Abstract: We report the model adopted by our continuous speech recognition system for the Arabic language, SySRA, and the results obtained so far. This system uses the Arabdic-10 database, a word corpus for Arabic that was manually segmented. Phonetic decoding is performed by an expert system whose knowledge base is expressed as production rules; this expert system transforms a vocal signal into a phonetic lattice. The higher level of the system recognizes the lattice thus obtained and renders it as written sentences (orthographic form). This level first contains the lexical analyzer, which is the recognition module itself. We subjected this analyzer to a set of spectrograms obtained by dictating a score of sentences in Arabic. The recognition rate for these sentences is about 70%, which is, to our knowledge, the best result for the recognition of Arabic. The test set consists of twenty sentences from four speakers who did not take part in the training.
Abstract: In this paper we propose a Multiple Description Image Coding (MDIC) scheme that generates two compressed, rate-balanced descriptions in the wavelet domain (Daubechies biorthogonal (9,7) wavelet) using an optimal pairwise correlating transform, applying the Generalized Multiple Description Coding (GMDC) method to image coding in the wavelet domain. GMDC produces statistically correlated streams such that lost streams can be estimated from the received data. Our performance tests show that the proposed method gives greater improvement and better quality of the reconstructed image when the wavelet coefficients are modeled by a Gaussian scale mixture (GSM) rather than a Gaussian model.
Abstract: In this paper, we address the problem of reducing the switching activity (SA) in on-chip buses through the use of a bus binding technique in high-level synthesis. While many binding techniques to reduce the SA exist, we present a technique for reducing it further. Our proposed method combines bus binding and data sequence reordering to explore a wider solution space. The problem is formulated as a multiple traveling salesman problem and solved using a simulated annealing technique. The experimental results reveal that a binding solution obtained with the proposed method reduces the switching activity by 5.6-27.2% (18.0% on average) and 2.6-12.7% (6.8% on average) compared with conventional binding-only and hybrid binding-encoding methods, respectively.
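The quantity being minimized, the number of bit transitions on the bus, and the effect of reordering can be sketched as follows. A cheap greedy nearest-neighbour heuristic stands in for the paper's simulated-annealing mTSP solver; it is an illustration of why reordering helps, not the proposed method.

```python
def switching_activity(words):
    """Total bit transitions on a bus driven by the word sequence."""
    return sum(bin(a ^ b).count('1') for a, b in zip(words, words[1:]))

def greedy_reorder(words):
    """Greedy nearest-neighbour reordering: start from the first word,
    repeatedly append the remaining word with the smallest Hamming
    distance to the current last word (a stand-in for the paper's
    simulated-annealing mTSP formulation)."""
    remaining = list(words)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda w: bin(order[-1] ^ w).count('1'))
        remaining.remove(nxt)
        order.append(nxt)
    return order
```

In the full problem, bus binding additionally decides which data transfers share which bus, so the solver jointly optimizes the partition and the per-bus ordering.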
Abstract: The protection of the contents of digital products is referred to as content authentication. In some applications, being able to authenticate a digital product can be essential; for example, if a digital product is used as a piece of evidence in court, its integrity could mean life or death for the accused. Generally, the problem of content authentication can be solved using semi-fragile digital watermarking techniques. Recently, many authors have proposed Computer Generated Hologram watermarking (CGH watermarking) techniques. Starting from these studies, this paper proposes a semi-fragile Computer Generated Hologram coding technique that is able to detect malicious tampering while tolerating some incidental distortions. The proposed technique uses an encrypted image as the watermark and is well suited for digital image authentication.
Abstract: NFκB is a transcription factor regulating many functions of the vessel wall. Under normal conditions, NFκB shows diffuse cytoplasmic expression, suggesting that the system is inactive. Activation of NFκB provides a potential pathway for the rapid transcription of a variety of genes encoding cytokines, growth factors, adhesion molecules, and procoagulatory factors, and it is likely to play an important role in chronic inflammatory diseases, including atherosclerosis. Many stimuli can activate NFκB, including hyperlipidemia. We used 24 mice divided into 6 groups. A high-fat diet (HFD) was given ad libitum for 2, 4, and 6 months. The parameters in this study were the amount of NFκB activation, H2O2 as a reactive oxygen species (ROS), and VCAM-1 as a product of NFκB activation. The H2O2 colorimetric assay was performed directly using an anti-rat H2O2 ELISA kit. NFκB and VCAM-1 were detected in mouse aorta, measured by ELISA kit and immunohistochemistry. There were significant differences in H2O2, NFκB, and VCAM-1 levels after 2, 4, and 6 months of HFD induction. This suggests that HFD induces ROS formation and increases the activation of NFκB, a marker of atherosclerosis caused by hyperlipidemia, a classical atherosclerosis risk factor.
Abstract: In this paper, we propose a method to reduce quantization error. In H.264/AVC, low-pass filtering is applied to the neighboring samples of the current block to reduce quantization error. However, this filtering has a weakness: it is performed regardless of the prediction direction, and because it does not consider the prediction direction, it may not reduce quantization error effectively. The proposed method takes the prediction direction into account for the low-pass filtering and uses a threshold condition to reduce flag bits. Compared with the conventional method in H.264/AVC, the proposed method achieves an average bit-rate reduction of 1.534%, with reductions between 0.580% and 3.567% across the experiments.
Abstract: Encoding of information based on the synchronization of coupled chaotic Nd:YAG lasers in a master-slave configuration is studied numerically. Encoding, transmission, and decoding of information in optical chaotic communication over a single channel are presented. We analyze the robustness of encrypted audio transmission to channel noise. To illustrate this robustness, we present two cases of study: synchronization and transmission over a single channel without and with noise in the channel.
Abstract: A sequential decision problem, based on the task of identifying the species of trees given acoustic echo data collected from them, is considered with well-known stochastic classifiers, including single and mixture Gaussian models. Echoes are processed with a preprocessing stage based on a model of mammalian cochlear filtering, using a new discrete low-pass filter characteristic. Stopping-time performance of the sequential decision process is evaluated and compared. It is observed that the new low-pass filter processing results in faster sequential decisions.
Abstract: Based on an investigation of how the complexity of stereoscopic frame pairs affects stereoscopic video coding and transmission, a new rate control algorithm is presented. The proposed rate control algorithm operates on three levels: the stereoscopic group of pictures (SGOP) level, the stereoscopic frame (SFrame) level, and the frame level. A temporal-spatial frame complexity model is first established; in the bit allocation stage, the frame complexity, position significance, and reference property between the left and right frames are taken into account. Meanwhile, the target buffer is set according to the frame complexity. Experimental results show that the proposed method efficiently controls the bit rate and outperforms the fixed quantization parameter method from a rate-distortion perspective, with an average PSNR gain between rate-distortion curves (BDPSNR) of 0.21 dB.