Abstract: Test data compression is an efficient method for reducing test application cost. The problem of reducing test data has been addressed by researchers from three different angles: test data compression, Built-In Self-Test (BIST), and test set compaction. The latter two methods can enhance fault coverage, but at the cost of hardware overhead. The drawback of conventional methods is that, while they reduce test storage and test power, they apply no additional compression when the test data contain redundant run lengths. This paper presents a modified Run Length Coding (RLC) technique combined with Multilevel Selective Huffman Coding (MLSHC) to reduce test data volume, test pattern delivery time, and power dissipation in scan test applications: when a redundant run length is encountered, the preceding run symbol is replaced with a short codeword. Experimental results show that the presented method not only improves test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS-89 benchmarks show that our method outperforms most known techniques.
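As a rough illustration of the run-length stage feeding a selective Huffman coder (a minimal sketch, not the paper's MLSHC pipeline; the symbol alphabet and the choice of k are assumptions):

```python
from collections import Counter
import heapq

def run_lengths(bits):
    """Split a scan vector into (bit, run_length) symbols."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def selective_huffman(symbols, k=4):
    """Build Huffman codewords for the k most frequent symbols only;
    remaining symbols would be sent unencoded with a flag bit."""
    top = Counter(symbols).most_common(k)
    heap = [(n, idx, {s: ""}) for idx, (s, n) in enumerate(top)]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, i1, c1 = heapq.heappop(heap)
        n2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (n1 + n2, min(i1, i2), merged))
    return heap[0][2]

vec = "0000001111000000000011"
syms = run_lengths(vec)
print(syms)                      # [('0', 6), ('1', 4), ('0', 10), ('1', 2)]
print(selective_huffman(syms))   # short codewords for the frequent runs
```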
Abstract: A Wireless Body Area Network (WBAN) provides short-range wireless communication around the human body for applications such as wearable devices, entertainment, military systems, and especially medical devices. WBANs have attracted attention for continuous health monitoring, including diagnostic procedures, early detection of abnormal conditions, and prevention of emergency situations. Compared to a cellular network, interference between and within WBANs is more difficult to control because of limited power, limited computational capability, patient mobility, and the lack of cooperation among WBANs. In this paper, we compare the performance of resource allocation schemes based on several Pseudo Orthogonal Codewords (POCs) for mitigating inter-WBAN interference. POCs have previously been widely used as protocol sequences and optical orthogonal codes. Each POC has different auto- and cross-correlation properties and spectral efficiency depending on its construction. To distinguish different WBANs, several pseudo orthogonal patterns based on POCs are exploited for WBAN resource allocation. By simulating these pseudo orthogonal resource allocations in MATLAB, we obtain the performance of WBANs under different POCs and evaluate the suitability of each POC for resource allocation in WBAN systems.
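As background, the auto- and cross-correlation properties the abstract refers to can be checked numerically; a minimal sketch (the example codewords are invented for illustration and are not the paper's POC construction):

```python
import numpy as np

def periodic_correlation(x, y):
    """Periodic correlation of two equal-length {0,1} sequences over
    every cyclic shift; low peaks mean good (pseudo) orthogonality."""
    x, y = np.asarray(x), np.asarray(y)
    return np.array([np.sum(x * np.roll(y, s)) for s in range(len(x))])

# Two toy sparse codewords (hypothetical, not the paper's POCs)
c1 = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
c2 = [1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]

print(periodic_correlation(c1, c1))  # autocorrelation: peak at shift 0
print(periodic_correlation(c1, c2))  # cross-correlation: ideally small everywhere
```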
Abstract: When using modern Code Division Multiple Access (CDMA) in mobile communications, the system must be able to vary users' transmission rates in order to allocate bandwidth efficiently. In this work, Orthogonal Variable Spreading Factor (OVSF) codes are used, following the same principles applied in a low-rate superorthogonal turbo code, because of their variable-length properties. The introduced system is the Variable Rate Superorthogonal Turbo Code (VRSTC), where puncturing is not performed on the encoder's final output but rather before selecting the output, in order to achieve higher rates. Owing to bandwidth expansion, the codes outperform an ordinary turbo code on the AWGN channel. Simulation results show decreased performance compared to that obtained with Walsh-Hadamard codes. However, with OVSF codes the VRSTC system preserves the orthogonality of codewords while producing variable rate codes, in contrast to Walsh-Hadamard codes, where puncturing is usually performed on the final output.
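For reference, OVSF codes are generated by a binary code tree in which each code c spawns the children (c, c) and (c, -c); a minimal sketch of that standard construction (function names are our own):

```python
import numpy as np

def ovsf_layer(parents):
    """Expand one layer of the OVSF code tree: each code c
    spawns (c, c) and (c, -c), doubling the spreading factor."""
    children = []
    for c in parents:
        children.append(np.concatenate([c, c]))
        children.append(np.concatenate([c, -c]))
    return children

codes = [np.array([1])]          # SF = 1 root
for _ in range(3):               # grow to spreading factor 8
    codes = ovsf_layer(codes)

# Codes of the same spreading factor are mutually orthogonal
print(np.array(codes) @ np.array(codes).T)   # 8 * identity matrix
```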
Abstract: In this paper, a novel multipurpose audio watermarking algorithm is proposed, based on Vector Quantization (VQ) in the Discrete Cosine Transform (DCT) domain using codeword labeling and an index-bit-constrained method. The algorithm fulfills the requirements of both copyright protection and content-integrity authentication for multimedia artworks at the same time. The robust watermark is embedded in the middle-frequency coefficients of the DCT transform during the labeled-codeword vector quantization procedure. The fragile watermark is embedded into the indices of the high-frequency DCT coefficients using the constrained-index vector quantization method, for the purpose of integrity authentication of the original audio signals. Both the robust and the fragile watermarks can be extracted without the original audio signals, and simulation results show that our algorithm is effective with regard to transparency, robustness, and the authentication requirements.
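The paper's VQ labeling scheme is not reproduced here; as a simplified stand-in, the following sketch embeds one bit per frame into a mid-frequency DCT coefficient by quantization (a common illustration of DCT-domain embedding; the frame size, coefficient index, and step size are arbitrary assumptions):

```python
import numpy as np
from scipy.fftpack import dct, idct

def embed_bit(frame, bit, coeff=12, step=0.5):
    """Embed one watermark bit into a mid-frequency DCT coefficient
    by rounding it to an even (bit 0) or odd (bit 1) multiple of step."""
    c = dct(frame, norm="ortho")
    q = np.round(c[coeff] / step)
    if int(q) % 2 != bit:
        q += 1
    c[coeff] = q * step
    return idct(c, norm="ortho")

def extract_bit(frame, coeff=12, step=0.5):
    """Recover the bit from the parity of the quantized coefficient."""
    c = dct(frame, norm="ortho")
    return int(np.round(c[coeff] / step)) % 2

rng = np.random.default_rng(0)
audio_frame = rng.normal(size=256)   # a toy stand-in for one audio frame
marked = embed_bit(audio_frame, 1)
print(extract_bit(marked))           # -> 1
```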
Abstract: In this paper, a block code that minimizes the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals is proposed. It is shown that cyclic shift and codeword inversion cause no change to the peak envelope power. The encoding rule for the proposed code comprises searching for a seed codeword, shifting the register elements, and determining codeword inversion, eliminating the look-up table for one-to-one correspondence between the source and the coded data. Simulation results show that OFDM systems with the proposed code always have the minimum PAPR.
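The invariance claim is easy to check numerically: a cyclic shift of the codeword in the frequency domain only multiplies each time-domain sample by a unit-magnitude phase factor, and inversion only negates the signal, so neither changes the envelope power. A minimal sketch (the codeword and sizes are arbitrary):

```python
import numpy as np

def papr_db(codeword):
    """PAPR of the critically sampled OFDM time signal (IFFT of the codeword)."""
    x = np.fft.ifft(codeword)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
c = rng.choice([-1.0, 1.0], size=16)   # arbitrary BPSK codeword

print(papr_db(c))                # reference PAPR
print(papr_db(np.roll(c, 5)))    # cyclic shift -> samples pick up only a
                                 #   unit-magnitude phase: same PAPR
print(papr_db(-c))               # codeword inversion -> negation: same PAPR
```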
Abstract: We present a novel construction of 16-QAM codewords of length n = 2^k. The number of constructed codewords is 16^2 × [4^(k-1) × k - k + 1]. When these constructed codewords are utilized as a code in OFDM systems, their peak-to-mean envelope power ratios (PMEPR) are bounded above by 3.6. The principle of our scheme is illustrated with a four-subcarrier example.
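PMEPR is measured on the (near-)continuous envelope, which is approximated by oversampling; a minimal sketch of that measurement for a four-subcarrier 16-QAM codeword (the codeword here is random, not one produced by the paper's construction):

```python
import numpy as np

def pmepr(codeword, oversample=16):
    """Peak-to-mean envelope power ratio, using an oversampled IFFT
    to approximate the continuous-time OFDM envelope."""
    x = np.fft.ifft(codeword, len(codeword) * oversample)
    p = np.abs(x) ** 2
    return p.max() / p.mean()

# 16-QAM alphabet: {-3, -1, 1, 3} on each axis
qam16 = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])

rng = np.random.default_rng(2)
c = rng.choice(qam16, size=4)   # an arbitrary four-subcarrier codeword
print(pmepr(c))                 # the paper's construction guarantees <= 3.6;
                                # a random codeword need not satisfy the bound
```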
Abstract: In this paper, we evaluate the choice of suitable quantization characteristics for both the decoder messages and the received samples in Low Density Parity Check (LDPC) coded systems using M-QAM (Quadrature Amplitude Modulation) schemes. The analysis involves the demapper block, which provides the initial likelihood values for the decoder, by relating its quantization strategy to that of the decoder. A mapping strategy refers to the grouping of bits within a codeword, where each m-bit group is used to select a 2^m-ary signal in accordance with the signal labels. We further evaluate the system with mapping strategies such as Consecutive-Bit (CB) and Bit-Reliability (BR). A new demapper version, based on approximate expressions, is also presented to yield a low-complexity hardware implementation.
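The "approximate expressions" in such demappers are commonly max-log LLRs, where the sum over constellation points is replaced by the closest point in each bit subset; a minimal sketch of that standard approximation (the constellation, labeling, and noise model are illustrative assumptions, not the paper's):

```python
import numpy as np

def maxlog_llr(y, points, labels, m, noise_var):
    """Max-log LLR per bit: squared distance to the nearest point with
    that bit = 1 minus the nearest with that bit = 0, over the noise variance."""
    d2 = np.abs(y - points) ** 2
    llr = np.empty(m)
    for b in range(m):
        bit = (labels >> (m - 1 - b)) & 1
        llr[b] = (d2[bit == 1].min() - d2[bit == 0].min()) / noise_var
    return llr

# Gray-labeled 4-QAM (QPSK) as a toy example
points = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([0b00, 0b01, 0b11, 0b10])

y = 0.9 + 0.2j   # a received sample
print(maxlog_llr(y, points, labels, m=2, noise_var=0.5))
```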
Abstract: A novel file-splitting technique for reducing the nth-order entropy of text files is proposed. The technique is based on mapping the original text file into a non-ASCII binary file using a new codeword assignment method; the resulting binary file is then split into several subfiles, each of which contains one or more bits from each codeword of the mapped binary file. The statistical properties of the subfiles are studied, and it is found that they reflect the statistical properties of the original text file, which is not the case when the ASCII code is used as the mapper. The nth-order entropy of these subfiles is determined, and it is found that the sum of their entropies is less than that of the original text file for the same extension orders. These statistical properties of the resulting subfiles can be exploited to achieve better compression ratios when conventional compression techniques are applied to the subfiles individually and on a bit-wise rather than a character-wise basis.
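A minimal sketch of the splitting idea (the codeword assignment here is a plain fixed-length enumeration by character frequency, which is our own stand-in; the paper's mapping is its own):

```python
from collections import Counter
from math import log2

def entropy(seq):
    """First-order (per-symbol) entropy of a sequence, in bits."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in counts.values())

def split_bitplanes(text, width=7):
    """Map each character to a fixed-width codeword (rank by frequency,
    a stand-in for the paper's assignment), then split the bit stream
    into 'width' subfiles, one per bit position of every codeword."""
    rank = {ch: i for i, (ch, _) in enumerate(Counter(text).most_common())}
    return ["".join(format(rank[ch], f"0{width}b")[b] for ch in text)
            for b in range(width)]

text = "the quick brown fox jumps over the lazy dog " * 50
planes = split_bitplanes(text)
print(entropy(text))                      # entropy of the original file
print(sum(entropy(p) for p in planes))    # sum of subfile entropies
```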
Abstract: Classical Bose-Chaudhuri-Hocquenghem (BCH) codes C that contain their dual codes can be used to construct quantum stabilizer codes; this chapter studies the properties of such codes. It has been shown that a BCH code of length n which contains its dual code satisfies a bound on the weight of any non-zero codeword in C, and the converse is also true. One major difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. To address this difficulty, many good quantum error-correcting codes have been derived as binary stabilizer codes. We shed more light on the structure of dual-containing BCH codes. These results make it possible to determine the parameters of quantum BCH codes in terms of the weight of non-zero dual codewords.
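Whether a binary code contains its dual can be checked directly: C⊥ ⊆ C exactly when H·Hᵀ = 0 over GF(2), with H a parity-check matrix of C. A minimal sketch using the [7,4] Hamming code, a BCH code that contains its [7,3] simplex dual:

```python
import numpy as np

def is_dual_containing(H):
    """C contains its dual iff every row of H (a generator of the dual)
    is itself a codeword, i.e. H @ H.T == 0 over GF(2)."""
    return not np.any((H @ H.T) % 2)

# Parity-check matrix of the [7,4] Hamming code
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

print(is_dual_containing(H))   # True -> usable for a CSS/stabilizer code
```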
Abstract: This paper presents a comparison of H-ARQ techniques for OFDM systems with a new family of non-binary LDPC (NB-LDPC) codes developed within the EU FP7 DAVINCI project. The punctured NB-LDPC codes have been used in a simulated model of the transmission system. The link-level performance has been evaluated in terms of spectral efficiency, codeword error rate, and average number of retransmissions. The NB-LDPC codes can be easily and effectively implemented with different retransmission methods, which are needed when correct decoding of a codeword fails. Here, the Optimal Symbol Selection method is proposed as a Chase Combining technique.
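In Chase combining, each retransmission repeats the same codeword and the receiver accumulates soft information before re-decoding; a minimal sketch at the LLR level (the channel and decoder are stubs, and all names are our own, not the paper's Optimal Symbol Selection method):

```python
import numpy as np

def chase_combine(llr_rounds):
    """Chase combining at the soft level: LLRs from repeated
    transmissions of the same codeword simply add."""
    return np.sum(llr_rounds, axis=0)

def harq_receive(receive_round, try_decode, max_rounds=4):
    """Accumulate LLRs across retransmissions, re-decoding the
    combined LLRs after each round until decoding succeeds."""
    rounds = []
    for _ in range(max_rounds):
        rounds.append(receive_round())   # fresh LLRs, same codeword
        ok, word = try_decode(chase_combine(rounds))
        if ok:
            return word, len(rounds)
    return None, max_rounds

# Toy usage: all-zero codeword; the stub 'decoder' succeeds once every
# combined LLR is positive
rng = np.random.default_rng(3)
recv = lambda: 1.0 + 0.5 * rng.normal(size=8)
dec = lambda llr: (bool((llr > 0).all()), (llr < 0).astype(int))
print(harq_receive(recv, dec))   # (decoded word, rounds used) or (None, 4)
```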
Abstract: A systematic and exhaustive method based on the group structure of a unitary Lie algebra is proposed to generate an enormous number of quantum codes. With respect to this algebraic structure, the orthogonality condition, which is the central rule for generating quantum codes, is proved to be fully equivalent to the distinguishability of the elements in the structure. In addition, four types of quantum codes are classified according to the relation between the codeword operators and some initial quantum state. By linking the unitary Lie algebra with the additive group, the classical correspondences of some of these quantum codes can be obtained.
Abstract: HSDPA is a new feature introduced in the Release-5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay compared to the Release-99 DSCH. To date, the HSDPA system has used turbo coding, one of the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, where the transmission distance itself introduces latency due to the limited speed of light. Hence, this paper proposes to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and the decoding complexity. LDPC coding does increase the encoding complexity; however, although the transmitter complexity increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate. In this paper, the LDPC encoder is implemented using a sparse parity-check matrix H to generate a codeword, and the Belief Propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, with only a small increase in Eb/N0, which is not possible in turbo coding. The same BER was also achieved using fewer iterations, and hence the latency and receiver complexity are reduced for LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable, more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and the number of users served will improve significantly.
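As background on the decoding side, here is a minimal sketch of hard-decision bit-flipping, a simplified relative of belief propagation (not the soft BP decoder the paper uses), on a toy sparse H:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping on a binary LDPC-style code:
    repeatedly flip the bit involved in the most unsatisfied checks."""
    x = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x                  # all parity checks satisfied
        votes = H.T @ syndrome        # unsatisfied checks touching each bit
        x[np.argmax(votes)] ^= 1      # flip the worst offender
    return x

# Toy sparse parity-check matrix (illustrative, not a production LDPC code)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

c = np.zeros(6, dtype=int)    # the all-zero codeword
r = c.copy(); r[2] ^= 1       # one bit flipped by the channel
print(bit_flip_decode(H, r))  # -> [0 0 0 0 0 0]
```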
Abstract: Hand gesture recognition is an active area of research in the vision community, mainly for the purposes of sign language recognition and Human Computer Interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the automatic segmentation and preprocessing stage, color and a 3D depth map are used to detect the hands, whose trajectories are then tracked using the Mean-shift algorithm and a Kalman filter. In the feature extraction stage, combined 3D features of location, orientation, and velocity with respect to Cartesian coordinate systems are used. Then, k-means clustering is employed to obtain the HMM codewords. In the final stage, classification, the Baum-Welch algorithm is used to fully train the HMM parameters. Gestures of alphabet characters and numbers are recognized using the Left-Right Banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize hand gestures with a 98.33% recognition rate.
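Discrete HMMs require each continuous feature vector to be replaced by a codeword index; a minimal sketch of the k-means vector quantization step (feature dimensions, cluster count, and data are illustrative assumptions):

```python
import numpy as np

def kmeans_codebook(features, k=16, iters=50, seed=0):
    """Cluster feature vectors with plain k-means; the cluster centers
    form the codebook of a discrete HMM."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centers[j] = features[idx == j].mean(axis=0)
    return centers

def quantize(features, centers):
    """Map each feature vector to the index of its nearest codeword."""
    return np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)

# Toy (location, orientation, velocity) features for one gesture sequence
rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 3))
codebook = kmeans_codebook(feats, k=8)
print(quantize(feats, codebook))   # discrete observation sequence for the HMM
```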
Abstract: Gesture recognition is a challenging task of extracting meaningful gestures from continuous hand motion. In this paper, we propose an automatic system that recognizes isolated gestures, as well as meaningful gestures within continuous hand motion, for Arabic numbers from 0 to 9 in real time based on Hidden Markov Models (HMMs). To handle isolated gestures, HMMs with Ergodic, Left-Right (LR), and Left-Right Banded (LRB) topologies are applied to the discrete feature vectors extracted from stereo color image sequences. These topologies are evaluated with different numbers of states, ranging from 3 to 10. A new system is developed to recognize meaningful gestures based on zero-codeword detection with static velocity for continuous gestures. The LRB topology, in conjunction with the Baum-Welch (BW) algorithm for training and the forward algorithm with the Viterbi path for testing, gives the best performance. Experimental results show that the proposed system can successfully recognize isolated and meaningful gestures, achieving average recognition rates of 98.6% and 94.29%, respectively.
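A hedged sketch of the segmentation idea: frames with near-zero hand velocity quantize to a dedicated "zero codeword", and runs of it delimit meaningful gestures (the threshold, run length, and stream are assumptions, not the paper's parameters):

```python
def segment_by_zero_codeword(codewords, zero_cw=0, min_pause=5):
    """Split a codeword stream into gestures at runs of the zero codeword
    (frames whose velocity quantized to 'zero', i.e. a static hand)."""
    gestures, current, pause = [], [], 0
    for cw in codewords:
        if cw == zero_cw:
            pause += 1
            if pause >= min_pause and current:
                gestures.append(current)
                current = []
        else:
            pause = 0
            current.append(cw)
    if current:
        gestures.append(current)
    return gestures

stream = [0]*6 + [3, 3, 5, 2]*2 + [0]*7 + [4, 1, 1]*3 + [0]*6
print(segment_by_zero_codeword(stream))   # two gesture segments
```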
Abstract: In the present communication, we develop suitable constraints for the given mean codeword length and the measures of entropy. This development proves that Rényi's entropy gives the minimum value of the logarithm of the harmonic mean and of the logarithm of the power mean. We also develop an important relation between the best 1:1 code and the uniquely decipherable code by using different measures of entropy.
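For reference, the quantities involved are standard; a sketch of the definitions and one classical correspondence of the kind the abstract refers to (D is the code alphabet size, l_i the codeword lengths):

```latex
% Rényi entropy of order \alpha (D-ary logarithm) and Campbell's
% exponentiated mean codeword length L(t):
\[
  H_\alpha(P) = \frac{1}{1-\alpha} \log_D \sum_{i=1}^{N} p_i^{\alpha},
  \qquad
  L(t) = \frac{1}{t} \log_D \sum_{i=1}^{N} p_i \, D^{\,t l_i}, \quad t > 0.
\]
% For every uniquely decipherable code (Kraft: \sum_i D^{-l_i} \le 1),
% Campbell's theorem gives the lower bound with \alpha = 1/(1+t):
\[
  L(t) \;\ge\; H_{1/(1+t)}(P).
\]
```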
Abstract: The objective of the present communication is to develop new genuine exponentiated mean codeword lengths and to study in depth the problem of correspondence between well-known measures of entropy and mean codeword lengths. With the help of some standard measures of entropy, we illustrate such a correspondence. In the literature, one comes across many inequalities that are frequently used in information theory. Keeping this in mind, we develop such inequalities via a coding-theoretic approach.
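As a reminder of the coding-theoretic machinery such correspondences rest on, a sketch of the two standard inequalities for the ordinary (Shannon) case, which the exponentiated lengths generalize:

```latex
% Kraft inequality for uniquely decipherable codes over a D-ary alphabet:
\[
  \sum_{i=1}^{N} D^{-l_i} \le 1.
\]
% Shannon's noiseless coding theorem for the mean length L = \sum_i p_i l_i:
\[
  H_D(P) \le L < H_D(P) + 1,
  \qquad
  H_D(P) = -\sum_{i=1}^{N} p_i \log_D p_i.
\]
```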