Abstract: In this study we focus on improving the performance of a cue-based Motor Imagery Brain-Computer Interface (BCI). For this purpose, a data fusion approach is applied to the outputs of different classifiers to make the best decision. In the first step, the Distinction Sensitive Learning Vector Quantization method is used for feature selection to determine the most informative frequencies in the recorded signals, and its performance is evaluated against a frequency search method. The informative features are then extracted by the wavelet packet transform. In the next step, five different types of classifiers are applied. The methodologies are tested on BCI Competition II dataset III; the best obtained accuracy is 85% and the best kappa value is 0.8. In the final step, the ordered weighted averaging (OWA) method is used to properly aggregate the classifier outputs. Using OWA enhances the system accuracy to 95% and the kappa value to 0.9, while the OWA computation itself takes only 50 milliseconds.
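The abstract does not detail the aggregation step, but the standard OWA operator is simple to illustrate: classifier scores are sorted and combined with position-dependent weights. A minimal sketch in Python, where the weights and scores are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted averaging: sort scores descending, then take
    the weighted sum with positional (not per-classifier) weights."""
    scores = np.sort(scores)[::-1]          # order statistics, largest first
    weights = np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0)   # OWA weights must sum to 1
    return float(scores @ weights)

# Hypothetical per-class confidence scores from 5 classifiers.
class_a = [0.9, 0.7, 0.8, 0.6, 0.95]
class_b = [0.1, 0.3, 0.2, 0.4, 0.05]
w = [0.4, 0.3, 0.15, 0.1, 0.05]             # weights biased toward high scores

decision = "A" if owa(class_a, w) > owa(class_b, w) else "B"
print(decision)
```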
Abstract: Binary phase-only filter digital watermarking embeds the phase information of the discrete Fourier transform of the image into the corresponding magnitudes for better image authentication. This paper proposes an approach for implementing watermark embedding by quantizing the magnitudes, discusses how to regulate the quantization steps based on the frequencies of the magnitude coefficients carrying the watermark, and shows how to embed the watermark by quantization in the low-frequency band. Theoretical analysis and simulation results show that quantization-based watermark embedding effectively improves the flexibility, security, watermark imperceptibility and detection performance of binary phase-only filter digital watermarking, and also increases the robustness against JPEG compression to some extent.
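The paper's exact embedding rule is not reproduced in the abstract; a common way to embed one bit by quantizing a magnitude coefficient is quantization index modulation, sketched below under an assumed step size (the parity-of-cell convention is illustrative, not the paper's scheme):

```python
import numpy as np

def embed_bit(magnitude, bit, step):
    """Quantize a DFT magnitude onto one of two interleaved lattices:
    even multiples of step encode 0, odd multiples encode 1."""
    q = np.round(magnitude / step)
    if int(q) % 2 != bit:                 # move to the nearest cell of the right parity
        q += 1 if magnitude / step > q else -1
    return q * step

def extract_bit(magnitude, step):
    """Recover the bit from the parity of the quantization cell."""
    return int(np.round(magnitude / step)) % 2

step = 4.0                                # assumed quantization step
m = 123.7                                 # toy magnitude coefficient
for b in (0, 1):
    watermarked = embed_bit(m, b, step)
    assert extract_bit(watermarked, step) == b
```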
Abstract: Although the level crossing concept has been the subject of intensive investigation over the last few years, certain problems of great interest remain unsolved. One of these concerns the distribution of threshold levels. This paper presents new threshold level allocation schemes for level crossing based on nonuniform sampling. Intuitively, it is more reasonable if the information-rich regions of the signal are sampled finer and those with sparse information are sampled coarser. To achieve this objective, we propose non-linear quantization functions which dynamically assign the number of quantization levels depending on the importance of the given amplitude range. Two new approaches to determine the importance of a given amplitude segment are presented, based on exponential and logarithmic functions respectively. Various aspects of the proposed techniques are discussed and experimentally validated, and their efficacy is investigated by comparison with uniform sampling.
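The exact allocation functions are not given in the abstract; as an illustration of the underlying idea, the sketch below places level-crossing thresholds on a geometric (logarithmically spaced) grid so that one end of the amplitude range is covered finer (the amplitude range and level count are assumed values):

```python
import numpy as np

def log_levels(a_min, a_max, n_levels):
    """Geometrically spaced thresholds: the spacing grows with amplitude,
    so the low-amplitude region receives finer levels."""
    return a_min * (a_max / a_min) ** np.linspace(0.0, 1.0, n_levels)

levels = log_levels(0.01, 1.0, 16)   # assumed dynamic range and level count
print(np.round(levels, 4))
print(np.round(np.diff(levels), 4))  # spacing widens toward high amplitudes
```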
Abstract: In this paper, we evaluate the choice of suitable quantization characteristics for both the decoder messages and the received samples in Low Density Parity Check (LDPC) coded systems using M-QAM (Quadrature Amplitude Modulation) schemes. The analysis involves the demapper block that provides the initial likelihood values for the decoder, relating its quantization strategy to that of the decoder. A mapping strategy refers to the grouping of bits within a codeword, where each m-bit group is used to select a 2^m-ary signal in accordance with the signal labels. Further, we evaluate the system with mapping strategies such as Consecutive-Bit (CB) and Bit-Reliability (BR). A new demapper version, based on approximate expressions, is also presented to yield a low complexity hardware implementation.
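The paper's approximate expressions are not reproduced here; as a generic illustration, the widely used max-log approximation of the bit LLRs for one Gray-labelled PAM axis of a square M-QAM constellation can be sketched as follows:

```python
import numpy as np

# Gray-labelled 4-PAM (one axis of 16-QAM): symbol -> 2-bit label.
symbols = np.array([-3.0, -1.0, 1.0, 3.0])
labels  = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])   # Gray code

def maxlog_llr(y, noise_var):
    """Max-log LLR per bit: log P(b=0)/P(b=1) approximated by the
    difference of minimum scaled squared distances."""
    d2 = (y - symbols) ** 2 / noise_var        # scaled squared distances
    llrs = []
    for b in range(labels.shape[1]):
        d0 = d2[labels[:, b] == 0].min()       # best symbol with bit = 0
        d1 = d2[labels[:, b] == 1].min()       # best symbol with bit = 1
        llrs.append(d1 - d0)                   # positive value favours bit 0
    return np.array(llrs)

print(maxlog_llr(y=0.8, noise_var=0.5))        # both bits lean toward 1
```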
Abstract: Conventional image posterization methods occasionally fail to preserve the shape and color of objects due to ineffective color reduction. This paper proposes a new image posterization method that uses modified color quantization to preserve the shape and color of objects, and color contrast enhancement to improve lightness contrast and saturation. Experimental results show that our proposed method provides visually more satisfactory posterization results than the conventional method.
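For reference, the conventional baseline the paper improves on amounts to uniform per-channel color quantization; a minimal sketch (the level count is an assumed parameter):

```python
import numpy as np

def posterize(img, levels):
    """Uniformly quantize each 8-bit channel to the given number of
    levels; this is the conventional posterization baseline."""
    step = 256 // levels
    return (img // step) * step + step // 2   # map each bin to its centre

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # toy image
print(posterize(rgb, levels=4))
```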
Abstract: To compress, improve bit error performance and also enhance 2-D images, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N denotes the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationships of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA), and a Turbo decoder, with an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. The scheme yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased up to N-1 times by simply choosing any number of bit slices, at the cost of resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
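The turbo coding and ICIPA blocks are beyond a short sketch, but the bit-plane partitioning that IC-TS starts from, splitting an image with 2^N gray levels into N binary planes and reassembling them losslessly, is easy to illustrate:

```python
import numpy as np

def to_bit_planes(img, n_bits):
    """Split an image with 2**n_bits gray levels into n_bits binary planes."""
    return [(img >> b) & 1 for b in range(n_bits)]   # plane b holds bit b of each pixel

def from_bit_planes(planes):
    """Reassemble pixels from binary planes (inverse of the split)."""
    return sum(p << b for b, p in enumerate(planes))

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # toy 8-bit image
planes = to_bit_planes(img, 8)
assert np.array_equal(from_bit_planes(planes), img)        # lossless round trip
```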
Abstract: In this paper, an image compression method using a hybrid vector quantization scheme that combines Multistage Vector Quantization (MSVQ) and Pyramid Vector Quantization (PVQ) is introduced, taking advantage of the strengths of both. In the wavelet decomposition of an image, most of the information often resides in the lowest frequency subband. MSVQ is therefore applied to the significant low frequency coefficients, while PVQ is used to quantize the coefficients of the other, high frequency subbands. The wavelet coefficients are derived using the lifting scheme. The main aim of the proposed scheme is to achieve a high compression ratio without much compromise in image quality. The results are compared with an existing image compression scheme using MSVQ.
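The essential MSVQ mechanism is a cascade in which each stage quantizes the residual left by the previous one; a two-stage sketch with random placeholder codebooks (a real system would train them, e.g. with LBG):

```python
import numpy as np

def nearest(codebook, x):
    """Index of the codeword closest to x in Euclidean distance."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

rng = np.random.default_rng(0)
stage1 = rng.normal(size=(16, 4))          # placeholder stage-1 codebook
stage2 = rng.normal(size=(16, 4)) * 0.1    # placeholder residual codebook

x = rng.normal(size=4)                     # vector to be quantized
i1 = nearest(stage1, x)                    # coarse quantization
residual = x - stage1[i1]
i2 = nearest(stage2, residual)             # quantize what stage 1 missed
x_hat = stage1[i1] + stage2[i2]            # decoder sums both codewords
print(np.linalg.norm(x - x_hat))           # remaining quantization error
```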
Abstract: Starting from a biologically inspired framework, Gabor filters were built up from retinal filters via LMSE algorithms. A subset of retinal filter kernels was chosen to form a particular Gabor filter by using a weighted sum. One-dimensional optimization approaches were shown to be inappropriate for the problem. All model parameters were fixed with biological or image processing constraints. Detailed analysis of the optimization procedure led to the introduction of a minimization constraint. Finally, quantization of the weighting factors was investigated. This resulted in an optimized cascaded structure of a Gabor filter bank implementation with lower computational cost.
Abstract: In this paper, a novel approach is presented
for designing multiplier-free state-space digital filters. The
multiplier-free design is obtained by finding power-of-2 coefficients
and also quantizing the state variables to power-of-2
numbers. Expressions for the noise variance are derived for the
quantized state vector and the output of the filter. A "structure-transformation matrix" is incorporated in these expressions. It
is shown that quantization effects can be minimized by properly
designing the structure-transformation matrix. Simulation
results are very promising and illustrate the design algorithm.
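The coefficient quantization step can be made concrete with a small sketch: each coefficient is rounded to the nearest signed power of two (nearest in the exponent domain, an assumed rounding rule) so that every multiplication reduces to an arithmetic shift:

```python
import numpy as np

def to_power_of_two(c):
    """Round a coefficient to the nearest signed power of 2 so that
    multiplication by it reduces to an arithmetic shift."""
    if c == 0.0:
        return 0.0
    exp = np.round(np.log2(abs(c)))          # nearest exponent
    return np.sign(c) * 2.0 ** exp

for c in (0.3, -1.7, 0.06):
    print(c, "->", to_power_of_two(c))       # 0.25, -2.0, 0.0625
```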
Abstract: In this paper we propose a segmentation approach based on the Vector Quantization technique. We use Kekre's fast codebook generation algorithm for segmenting low-altitude aerial images, as a preprocessing step to form segmented homogeneous regions. Color similarity and volume difference criteria are then used to merge adjacent regions. Experiments performed with real aerial images of varied nature demonstrate that this approach results in neither over-segmentation nor under-segmentation. Vector quantization appears to give far better results than the conventional on-the-fly watershed algorithm.
Abstract: A low bit rate still image compression scheme that compresses the indices of Vector Quantization (VQ) and generates a residual codebook is proposed. The VQ indices are compressed by exploiting the correlation among image blocks, which reduces the bits per index. A residual codebook, similar to the VQ codebook, is generated to represent the distortion produced by VQ. Using this residual codebook, the distortion in the reconstructed image is removed, thereby increasing the image quality. Our scheme combines these two methods. Experimental results on the standard image Lena show that our scheme can give a reconstructed image with a PSNR value of 31.6 dB at 0.396 bits per pixel. Our scheme is also faster than the existing VQ variants.
Abstract: Information hiding for authenticating and verifying the content integrity of multimedia has been exploited extensively in the last decade. We propose the idea of using a genetic algorithm and non-deterministic dependence, involving the un-watermarkable coefficients, for digital image authentication. The genetic algorithm is used to intelligently select coefficients for watermarking in a DCT-based image authentication scheme, which also implicitly watermarks all the un-watermarkable coefficients in order to thwart different attacks. Experimental results show that such intelligent selection improves the imperceptibility of the watermarked image, and that implicit watermarking of all the coefficients improves security against attacks such as cover-up, vector quantization and transplantation.
Abstract: Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization, a hybrid of two product code vector quantization techniques, namely Multistage Vector Quantization and Switched Split Vector Quantization. The Multi Switched Split Vector Quantization technique quantizes the linear predictive coefficients in terms of line spectral frequencies. The results show that Multi Switched Split Vector Quantization provides a better trade-off among bit rate, spectral distortion performance, computational complexity and memory requirements when compared to the Switched Split Vector Quantization, Multistage Vector Quantization, and Split Vector Quantization techniques. By employing the switching technique at each stage of the vector quantizer, the spectral distortion, computational complexity and memory requirements are greatly reduced. Spectral distortion is measured in dB, computational complexity in floating point operations (flops), and memory requirements in floats.
Abstract: In this paper we use exponential particle swarm optimization (EPSO) to cluster data, and compare the EPSO clustering algorithm, which uses an exponentially varying inertia weight, with the particle swarm optimization (PSO) clustering algorithm, which uses a linearly varying inertia weight. The comparison is evaluated on five data sets. The experimental results show that the EPSO clustering algorithm increases the likelihood of finding the optimal positions, as it decreases the number of failures. They also show that the EPSO clustering algorithm has a smaller quantization error than the PSO clustering algorithm, i.e. the EPSO clustering algorithm is more accurate than the PSO clustering algorithm.
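The abstract does not give the exact schedules; a generic contrast between a linearly and an exponentially varying PSO inertia weight can be sketched as follows (the bounds and decay rate are assumed typical values, not the paper's):

```python
import numpy as np

W_MAX, W_MIN, N_ITER = 0.9, 0.4, 100       # typical PSO inertia bounds (assumed)

def linear_w(t):
    """Standard PSO: inertia decays linearly over the run."""
    return W_MAX - (W_MAX - W_MIN) * t / N_ITER

def exponential_w(t):
    """EPSO-style: inertia decays exponentially, dropping faster early on
    (decay rate 5.0 is an illustrative assumption)."""
    return W_MIN + (W_MAX - W_MIN) * np.exp(-5.0 * t / N_ITER)

for t in (0, 50, 100):
    print(t, round(linear_w(t), 3), round(exponential_w(t), 3))
```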
Abstract: A special case of floating point data representation is the block floating point format, where a block of operands is forced to share a joint exponent term. This paper deals with the finite wordlength properties of this data format. The theoretical errors associated with the error model for the block floating point quantization process are investigated with the help of error distribution functions. A fast and easy approximation formula for calculating the signal-to-noise ratio in quantization to block floating point format is derived. This representation is found to be a useful compromise between fixed point and floating point formats due to its acceptable numerical error properties over a wide dynamic range.
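The joint exponent idea is easy to make concrete: all operands in a block share one exponent chosen for the largest magnitude, and the mantissas are rounded to a fixed word length. A minimal sketch (the word length is an assumption):

```python
import numpy as np

def bfp_quantize(block, mantissa_bits):
    """Block floating point: one shared exponent per block, chosen so the
    largest operand just fits; mantissas are rounded to mantissa_bits."""
    exp = int(np.ceil(np.log2(np.max(np.abs(block)))))   # joint exponent
    scale = 2.0 ** (mantissa_bits - exp)
    mantissas = np.round(block * scale)                  # fixed point mantissas
    return mantissas / scale                             # dequantized values

block = np.array([0.013, -0.75, 0.31, 0.002])
q = bfp_quantize(block, mantissa_bits=8)
print(q, np.abs(block - q).max())                        # quantization error
```

Note how the small operands in the block carry the largest relative error: the shared exponent is dictated by the largest operand, which is exactly the compromise between fixed and floating point formats the abstract describes.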
Abstract: In this paper, we propose a method to reduce quantization error. To this end, H.264/AVC applies low pass filtering to the neighboring samples of the current block. However, this filtering has the weakness that it is performed regardless of the prediction direction, and since it does not consider the prediction direction, it may not reduce quantization error effectively. The proposed method takes the prediction direction into account for low pass filtering and uses a threshold condition to reduce flag bits. Compared with the conventional method in H.264/AVC, the proposed method achieves an average bit-rate reduction of 1.534%, with reductions between 0.580% and 3.567% across the experiments.
Abstract: By investigating the impact of the complexity of stereoscopic frame pairs on stereoscopic video coding and transmission, a new rate control algorithm is presented. The proposed rate control algorithm operates on three levels: the stereoscopic group of pictures (SGOP) level, the stereoscopic frame (SFrame) level and the frame level. A temporal-spatial frame complexity model is first established; in the bit allocation stage, the frame complexity, position significance and reference property between the left and right frames are taken into account. Meanwhile, the target buffer is set according to the frame complexity. Experimental results show that the proposed method can efficiently control the bitrates and outperforms the fixed quantization parameter method from the rate distortion perspective, with an average PSNR gain between rate-distortion curves (BDPSNR) of 0.21 dB.
Abstract: In this paper, we propose a novel fast search algorithm for short MPEG video clips in a video database. The algorithm is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been reliably applied to human face recognition. An APIDQ histogram is utilized as the feature vector of the frame image. Instead of fully decompressed video frames, partially decoded data, namely DC images, are utilized. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video, searching for 200 given MPEG video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 3% is achieved, which is more accurate and robust than the conventional fast video search algorithm.
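The exact APIDQ quantizer is defined in the cited prior work and not reproduced here; the sketch below conveys the idea under assumed bin edges: adjacent pixel intensity differences are quantized into a handful of bins whose counts form the frame's feature vector.

```python
import numpy as np

def apidq_histogram(frame, edges):
    """Quantize horizontal and vertical adjacent-pixel intensity
    differences into bins and histogram them as a frame feature."""
    dx = np.diff(frame.astype(int), axis=1).ravel()   # horizontal differences
    dy = np.diff(frame.astype(int), axis=0).ravel()   # vertical differences
    bins = np.digitize(np.concatenate([dx, dy]), edges)
    return np.bincount(bins, minlength=len(edges) + 1)

edges = np.array([-64, -16, -4, 4, 16, 64])           # assumed bin edges
frame = np.random.randint(0, 256, (8, 8))             # toy DC image
print(apidq_histogram(frame, edges))
```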
Abstract: This paper presents a new fingerprint coding technique
based on contourlet transform and multistage vector quantization.
Wavelets have shown their ability in representing natural images that contain smooth areas separated by edges. However, wavelets
cannot efficiently take advantage of the fact that the edges usually
found in fingerprints are smooth curves. This issue is addressed by
directional transforms, known as contourlets, which have the
property of preserving edges. The contourlet transform is a new
extension to the wavelet transform in two dimensions using
nonseparable and directional filter banks. The computation and
storage requirements are the major difficulty in implementing a
vector quantizer. In the full-search algorithm, the computation and
storage complexity is an exponential function of the number of bits
used in quantizing each frame of spectral information. The storage
requirement in multistage vector quantization is less when compared
to full search vector quantization. The coefficients of the contourlet transform are quantized by multistage vector quantization, and the quantized coefficients are encoded by Huffman coding. The results obtained are tabulated and compared with existing wavelet-based methods.
Abstract: We study the performance of a compressed beamforming weight feedback technique in a generalized triangular decomposition (GTD) based MIMO system. GTD is a beamforming technique that enjoys QoS flexibility. The technique, however, performs at its optimum only when full knowledge of the channel state information (CSI) is available at the transmitter, which is impossible in a real system, where channel estimation error and limited feedback exist. We suggest a way to implement quantized beamforming weight feedback, which can significantly reduce the feedback data, in a GTD-based MIMO system, and investigate the performance of the system. Interestingly, we find that compressed beamforming weight feedback does not degrade the BER performance of the system at low input power, while channel estimation error and quantization do. By comparison, GTD is more sensitive to compression and quantization, while SVD is more sensitive to channel estimation error. We also explore the performance of a GTD-based MU-MIMO system, and find that the BER performance starts to degrade significantly at around -20 dB channel estimation error.
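The paper's compression scheme is not specified in the abstract; as a generic illustration of quantized weight feedback, the sketch below uniformly quantizes the real and imaginary parts of the beamforming weights to a few bits each (the bit width and the [-1, 1] range are assumptions):

```python
import numpy as np

def quantize_weights(w, bits):
    """Mid-rise uniform quantizer over [-1, 1], applied separately to
    the real and imaginary parts of complex beamforming weights."""
    levels = 2 ** bits
    step = 2.0 / levels
    def q(x):
        idx = np.clip(np.floor(x / step), -levels // 2, levels // 2 - 1)
        return (idx + 0.5) * step
    return q(w.real) + 1j * q(w.imag)

rng = np.random.default_rng(1)
w = np.exp(1j * rng.uniform(0, 2 * np.pi, 4)) / np.sqrt(4)  # toy unit-norm weights
w_hat = quantize_weights(w, bits=4)
print(np.abs(w - w_hat).max())   # per-weight feedback quantization error
```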