Abstract: In the context of handwriting recognition, we propose an offline system for the recognition of Arabic handwritten words naming the Algerian departments. The study is based mainly on the evaluation of the performance of a neural network trained with the gradient backpropagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary image of the handwritten word by several methods: distribution parameters, the centered moments of the projections of the different segments, the centered moments of the word image coded according to the Freeman directions, and bar features applied to the binary image of the word and to its different segments. The classification is achieved by a multilayer perceptron. A detailed experiment is carried out and satisfactory recognition results are reported.
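As a rough illustration of the projection-based moment features mentioned above (a minimal sketch, not the authors' exact feature set; the toy image and the moment orders are invented for the example), in Python:

```python
# Centered moments of the projection profiles of a binary word image.
import numpy as np

def centered_moments(profile, orders=(2, 3, 4)):
    """Centered moments of a 1-D projection profile."""
    idx = np.arange(len(profile), dtype=float)
    total = profile.sum()
    if total == 0:
        return [0.0] * len(orders)
    mean = (idx * profile).sum() / total
    return [(((idx - mean) ** k) * profile).sum() / total for k in orders]

# Binary word image: 1 = ink, 0 = background (toy example).
img = np.zeros((8, 16), dtype=int)
img[3:6, 2:12] = 1

h_proj = img.sum(axis=1)   # horizontal projection (row sums)
v_proj = img.sum(axis=0)   # vertical projection (column sums)
features = centered_moments(h_proj) + centered_moments(v_proj)
print(features)
```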
Abstract: Medical imaging produces human body pictures in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine it may be sufficient to maintain high image quality only in a region of interest (ROI). This paper presents a contribution to lossless compression of the region of interest in scintigraphic images, based on the SPIHT algorithm and global transform thresholding with Huffman coding.
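The Huffman stage at the end of such a pipeline can be sketched as follows (a generic Huffman table builder over arbitrary symbols; the SPIHT and ROI-thresholding stages are not reproduced):

```python
# Minimal Huffman code-table construction using a priority queue.
import heapq
from collections import Counter

def huffman_table(data):
    counts = Counter(data)
    # Each heap entry: [frequency, tie_breaker, [(symbol, code), ...]]
    heap = [[f, i, [(s, "")]] for i, (s, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol input
        return {heap[0][2][0][0]: "0"}
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        lo[2] = [(s, "0" + c) for s, c in lo[2]]   # extend codes of the
        hi[2] = [(s, "1" + c) for s, c in hi[2]]   # two merged subtrees
        heapq.heappush(heap, [lo[0] + hi[0], tie, lo[2] + hi[2]])
        tie += 1
    return dict(heap[0][2])

table = huffman_table(b"scintigraphic region of interest")
bits = "".join(table[b] for b in b"scintigraphic region of interest")
```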
Abstract: In this paper we introduce an effective ECG compression algorithm based on the two-dimensional multiwavelet transform. Multiwavelets offer simultaneous orthogonality, symmetry and short support, which is not possible with scalar two-channel wavelet systems. These features are known to be important in signal processing, so multiwavelets offer the possibility of superior performance in image processing applications. The SPIHT algorithm has achieved notable success in still image coding. We propose applying the SPIHT algorithm to the 2-D multiwavelet transform of 2-D arranged ECG signals. Experiments on selected ECG records from the MIT-BIH arrhythmia database show that the proposed algorithm is significantly more efficient than previously proposed ECG compression schemes.
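The 2-D arrangement step can be illustrated with a minimal sketch, assuming fixed-length segments of roughly one beat period each (the paper's actual cutting and alignment may differ):

```python
# Arrange a 1-D ECG signal into a 2-D matrix, one beat period per row.
import numpy as np

def ecg_to_2d(signal, period):
    """Cut a 1-D ECG into rows of one (approximate) beat period each."""
    n_rows = len(signal) // period
    return np.asarray(signal[:n_rows * period]).reshape(n_rows, period)

ecg = np.sin(np.linspace(0, 40 * np.pi, 4000))   # stand-in for a record
image = ecg_to_2d(ecg, period=200)               # 20 x 200 "image"
```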
Abstract: Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. Furthermore, the discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on the selection of coefficients that belong to different transforms, depending on the coefficients themselves. This method allows: (1) codification with multiple kernels at various degrees of interest, (2) an arbitrarily shaped spectrum, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder was required. The method was applied to different types of images. Results show better performance in the selected regions than when image coding methods were employed for the whole set of images. We believe that this method is an excellent tool for future image compression research, mainly on images where region-based coding is of interest, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown. It is also shown that the Hartley and cosine transform kernels give better performance than any other model considered.
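For reference, block transform coding with a cosine kernel can be sketched as follows (a plain 8x8 DCT per block; the reversible blockade transform and the ROI-dependent kernel selection are not reproduced here):

```python
# Blockwise 2-D DCT of an image, 8x8 blocks, orthonormal scaling.
import numpy as np
from scipy.fft import dctn

def block_dct(img, bs=8):
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(0, h - h % bs, bs):
        for x in range(0, w - w % bs, bs):
            out[y:y+bs, x:x+bs] = dctn(img[y:y+bs, x:x+bs], norm="ortho")
    return out

img = np.random.rand(64, 64)
coeffs = block_dct(img)   # a Hartley kernel could be swapped in per block
```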
Abstract: In this paper an effective context-based lossless coding technique is presented. Three principal and a few auxiliary contexts are defined. The predictor adaptation technique is an improved CoBALP algorithm, denoted CoBALP+. A cumulative predictor error combining 8 bias estimators is calculated. It is shown experimentally that the new technique is time-effective: it outperforms the well-known methods of reasonable time complexity, and is inferior only to extremely computationally complex ones.
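The flavour of context-based adaptive prediction with bias estimators can be conveyed by a toy sketch (the contexts, update rule and number of estimators here are illustrative, not CoBALP+'s actual design):

```python
# Adaptive linear prediction (LMS) with a per-context bias estimator.
import numpy as np

def predict_row(x, order=3, mu=1e-3):
    w = np.zeros(order)          # adaptive predictor weights
    bias_sum = np.zeros(4)       # 4 toy contexts
    bias_cnt = np.ones(4)
    residuals = []
    for n in range(order, len(x)):
        past = x[n-order:n][::-1]
        ctx = int(abs(x[n-1] - x[n-2])) % 4       # crude context index
        pred = w @ past + bias_sum[ctx] / bias_cnt[ctx]
        e = x[n] - pred
        residuals.append(e)                        # residual to be coded
        w += mu * e * past                         # LMS weight update
        bias_sum[ctx] += e                         # bias estimator update
        bias_cnt[ctx] += 1
    return residuals

x = np.cumsum(np.random.randn(1000))   # stand-in for a pixel row
res = predict_row(x)
```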
Abstract: In the framework of image compression by wavelet transforms, we propose a perceptual method that incorporates Human Visual System (HVS) characteristics in the quantization stage. Indeed, human eyes do not have equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF). The visual artifacts at low bit rate are minimized. To evaluate our method, we use the Peak Signal-to-Noise Ratio (PSNR) and a new evaluation criterion which takes visual criteria into account. The experimental results show that our technique improves image quality at the same compression ratio.
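A minimal sketch of CSF-weighted quantization, using the classical Mannos-Sakrison CSF and an assumed cycles-per-degree mapping for each wavelet level (both the mapping and the base step size are invented for the example):

```python
# CSF-driven per-level quantization step sizes for a wavelet coder.
import numpy as np

def csf(f):
    """Mannos-Sakrison contrast sensitivity (f in cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

# Nominal peak frequency per DWT level; depends in practice on viewing
# distance and display resolution (assumed values for the sketch).
peak = {1: 16.0, 2: 8.0, 3: 4.0, 4: 2.0}   # cycles/degree per level
base_step = 8.0
steps = {lvl: base_step / csf(f) for lvl, f in peak.items()}
# Subbands the eye is less sensitive to receive coarser quantization.
```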
Abstract: In this paper, a near-lossless image coding scheme based on the Orthogonal Polynomials Transform (OPT) is presented. The polynomial operators and polynomial basis operators are obtained from a set of orthogonal polynomial functions for the proposed transform coding. The image is partitioned into a number of distinct square blocks and the proposed transform coding is applied to each of them individually. After the transform, the coefficients are rearranged into a sub-band structure. The Embedded Zerotree (EZ) coding algorithm is then employed to quantize the coefficients. The proposed transform is implemented for various block sizes and its performance is compared with the existing Discrete Cosine Transform (DCT) coding scheme.
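One way to build an orthogonal polynomial block basis and apply it separably can be sketched as follows (Gram-Schmidt via QR on a Vandermonde matrix; the paper's OPT operators may be constructed differently):

```python
# Orthonormal polynomial basis for an n x n block transform.
import numpy as np

def op_basis(n):
    V = np.vander(np.arange(n, dtype=float), n, increasing=True)
    Q, _ = np.linalg.qr(V)      # orthonormalize 1, x, x^2, ...
    return Q.T                  # rows = basis vectors

def opt2d(block, B):
    return B @ block @ B.T      # separable 2-D forward transform

B = op_basis(4)
block = np.arange(16, dtype=float).reshape(4, 4)
coeffs = opt2d(block, B)
recon = B.T @ coeffs @ B        # exact inverse: B is orthogonal
```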
Abstract: In this paper we present a novel approach to face image coding. The proposed method makes use of features of video encoders such as motion prediction. First, the encoder selects an appropriate prototype from a database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame (an I-frame), and the face to be encoded as the second frame (a P-frame). Information about feature positions, color change, the selected prototype, and the data of the P frame is sent to the decoder. The requirement is that both encoder and decoder hold the same database of prototypes. We ran experiments with an H.264 video encoder and compared the results with those achieved by JPEG and JPEG2000. The results show that our approach achieves a three times lower bitrate and a two times higher PSNR than JPEG. Compared with JPEG2000, the bitrate was very similar, but the subjective quality achieved by the proposed method is better.
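The prototype-warping step can be illustrated with a minimal sketch, assuming only three matched landmarks and a synthetic prototype image (the paper's warping model is presumably denser):

```python
# Affine warp of a prototype face toward the target face's landmarks.
import cv2
import numpy as np

# Synthetic stand-in for a database prototype face.
prototype = np.full((300, 300, 3), 200, np.uint8)
cv2.circle(prototype, (150, 160), 100, (90, 120, 180), -1)

src = np.float32([[110, 130], [190, 130], [150, 220]])  # prototype landmarks
dst = np.float32([[105, 140], [195, 135], [148, 230]])  # target landmarks

M = cv2.getAffineTransform(src, dst)                    # 2x3 affine map
warped = cv2.warpAffine(prototype, M, (300, 300))
# 'warped' would serve as the I-frame; the real face follows as a P-frame.
```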
Abstract: Existing image coding standards generally degrade at low bit-rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not simultaneously possess all the properties that are essential for signal processing, such as orthogonality, short support, linear-phase symmetry, and a high order of approximation through vanishing moments. A new class of wavelets called 'multiwavelets', which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on non-linear approximation of multiwavelet coefficients along with multistage vector quantization. The performance of the proposed scheme is compared with results obtained from scalar wavelets.
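The non-linear approximation step can be sketched in a few lines: keep the N largest-magnitude coefficients and zero the rest (the coefficient array here is a random stand-in for multiwavelet output):

```python
# N-term non-linear approximation of a transform coefficient array.
import numpy as np

def nterm_approx(coeffs, n_keep):
    flat = coeffs.ravel()
    thresh = np.sort(np.abs(flat))[-n_keep]   # magnitude of Nth largest
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

c = np.random.randn(64, 64)          # stand-in for multiwavelet coefficients
c_nl = nterm_approx(c, n_keep=256)   # survivors go to the vector quantizer
```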
Abstract: Image coding based on clustering provides immediate access to targeted features of interest in a high-quality decoded image. This approach is useful for intelligent devices as well as for multimedia content-based description standards. The result of image clustering cannot be precise at some positions, especially at pixels carrying edge information, which produce ambiguity among the clusters. Even with a good enhancement operator based on PDEs, the quality of the decoded image will depend strongly on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vagueness properties. The presence of such a cluster allows preserving some details inherent to edges as well as to uncertain pixels. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study demonstrating the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.
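A minimal sketch of the Perona-Malik diffusion used in the decoding phase (the standard four-neighbour explicit scheme; the paper's parameters are not reproduced):

```python
# One explicit step of Perona-Malik anisotropic diffusion.
import numpy as np

def perona_malik_step(u, kappa=15.0, lam=0.2):
    # Finite-difference gradients toward the four neighbours.
    dn = np.roll(u, -1, 0) - u
    ds = np.roll(u,  1, 0) - u
    de = np.roll(u, -1, 1) - u
    dw = np.roll(u,  1, 1) - u
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    return u + lam * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)

img = np.random.rand(128, 128) * 255
for _ in range(10):
    img = perona_malik_step(img)   # smooths flat areas, preserves edges
```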
Abstract: In this paper we propose a Multiple Description Image Coding (MDIC) scheme that generates two compressed descriptions with balanced rates in the wavelet domain (Daubechies biorthogonal (9,7) wavelet) using an optimal pairwise correlating transform, and we apply the Generalized Multiple Description Coding (GMDC) method to image coding in the wavelet domain. GMDC produces statistically correlated streams such that lost streams can be estimated from the received data. Our performance tests show that the proposed method yields greater improvement and better quality of the reconstructed image when the wavelet coefficients are normalized by a Gaussian Scale Mixture (GSM) model rather than a Gaussian one.
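The pairwise correlating transform at the heart of such schemes can be sketched as follows (a 45-degree rotation of two coefficient streams with unequal variances; the optimal angle in the paper is rate-dependent):

```python
# Pairwise correlating transform producing two correlated descriptions.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 4.0, 10_000)    # high-variance coefficient stream
b = rng.normal(0.0, 1.0, 10_000)    # low-variance coefficient stream

theta = np.pi / 4
c, s = np.cos(theta), np.sin(theta)
d1 = c * a + s * b                  # description 1
d2 = s * a - c * b                  # description 2
# Because a and b have unequal variances, d1 and d2 are correlated,
# so a lost description can be estimated from the received one.
print(np.corrcoef(d1, d2)[0, 1])
```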
Abstract: In this work a new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coding scheme is a continuous-tone still image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-plane importance codification. The subcomponents so obtained are reordered by a highly configurable alignment system that, depending on the application, makes possible the reconfiguration of the elements of the image and yields different importance levels from which the bit stream is generated. The subcomponents of each importance level are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself already codes a compressed still image. However, the use of a packing system on the bit stream after the VBLm stage allows the construction of a final, highly scalable bit stream comprising a basic image level and one or several improvement levels.
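The bit-plane-style subdivision into importance levels can be illustrated with a toy sketch (names and plane ordering are illustrative; CFDS and VBLm themselves are not reproduced):

```python
# Split quantized coefficients into bit planes, most significant first.
import numpy as np

coeffs = np.random.randint(0, 256, size=(8, 8))         # toy block
planes = [(coeffs >> b) & 1 for b in range(7, -1, -1)]  # MSB first

# An embedded stream sends the most significant planes first; later
# planes only refine, so truncation still yields a decodable image.
stream = np.concatenate([p.ravel() for p in planes])
```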
Abstract: The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding their sign. It is generally assumed that there is no compression gain to be obtained from coding of the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared with the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared with other codecs in the literature and is shown to be very successful in terms of PSNR.
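The measurement of online sign probabilities per bit plane can be sketched with PyWavelets, assuming 'bior4.4' as the 9/7 biorthogonal filter pair and a random stand-in image:

```python
# Empirical probability of a negative sign among coefficients that
# become significant in each bit plane.
import numpy as np
import pywt

img = np.random.rand(256, 256) * 255
coeffs = pywt.wavedec2(img, "bior4.4", level=5)
flat = np.concatenate([a.ravel() for lvl in coeffs[1:] for a in lvl])

T = 2.0 ** np.floor(np.log2(np.abs(flat).max()))   # top plane threshold
for plane in range(6):
    newly = (np.abs(flat) >= T) & (np.abs(flat) < 2 * T)
    if newly.any():
        p_neg = (flat[newly] < 0).mean()
        print(f"plane {plane}: P(sign = -) = {p_neg:.3f}")
    T /= 2
```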
Abstract: Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. This is evident from the database of the Federal Bureau of Investigation (FBI), which contains more than 70 million fingerprints. Compression of such a database is very important because of this high volume. The performance of existing image coding standards generally degrades at low bit-rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not possess all the properties needed for better compression performance. A new class of wavelets called 'multiwavelets', which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient compression scheme that obtains better quality and a higher compression ratio through the multiwavelet transform and embedded coding of multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. The best-known multiwavelets are compared with the best-known scalar wavelets. Both quantitative and qualitative measures of performance are examined for fingerprints.
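The quantitative measures used in such comparisons can be sketched directly (standard PSNR for 8-bit images and a raw-to-coded compression ratio):

```python
# PSNR and compression ratio for evaluating a codec.
import numpy as np

def psnr(orig, recon, peak=255.0):
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bits, coded_bits):
    return raw_bits / coded_bits

a = np.random.randint(0, 256, (64, 64))
b = np.clip(a + np.random.randint(-2, 3, a.shape), 0, 255)
print(psnr(a, b), compression_ratio(64 * 64 * 8, 4096))
```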
Abstract: In this paper, a new algorithm for generating a codebook is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary-tree codebook is used for encoding and decoding the feature vectors. In the decoding process the feature vectors are subjected to inverse transformation, with the help of the basis functions of the proposed Orthogonal Polynomials based transformation, to recover the approximated input image training vectors. The results of the proposed coding are compared with VQ using the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
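The encoding walk through a binary-tree codebook can be sketched as follows (the node structure and the toy tree are illustrative, not the paper's exact design):

```python
# Tree-structured VQ: one feature/threshold test per internal node.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Node:
    feature: int = 0                          # which feature to test
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    codeword: Optional[np.ndarray] = None     # set only at leaves

def encode(vec, node):
    """Walk the tree to a leaf and return its codeword."""
    while node.codeword is None:
        node = node.left if vec[node.feature] <= node.threshold else node.right
    return node.codeword

leaf_a = Node(codeword=np.array([0.0, 0.0]))
leaf_b = Node(codeword=np.array([5.0, 5.0]))
root = Node(feature=0, threshold=2.5, left=leaf_a, right=leaf_b)
print(encode(np.array([4.0, 1.0]), root))     # -> [5. 5.]
```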
Abstract: In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to SPIHT encoding, in order to reach a target bit rate with improved perceptual quality for a given fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the human visual system (HVS) and plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) foveation masking, to remove or reduce high frequencies in peripheral regions, (2) luminance and contrast masking, and (3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder demonstrates very good performance in terms of quality measurement.
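The foveation part of the weighting can be illustrated with a minimal sketch (an exponential falloff with eccentricity; the paper's foveation model, masking terms and CSF weighting are not reproduced):

```python
# Foveation weight map: attenuate coefficients far from the fixation point.
import numpy as np

def foveation_weights(h, w, fix, decay=0.01):
    y, x = np.mgrid[0:h, 0:w]
    dist = np.hypot(y - fix[0], x - fix[1])   # eccentricity in pixels
    return np.exp(-decay * dist)              # 1 at fixation, ->0 outward

W = foveation_weights(256, 256, fix=(128, 128))
# weighted = W * subband_coefficients  (applied per subband before SPIHT)
```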
Abstract: An electrocardiogram (ECG) data compression algorithm is needed that reduces the amount of data to be transmitted, stored and analyzed without losing the clinical information content. A wavelet ECG data codec based on the Set Partitioning In Hierarchical Trees (SPIHT) compression algorithm is proposed in this paper. The SPIHT algorithm has achieved notable success in still image coding. We modified the algorithm for the one-dimensional (1-D) case and applied it to the compression of ECG data. With this compression method, a small percent root-mean-square difference (PRD) and a high compression ratio are achieved with low implementation complexity. Experiments on selected records from the MIT-BIH arrhythmia database show that the proposed codec is significantly more efficient in compression and computation than previously proposed ECG compression schemes. Compression ratios of up to 48:1 for ECG signals yield acceptable results for visual inspection.
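The PRD measure can be computed as follows (note that some definitions subtract the signal mean in the denominator; both variants are provided in this sketch):

```python
# Percent root-mean-square difference (PRD) between an ECG signal
# and its reconstruction.
import numpy as np

def prd(x, x_rec, remove_mean=False):
    ref = x - x.mean() if remove_mean else x
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(ref ** 2))

x = np.sin(np.linspace(0, 8 * np.pi, 1000))
x_rec = x + 0.01 * np.random.randn(1000)
print(f"PRD = {prd(x, x_rec):.2f}%")
```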
Abstract: In this paper, we evaluate the performance of several wavelet-based coding algorithms, namely 3D QT-L, 3D SPIHT and JPEG2K. In the first step we perform an objective comparison between the three coders. For this purpose, eight MRI head-scan test sets of 256 x 256 x 124 voxels have been used. Results show the superior performance of the 3D SPIHT algorithm, while 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. The compressed dataset images are transmitted over an AWGN or a Rayleigh wireless channel. Results show the superiority of JPEG2K under these two channel models: JPEG2K proves more robust to coding errors. We therefore conclude that error-correcting codes are necessary to protect the transmitted medical information.
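The AWGN robustness test can be sketched with BPSK signalling over a Gaussian channel (a Rayleigh channel would additionally scale each symbol by a random fading amplitude):

```python
# Bit error rate of a codestream sent over an AWGN channel with BPSK.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 10_000)          # stand-in for a codestream
snr_db = 6.0
sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))   # noise std for Eb=1

symbols = 2.0 * bits - 1.0                 # BPSK mapping 0/1 -> -1/+1
received = symbols + sigma * rng.normal(size=bits.size)
decoded = (received > 0).astype(int)
ber = np.mean(decoded != bits)
print(f"BER at {snr_db} dB: {ber:.4f}")
```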