Abstract: This paper presents a new quality-controlled, wavelet-based compression method for electrocardiogram (ECG) signals. Initially, an ECG signal is decomposed using the wavelet transform. The resulting coefficients are then iteratively thresholded to guarantee that a predefined goal percent root mean square difference (GPRD) is matched within tolerable boundaries. The quantization strategy for the extracted non-zero wavelet coefficients (NZWC), combined with RLE, Huffman, and arithmetic encoding of the NZWC and a resulting lookup table, allows high compression ratios to be achieved with good-quality reconstructed signals.
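As an illustration of the quality-control loop above, the following minimal sketch bisects a hard threshold on the wavelet coefficients until the reconstruction PRD matches the goal. It assumes the PyWavelets library; the db4 wavelet, decomposition level, and tolerance are illustrative choices rather than the paper's settings.

```python
import numpy as np
import pywt

def prd(x, x_rec):
    """Percent root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def threshold_to_gprd(x, gprd, tol=0.1, wavelet='db4', level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    lo, hi = 0.0, max(np.max(np.abs(c)) for c in coeffs)
    for _ in range(50):                          # bisection on the threshold
        t = 0.5 * (lo + hi)
        thr = [pywt.threshold(c, t, mode='hard') for c in coeffs]
        err = prd(x, pywt.waverec(thr, wavelet)[:len(x)])
        if abs(err - gprd) <= tol:
            break
        if err < gprd:
            lo = t                               # quality too good: threshold harder
        else:
            hi = t                               # too much distortion: back off
    return thr, err
```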
Abstract: This paper presents the design, fabrication, and testing of a novel piezoelectrically actuated micropump with embedded check valves, offering the advantages of miniature size, light weight, and low power consumption. The device is designed to pump gases and liquids and is capable of self-priming and bubble-tolerant operation, achieved by maximizing the stroke volume of the membrane as well as the compression ratio via minimization of the dead volume of the micropump chamber and channel. With the experimental apparatus, we obtained real-time values of the micropump flow rate, the displacement of the piezoelectric actuator, and the deformation of the check valve simultaneously. The micropump with a 0.4 mm thick check valve delivered the highest output performance under a 120 Vpp sinusoidal waveform, achieving a maximum pumping rate of 42.2 ml/min and a maximum back pressure of 14.0 kPa at frequencies of 28 Hz and 20 Hz, respectively. The presented micropump can also pump gases, with a pumping rate of 196 ml/min at an operating frequency of 280 Hz under the same 120 Vpp sinusoidal waveform.
Abstract: Super-resolution is nowadays used to produce a high-resolution image from several low-resolution noisy frames. In this work, we consider the problem of high-quality interpolation of a single noise-free image. Such images may come from different sources, e.g., they may be frames of videos, individual pictures, etc. In the encoder we apply downsampling via two-dimensional interpolation of each frame, and in the decoder we apply upsampling to restore the original size of the image. If the compression ratio is very high, we use a convolutive mask that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. In fact, the mentioned mask is coded inside the texture memory of a GPGPU.
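A minimal sketch of the encoder/decoder pipeline described above, written for the CPU for brevity (the paper implements both stages on GPGPU cards, with the mask in texture memory). OpenCV, the 3x3 sharpening kernel, and the ratio at which the mask is applied are illustrative assumptions.

```python
import cv2
import numpy as np

def encode(img, ratio):
    """Downsample via bicubic (two-dimensional) interpolation."""
    h, w = img.shape[:2]
    return cv2.resize(img, (w // ratio, h // ratio), interpolation=cv2.INTER_CUBIC)

def decode(small, orig_size, ratio, mask_from_ratio=4):
    """Upsample back to the original size (orig_size is (width, height));
    at high ratios apply a convolutive mask that restores edges."""
    img = cv2.resize(small, orig_size, interpolation=cv2.INTER_CUBIC)
    if ratio >= mask_from_ratio:
        sharpen = np.array([[ 0, -1,  0],
                            [-1,  5, -1],
                            [ 0, -1,  0]], dtype=np.float32)   # illustrative kernel
        img = cv2.filter2D(img, -1, sharpen)
    return img
```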
Abstract: In the framework of image compression by wavelet transforms, we propose a perceptual method that incorporates Human Visual System (HVS) characteristics in the quantization stage. Indeed, human eyes do not have equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF), minimizing visual artifacts at low bit rates. To evaluate our method, we use the Peak Signal to Noise Ratio (PSNR) and a new evaluation criterion which takes visual factors into account. The experimental results illustrate that our technique improves image quality at the same compression ratio.
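A minimal sketch of CSF-weighted quantization under stated assumptions: a Mannos-Sakrison-style CSF model and an illustrative mapping from decomposition level to spatial frequency; the paper's exact weights may differ.

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison-style contrast sensitivity at frequency f (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def quantize_subband(coeffs, level, base_step=16.0, f_max=32.0):
    f = f_max / (2 ** level)              # deeper levels hold lower frequencies
    step = base_step / max(csf(f), 1e-3)  # visually sensitive bands get a finer step
    return np.round(coeffs / step), step  # indices plus the step for dequantization
```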
Abstract: Compression algorithms reduce the redundancy in data representation to decrease the storage required for that data. Lossless compression researchers have developed highly sophisticated approaches, such as Huffman encoding, arithmetic encoding, the Lempel-Ziv (LZ) family, Dynamic Markov Compression (DMC), Prediction by Partial Matching (PPM), and Burrows-Wheeler Transform (BWT) based algorithms. Decompression is then required to retrieve the original data by lossless means. This paper presents a compression scheme for text files coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of decompressing the entire file. Dynamically decompressed files offer better disk space utilization due to higher compression ratios compared to most currently available text file formats.
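A minimal sketch of the dynamic decompression principle, under the assumption of fixed-size, independently compressed blocks; zlib and the block size stand in for the scheme's actual coder.

```python
import zlib

BLOCK = 4096                                   # illustrative block size

def compress_blocks(text: bytes):
    """Compress the file as a list of independent fixed-size blocks."""
    return [zlib.compress(text[i:i + BLOCK]) for i in range(0, len(text), BLOCK)]

def read_span(blocks, start, length):
    """Dynamic decompression: inflate only the blocks overlapping the request."""
    first, last = start // BLOCK, (start + length - 1) // BLOCK
    data = b''.join(zlib.decompress(b) for b in blocks[first:last + 1])
    offset = start - first * BLOCK
    return data[offset:offset + length]
```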
Abstract: In this paper we seek the optimum multiwavelet for compression of electrocardiogram (ECG) signals and then select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known in the compression literature, such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D), and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, which measures the morphological relation between the reconstructed and the original ECG signal, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show the Cardinal Balanced Multiwavelet (cardbal2) with identity (Id) prefiltering to be the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
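For reference, a minimal sketch of the evaluation criteria named above (PRD, RMSE, SNR, and CC) as they are conventionally defined:

```python
import numpy as np

def ecg_metrics(x, y):
    """x: original ECG, y: reconstruction; returns the criteria listed above."""
    e = x - y
    return {
        'PRD':  100.0 * np.sqrt(np.sum(e ** 2) / np.sum(x ** 2)),
        'RMSE': np.sqrt(np.mean(e ** 2)),
        'SNR':  10.0 * np.log10(np.sum(x ** 2) / np.sum(e ** 2)),
        'CC':   np.corrcoef(x, y)[0, 1],   # morphological similarity
    }
```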
Abstract: We suggest a novel method to incorporate long-term redundancy (LTR) in time-domain signal compression methods. The proposed method is based on block-sorting and curve simplification and is illustrated on the ECG signal as a post-processor for the FAN method. Tests of the resulting FAN+ method on the MIT-BIH database show a substantial improvement in compression ratio-distortion behavior and a higher-quality reconstructed signal.
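For context, a minimal sketch of the classic FAN first-order compressor that FAN+ post-processes: a sample is stored only when the fan of admissible slopes from the last stored sample (within tolerance eps) closes. This is a generic rendering of FAN, not the paper's post-processor.

```python
def fan_compress(x, eps):
    kept = [0]                                   # indices of retained samples
    t0 = 0
    U, L = float('inf'), float('-inf')
    for t in range(1, len(x)):
        dt = t - t0
        U = min(U, (x[t] + eps - x[t0]) / dt)    # tighten the upper slope
        L = max(L, (x[t] - eps - x[t0]) / dt)    # raise the lower slope
        if L > U:                                # fan has closed:
            kept.append(t - 1)                   # store the previous sample
            t0 = t - 1                           # ... as the new fan origin
            U = x[t] + eps - x[t0]               # re-open the fan with the
            L = x[t] - eps - x[t0]               # current sample (dt == 1 here)
    kept.append(len(x) - 1)
    return kept
```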
Abstract: The fundamental aim of the extended expansion concept is to achieve greater work output, which in turn leads to higher thermal efficiency. The concept is compatible with turbocharging and with the Low Heat Rejection (LHR) engine. The LHR engine was developed by coating the piston crown, the inside of the cylinder head together with the valves, and the cylinder liner with a 0.5 mm thick partially stabilized zirconia coating. Extended expansion in diesel engines is termed the Miller cycle, in which the expansion ratio is effectively increased by reducing the compression ratio through modification of the inlet cam for late inlet valve closing. The specific fuel consumption is appreciably reduced, and the thermal efficiency of the extended expansion turbocharged LHR engine is improved.
In this work, a thermodynamic model was formulated and developed to simulate the LHR-based extended expansion turbocharged direct injection diesel engine. It includes a gas flow model, a heat transfer model, and a two-zone combustion model. The gas exchange model is modified to incorporate the Miller cycle by delaying the inlet valve closing timing, which results in a considerable improvement in the thermal efficiency of turbocharged LHR engines. The heat transfer model calculates the convective and radiative heat transfer between the gas and the wall, taking into account the combustion chamber surface temperature swings. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium compositions were determined. The chemical equilibrium compositions were used to calculate the nitric oxide formation rate, assuming a modified Zeldovich mechanism. The accuracy of the model is scrutinized against actual test results from the engine. The factors affecting thermal efficiency and exhaust emissions were deduced and their influences discussed. In the final analysis, excellent agreement is seen in all of these evaluations.
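As a minimal illustration of how late inlet valve closing lowers the effective compression ratio, the sketch below evaluates the slider-crank cylinder volume at the inlet valve closing angle; the engine dimensions and geometric ratio are illustrative assumptions, not those of the test engine.

```python
import math

def cylinder_volume(theta_deg, bore, stroke, conrod, r_geometric):
    """Cylinder volume at crank angle theta (deg after TDC), slider-crank geometry."""
    a = stroke / 2.0                             # crank radius
    area = math.pi * bore ** 2 / 4.0
    v_clearance = area * stroke / (r_geometric - 1.0)
    th = math.radians(theta_deg)
    s = a * (1.0 - math.cos(th)) + conrod - math.sqrt(conrod ** 2 - (a * math.sin(th)) ** 2)
    return v_clearance + area * s

def effective_cr(ivc_deg_after_bdc, bore=0.1, stroke=0.11, conrod=0.23, r_geometric=16.0):
    v_clearance = math.pi * bore ** 2 / 4.0 * stroke / (r_geometric - 1.0)
    v_trapped = cylinder_volume(180.0 + ivc_deg_after_bdc, bore, stroke, conrod, r_geometric)
    return v_trapped / v_clearance

# effective_cr(0.0) equals the geometric ratio (IVC at BDC); later closing lowers it
```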
Abstract: Image compression is one of the most important applications of Digital Image Processing. Advanced medical imaging requires the storage of large quantities of digitized clinical data, and due to constrained bandwidth and storage capacity, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression the original image is retrieved without any distortion, while in lossy compression the reconstructed image contains some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of image quality. Here, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different modalities of images, such as CT scan, ultrasound, angiogram, X-ray, and mammogram. Approximately 20 images are considered in each modality, and the average values of compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results, it can be concluded that DCT yields higher PSNR values while FIC yields higher compression ratios. Hence, in medical image compression, DCT can be used wherever picture quality is preferred, and FIC wherever compression of images for storage and transmission is the priority, without diagnostically significant loss of picture quality.
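A minimal sketch of the DCT side of such a comparison: blockwise 8x8 DCT with only the largest coefficients retained, evaluated by PSNR. The keep-fraction is illustrative, and image dimensions are assumed to be multiples of 8.

```python
import numpy as np
from scipy.fft import dctn, idctn

def psnr(orig, rec, peak=255.0):
    mse = np.mean((orig.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def dct_compress(img, keep=0.1):
    """Zero all but the largest fraction `keep` of coefficients per 8x8 block."""
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            b = dctn(img[i:i + 8, j:j + 8].astype(float), norm='ortho')
            thr = np.quantile(np.abs(b), 1.0 - keep)     # magnitude cutoff
            b[np.abs(b) < thr] = 0.0
            out[i:i + 8, j:j + 8] = idctn(b, norm='ortho')
    return out
```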
Abstract: In this paper we seek the optimum multiwavelet for compression of electrocardiogram (ECG) signals. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known in the compression literature, such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D), and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, which measures the morphological relation between the reconstructed and the original ECG signal, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show cardbal2 with identity (Id) prefiltering to be the most effective transformation.
Abstract: The Discrete Wavelet Transform (DWT) has demonstrated performance far superior to the earlier Discrete Cosine Transform (DCT) and standard JPEG in natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced by DWT does not propagate globally as it does in DCT. Moreover, DWT is a global approach that avoids the block artifacts of JPEG. However, recent reports on natural image compression have shown the superior performance of the contourlet transform, a new two-dimensional extension of the wavelet transform using nonseparable and directional filter banks, compared to DWT. This is mostly due to the optimality of the contourlet transform in representing edges that are smooth curves. In this work, we investigate this for medical images, especially CT images, for which it has not yet been reported. To do so, we propose a compression scheme in the transform domain and compare the performance of the DWT and the contourlet transform in terms of PSNR for different compression ratios (CR) using this scheme. The results obtained on different types of computed tomography images show that DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.
Abstract: A regenerative gas turbine engine cycle is presented that yields higher cycle efficiencies than a simple cycle operating under the same conditions. The power output, efficiency, and specific fuel consumption are simulated with respect to operating conditions. Analytical formulae for the thermal efficiency are derived, taking into account the relevant operating conditions (ambient temperature, compression ratio, regenerator effectiveness, compressor efficiency, turbine efficiency, and turbine inlet temperature). Model calculations for a wide range of parameters are presented, as are comparisons with the simple gas turbine cycle. The power output and thermal efficiency are found to increase with the regenerator effectiveness and with the compressor and turbine efficiencies. The efficiency increases with compression ratio up to a ratio of 5 and then decreases as the compression ratio increases further, whereas in the simple cycle the thermal efficiency always increases with compression ratio. Increased ambient temperature decreases the thermal efficiency, whereas increased turbine inlet temperature increases it.
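A minimal sketch of a regenerative Brayton cycle efficiency calculation consistent with the trends reported above; the temperatures, effectiveness, and component efficiencies are illustrative values, not those of the paper.

```python
def regen_cycle_efficiency(r_p, T1=300.0, T3=1200.0, eps=0.8,
                           eta_c=0.85, eta_t=0.88, gamma=1.4):
    """Thermal efficiency of a regenerative Brayton cycle at pressure ratio r_p."""
    k = (gamma - 1.0) / gamma
    T2s = T1 * r_p ** k                      # isentropic compressor exit
    T2 = T1 + (T2s - T1) / eta_c             # actual compressor exit
    T4s = T3 / r_p ** k                      # isentropic turbine exit
    T4 = T3 - eta_t * (T3 - T4s)             # actual turbine exit
    T5 = T2 + eps * (T4 - T2)                # after the regenerator (air side)
    w_net = (T3 - T4) - (T2 - T1)            # per unit cp
    q_in = T3 - T5
    return w_net / q_in
```

Sweeping r_p with this sketch reproduces the qualitative behavior described above: with regeneration the efficiency peaks at a moderate pressure ratio and then falls, whereas setting eps to zero recovers the monotonic simple-cycle trend.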
Abstract: A simple but effective digital watermarking scheme utilizing a context-adaptive variable length coding (CAVLC) method is presented for wireless communication systems. In the proposed approach, the watermark bits are embedded in the final non-zero quantized coefficient of each DCT block, thereby yielding a potential reduction in the length of the coded block. As a result, the watermarking scheme not only provides a means to check the authenticity and integrity of the video stream, but also improves the compression ratio and therefore reduces both the transmission time and the storage space requirements of the coded video sequence. The results confirm that the proposed scheme enables the detection of malicious tampering attacks and reduces the size of the coded H.264 file. The proposed scheme is therefore feasible for video applications in wireless communication, such as 3G systems.
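A minimal sketch of the embedding step, with a simplified rule: force the parity of the final non-zero quantized coefficient of each DCT block to carry one watermark bit. The parity rule and zig-zag input are stand-ins; the paper's CAVLC-aware embedding is designed so the coded block can shrink.

```python
import numpy as np

def embed_bit(coeffs, bit):
    """coeffs: 1-D array of quantized DCT coefficients in zig-zag order."""
    nz = np.nonzero(coeffs)[0]
    if len(nz) == 0:
        return coeffs                            # nothing to embed in
    k = nz[-1]                                   # final non-zero coefficient
    if (abs(int(coeffs[k])) & 1) != bit:         # parity mismatch:
        coeffs[k] += 1 if coeffs[k] > 0 else -1  # nudge away from zero
    return coeffs
```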
Abstract: This paper studies the effect of different compression constraints and schemes, presented in a new and flexible paradigm, on achieving high compression ratios and acceptable signal-to-noise ratios for Arabic speech signals. Compression parameters are computed for variable frame sizes of a level 5 to 7 Discrete Wavelet Transform (DWT) representation of the signals, for different analyzing mother wavelet functions. Results are obtained and compared for global-threshold and level-dependent-threshold techniques, including comparisons of Signal to Noise Ratio, Peak Signal to Noise Ratio, and Normalized Root Mean Square Error.
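A minimal sketch contrasting the two thresholding schemes compared above on a level-5 DWT of a speech frame; PyWavelets, the db6 wavelet, and the 1.5-sigma threshold rule are illustrative assumptions.

```python
import numpy as np
import pywt

def compress_frame(frame, wavelet='db6', level=5, scheme='global'):
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    if scheme == 'global':
        t = 1.5 * np.std(np.concatenate(coeffs))     # one threshold for all levels
        thr = [pywt.threshold(c, t, mode='hard') for c in coeffs]
    else:                                            # level-dependent threshold
        thr = [pywt.threshold(c, 1.5 * np.std(c), mode='hard') for c in coeffs]
    return pywt.waverec(thr, wavelet)[:len(frame)]
```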
Abstract: Wavelet transforms are multiresolution decompositions that can be used to analyze signals and images. Image compression is one of the major applications of wavelet transforms in image processing; the wavelet transform is considered one of the most powerful methods, providing a high compression ratio. However, its implementation is very time-consuming. On the other hand, parallel computing technologies are an efficient means of accelerating image compression using wavelets. In this paper, we propose a parallel wavelet compression algorithm based on quadtrees. We implement the algorithm using MatlabMPI (a parallel, message-passing version of Matlab), compute its isoefficiency function, and show that it is scalable. Our experimental results also confirm the efficiency of the algorithm.
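A minimal sketch of the quadtree parallelization idea: split the image into quadrants and compress each in a separate worker. Python multiprocessing and PyWavelets stand in for MatlabMPI here, and the threshold value is illustrative.

```python
import numpy as np
import pywt
from multiprocessing import Pool

def compress_tile(tile):
    """Threshold the wavelet coefficients of one quadrant and reconstruct it."""
    coeffs = pywt.wavedec2(tile, 'db4', level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < 10.0] = 0.0                # illustrative threshold
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices,
                                             output_format='wavedec2'), 'db4')
    return rec[:tile.shape[0], :tile.shape[1]]

def parallel_compress(img):
    h, w = img.shape[0] // 2, img.shape[1] // 2  # first quadtree level
    tiles = [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]
    with Pool(4) as p:                           # one worker per quadrant
        q = p.map(compress_tile, tiles)
    return np.block([[q[0], q[1]], [q[2], q[3]]])
```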
Abstract: In this study, the effects of EGR on the CO and HC emissions of a dual-fuel HCCI-DI engine are investigated. Tests were conducted on a single-cylinder variable compression ratio (VCR) diesel engine at a compression ratio of 17.5. Premixed gasoline is provided by a carburetor connected to the intake manifold and equipped with a screw to adjust the premixed air-fuel ratio, and diesel fuel is injected directly into the cylinder through an injector at a pressure of 250 bar. A heater placed at the inlet manifold is used to control the intake charge temperature. The optimal intake charge temperature was 110-115 °C, owing to the better formation of a homogeneous mixture leading to HCCI combustion. The timing of diesel fuel injection has a great effect on the stratification of the in-cylinder charge in HCCI combustion. Experiments indicated 35° BTDC as the optimum injection timing. The coolant temperature was maintained at 50 °C during the tests. Results show that increasing engine speed at a constant EGR rate increases CO and UHC emissions due to incomplete combustion caused by the shorter combustion duration and a less homogeneous mixture. Results also show that increasing EGR reduces the amount of oxygen and leads to incomplete combustion, increasing CO emissions due to the lower combustion temperature. HC emissions also increase as a result of the lower combustion temperatures.
Abstract: A novel file splitting technique for the reduction of the nth-order entropy of text files is proposed. The technique is based on mapping the original text file into a non-ASCII binary file using a new codeword assignment method; the resulting binary file is then split into several subfiles, each containing one or more bits from each codeword of the mapped binary file. The statistical properties of the subfiles are studied, and it is found that they reflect the statistical properties of the original text file, which is not the case when the ASCII code is used as the mapper. The nth-order entropies of these subfiles are determined, and it is found that their sum is less than the entropy of the original text file for the same extension orders. These statistical properties of the resulting subfiles can be exploited to achieve better compression ratios when conventional compression techniques are applied to the subfiles individually, on a bit-wise rather than a character-wise basis.
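A minimal sketch of the splitting step: bit i of every codeword goes to subfile i, so each subfile can be compressed independently on a bit-wise basis. The codeword function is a hypothetical placeholder for the paper's non-ASCII assignment method, which is what gives the subfiles their favorable statistics.

```python
def split_into_subfiles(text, codeword, nbits):
    """codeword: hypothetical mapping char -> nbits-bit integer (the paper
    uses its own non-ASCII assignment; plain ASCII reflects the text's
    statistics poorly)."""
    subfiles = [bytearray() for _ in range(nbits)]
    for ch in text:
        c = codeword(ch)
        for i in range(nbits):
            subfiles[i].append((c >> i) & 1)   # one bit per byte for clarity;
    return subfiles                            # a real coder would pack bits
```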
Abstract: A new hybrid coding method for compressing animated polygonal meshes is presented. This paper assumes a simple representation of the geometric data: a temporal sequence of polygonal meshes, one for each discrete frame of the animated sequence. The method utilizes delta coding and an octree-based method. In this hybrid method, both the octree approach and the delta coding approach are applied to each single frame of the animation sequence in parallel, and the approach that generates the smaller encoded file size is chosen to encode the current frame. Given the same quality requirement, the hybrid coding method achieves a much higher compression ratio than either the octree-only or the delta-only method. The hybrid approach can represent 3D animated sequences with higher compression factors while maintaining reasonable quality. It is easy to implement and has a low-cost encoding process and a fast decoding process, which makes it a good choice for real-time applications.
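A minimal sketch of the per-frame selection rule: encode each frame with both coders and keep whichever bitstream is smaller. The quantized delta coder and the octree_encode routine are illustrative placeholders for the paper's actual coders.

```python
import numpy as np

def delta_encode(prev_verts, verts, step=1e-3):
    """Quantize per-vertex displacements from the previous frame."""
    return np.round((verts - prev_verts) / step).astype(np.int32).tobytes()

def encode_frame(prev_verts, verts, octree_encode):
    d = delta_encode(prev_verts, verts)
    o = octree_encode(prev_verts, verts)     # hypothetical octree coder
    return ('delta', d) if len(d) <= len(o) else ('octree', o)
```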
Abstract: This paper presents two new efficient algorithms for contour approximation. The proposed algorithms are compared with the Ramer (good quality), Triangle (faster), and Trapezoid (fastest) methods, which are briefly described. The Cartesian coordinates of an input contour are processed in such a manner that the contour is finally represented by a set of selected vertices of its edge. The paper sets out the main idea of the analyzed procedures for contour compression. For comparison, the mean square error and signal-to-noise ratio criteria are used. The computational time of the analyzed methods is estimated from the number of numerical operations. Experimental results are obtained in terms of image quality, compression ratio, and speed. The main advantage of the analyzed algorithms is the small number of arithmetic operations compared to existing algorithms.
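For context, a minimal sketch of the Ramer method used as the quality baseline above: recursively keep the vertex farthest from the chord while its distance exceeds a tolerance; the tolerance value is illustrative.

```python
import numpy as np

def ramer(points, tol=2.0):
    pts = np.asarray(points, dtype=float)    # contour vertices, shape (N, 2)
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    dx, dy = x1 - x0, y1 - y0
    n = np.hypot(dx, dy)
    if n == 0:                               # degenerate chord: use radial distance
        d = np.hypot(pts[:, 0] - x0, pts[:, 1] - y0)
    else:                                    # perpendicular distance to the chord
        d = np.abs(dx * (pts[:, 1] - y0) - dy * (pts[:, 0] - x0)) / n
    k = int(np.argmax(d))
    if d[k] <= tol or len(pts) <= 2:
        return [pts[0], pts[-1]]             # chord approximates this segment
    return ramer(pts[:k + 1], tol)[:-1] + ramer(pts[k:], tol)
```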
Abstract: In this work, we develop the concept of supercompression, i.e., compression beyond the compression standard used. In this context, the two compression rates are multiplied. In fact, supercompression is based on super-resolution; that is to say, supercompression is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, we use a convolutive mask inside the decoder that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.