Abstract: In this paper, an improvement of the PDLZW implementation
with a new dictionary-updating technique is proposed. A unique
dictionary is partitioned into hierarchical variable-word-width
dictionaries, which allows the dictionaries to be searched in
parallel. Moreover, a barrel shifter is adopted for loading a new
input string into the shift register in order to achieve higher
speed. However, the original PDLZW uses a simple FIFO update
strategy, which is not efficient. Therefore, a new window-based
updating technique is implemented to better distinguish how often
each particular address in the window is referenced. A freezing
policy is applied to the most frequently referenced address, which is
not updated until all the other addresses in the window have reached
the same priority. This guarantees that the more frequently
referenced addresses are not overwritten before their time comes.
This updating policy improves the compression efficiency of the
proposed algorithm while keeping the architecture low in complexity
and easy to implement.
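The freezing policy described above can be sketched as follows. This is an illustrative software model only; the class name, window layout, and the FIFO tie-break are assumptions, not the authors' exact hardware design.

```python
# Hypothetical sketch of a window-based dictionary update with a
# freezing policy: the most-referenced address in the window is frozen
# (never chosen as the replacement victim) until every other address
# has caught up to the same priority.

class WindowUpdater:
    def __init__(self, addresses):
        self.window = list(addresses)          # candidate dictionary addresses
        self.refs = {a: 0 for a in addresses}  # reference count per address

    def touch(self, addr):
        """Record a dictionary hit at `addr`."""
        if addr in self.refs:
            self.refs[addr] += 1

    def victim(self):
        """Pick the address to overwrite with a newly formed string."""
        counts = [self.refs[a] for a in self.window]
        if len(set(counts)) == 1:
            # All priorities equal: fall back to plain FIFO order.
            return self.window[0]
        # Freeze the hottest address; replace the coldest of the rest.
        hottest = max(self.window, key=lambda a: self.refs[a])
        cold = [a for a in self.window if a != hottest]
        return min(cold, key=lambda a: self.refs[a])

upd = WindowUpdater([0, 1, 2, 3])
upd.touch(2); upd.touch(2); upd.touch(1)
print(upd.victim())  # → 0 (least referenced; the frozen address 2 is skipped)
```

The FIFO fallback reproduces the original PDLZW behaviour whenever no address stands out, so the policy only deviates from FIFO when reference counts actually differ.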
Abstract: In this work a new method for low-complexity image coding
is presented that permits different settings and great scalability in
the generation of the final bit stream. The coder is a
continuous-tone still-image compression system that combines lossy
and lossless compression by making use of finite-arithmetic
reversible transforms. Both the color-space transformation and the
wavelet transformation are reversible. The transformed coefficients
are coded by a coding system based on a subdivision into smaller
components (CFDS), similar to the bit-importance codification. The
subcomponents so obtained are reordered by a highly configurable,
application-dependent alignment system that makes it possible to
reconfigure the elements of the image and to obtain different
importance levels from which the bit stream is generated. The
subcomponents of each importance level are coded using a
variable-length entropy coding system (VBLm) that permits the
generation of an embedded bit stream. This bit stream by itself codes
a compressed still image. However, applying a packing system to the
bit stream after the VBLm stage yields a final, highly scalable bit
stream composed of a basic image level and one or several improvement
levels.
Abstract: We present in this paper a new approach to specific JPEG steganalysis based on studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both while also controlling the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding; this deviation is greater when the image is a cover medium than when it is a stego image. To observe this deviation, we defined new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step, which makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher-discriminant-based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5), and according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
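The entropy-deviation feature can be illustrated with a toy experiment. Here zlib stands in for the JPEG lossless (RLE + Huffman) stage and LSB flipping stands in for an embedding; both substitutions, and the synthetic coefficient data, are simplifying assumptions rather than the paper's actual pipeline.

```python
# Toy illustration: measure how the byte-level entropy of the
# losslessly compressed stream shifts after an embedding.
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte stream, in bits per byte."""
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def embed_lsb(data: bytes, bits) -> bytes:
    """Toy embedding: overwrite the LSBs of successive bytes."""
    out = bytearray(data)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return bytes(out)

cover = bytes((i * i) % 251 for i in range(4096))   # stand-in "coefficients"
payload = [1, 0, 1, 1, 0, 1, 0, 0] * 64

h_cover = byte_entropy(zlib.compress(cover))
h_stego = byte_entropy(zlib.compress(embed_lsb(cover, payload)))
# Per the avalanche effect, even a small embedding perturbs the whole
# compressed stream; the deviation |h_stego - h_cover| is the feature.
print(abs(h_stego - h_cover))
```

In the paper's setting the deviation is measured across multiple re-embeddings, which is what makes the detector's response independent of the payload size.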
Abstract: With the drastic growth of optical communication
technology, a lossless, low-crosstalk and multifunctional optical
switch is most desirable for large-scale photonic networks. To
realize such a switch, we have introduced a new optical-switch
architecture that embeds many functions in a single device. The
asymmetrical architecture of the OXADM consists of three parts: port
selection, add/drop operation, and path routing. The selective port
permits only the wavelength of interest to pass through and thus acts
as a filter, while the add and drop functions are implemented in the
second part of the OXADM architecture. The signals can then be
re-routed to any output port and/or undergo an accumulation function
that multiplexes all signals onto a single path before exiting at any
desired output port; this is performed by the path-routing operation.
The unique features offered by the OXADM have extended its
application to Fiber-to-the-Home (FTTH) technology, where the OXADM
is used as a wavelength-management element in the Optical Line
Terminal (OLT). Each port is assigned specific operating wavelengths,
and dynamic routing management ensures that no traffic congestion
occurs in the OLT.
Abstract: Breast cancer detection techniques have been reported
to aid radiologists in analyzing mammograms. We note that most
techniques are performed on uncompressed digital mammograms.
Mammogram images are huge in size necessitating the use of
compression to reduce storage/transmission requirements. In this
paper, we present an algorithm for the detection of
microcalcifications in the JPEG2000 domain. The algorithm is based
on the statistical properties of the wavelet transform that the
JPEG2000 coder employs. Simulations were carried out at
different compression ratios. The sensitivity of this algorithm ranges
from 92% with a false positive rate of 4.7 down to 66% with a false
positive rate of 2.1 using lossless compression and lossy compression
at a compression ratio of 100:1, respectively.
Abstract: The development of many measurement and inspection systems for products based on real-time image processing cannot be carried out entirely in a laboratory, owing to the size or the temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only after a large set of product images is available can the development of the real-time image-processing algorithms for measurement or inspection be accomplished under realistic conditions. Finally, the recording service is turned off or removed, and the system operates only with the real-time services for the acquisition and processing of the images. This article presents a systematic performance evaluation of the image-compression algorithms currently available for implementing a real-time recording service. The results make it possible to establish a trade-off between the compression of the image size and the CPU time required to reach that compression level.
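The trade-off measurement described above can be sketched in a few lines. zlib is used here only as an example codec, and the synthetic "image" is an assumption; the article benchmarks several real compression algorithms on recorded production images.

```python
# Sketch: compression ratio vs. CPU time for several zlib levels,
# the kind of trade-off curve the evaluation above produces.
import time
import zlib

# Synthetic stand-in for an 8-bit 256x256 grayscale frame.
raw = bytes((x * 7 + y) % 256 for y in range(256) for x in range(256))

for level in (1, 6, 9):
    t0 = time.process_time()
    packed = zlib.compress(raw, level)
    cpu = time.process_time() - t0
    ratio = len(raw) / len(packed)
    print(f"level {level}: ratio {ratio:.2f}, cpu {cpu * 1000:.2f} ms")
```

Higher levels usually buy a better ratio at the cost of CPU time; a real-time recording service must pick the highest level whose CPU cost still fits the frame period.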
Abstract: In this paper a method of text modeling for Polish is
discussed. The method aims at transforming continuous input text into
a text consisting of sentences in so-called canonical form, which is
characterized by, among other things, a complete structure and the
absence of anaphora and ellipses. The transformation is lossless with
respect to the content of the text being transformed. The modeling
method has been developed for the needs of the Thetos system, which
translates written Polish texts into Polish Sign Language. We believe
that the method can also be used in various applications that deal
with natural language, e.g. in a text-summary generator for Polish.
Abstract: Medical imaging takes advantage of digital technology in
imaging and teleradiology. In teleradiology systems large amounts of
data are acquired, stored and transmitted. A major technology that
may help to solve the problems associated with massive data storage
and data-transfer capacity is data compression and decompression.
Many image-compression methods are available; they are classified as
lossless and lossy. In a lossy compression method the decompressed
image contains some distortion. Fractal image compression (FIC) is a
lossy method in which an image is coded as a set of contractive
transformations in a complete metric space; this set of contractive
transformations is guaranteed to produce an approximation to the
original image. In this paper FIC is achieved by PIFS using quadtree
partitioning. PIFS is applied to different kinds of images:
ultrasound, CT scan, angiogram, X-ray and mammogram. For each
modality approximately twenty images are considered, and the average
compression ratio and PSNR are computed. In this fractal-encoding
method the tolerance factor Tmax is varied from 1 to 10 while the
other standard parameters are kept constant. For all modalities the
compression ratio and peak signal-to-noise ratio (PSNR) are computed
and studied, and the quality of the decompressed image is assessed by
its PSNR. The results show that the compression ratio increases with
the tolerance factor and that mammograms have the highest compression
ratio. Because of the properties of fractal compression, the image
quality is not degraded up to an optimum tolerance factor of
Tmax = 8.
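A heavily simplified PIFS encoder/decoder can illustrate the contractive-transformation idea. To keep the sketch short it uses fixed-size blocks on a 1-D signal instead of the paper's quadtree-partitioned 2-D images; block sizes, the least-squares fit, and the clamp on the scale factor are standard fractal-coding ingredients, while everything else is an assumption.

```python
# Simplified PIFS: each range block is approximated by a contractive
# affine map (scale s, offset o) of a downsampled domain block.

def downsample(block):
    """Average neighbouring pairs: domain block -> range-block size."""
    return [(block[i] + block[i + 1]) / 2 for i in range(0, len(block), 2)]

def fit(d, r):
    """Least-squares map r ~ s*d + o, with |s| clamped below 1."""
    n = len(d)
    md, mr = sum(d) / n, sum(r) / n
    var = sum((x - md) ** 2 for x in d)
    s = 0.0 if var == 0 else sum((d[i] - md) * (r[i] - mr) for i in range(n)) / var
    s = max(-0.9, min(0.9, s))          # enforce contractivity
    o = mr - s * md
    err = sum((s * d[i] + o - r[i]) ** 2 for i in range(n))
    return s, o, err

def encode(signal, rsize=4):
    """Store, for each range block, the best (domain index, s, o)."""
    domains = [downsample(signal[i:i + 2 * rsize])
               for i in range(0, len(signal) - 2 * rsize + 1, rsize)]
    code = []
    for i in range(0, len(signal), rsize):
        r = signal[i:i + rsize]
        s, o, _, j = min((fit(d, r) + (j,) for j, d in enumerate(domains)),
                         key=lambda t: t[2])
        code.append((j, s, o))
    return code

def decode(code, length, rsize=4, iters=12):
    """Iterate the contractive maps from an arbitrary start signal."""
    sig = [0.0] * length
    for _ in range(iters):
        domains = [downsample(sig[i:i + 2 * rsize])
                   for i in range(0, length - 2 * rsize + 1, rsize)]
        nxt = []
        for j, s, o in code:
            nxt.extend(s * x + o for x in domains[j])
        sig = nxt
    return sig

signal = [float((i * 13) % 32) for i in range(32)]
approx = decode(encode(signal), len(signal))
```

A tolerance factor like the paper's Tmax would stop the domain search (or, with quadtrees, the block splitting) as soon as `err` falls below the tolerance, trading fidelity for compression ratio.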
Abstract: Data compression is used operationally to reduce bandwidth and storage requirements. An efficient method for lossless weather-radar data compression is presented. The proposed method takes the characteristics of the data into account and applies optimal linear prediction to the PPI images in the weather-radar data. Successive PPI images are highly similar, so a dramatic reduction in source entropy is achieved by the prediction algorithm. Standard lossless compression methods are then used to compress the predicted data. Experimental results show that, for weather-radar data, the method proposed in this paper outperforms the other methods.
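The inter-frame prediction idea can be demonstrated with two synthetic scans. zlib stands in for the paper's entropy coders, byte-wise differencing stands in for the optimal linear predictor, and the synthetic PPI data is an assumption; only the overall predict-then-compress structure follows the abstract.

```python
# Sketch: compress the residual between consecutive PPI scans instead
# of each scan alone; the residual of highly similar scans has far
# lower entropy and compresses much better.
import zlib

def residual(prev, curr):
    """Byte-wise difference modulo 256 (losslessly invertible)."""
    return bytes((c - p) % 256 for p, c in zip(prev, curr))

def restore(prev, res):
    return bytes((p + r) % 256 for p, r in zip(prev, res))

scan0 = bytes((i * 31) % 256 for i in range(4096))
# Next scan: the same field with a small, regular drift.
scan1 = bytes((b + (i % 3 == 0)) % 256 for i, b in enumerate(scan0))

direct = len(zlib.compress(scan1))
predicted = len(zlib.compress(residual(scan0, scan1)))
assert restore(scan0, residual(scan0, scan1)) == scan1   # lossless
print(direct, predicted)
```

The modulo-256 differencing guarantees exact reconstruction, which is what makes the scheme lossless end to end.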
Abstract: This article presents a current-mode quadrature oscillator
using a differential difference current conveyor (DDCC) and a voltage
differencing transconductance amplifier (VDTA) as active elements.
The proposed circuit is realized from a non-inverting lossless
integrator and an inverting second-order low-pass filter. The
oscillation condition and oscillation frequency can be electronically
and orthogonally controlled via the input bias currents. The circuit
description is very simple, consisting of merely one DDCC, one VDTA,
one grounded resistor and three grounded capacitors. Using only
grounded elements makes the proposed circuit suitable for IC
implementation. The proposed oscillator has high output impedance,
so it can be cascaded or drive an external load without buffer
devices. PSPICE simulation results are presented and agree well with
the theoretical predictions. The power consumption is approximately
1.76 mW at ±1.25 V supply voltages.
Abstract: This paper presents a comparative study of recent integer
DCTs and a new method to construct a low-sensitivity integer-DCT
structure for colored input signals. The method uses the sensitivity
of the multiplier coefficients to finite word length as an indicator
of how word-length truncation affects the quality of the output
signal. The sensitivity is also evaluated theoretically as a function
of the auto-correlation and covariance matrix of the input signal.
The structure of the integer-DCT algorithm is optimized by combining
the less sensitive lifting-structure types of IRT, evaluated through
the sensitivity of the multiplier coefficients to finite word length
expressed as a function of the covariance matrix of the input signal.
The effectiveness of the optimum combination of IRT in the
integer-DCT algorithm is confirmed by the quality improvement over
the existing case. As a result, the optimum combination of IRT in
each integer-DCT algorithm evidently improves output-signal quality
while remaining compatible with the existing one.
Abstract: The join dependency provides the basis for obtaining a
lossless-join decomposition in a classical relational schema. The
existence of a join dependency guarantees that the tables always
represent the correct data after being joined. Since classical
relational databases cannot handle imprecise data, they were extended
to fuzzy relational databases so that uncertain, ambiguous, imprecise
and partially known information can also be stored in a formal way.
However, like classical databases, fuzzy relational databases also
undergo decomposition during normalization, and the issue of joining
the decomposed fuzzy relations remains open. Our effort in the
present paper is to address this issue. We define fuzzy join
dependency in the framework of type-1 and type-2 fuzzy relational
databases using the concept of fuzzy equality, which is defined
through fuzzy functions. We use the fuzzy equi-join operator for
computing the fuzzy equality of two attribute values. We also discuss
the dependency-preservation property under this fuzzy equi-join and
derive the necessary condition for fuzzy functional dependencies to
be preserved when the decomposed fuzzy relations are joined. We
further derive the conditions for a fuzzy join dependency to exist in
the context of both type-1 and type-2 fuzzy relational databases. We
find that, unlike in classical relational databases, even the
existence of a trivial join dependency does not ensure a
lossless-join decomposition in type-2 fuzzy relational databases.
Finally, we derive the conditions for the fuzzy equality to be
non-zero and for an attribute to qualify as a fuzzy key.
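A fuzzy equi-join can be illustrated with two toy relations. The triangular similarity function, the threshold, and the tuple layout are assumptions chosen for the example; they are not the paper's formal type-1/type-2 definitions.

```python
# Illustrative fuzzy equality and fuzzy equi-join: tuples join when
# their attribute values are "approximately equal", and each joined
# tuple carries its membership degree.

def fuzzy_eq(a, b, spread=10.0):
    """Degree in [0, 1] to which two numeric values are 'equal'."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def fuzzy_equi_join(r1, r2, attr1, attr2, threshold=0.5):
    """Join tuples whose fuzzy equality on the join attributes
    exceeds the threshold; results carry their membership degree."""
    out = []
    for t1 in r1:
        for t2 in r2:
            mu = fuzzy_eq(t1[attr1], t2[attr2])
            if mu >= threshold:
                out.append(({**t1, **t2}, mu))
    return out

emp = [{"name": "A", "age": 30}, {"name": "B", "age": 52}]
grp = [{"group": "young", "ref_age": 28}, {"group": "senior", "ref_age": 60}]
joined = fuzzy_equi_join(emp, grp, "age", "ref_age")
for row, mu in joined:
    print(row["name"], row["group"], round(mu, 2))  # → A young 0.8
```

A zero membership degree corresponds to the "fuzzy equality is zero" case the abstract mentions: such tuple pairs never join, which is why the non-zero condition matters for losslessness.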
Abstract: The aim of this paper is to present a new method for the
progressive transmission of electrocardiograms (ECG). The idea
consists of transforming an ECG signal into an image containing one
beat per row. In the first step, the beats are synchronized in order
to reduce the high frequencies due to inter-beat transitions. The
resulting image is then transformed using a discrete version of the
Radon transform (DRT). Transmitting the ECG thus amounts to
transmitting the most significant energy of the transformed image in
the Radon domain. For decoding, the receiver needs the inverse Radon
transform as well as the two synchronization frames.
The presented protocol can be adapted for lossy-to-lossless
compression systems. In lossy mode we show that the compression ratio
can be multiplied by an average factor of 2 while preserving
acceptable quality of the reconstructed signal. These results have
been obtained on real signals from the MIT database.
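The beat-to-image step can be sketched as follows. The synthetic signal and the align-on-maximum rule (a crude R-peak locator) are assumptions, and the Radon-transform coding stage is omitted.

```python
# Sketch: cut a signal into beats, centre each beat's peak in its row,
# and stack the rows into an image, so inter-beat transitions line up
# and high-frequency content across rows is reduced.

def beats_to_image(signal, beat_len, row_len):
    """Segment `signal` into beats and centre each beat's peak."""
    rows = []
    for start in range(0, len(signal) - beat_len + 1, beat_len):
        beat = signal[start:start + beat_len]
        peak = beat.index(max(beat))          # crude R-peak locator
        shift = row_len // 2 - peak           # centre the peak
        row = [0.0] * row_len
        for i, v in enumerate(beat):
            j = i + shift
            if 0 <= j < row_len:
                row[j] = v
        rows.append(row)
    return rows

# Two synthetic "beats" whose peaks sit at different offsets.
sig = [0, 1, 5, 1, 0, 0, 0, 0,   0, 0, 1, 5, 1, 0, 0, 0]
img = beats_to_image(sig, beat_len=8, row_len=8)
print([row.index(max(row)) for row in img])  # → [4, 4]: peaks aligned
```

After this alignment, successive rows are highly correlated, which is what makes the subsequent Radon-domain coding compact; the per-beat shifts are the synchronization data the receiver needs.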