Abstract: Medical imaging benefits from digital technology in
imaging and teleradiology. In teleradiology systems, large amounts
of data are acquired, stored and transmitted. A major
technology that may help to solve the problems associated with the
massive data storage and data transfer capacity is data compression
and decompression. Many image compression methods are available;
they are classified as lossless and lossy. In lossy compression,
the decompressed image contains some distortion. Fractal image
compression (FIC) is a lossy method in which an image is
coded as a set of contractive transformations in a complete metric
space. The set of contractive transformations is guaranteed to
produce an approximation to the original image. In this paper, FIC is
achieved by a Partitioned Iterated Function System (PIFS) using
quadtree partitioning. PIFS is applied to images from different
modalities: ultrasound, CT scan, angiogram, X-ray and mammogram. For
each modality, approximately twenty images are considered and the
average compression ratio and PSNR values are computed. In this
method of fractal encoding, the
parameter, tolerance factor Tmax, is varied from 1 to 10, keeping the
other standard parameters constant. For all image modalities, the
compression ratio and Peak Signal to Noise Ratio (PSNR) are
computed and studied. The quality of the decompressed image is
assessed by its PSNR. The results show that the compression ratio
increases with the tolerance factor and that mammograms yield the
highest compression ratio. Owing to the properties of fractal
compression, image quality is not degraded up to an optimum
tolerance factor Tmax of 8.
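To make the quadtree/PIFS loop concrete, here is a minimal sketch in Python. It is not the paper's implementation: `domains_for` and `make_domains_for` are hypothetical helpers supplying candidate domain blocks, and the least-squares fit of block ≈ s·d + o stands in for a full search over contractive maps with isometries.

```python
import numpy as np

def best_map(block, domains):
    """Least-squares fit block ~ s*d + o over candidate domain blocks d
    (already reduced to the range-block size)."""
    best = (np.inf, 0.0, 0.0, -1)
    for i, d in enumerate(domains):
        s, o = np.polyfit(d.ravel(), block.ravel(), 1)
        s = np.clip(s, -0.9, 0.9)          # keep the map contractive
        err = np.mean((s * d + o - block) ** 2)
        if err < best[0]:
            best = (err, s, o, i)
    return best

def quadtree_encode(img, domains_for, tol, min_size=4):
    # Split a range block until the best PIFS match is within Tmax.
    h, w = img.shape
    err, s, o, i = best_map(img, domains_for((h, w)))
    if err <= tol or h <= min_size or w <= min_size:
        return [(img.shape, s, o, i)]      # leaf: store the transform
    codes = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            codes += quadtree_encode(img[rows, cols], domains_for, tol, min_size)
    return codes

def make_domains_for(img):
    # Hypothetical, highly reduced domain pool: non-overlapping blocks
    # of a 2x down-sampled copy of the image.
    small = img[::2, ::2]
    def domains_for(shape):
        h, w = shape
        return [small[r:r + h, c:c + w]
                for r in range(0, small.shape[0] - h + 1, h)
                for c in range(0, small.shape[1] - w + 1, w)]
    return domains_for

img = np.random.rand(32, 32)
print(len(quadtree_encode(img, make_domains_for(img), tol=0.01)))
```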
Abstract: A high bit error rate is among the factors that characterize
satellite communication channels. We present a system for
still image transmission over noisy satellite channels. The system
couples image compression with error control codes to improve
the received image quality while maintaining the bandwidth
requirements. The proposed system is tested using high-resolution
satellite imagery simulated over the Rician fading channel. Evaluation
results show an improvement in overall system performance, including
image quality and bandwidth requirements, compared to similar systems
with different
coding schemes.
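For context, the Rician fading channel used in the evaluation can be simulated with a few lines; the K-factor is not given in the abstract, so it is left as a parameter in this sketch.

```python
import numpy as np

def rician_gain(n, k_factor):
    """Draw n complex channel gains for a Rician fading channel.

    k_factor is the ratio of line-of-sight to scattered power (linear).
    Normalized so the average power E[|h|^2] = 1.
    """
    los = np.sqrt(k_factor / (k_factor + 1))
    nlos = (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)
    return los + np.sqrt(1 / (k_factor + 1)) * nlos

# Received symbol: r = h * s + noise, e.g. for BPSK symbols s in {+1, -1}
h = rician_gain(4, k_factor=5.0)
print(np.abs(h))
```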
Abstract: A new code synchronization algorithm is proposed in
this paper for the secondary cell-search stage in wideband CDMA
systems. Rather than using the Cyclically Permutable (CP) code in the
Secondary Synchronization Channel (S-SCH) to simultaneously
determine the frame boundary and scrambling code group, the new
synchronization algorithm implements the same function with lower
system complexity and a shorter Mean Acquisition Time (MAT). The
Secondary Synchronization Code (SSC) is redesigned by splitting it
into two sub-sequences. We treat the scrambling code group information
as data bits and use simple time diversity BCH coding for further
reliability. This avoids involved and time-costly Reed-Solomon (RS)
code computations and comparisons. Analysis and simulation results
show that the Synchronization Error Rate (SER) yielded by the new
algorithm in Rayleigh fading channels is close to that of the
conventional algorithm in the standard. This new synchronization
algorithm reduces system complexities, shortens the average
cell-search time and can be implemented in the slot-based cell-search
pipeline. By exploiting antenna diversity and pipelining the
correlation processes, the new algorithm also lends itself to
multiple-antenna systems.
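As an illustration of coding the code-group bits with a simple BCH code, the following sketch gives a systematic BCH(15,7) encoder over GF(2) (generator g(x) = x^8 + x^7 + x^6 + x^4 + 1, correcting two bit errors). The (15,7) parameters are illustrative; the paper's actual code and time-diversity arrangement are not specified in the abstract.

```python
# Systematic BCH(15,7) encoding over GF(2).
# Generator g(x) = x^8 + x^7 + x^6 + x^4 + 1 as a bit mask:
G = 0b111010001

def bch15_7_encode(data7):
    """data7: integer holding 7 message bits; returns the 15-bit codeword."""
    reg = data7 << 8                 # multiply message by x^8
    for i in range(14, 7, -1):       # polynomial division, MSB first
        if reg & (1 << i):
            reg ^= G << (i - 8)
    return (data7 << 8) | reg        # message bits followed by parity

# Time diversity: send the same codeword in several frames and
# majority-vote (or soft-combine) the received copies before decoding.
group_bits = 0b1011001               # e.g. a scrambling code group index
print(bin(bch15_7_encode(group_bits)))
```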
Abstract: The ringing effect is one of the most annoying visual
artifacts in digital video and a significant factor in subjective
quality deterioration. However, there is a widely accepted misunderstanding
of its cause. In this paper, we propose a reasonable interpretation of the
cause of the ringing effect. Based on this interpretation, we further
suggest two methods to reduce ringing in DCT-based video coding. The
methods adaptively adjust quantizers according to video features. Our
experiments show that the methods improve subjective quality with
acceptable additional computing cost.
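The abstract does not name the video features used; as a hedged illustration, the sketch below scales the quantization step from a simple block edge-activity measure, on the assumption that blocks containing sharp edges are quantized more finely.

```python
import numpy as np

def adaptive_qstep(block, base_q, lo=0.5, hi=1.5):
    # Edge activity via gradient magnitude (an illustrative feature;
    # the paper's actual features are not specified in the abstract).
    gy, gx = np.gradient(block.astype(float))
    activity = np.mean(np.hypot(gx, gy))
    # Finer quantization (smaller step) in blocks with strong edges,
    # where ringing around sharp transitions is most visible.
    scale = hi - (hi - lo) * min(activity / 32.0, 1.0)
    return base_q * scale

block = np.random.rand(8, 8) * 255
print(adaptive_qstep(block, base_q=16))
```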
Abstract: MicroRNAs (miRNAs) are a class of non-coding
RNAs that hybridize to mRNAs and induce either translation
repression or mRNA cleavage. Recently, it has been reported that
miRNAs may play an important role in human diseases. By
integrating information on miRNA target genes, cancer genes, and
miRNA and mRNA expression profiles, a database has been developed to
link miRNAs to cancer target genes. The database provides
experimentally verified human miRNA target gene information,
including oncogenes and tumor suppressor genes. In addition, fragile
site information for miRNAs and the strength of the correlation
between miRNA and target mRNA expression levels for nine tissue types
are computed; these serve as indicators that a miRNA may play a role in
human cancer. The database is freely accessible at
http://ppi.bioinfo.asia.edu.tw/mirna_target/index.html.
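As an example of the expression-correlation indicator, a Pearson correlation across tissues can be computed as follows (the database's exact statistic is not stated in the abstract, and the expression values here are invented).

```python
import numpy as np

def expression_correlation(mirna_expr, mrna_expr):
    """Pearson correlation between a miRNA and its target mRNA
    expression across tissues (equal-length arrays, one value per
    tissue). A strong negative value is consistent with the miRNA
    repressing its target."""
    return np.corrcoef(mirna_expr, mrna_expr)[0, 1]

# Hypothetical expression levels across nine tissue types:
mirna = np.array([5.1, 4.8, 6.0, 2.2, 3.0, 5.5, 4.1, 3.7, 5.9])
mrna  = np.array([1.2, 1.5, 0.8, 4.0, 3.1, 1.0, 2.2, 2.8, 0.9])
print(expression_correlation(mirna, mrna))   # close to -1 here
```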
Abstract: Unlike general-purpose processors, digital signal
processors (DSP processors) are strongly application-dependent. To
meet the needs of diverse applications, a wide variety of DSP
processors based on architectures ranging from conventional designs
to VLIW have been introduced to the market over the
years. The functionality, performance, and cost of these processors
vary over a wide range. In order to select a processor that meets the
design criteria for an application, processor performance is usually
the major concern for digital signal processing (DSP) application
developers. Performance data are also essential for the designers of
DSP processors to improve their design. Consequently, several DSP
performance benchmarks have been proposed over the past decade or
so. However, none of these benchmarks seems to include recent
DSP applications.
In this paper, we use a new benchmark that we recently developed
to compare the performance of popular DSP processors from Texas
Instruments and StarCore. The new benchmark is based on the
Selectable Mode Vocoder (SMV), a speech-coding program from the
recent third generation (3G) wireless voice applications. All
benchmark kernels are compiled by the compilers of the respective
DSP processors and run on their simulators. Weighted arithmetic
mean of clock cycles and arithmetic mean of code size are used to
compare the performance of five DSP processors.
In addition, we studied how the performance of a processor is
affected by code structure, processor architecture features and
compiler optimization. The extensive experimental data gathered,
analyzed, and presented in this paper should be helpful for DSP
processor and compiler designers to meet their specific design goals.
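The two summary statistics are straightforward; a small sketch with hypothetical kernels, cycle counts and weights (the actual SMV kernels and their weights are not listed in the abstract):

```python
import numpy as np

kernels = ["lpc_analysis", "pitch_search", "codebook_search"]
cycles  = np.array([120_000, 310_000, 540_000])   # per-kernel clock cycles
weights = np.array([0.2, 0.3, 0.5])               # relative execution frequency
code_sz = np.array([1_800, 2_400, 3_900])         # bytes of code per kernel

# Weighted arithmetic mean of cycles, plain mean of code size:
print(np.average(cycles, weights=weights), code_sz.mean())
```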
Abstract: The development of signal compression algorithms is making
impressive progress. These algorithms are continuously improved by
new tools and aim to reduce, on average, the number of bits necessary
for the signal representation while minimizing the reconstruction
error. This article proposes the compression of Arabic speech signals
by a hybrid method combining the wavelet transform and linear
prediction. The adopted approach rests, on the one hand, on
decomposing the original signal with analysis filters, followed by a
compression stage, and, on the other hand, on applying order-5 linear
prediction to the compressed signal coefficients. The aim of this
approach is the estimation of the prediction error, which is coded
and transmitted. A decoding operation is then used to reconstruct the
original signal. Thus, an adequate choice of the filter bank used for
the transform is necessary to increase the compression rate while
inducing imperceptible distortion from an auditory point of view.
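A minimal sketch of the hybrid scheme, assuming order-5 linear prediction is applied per wavelet subband and the residual is what gets coded (the exact arrangement of the analysis filters and the prediction is not fully specified in the abstract):

```python
import numpy as np
import pywt
from scipy.linalg import solve_toeplitz

def lpc(x, order=5):
    """Order-5 linear prediction coefficients via the autocorrelation
    method (Yule-Walker equations)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz(r[:order], r[1:order + 1])

signal = np.random.randn(1024)        # stand-in for a speech frame
coeffs = pywt.wavedec(signal, "db4", level=3)

# Predict each subband and keep only the (small) prediction residual,
# which would then be quantized and coded.
residuals = []
for c in coeffs:
    a = lpc(c, order=5)
    pred = np.convolve(c, np.concatenate(([0.0], a)))[:len(c)]
    residuals.append(c - pred)
print(sum(len(r) for r in residuals))
```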
Abstract: MicroRNAs (miRNAs) are small, non-coding and
regulatory RNAs about 20 to 24 nucleotides long. Their conserved
nature across organisms makes them a good basis for discovering
new miRNAs by a comparative genomics approach. The
study identified 21 miRNAs from 20 pre-miRNAs belonging to 16
families (miR156, 157, 158, 164, 165, 168, 169, 172, 319, 390, 393,
394, 395, 400, 472 and 861) in the evergreen spruce tree (Picea). The
miRNA families miR157, 158, 164, 165, 168, 169, 319, 390, 393,
394, 400, 472 and 861 are reported for the first time in Picea. All
20 miRNA precursors form stable minimum-free-energy stem-loop
structures, as their orthologues do in Arabidopsis, and the mature
miRNAs reside in the stem portion of the stem-loop structure. Sixteen
(16) miRNAs are from Picea glauca and five (5) from Picea
sitchensis. Their targets include transcription factors and
growth-related, stress-related and hypothetical proteins.
Abstract: An electrocardiogram (ECG) data compression algorithm
is needed that reduces the amount of data to be transmitted, stored
and analyzed without losing the clinical information content. A
wavelet ECG data codec based on the Set Partitioning In Hierarchical
Trees (SPIHT) compression algorithm is proposed in this paper. The
SPIHT algorithm has achieved notable success in still image coding.
We modified the algorithm for the one-dimensional (1-D) case and
applied it to compression of ECG data.
This compression method achieves a small percent root mean square
difference (PRD) and a high compression ratio with low
implementation complexity. Experiments on selected
records from the MIT-BIH arrhythmia database revealed that the
proposed codec is significantly more efficient in compression and in
computation than previously proposed ECG compression schemes.
Compression ratios of up to 48:1 for ECG signals lead to acceptable
results for visual inspection.
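For reference, the PRD figure of merit used above can be computed as follows (one common form; some variants subtract the signal mean first):

```python
import numpy as np

def prd(x, x_hat):
    """Percent root mean square difference between an ECG record x
    and its reconstruction x_hat."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

x = np.sin(np.linspace(0, 2 * np.pi, 360))
print(prd(x, x + 0.01))   # small distortion -> small PRD
```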
Abstract: In this paper we study the effect of scene changes on an
image sequence coding system using the Embedded Zerotree Wavelet
(EZW). The scene change considered here is the full-motion case. A
special image sequence is generated in which scene changes occur
randomly. Two scenarios are considered. In the first, the system must
keep the reconstruction quality as high as possible by managing the
bit rate (BR) when a scene change occurs. In the second, the system
must keep the bit rate as constant as possible by managing the
reconstruction quality. The first scenario is motivated by the
availability of a wide-band transmission channel, where the bit rate
may be increased to keep the reconstruction quality above a given
threshold. The second scenario concerns narrow-band transmission
channels, where increasing the bit rate is not possible. In this
latter case,
applications for which the reconstruction quality is not a constraint
may be considered. The simulations are performed with a five-scale
wavelet decomposition using the biorthogonal 9/7-tap filter bank.
Entropy coding is performed using a specifically defined binary
codebook and the EZW algorithm. Experimental results are
presented and compared to LEAD H263 EVAL. It is shown that if
the reconstruction quality is the constraint, the system increases the
bit rate to obtain the required quality. In the case where the bit rate
must be constant, the system is unable to provide the required quality
if a scene change occurs; however, the system recovers the quality
once the scene change has passed.
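The two scenarios amount to a simple control loop. The sketch below uses a uniform quantizer as a stand-in for the EZW coder, purely to illustrate scenario 1 (raise the bit rate until the quality threshold holds):

```python
import numpy as np

def encode_decode(frame, bits_per_pixel):
    """Stand-in for the EZW encode/decode pair: uniform quantization
    whose step shrinks as the bit budget grows (illustrative only)."""
    levels = 2 ** max(1, int(bits_per_pixel))
    step = 256.0 / levels
    return np.round(frame / step) * step

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf

def quality_first(frame, target_psnr, bpp=1, bpp_max=8):
    # Scenario 1: increase the bit rate until the quality threshold holds.
    # Scenario 2 would instead hold bpp fixed and let the quality float.
    while psnr(frame, encode_decode(frame, bpp)) < target_psnr and bpp < bpp_max:
        bpp += 1
    return bpp

frame = np.random.randint(0, 256, (64, 64)).astype(float)
print("bpp needed:", quality_first(frame, target_psnr=40.0))
```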
Abstract: Power loss reduction is one of the main targets in the power industry, and in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves selecting the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed based on a simple optimum loss calculation that determines optimal trees of the given network. From graph theory, a distribution network can be represented as a graph consisting of a set of nodes and branches, so the problem can be viewed as determining an optimal tree of the graph, which simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, and some improvements to the algorithm are made in the chromosome coding. An implementation of the algorithm presented in [7], modified in the load flow program, is also applied and compared with the proposed method. In [7] an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules; it reduces the number of load flow runs, reduces the switching combinations to a smaller number and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with PSAT and MATLAB are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than that of [7] and other methods.
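A minimal sketch of the chromosome coding described above: one gene per fundamental loop, each gene naming the branch to open in that loop, so every chromosome maps to a radial topology. The loop sets, the GA operators and the loss function below are placeholders, not the paper's refined algorithm or a real load flow:

```python
import random

loops = [            # branch indices forming each fundamental loop (hypothetical)
    [2, 5, 7, 11],
    [3, 8, 12],
    [4, 9, 13, 14],
]

def random_chromosome():
    # One open branch per loop guarantees a radial configuration.
    return [random.choice(loop) for loop in loops]

def losses(open_branches):
    # Placeholder for a radial load-flow loss calculation (e.g. PSAT).
    return sum(open_branches)          # dummy objective, illustration only

def simple_ga(pop_size=20, generations=50):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
        if random.random() < 0.1:                             # mutation stays in-loop
            i = random.randrange(len(loops))
            child[i] = random.choice(loops[i])
        worst = max(pop, key=losses)
        if losses(child) < losses(worst):
            pop[pop.index(worst)] = child
    return min(pop, key=losses)

print(simple_ga())
```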
Abstract: In the recent past Learning Classifier Systems have
been successfully used for data mining. Learning Classifier System
(LCS) is basically a machine learning technique which combines
evolutionary computing, reinforcement learning, supervised or
unsupervised learning and heuristics to produce adaptive systems. A
LCS learns by interacting with an environment from which it
receives feedback in the form of numerical reward. Learning is
achieved by trying to maximize the amount of reward received. All
LCS models, more or less, comprise four main components: a finite
population of condition–action rules, called classifiers; the
performance component, which governs the interaction with the
environment; the credit assignment component, which distributes the
reward received from the environment to the classifiers accountable
for the rewards obtained; the discovery component, which is
responsible for discovering better rules and improving existing ones
through a genetic algorithm. The concatenation of the production
rules in the LCS forms the genotype, and therefore the GA operates
on a population of classifier systems; this approach is known as the
'Pittsburgh' classifier system. Other LCSs, which run their GA at
the rule level within a single population, are known as 'Michigan'
classifier systems. The most predominant representation of the discovered
knowledge is the standard production rules (PRs) in the form of IF P
THEN D. The PRs, however, are unable to handle exceptions and do
not exhibit variable precision. The Censored Production Rules
(CPRs), an extension of PRs, were proposed by Michalski and
Winston; they exhibit variable precision and support an efficient
mechanism for handling exceptions. A CPR is an augmented
production rule of the form: IF P THEN D UNLESS C, where
Censor C is an exception to the rule. Such rules are employed in
situations in which the conditional statement IF P THEN D holds
frequently and the assertion C holds rarely. A rule of this type
lets us ignore the exception condition when the resources needed to
establish its presence are tight or when no information is available
as to whether it holds. Thus, the IF P
THEN D part of CPR expresses important information, while the
UNLESS C part acts only as a switch and changes the polarity of D
to ~D. In this paper the Pittsburgh-style LCS approach is used for
the automated discovery of CPRs. An appropriate encoding scheme is
suggested to represent a chromosome consisting of a fixed-size set of
CPRs. Suitable genetic operators are designed for the set of CPRs
and for individual CPRs, and an appropriate fitness function
incorporating the basic constraints on CPRs is proposed.
Experimental results are
presented to demonstrate the performance of the proposed learning
classifier system.
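The CPR semantics described above can be stated compactly; in this sketch the censor may be unresolved (None), in which case it is ignored and the rule behaves like a plain production rule:

```python
def eval_cpr(p, censor):
    """Evaluate a Censored Production Rule IF P THEN D UNLESS C.
    Returns True (conclude D), False (conclude ~D) or None (rule does
    not fire). `censor()` may return None when resources are too tight
    to establish C; the censor is then ignored."""
    if not p:
        return None                 # premise fails: rule does not fire
    c = censor()
    if c is None or c is False:     # censor unknown or absent
        return True                 # conclude D
    return False                    # censor holds: polarity flips to ~D

# Example: "IF bird THEN flies UNLESS penguin"
print(eval_cpr(True, lambda: False))   # True  -> flies
print(eval_cpr(True, lambda: True))    # False -> does not fly
print(eval_cpr(True, lambda: None))    # True  -> censor unresolved, ignored
```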
Abstract: Link reliability and transmitted power are two important design constraints in wireless network design. Error control coding (ECC) is a classic approach used to increase link reliability and to lower the required transmitted power. It provides coding gain, resulting in transmitter energy savings at the cost of added decoder power consumption. The choice of ECC is critical in the case of a wireless sensor network (WSN): since WSNs are energy-constrained by nature, both the BER and the power consumption have to be taken into account. This paper develops a step-by-step approach to finding suitable error control codes for WSNs. Several simulations with different error control codes are carried out, and the results show that RS(31,21) satisfies both the BER and the power consumption criteria.
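The underlying trade-off can be sketched as simple energy accounting per information bit; all numbers below are assumptions for illustration, not the paper's measurements:

```python
# RS(31,21) sends n/k more bits and adds decoder energy, but its
# coding gain lowers the transmit energy needed for the target BER.
n, k = 31, 21                  # RS(31,21) code parameters
coding_gain_db = 2.5           # assumed coding gain at the target BER
e_tx_uncoded = 50e-9           # J/bit to transmit uncoded (assumption)
e_dec = 5e-9                   # J/bit decoder energy (assumption)

e_tx_coded = e_tx_uncoded * 10 ** (-coding_gain_db / 10)
uncoded = e_tx_uncoded                    # per information bit
coded = e_tx_coded * (n / k) + e_dec      # per information bit
print(f"uncoded {uncoded*1e9:.1f} nJ/bit, coded {coded*1e9:.1f} nJ/bit")
# Coding pays off whenever `coded` < `uncoded` at the required BER.
```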
Abstract: This paper discusses the design of knowledge
integration for clinical information extracted from distributed
medical ontologies, in order to improve a machine-learning-based
multilabel coding assignment system. The proposed approach is
implemented using a decision tree machine learning technique on
university hospital data for patients with Coronary Heart Disease
(CHD). The preliminary results show the satisfactory finding that
the use of medical ontologies improves overall system performance.
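A minimal sketch of the multilabel decision-tree step using scikit-learn, with synthetic stand-in data (the ontology-derived features and CHD hospital records are not available here):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(200, 10)            # patient feature vectors (synthetic)
Y = np.random.rand(200, 4) > 0.7       # 4 binary diagnosis codes per patient

# Decision trees in scikit-learn handle multilabel targets directly
# when Y is a 2D binary indicator matrix.
clf = DecisionTreeClassifier(max_depth=5).fit(X, Y)
print(clf.predict(X[:3]))              # one row of code assignments each
```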
Abstract: In this paper, we propose an efficient data
compression strategy exploiting the multi-resolution characteristic
of the wavelet transform. We have developed a sensor node called the
“Smart Sensor Node” (SSN). The main goals of the SSN design are
light weight, minimal power consumption, modular design and robust
circuitry. The SSN is made up of four basic components: a
sensing unit, a processing unit, a transceiver unit and a power unit.
The FiOStd evaluation board is chosen as the main controller of the
SSN for its low cost and high performance. The software
implementation was done using a Simulink model and the MATLAB
programming language. The experimental results show that the
proposed data compression technique yields a recovered signal of
good quality. This technique can be applied to compress the collected
data, reducing data communication as well as the energy consumption
of the sensor, so that the lifetime of the sensor node can be extended.
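A minimal sketch of multi-resolution compression of sensed data, assuming the node keeps only the largest wavelet coefficients (the SSN's actual on-node scheme is not detailed in the abstract):

```python
import numpy as np
import pywt

def compress(signal, wavelet="db4", level=4, keep=0.1):
    """Wavelet-decompose, keep the largest `keep` fraction of
    coefficients and zero the rest; only the survivors would be
    transmitted."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    thresh = np.quantile(np.abs(np.concatenate(coeffs)), 1 - keep)
    return [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

def decompress(coeffs, wavelet="db4"):
    return pywt.waverec(coeffs, wavelet)

x = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.05 * np.random.randn(512)
x_hat = decompress(compress(x))[:len(x)]
print("RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))
```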
Abstract: In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This denoising tool will be employed in recognition, coding and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. To evaluate our approach, the proposed technique is compared with classical denoising methods based on thresholding and on spectral subtraction. The experimental implementation uses speech signals corrupted by two sorts of noise, white noise and Volvo noise. Results from listening tests show that the proposed technique outperforms spectral subtraction, and results from SNR computation show its superiority over the classical thresholding method using the modified hard thresholding function based on the μ-law algorithm.
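A sketch of the node-wise scheme, assuming a fixed node list and a fixed threshold; the paper instead selects nodes and estimates the non-stationary noise level via spectral entropy:

```python
import numpy as np
import pywt

def dwpt_denoise(x, wavelet="db8", level=4, zero_nodes=("dddd",), thr=0.1):
    """Decompose with a discrete wavelet packet transform, set selected
    nodes to zero and soft-threshold the rest. The node names and the
    threshold here are placeholders for the paper's entropy-driven
    choices."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    for node in wp.get_level(level, "natural"):
        if node.path in zero_nodes:
            node.data = np.zeros_like(node.data)
        else:
            node.data = pywt.threshold(node.data, thr, mode="soft")
    return wp.reconstruct(update=False)

noisy = np.random.randn(1024)          # stand-in for noisy speech
print(len(dwpt_denoise(noisy)))
```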
Abstract: Quality evaluation of an image is an important task in image processing applications. In the case of image compression, the quality of the decompressed image is also a criterion for evaluating the given coding scheme. In the compression-decompression process, various artifacts such as blocking, blur and ringing (edge) artifacts are observed. However, quantification of these artifacts is a difficult task. We propose here a novel method to quantify blur and ringing artifacts in an image.
Abstract: A new approach for the improvement of coding gain
in channel coding using Advanced Encryption Standard (AES) and
Maximum A Posteriori (MAP) algorithm is proposed. This new
approach uses the avalanche effect of the AES block cipher and the
soft output values of the MAP decoding algorithm. The performance
of the proposed approach is evaluated in the presence of Additive
White Gaussian Noise (AWGN), and computer simulation results are
included for verification.
Abstract: In this paper, we evaluate the performance of several wavelet-based coding algorithms, namely 3D QT-L, 3D SPIHT and JPEG2K. In the first step we perform an objective comparison of the three coders. For this purpose, eight MRI head scan test sets of 256 × 256 × 124 voxels have been used. Results show the superior performance of the 3D SPIHT algorithm, while 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. Compressed datasets are transmitted over an AWGN or a Rayleigh wireless channel. Results show the superiority of JPEG2K on these two channel models; indeed, JPEG2K proves more robust to coding errors. We thus conclude that error-correcting codes are necessary to protect transmitted medical information.
Abstract: The H.264/AVC standard is a highly efficient video
codec providing high-quality video at low bit rates. Because it
employs advanced techniques, its computational complexity has
increased, and this complexity is the major obstacle to implementing
a real-time encoder and decoder. Parallelism, which can be exploited
on multi-core systems, is one approach. We analyze macroblock-level
parallelism, which preserves the bit
rate with high concurrency of processors. In order to reduce the
encoding time, a dynamic data partition based on macroblock regions
is proposed. The partition has advantages in load balancing and
data communication overhead. Using it, the encoder
obtains more than 3.59x speed-up on a four-processor system. This
work can be applied to other multimedia processing applications.
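A toy sketch of dynamic macroblock-region partitioning with a worker pool; `process_mb_row` is a hypothetical stand-in, and real H.264 macroblock dependencies (intra prediction, deblocking) are ignored here:

```python
from multiprocessing import Pool

WIDTH_MB, HEIGHT_MB, WORKERS = 120, 68, 4

def process_mb_row(row):
    # Stand-in for motion estimation / mode decision / entropy coding
    # of one macroblock row.
    return sum((row * WIDTH_MB + col) % 7 for col in range(WIDTH_MB))

def encode_frame():
    # Dynamic partition: rows are handed to workers as they free up,
    # so faster regions do not leave processors idle (load balancing).
    with Pool(WORKERS) as pool:
        return pool.map(process_mb_row, range(HEIGHT_MB), chunksize=1)

if __name__ == "__main__":
    print(len(encode_frame()), "macroblock rows encoded")
```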