Abstract: In this paper the problem of edge detection in digital images is considered. Edge detection based on morphological operators was applied to two sets of CT images (brain and chest). Three methods of edge detection were used, applying line morphological filters with multiple structuring elements in different directions: a 3x3 filter for the first method, a 5x5 filter for the second, and a 7x7 filter for the third. The algorithm was applied to 13 images in the MATLAB environment. To evaluate the performance of these edge detection algorithms, standard deviation (SD) and peak signal to noise ratio (PSNR) were used as objective measures for all the CT images. The objective comparison of the different edge detection methods shows that high values of both standard deviation and PSNR were obtained for the edge-detected images.
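As a rough illustration of the operation involved, a morphological gradient (dilation minus erosion) accumulated over line structuring elements in four directions can be sketched in NumPy. This is a generic sketch under assumed choices (flat line elements, edge padding, max over directions); the abstract's exact 3x3/5x5/7x7 multi-structure filters are not specified there.

```python
import numpy as np

def _shifted_extreme(img, offsets, op):
    """Grey dilation (op=np.max) or erosion (op=np.min) with a flat
    structuring element given as a list of (row, col) offsets."""
    p = max(max(abs(r), abs(c)) for r, c in offsets)
    pad = np.pad(img, p, mode='edge')          # replicate borders
    h, w = img.shape
    stack = [pad[p + r:p + r + h, p + c:p + c + w] for r, c in offsets]
    return op(stack, axis=0)

def line_morph_edges(img, size=3):
    """Morphological gradient accumulated over line structuring
    elements in four directions (0, 45, 90, 135 degrees)."""
    k = size // 2
    rng = range(-k, k + 1)
    directions = [
        [(0, c) for c in rng],    # horizontal line
        [(r, 0) for r in rng],    # vertical line
        [(r, r) for r in rng],    # 45-degree diagonal
        [(r, -r) for r in rng],   # 135-degree diagonal
    ]
    edges = np.zeros_like(img, dtype=float)
    for offs in directions:
        grad = (_shifted_extreme(img, offs, np.max)
                - _shifted_extreme(img, offs, np.min))
        edges = np.maximum(edges, grad)        # keep strongest response
    return edges
```

Passing `size=5` or `size=7` reproduces the larger-filter variants described in the abstract.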
Abstract: The power line channel has been proposed as an alternative for broadband data transmission, especially in developing countries like Tanzania [1]. However, the channel suffers from stochastic attenuation and deep notches which limit the channel capacity and achievable data rate. Various studies have characterized the channel without establishing its exact maximum performance and data-rate limits, possibly because of the complexity of the channel models used. In this paper the performance of medium voltage, low voltage and indoor power line channels is presented. The investigations consider orthogonal frequency division multiplexing (OFDM) with phase shift keying (PSK) as the carrier modulation scheme for indoor, medium and low voltage channels with a typical ten branches, and Golay coding is additionally applied to the medium voltage channel. Deep notches are observed in the channel frequency responses at various frequencies, which can reduce the achievable data rate. Nevertheless, a data rate of up to 240 Mbps is realized at a signal to noise ratio of about 50 dB for the indoor and low voltage channels, whereas a typical medium voltage link with ten branches is affected by strong multipath and requires coding for feasible broadband data transfer.
Abstract: An image compression method has been developed
using fuzzy edge image utilizing the basic Block Truncation Coding
(BTC) algorithm. The fuzzy edge image has been validated against
classical edge detectors using the results of the well-known Canny
edge detector, prior to its use in the proposed method. The
bit plane generated by the conventional BTC method is replaced with
the fuzzy bit plane generated by the logical OR operation between
the fuzzy edge image and the corresponding conventional BTC bit
plane. The input image is encoded with the block mean and standard
deviation and the fuzzy bit plane. The proposed method has been
tested with test images of 8 bits/pixel and size 512×512 and found to
be superior with better Peak Signal to Noise Ratio (PSNR) when
compared to the conventional BTC, and adaptive bit plane selection
BTC (ABTC) methods. The jagged appearance and the ringing artifacts
at sharp edges are greatly reduced in the images reconstructed by
the proposed method with the fuzzy bit plane.
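The baseline step that the abstract builds on is conventional BTC, which encodes each block as a mean, a standard deviation and a one-bit plane. A minimal sketch of that baseline (the fuzzy-edge OR step itself is not reproduced here) is:

```python
import numpy as np

def btc_block(block):
    """Conventional BTC for one block: encode as (mean, std, bit plane),
    then reconstruct two output levels that preserve the block mean and
    variance.  The proposed method replaces the bit plane with an OR of
    a fuzzy edge image and this plane; only the baseline is shown."""
    m, s = block.mean(), block.std()
    plane = block >= m                      # 1 bit per pixel
    n, q = block.size, int(plane.sum())
    if q in (0, n):                         # flat block: single level
        return np.full_like(block, m, dtype=float)
    low = m - s * np.sqrt(q / (n - q))      # level for 0 bits
    high = m + s * np.sqrt((n - q) / q)     # level for 1 bits
    return np.where(plane, high, low)
```

The decoder needs only the two scalars plus one bit per pixel, which is where the compression comes from.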
Abstract: This paper presents an adaptive motion estimator
that can be dynamically reconfigured with the best algorithm
as the nature of the video varies while an application is running.
The 4 Step Search (4SS) and the Gradient Search (GS) algorithms are
integrated in the estimator for use with rapid and slow video
sequences respectively. The Full Search Block Matching (FSBM)
algorithm has also been integrated for use with video sequences
that are not real-time oriented.
In order to efficiently reduce the computational cost while
achieving better visual quality with low cost power, the proposed
motion estimator is based on a Variable Block Size (VBS) scheme
that uses only the 16x16, 16x8, 8x16 and 8x8 modes.
Experimental results show that the adaptive motion estimator
achieves better results in terms of Peak Signal to Noise Ratio
(PSNR), computational cost, FPGA occupied area, and dissipated
power relative to the most popular variable block size schemes
presented in the literature.
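Of the three integrated algorithms, FSBM is the simplest to state: exhaustively test every candidate motion vector in a search window and keep the one minimising the Sum of Absolute Differences (SAD). A sketch with a fixed 8x8 block and an assumed ±4 search range follows (the variable-block-size modes 16x16 ... 8x8 of the proposed estimator are omitted):

```python
import numpy as np

def full_search(ref, cur, bx, by, bs=8, sr=4):
    """Full Search Block Matching: for the current-frame block at
    (by, bx), test every displacement in a +/-sr window of the
    reference frame and return the SAD-minimising motion vector."""
    block = cur[by:by + bs, bx:bx + bs]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue                    # candidate falls outside frame
            sad = np.abs(ref[y:y + bs, x:x + bs] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```

4SS and GS reduce the cost of this exhaustive scan by visiting only a subset of the window, which is why the estimator reserves FSBM for non-real-time sequences.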
Abstract: In high bitrate information hiding techniques, 1 bit is
embedded within each 4 x 4 Discrete Cosine Transform (DCT)
coefficient block by means of vector quantization, and the hidden bit
can then be effectively extracted at the receiving end. In this paper
high bitrate information hiding algorithms are summarized, and a
video-in-video scheme is implemented. Experimental results show that
a host video in which numerous auxiliary bits are embedded suffers
little visible quality decline: the luminance Peak Signal to Noise
Ratio (PSNR-Y) of the host video degrades by only 0.22 dB on average,
while the hidden information has a high survival rate and remains
robust under H.264/AVC compression, with an average Bit Error Rate
(BER) of 0.015% for the hidden information.
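PSNR, the figure of merit quoted throughout these abstracts (here as a 0.22 dB luma degradation), reduces to a few lines. A minimal sketch for 8-bit imagery:

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """Peak Signal to Noise Ratio in dB: 10 log10(peak^2 / MSE).
    `peak` is the maximum representable value (255 for 8-bit images)."""
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    if mse == 0:
        return np.inf                       # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

The 0.22 dB drop reported above would be the difference between this quantity computed on the unmarked and the marked host video.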
Abstract: Decision feedback equalizers are commonly employed to reduce the error caused by intersymbol interference. Here, an adaptive decision feedback equalizer with a new adaptation algorithm is presented. The algorithm follows a block-based approach to the normalized least mean square (NLMS) algorithm with set-membership filtering, and achieves significantly lower computational complexity than its conventional NLMS counterpart with set-membership filtering. The results show that the proposed algorithm yields similar bit error rate performance over a reasonable range of signal to noise ratios in comparison with the latter.
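The complexity saving in set-membership filtering comes from skipping the weight update whenever the a-priori error is already within a bound. A sample-by-sample SM-NLMS sketch (the abstract's block-based variant and the feedback path of the equalizer are not reproduced; the bound `gamma` is an assumed parameter):

```python
import numpy as np

def sm_nlms(x, d, taps=4, gamma=0.01, eps=1e-8):
    """Set-Membership NLMS: update the weights only when the a-priori
    error |e| exceeds gamma, with data-dependent step 1 - gamma/|e|;
    otherwise skip the update entirely (the complexity saving)."""
    w = np.zeros(taps)
    updates = 0
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # regressor, most recent first
        e = d[n] - w @ u                    # a-priori error
        if abs(e) > gamma:                  # set-membership test
            mu = 1.0 - gamma / abs(e)
            w += mu * e * u / (u @ u + eps) # normalized update
            updates += 1
    return w, updates
```

On stationary data the update count drops sharply once the error sequence falls inside the bound, which is the source of the reported complexity advantage.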
Abstract: In the framework of the image compression by
Wavelet Transforms, we propose a perceptual method by
incorporating Human Visual System (HVS) characteristics in the
quantization stage. Indeed, human eyes do not have equal sensitivity
across the frequency bandwidth. Therefore, the clarity of the
reconstructed images can be improved by weighting the quantization
according to the Contrast Sensitivity Function (CSF), and the visual
artifacts at low bit rates are minimized. To evaluate our method, we
use the Peak Signal to Noise Ratio (PSNR) and a new evaluation
criterion which takes visual factors into account. The experimental
results
illustrate that our technique shows improvement on image quality at
the same compression ratio.
Abstract: Image compression is one of the most important
applications of Digital Image Processing. Advanced medical imaging
requires storage of large quantities of digitized clinical data. Due to
the constrained bandwidth and storage capacity, however, a medical
image must be compressed before transmission and storage. There
are two types of compression methods, lossless and lossy. In Lossless
compression method the original image is retrieved without any
distortion. In lossy compression method, the reconstructed images
contain some distortion. Discrete Cosine Transform (DCT) and Fractal
Image Compression (FIC) are types of lossy compression methods.
This work shows that lossy compression methods can be chosen for
medical image compression without significant degradation of the
image quality. In this work DCT and Fractal Compression using
Partitioned Iterated Function Systems (PIFS) are applied on different
modalities of images like CT Scan, Ultrasound, Angiogram, X-ray
and mammogram. Approximately 20 images are considered in each
modality and the average values of compression ratio and Peak
Signal to Noise Ratio (PSNR) are computed and studied. The quality
of the reconstructed image is assessed by the PSNR values. Based on
the results it can be concluded that DCT gives higher PSNR values
and FIC gives higher compression ratios. Hence in medical image
compression, DCT can be used wherever picture quality is preferred,
and FIC wherever compression of images for storage and
transmission is the priority, without losing picture quality
diagnostically.
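The DCT side of the comparison can be sketched compactly: transform a block, discard the weakest coefficients, and invert. This is an illustrative magnitude-thresholding sketch only; the coders compared in the abstract would also quantise and entropy-code the retained coefficients.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct_compress_block(block, keep=15):
    """Lossy DCT compression of one square block: forward 2-D DCT,
    zero all but the `keep` largest-magnitude coefficients, invert."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T                  # forward 2-D DCT
    thresh = np.sort(np.abs(coef).ravel())[-keep]
    coef[np.abs(coef) < thresh] = 0.0       # drop weak coefficients
    return C.T @ coef @ C                   # inverse 2-D DCT
```

Smooth blocks (the common case in medical imagery) concentrate their energy in few coefficients, which is why DCT coding preserves PSNR well.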
Abstract: This paper addresses the problem of source separation
in images. We propose a FastICA algorithm employing a modified
Gaussian contrast function for the Blind Source Separation.
Experimental results show that the proposed Modified Gaussian
FastICA performs Blind Source Separation effectively, yielding
better quality images. In this paper, a comparative study has been
made with other popular existing algorithms. The peak signal to
noise ratio (PSNR) and improved signal to noise ratio (ISNR) are
used as metrics for evaluating the quality of images. The ICA metric
Amari error is also used to measure the quality of separation.
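The Amari error mentioned above measures how close the product of the estimated unmixing matrix and the true mixing matrix is to a scaled permutation (perfect separation up to ICA's inherent ambiguities). One common normalisation can be sketched as:

```python
import numpy as np

def amari_error(W, A):
    """Amari error between an estimated unmixing matrix W and the true
    mixing matrix A.  Zero iff P = W A is a scaled permutation matrix,
    i.e. the sources are perfectly separated up to order and scale."""
    P = np.abs(W @ A)
    n = P.shape[0]
    rows = (P.sum(axis=1) / P.max(axis=1) - 1).sum()   # row spread
    cols = (P.sum(axis=0) / P.max(axis=0) - 1).sum()   # column spread
    return (rows + cols) / (2 * n * (n - 1))
```

Unlike PSNR or ISNR, this metric is invariant to the permutation and scaling of the recovered sources, which is why it complements the image-quality metrics in the abstract.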
Abstract: This paper proposes an efficient finite precision block floating point (BFP) treatment of the fixed coefficient finite impulse response (FIR) digital filter. The treatment includes effective implementation of all three forms of the conventional FIR filter, namely, direct form, cascaded and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from a hardware viewpoint. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasises a simple block exponent update technique to prevent overflow even during the block to block transition phase. The roundoff noise is investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating point roundoff noise, resulting in an approximately constant signal to noise ratio over a relatively large dynamic range.
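The core of block formatting is storing one shared exponent per block, chosen from the largest sample magnitude, with every sample quantised to a fixed-point mantissa. A minimal sketch under assumed parameters (12-bit signed mantissas; the adaptive scaling factor and block-to-block exponent update of the proposed scheme are not reproduced):

```python
import numpy as np

def block_format(x, mant_bits=12):
    """BFP formatting: one shared exponent for the whole block, each
    sample rounded to a (mant_bits)-bit fixed-point mantissa."""
    m = np.max(np.abs(x))
    exp = int(np.floor(np.log2(m))) + 1 if m > 0 else 0   # 2**exp > m
    scale = 2.0 ** (mant_bits - 1 - exp)
    mant = np.round(x * scale).astype(int)  # integer mantissas
    return mant, exp

def block_unformat(mant, exp, mant_bits=12):
    """Recover the block from mantissas and the shared exponent."""
    return mant / 2.0 ** (mant_bits - 1 - exp)
```

Because the quantisation step scales with 2**exp, the roundoff error tracks the signal level, which is the mechanism behind the approximately constant SNR reported in the abstract.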
Abstract: Power-line networks are a promising infrastructure for
broadband service provision to end users. However, network
performance is affected by stochastic channel changes caused by
load impedances, the number of branches and the branched line
lengths. It has been proposed that multi-carrier modulation
techniques such as orthogonal frequency division multiplexing
(OFDM), Multi-Carrier Spread Spectrum (MC-SS) and wavelet OFDM can
be used in such an environment. This paper investigates the
performance of different indoor power-line network topologies that
use the MC-SS modulation scheme. It is observed that when a branch
is added in the link between the sending and receiving ends of an
indoor channel, an average power loss of 2.5 dB results. In
addition, when the branch is added at a node, an average power loss
of 1 dB is found. Moreover, when the terminal impedance of the
branch changes from the line characteristic impedance to either a
higher or a lower value, the channel performance improves
tremendously. For example, changing the terminal load from the
characteristic impedance (85 Ω) to 5 Ω decreased the signal to
noise ratio (SNR) required to attain the same performance from
37 dB to 24 dB. Similarly, changing the terminal load from the
channel characteristic impedance (85 Ω) to a much higher impedance
(1600 Ω) decreased the required SNR from 37 dB to 23 dB. It is
concluded that MC-SS outperforms OFDM in all respects, especially
when the channel is terminated in either higher or lower
impedances.
Abstract: In this paper, the effect of transmission codes on the
performance of coherent square M-ary quadrature amplitude
modulation (CSMQAM) under hybrid selection/maximal-ratio
combining (H-S/MRC) diversity is analysed. The fading channels are
modeled as frequency non-selective slow independent and identically
distributed Rayleigh fading channels corrupted by additive white
Gaussian noise (AWGN). The results for coded MQAM are
computed numerically for the case of (24,12) extended Golay code
and compared with uncoded MQAM under H-S/MRC diversity by
plotting error probabilities versus average signal to noise ratio (SNR)
for various values of L and N in order to examine the improvement in
the performance of the digital communications system as the number
of selected diversity branches is increased. The results for no
diversity, conventional SC and Lth order MRC schemes are also
plotted for comparison. Closed form analytical results derived in this
paper are sufficiently simple and therefore can be computed
numerically without any approximations. The analytical results
presented in this paper are expected to provide useful information
needed for design and analysis of digital communication systems
over wireless fading channels.
Abstract: Image quality assessment traditionally requires the
original image as a reference. Conventional assessment measures
such as Mean Square Error (MSE) or Peak Signal to Noise Ratio
(PSNR) are invalid when there is no reference. In this paper, we
present a new
No-Reference (NR) assessment of image quality using blur and noise.
Recent camera applications provide high quality images with the
help of a digital Image Signal Processor (ISP). Since images taken
by high-performance digital cameras have few blocking and ringing
artifacts, we focus only on blur and noise for predicting the
objective image quality. The experimental results show that the
proposed assessment method gives high correlation with subjective
Difference Mean Opinion Score (DMOS). Furthermore, the proposed
method has a very low computational load in the spatial domain and
extracts characteristics consistent with human perceptual
assessment.
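A widely used spatial-domain blur cue of the kind such no-reference methods build on is the variance of a discrete Laplacian response: a small value suggests a blurry image. This is a generic stand-in for illustration, not the abstract's specific blur/noise estimator.

```python
import numpy as np

def laplacian_variance(img):
    """No-reference sharpness cue: variance of the 4-neighbour
    discrete Laplacian over the image interior.  Blur suppresses high
    frequencies, so blurred images score lower."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()
```

Like the method in the abstract, this runs entirely in the spatial domain with a single pass over the pixels.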
Abstract: To model the human visual system (HVS) in the region of interest, we propose a new objective metric evaluation adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation setup filter implementation technique in the DWT domain, based especially on the point and region of fixation of the human eye. This model is then used to predict the visible divergences between an original and compressed image with respect to this region field and yields an adapted and local measure error by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images all of which have the same local peak signal to noise ratio (PSNR), but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
Abstract: This paper studies the effect of different compression
constraints and schemes presented in a new and flexible paradigm to
achieve high compression ratios and acceptable signal to noise ratios
of Arabic speech signals. Compression parameters are computed for
variable frame sizes of a level 5 to 7 Discrete Wavelet Transform
(DWT) representation of the signals for different analyzing mother
wavelet functions. Results are obtained and compared for Global
threshold and level dependent threshold techniques. The results
also include comparisons in terms of Signal to Noise Ratio, Peak
Signal to Noise Ratio and Normalized Root Mean Square Error.
Abstract: A new method, based on normal shrink and a modified
version of Katssagelous and Lay's algorithm, is proposed for
multiscale blind image restoration. The method deals with both
noise and blur in the images. It is shown that normal shrink gives
the highest S/N (signal to noise ratio) in the image denoising
process. The multiscale blind image restoration is divided into two
parts. The first part of this paper proposes normal shrink for
image denoising, and the second part proposes a modified version of
Katssagelous and Lay's algorithm for blur estimation; the two
methods are then combined to achieve multiscale blind image
restoration.
Abstract: In wavelet regression, choosing the threshold value is a crucial issue. Too large a value cuts too many coefficients, resulting in over-smoothing. Conversely, too small a threshold value allows many coefficients to be included in the reconstruction, giving a wiggly estimate which results in under-smoothing. The proper choice of threshold can therefore be considered a careful balance between these principles. This paper gives a very brief introduction to some threshold selection methods, including the Universal, SURE and EBayes rules, two-fold cross validation and level dependent cross validation. A simulation study on a variety of sample sizes, test functions and signal-to-noise ratios is conducted to compare their numerical performance under three different noise structures. For Gaussian noise, EBayes outperforms the others in all cases for all the functions used, while two-fold cross validation provides the best results in the case of long-tailed noise. For large values of the signal-to-noise ratio, level dependent cross validation works well under correlated noise. As expected, increasing both the sample size and the signal-to-noise ratio increases estimation efficiency.
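The simplest of the rules compared above is the universal (VisuShrink-style) threshold, sqrt(2 log n) times an estimated noise level. A single-level Haar sketch illustrates it (the SURE, EBayes and cross-validation rules are not reproduced; one decomposition level and MAD noise estimation are assumptions for brevity):

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar DWT (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def universal_denoise(y):
    """Estimate sigma from the finest-scale details via the median
    absolute deviation, soft-threshold them with the universal
    threshold sqrt(2 log n) * sigma, and invert the Haar level."""
    a, d = haar_level(y)
    sigma = np.median(np.abs(d)) / 0.6745  # robust noise estimate
    t = sigma * np.sqrt(2 * np.log(len(y)))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft threshold
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2)       # inverse Haar level
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Too large a threshold over-smooths and too small a one under-smooths, exactly the trade-off the abstract describes; the universal rule errs deliberately on the smooth side.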
Abstract: In this paper, we determine the threshold levels of adaptive modulation in a burst-by-burst CDMA system by a suboptimum method that attempts to increase the average bits per symbol (BPS) rate of the transceiver by switching between different modulation modes under varying channel conditions. In this method, we choose the minimum values of the average bit error rate (BER) and the maximum values of the average BPS over different values of the average channel signal to noise ratio (SNR), and then calculate the corresponding threshold levels, so that when the instantaneous SNR increases a higher order modulation is employed to increase throughput, and vice versa, when the instantaneous SNR decreases a lower order modulation is employed to improve the BER. In the transmission step, based on a comparison between the estimates obtained from pilot symbols and the set of suboptimum threshold levels, the system chooses one of the states no transmission, BPSK, 4QAM or square 16QAM for modulating the data. The channel considered in this paper is a slow Rayleigh fading channel.
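The switching step itself is a simple threshold comparison once the levels are fixed. A sketch over the four modes named in the abstract (the threshold values here are placeholder assumptions, not the paper's suboptimum levels):

```python
def select_mode(snr_db, thresholds=(8.0, 12.0, 19.0)):
    """Adaptive modulation mode switching: compare the estimated
    instantaneous SNR against the threshold levels and pick the
    highest-rate mode whose BER target is still met."""
    modes = ['no transmission', 'BPSK', '4QAM', '16QAM']
    bps = [0, 1, 2, 4]                      # bits per symbol per mode
    idx = sum(snr_db >= t for t in thresholds)  # count thresholds passed
    return modes[idx], bps[idx]
```

In the described system the SNR estimate comes from pilot symbols each burst, so the mode can change burst by burst as the Rayleigh fade evolves.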
Abstract: A new estimator for evolutionary spectrum (ES) based
on short time Fourier transform (STFT) and modified group delay
function (MGDF) by signal decomposition (SD) is proposed. The
STFT due to its built-in averaging, suppresses the cross terms and the
MGDF preserves the frequency resolution of the rectangular window
with the reduction in the Gibbs ripple. The present work overcomes
the magnitude distortion observed in multi-component non-stationary
signals with STFT and MGDF estimation of ES using SD. The SD is
achieved either through discrete cosine transform based harmonic
wavelet transform (DCTHWT) or perfect reconstruction filter banks
(PRFB). The MGDF also improves the signal to noise ratio by
removing associated noise. The performance of the present method is
illustrated for cross chirp and frequency shift keying (FSK) signals,
which indicates that its performance is better than STFT-MGDF
(STFT-GD) alone. Further, its noise immunity is better than that of
the STFT. The SD based methods, however, cannot clearly bring out
the frequency transition path from band to band, as there will be a
gap in the contour plot at the transition. The PRFB based STFT-SD
shows better performance than the DCTHWT decomposition method for
STFT-GD.
Abstract: In this paper we present simulation results for the
application of a bandwidth efficient algorithm (mapping algorithm)
to an image transmission system. This system considers three
different real valued transforms to generate energy compact
coefficients. First, results are presented for gray scale and color
image transmission in the absence of noise. It is seen that the
system performs best when the discrete cosine transform is used.
Also, the
performance of the system is dominated more by the size of the
transform block rather than the number of coefficients transmitted or
the number of bits used to represent each coefficient. Similar results
are obtained in the presence of additive white Gaussian noise. The
varying values of the bit error rate have very little or no impact on
the performance of the algorithm. Optimum results are obtained for
the system considering an 8x8 transform block and by transmitting 15
coefficients from each block using 8 bits.