Abstract: For decades, medical imaging relied on costly film media for the review and archival of medical investigations. Developments in network technologies and the broad acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard have enabled an alternative approach based on the World Wide Web. Web technologies have been used successfully in telemedicine applications; here they are combined with DICOM to design a web-based, open-source DICOM viewer. The web server allows query and retrieval of images, which can be viewed and manipulated inside a web browser without any preinstalled software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server was used to create a local web server for testing and deployment of the dynamic site. The web-based viewer was connected to multiple devices through a local area network (LAN) to distribute images within healthcare facilities. The system offers several advantages over conventional picture archiving and communication systems (PACS): it is easy to install and maintain, platform independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which the 2-D discrete wavelet transform decomposes the image and the thresholded wavelet coefficients are transmitted after entropy encoding to reduce transmission time and storage cost. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal to noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
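The decompose-threshold-measure pipeline in this abstract can be sketched with a single-level 2-D Haar transform standing in for the 'coif3' filter the paper actually uses; the threshold value and the tiny 4x4 block are illustrative only, not the paper's implementation.

```python
# Sketch of the wavelet compression idea: 2-D Haar decompose, hard
# thresholding of small coefficients, reconstruction, then PSNR.
import math

def haar_1d(v):
    h = len(v) // 2
    avg = [(v[2*i] + v[2*i+1]) / 2 for i in range(h)]
    dif = [(v[2*i] - v[2*i+1]) / 2 for i in range(h)]
    return avg + dif

def ihaar_1d(v):
    h = len(v) // 2
    out = []
    for a, d in zip(v[:h], v[h:]):
        out += [a + d, a - d]
    return out

def haar_2d(img):
    rows = [haar_1d(r) for r in img]                 # transform rows
    cols = [haar_1d(list(c)) for c in zip(*rows)]    # then columns
    return [list(r) for r in zip(*cols)]

def ihaar_2d(coef):
    cols = [ihaar_1d(list(c)) for c in zip(*coef)]   # undo columns
    rows = [list(r) for r in zip(*cols)]
    return [ihaar_1d(r) for r in rows]               # then rows

def psnr(a, b, peak=255.0):
    mse = sum((x - y) ** 2 for ra, rb in zip(a, b)
              for x, y in zip(ra, rb)) / (len(a) * len(a[0]))
    return float('inf') if mse == 0 else 10 * math.log10(peak**2 / mse)

img = [[52, 55, 61, 66],
       [63, 59, 55, 90],
       [62, 59, 68, 113],
       [63, 58, 71, 122]]
coef = haar_2d(img)
thr = 4.0   # hard-threshold small coefficients to zero ("compression")
kept = [[c if abs(c) >= thr else 0.0 for c in row] for row in coef]
rec = ihaar_2d(kept)
print(round(psnr(img, rec), 2))
```

In the actual system, the surviving coefficients would be entropy coded before transmission; the PSNR here measures only the thresholding loss.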
Abstract: Copyright protection and ownership proof of digital multimedia are nowadays achieved by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing models and is therefore adopted for the embedding process. An optional step of encrypting the text watermark before embedding is also suggested (in case it is required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed of more than double that of the other systems considered (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal to noise ratio (PSNR) with low mean square error values for watermarking purposes.
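The abstract's embedding idea can be sketched as follows: convert RGB pixels to the YIQ model, scatter the watermark text's character codes at pseudo-random positions as small additive noise, and convert back. The conversion matrices are the standard NTSC YIQ ones; the perturbation rule (Q-channel, seeded positions, `strength` factor) is an assumption for illustration, since the paper does not spell out its exact embedding rule.

```python
import random

def rgb_to_yiq(r, g, b):
    # Standard NTSC RGB -> YIQ conversion.
    y = 0.299*r + 0.587*g + 0.114*b
    i = 0.596*r - 0.274*g - 0.322*b
    q = 0.211*r - 0.523*g + 0.312*b
    return y, i, q

def yiq_to_rgb(y, i, q):
    r = y + 0.956*i + 0.621*q
    g = y - 0.272*i - 0.647*q
    b = y - 1.106*i + 1.703*q
    return r, g, b

def embed_text(pixels, text, seed=42, strength=0.01):
    # pixels: flat list of (r, g, b) tuples; returns a watermarked copy.
    rng = random.Random(seed)                       # seed shared with extractor
    yiq = [list(rgb_to_yiq(*p)) for p in pixels]
    positions = rng.sample(range(len(yiq)), len(text))
    for pos, ch in zip(positions, text):
        yiq[pos][2] += strength * ord(ch)           # character code as Q noise
    return [yiq_to_rgb(*p) for p in yiq]

pixels = [(120, 64, 32)] * 16
marked = embed_text(pixels, "WM")
print(len(marked))
```

Because the YIQ matrices are approximate inverses, a round trip through both conversions reproduces the pixel to well under one grey level.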
Abstract: One of the key problems faced in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transformed representation for further processing. Contrast enhancement is one of the accepted methods of image enhancement in various medical applications, and is helpful for visualizing and extracting details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All of these techniques are compared with one another to find out which provides better contrast for CT images, using Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) as the comparison parameters. Logarithmic Transformation provided the clearest and best quality image compared with all other techniques studied and achieved the highest PSNR value. The comparison concludes with the most promising approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
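Two of the compared grey-level mappings can be sketched directly from their textbook definitions, with PSNR against the original as the metric; the gamma value and the tiny "CT" patch below are illustrative, not the paper's data.

```python
# Power-law (gamma) and logarithmic grey-level transformations,
# plus the PSNR metric used to compare them.
import math

L = 256  # number of grey levels

def power_law(img, gamma, c=1.0):
    # s = c * (L-1) * (r / (L-1))^gamma
    return [[c * (L - 1) * (p / (L - 1)) ** gamma for p in row] for row in img]

def log_transform(img):
    # s = c * log(1 + r), with c chosen so the full range is preserved.
    c = (L - 1) / math.log(L)
    return [[c * math.log(1 + p) for p in row] for row in img]

def psnr(a, b, peak=L - 1):
    mse = sum((x - y) ** 2 for ra, rb in zip(a, b)
              for x, y in zip(ra, rb)) / (len(a) * len(a[0]))
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ct = [[10, 40], [90, 200]]   # tiny low-contrast patch (illustrative)
print(round(psnr(ct, log_transform(ct)), 2))
```

The log transform expands dark grey levels (useful for dim CT detail) at the cost of compressing bright ones, which is exactly what the PSNR comparison quantifies.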
Abstract: Image denoising plays an extremely important role in digital image processing, and Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of Curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location; accordingly, the coefficients obtained from FDCT-USFFT can be modified to enhance contrast in an image. Our proposed method first applies this two-dimensional transform to the input image and then thresholds the Curvelet coefficients to enhance the CBCT images. Applying the unequally spaced fast Fourier transform leads to an accurate, high-resolution reconstruction of the image. The experimental results indicate that the performance of the proposed method is superior to existing ones in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
Abstract: Information security plays a major role in raising the standard of secure communications over global media. In this paper, we suggest a technique of encryption followed by insertion before transmission, implementing two different concepts to carry out these tasks. We use a two-point crossover technique from the genetic algorithm to facilitate the encryption process. For each of the uniquely identified rows of pixels, different mathematical methods are applied under several conditions in order to identify all the parent pixels on which the crossover operation is performed. This is done by selecting two crossover points within the pixels, thereby producing the newly encrypted child pixels and hence the encrypted cover image. Next, the first and second order derivative operators are evaluated to increase security and robustness. The final stage reapplies the crossover procedure to form the final stego-image. The complexity of this system as a whole is huge, thereby dissuading third-party interference. The embedding capacity is also very high, so a larger amount of secret image information can be hidden. The imperceptibility of the obtained stego-image clearly demonstrates the proficiency of this approach.
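The two-point crossover step at the heart of this scheme is easy to sketch: two parent pixel rows exchange the segment between two crossover points, producing the encrypted child rows. The crossover points here are fixed for clarity, whereas the paper derives them per row from its condition checks.

```python
# Two-point crossover of two pixel rows, as used in genetic algorithms.
def two_point_crossover(parent_a, parent_b, p1, p2):
    # Swap the slice [p1:p2] between the two parents.
    child_a = parent_a[:p1] + parent_b[p1:p2] + parent_a[p2:]
    child_b = parent_b[:p1] + parent_a[p1:p2] + parent_b[p2:]
    return child_a, child_b

row1 = [10, 20, 30, 40, 50, 60]
row2 = [200, 210, 220, 230, 240, 250]
c1, c2 = two_point_crossover(row1, row2, 2, 4)
print(c1)  # → [10, 20, 220, 230, 50, 60]
```

Note that applying the same crossover twice restores the parents, which is what makes the operation usable for reversible encryption when the points can be regenerated at the receiver.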
Abstract: Noise is one of the most challenging factors in medical images. Image denoising refers to the restoration of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). Digital medical images or video can be affected by different types of noise, including impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the dose absorbed by the patient: increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), improves CT image quality. Accordingly, noise reduction techniques that enhance image quality without exposing patients to excess radiation are a challenging problem in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2-D) transforms, Curvelet and Contourlet, and the Discrete Wavelet Transform (DWT) thresholding methods BayesShrink and AdaptShrink, compared with one another. We also propose a new threshold in the wavelet domain that not only reduces noise but also retains edges: the proposed method preserves the significant modified coefficients, resulting in good visual quality. Evaluations were carried out using two criteria, namely peak signal to noise ratio (PSNR) and structural similarity (SSIM).
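One of the baselines the abstract compares against, BayesShrink-style soft thresholding of a wavelet subband, can be sketched as follows. The noise standard deviation is estimated from the median absolute coefficient (MAD / 0.6745), and the threshold is the classic sigma_n^2 / sigma_x; the subband values are illustrative.

```python
# BayesShrink-style soft thresholding on one wavelet detail subband.
import math
import statistics

def soft_threshold(coefs, t):
    # Shrink every coefficient toward zero by t (soft rule).
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coefs]

def bayes_shrink_threshold(subband):
    # Noise std estimated by MAD; signal std from variance subtraction.
    sigma_n = statistics.median(abs(c) for c in subband) / 0.6745
    var_y = sum(c * c for c in subband) / len(subband)
    var_x = max(var_y - sigma_n ** 2, 0.0)
    # If the subband is (almost) pure noise, threshold everything away.
    return float('inf') if var_x == 0 else sigma_n ** 2 / math.sqrt(var_x)

hh = [0.4, -0.6, 12.0, 0.3, -0.5, -11.5, 0.2, 0.7]   # mostly noise + 2 edges
t = bayes_shrink_threshold(hh)
print([round(c, 2) for c in soft_threshold(hh, t)])
```

The large coefficients (the edge energy) survive nearly intact while the small ones are shrunk, which is the edge-retention behaviour the proposed threshold aims to improve on.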
Abstract: In this paper, a novel color image compression technique for efficient storage and delivery of data is proposed. The technique starts with an RGB to YCbCr color transformation. Second, the Canny edge detection method is used to classify blocks into edge and non-edge blocks. Each color component (Y, Cb, and Cr) is then compressed step by step by the discrete cosine transform (DCT), quantization, and adaptive arithmetic coding. Our technique is evaluated in terms of compression ratio, bits per pixel, and peak signal to noise ratio, and produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The experimental results illustrate that the proposed technique is efficient and feasible in terms of compression ratio, bits per pixel, and peak signal to noise ratio.
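The DCT-plus-quantization step applied to each color component can be sketched in one dimension; JPEG-style schemes apply the same transform along the rows and then the columns of 8x8 blocks. The quantization step size below is a single illustrative value, not the scheme's quantization table.

```python
# 8-point orthonormal DCT-II, uniform quantization, and inverse DCT.
import math

N = 8

def dct(x):
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(a * s)
    return out

def idct(X):
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            s += a * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
        out.append(s)
    return out

def quantize(X, q=10.0):
    # Uniform scalar quantization: this is where the loss happens.
    return [round(c / q) * q for c in X]

row = [52, 55, 61, 66, 70, 61, 64, 73]
rec = idct(quantize(dct(row)))
print([round(v) for v in rec])
```

After quantization most high-frequency coefficients snap to zero, which is what the subsequent adaptive arithmetic coder exploits.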
Abstract: In this paper, a Least Mean Square (LMS) adaptive noise reduction algorithm is proposed to enhance the speech signal recovered from noisy speech. The signal is enhanced by varying the step size as a function of the input signal. Objective and
subjective measures are made under various noises for the proposed
and existing algorithms. From the experimental results, it is seen that
the proposed LMS adaptive noise reduction algorithm reduces Mean
square Error (MSE) and Log Spectral Distance (LSD) as compared to
that of the earlier methods under various noise conditions with
different input SNR levels. In addition, the proposed algorithm
increases the Peak Signal to Noise Ratio (PSNR) and Segmental SNR
improvement (ΔSNRseg) values; improves the Mean Opinion Score
(MOS) as compared to that of the various existing LMS adaptive
noise reduction algorithms. From these experimental results, it is
observed that the proposed LMS adaptive noise reduction algorithm
reduces the speech distortion and residual noise as compared to that
of the existing methods.
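An input-dependent step size as described above can be sketched with the normalized LMS (NLMS) update, where the step is scaled by the instantaneous power of the reference taps. The exact step-size rule of the proposed algorithm is not given in the abstract, so the NLMS form, the filter order, and `mu` here are assumptions for illustration.

```python
# Adaptive noise canceller: NLMS filter estimates the noise from a
# reference signal; the error e is the enhanced-speech sample.
import math

def nlms(noisy, reference, order=4, mu=0.5, eps=1e-8):
    w = [0.0] * order
    out = []
    for n in range(order, len(noisy)):
        x = reference[n - order:n][::-1]              # past reference taps
        y = sum(wi * xi for wi, xi in zip(w, x))      # noise estimate
        e = noisy[n] - y                              # enhanced sample
        power = sum(xi * xi for xi in x) + eps
        step = mu / power                             # input-dependent step
        w = [wi + step * e * xi for wi, xi in zip(w, x)]
        out.append(e)
    return out

ref = [math.sin(0.3 * n) for n in range(200)]   # reference noise
noisy = [0.8 * r for r in ref]                  # speech absent: noise only
enhanced = nlms(noisy, ref)
print(round(abs(enhanced[-1]), 4))              # residual after adaptation
```

With no speech present, the residual should decay toward zero as the weights converge, which is the behaviour the MSE and segmental SNR measures in the abstract quantify.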
Abstract: A Quad Tree Decomposition (QTD) based performance analysis of lossy and lossless compressed image data communication through a wireless sensor network is presented. Images have a considerably higher storage requirement than text, and while transmitting multimedia content there is a chance of packets being dropped due to noise and interference. At the receiver end, packets carrying valuable information might be damaged or lost due to noise, interference, and congestion, and various retransmission schemes have been proposed to prevent this loss. The proposed scheme uses QTD, an image segmentation method that divides the image into homogeneous areas. The scheme involves analysis of parameters such as compression ratio, peak signal to noise ratio, mean square error, and bits per pixel of the compressed image, as well as analysis of the difficulties encountered during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve both the compression ratio and visual quality, implementing the algorithm in MATLAB 7.1 and the NS2 simulator.
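The QTD segmentation step can be sketched as a recursive split of a square image into four quadrants until each block is homogeneous; the homogeneity test used here (max minus min within a tolerance) is one common choice, assumed for illustration.

```python
# Quad Tree Decomposition: returns the (x, y, size) of each leaf block.
def quadtree(img, x=0, y=0, size=None, tol=10):
    if size is None:
        size = len(img)
    block = [img[r][x:x + size] for r in range(y, y + size)]
    flat = [p for row in block for p in row]
    if size == 1 or max(flat) - min(flat) <= tol:
        return [(x, y, size)]          # homogeneous: stop splitting
    h = size // 2
    return (quadtree(img, x,     y,     h, tol) +
            quadtree(img, x + h, y,     h, tol) +
            quadtree(img, x,     y + h, h, tol) +
            quadtree(img, x + h, y + h, h, tol))

img = [[10, 10, 200, 200],
       [10, 10, 200, 210],
       [10, 12,  10,  10],
       [11, 10,  10,  10]]
print(quadtree(img))  # → [(0, 0, 2), (2, 0, 2), (0, 2, 2), (2, 2, 2)]
```

Each leaf can then be coded by a single representative value, which is where the compression-ratio gain comes from.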
Abstract: This paper introduces an image denoising algorithm based on the generalized Srivastava-Owa fractional differential operator for removing Gaussian noise from digital images. The algorithm constructs n×n fractional masks. Experiments show that the fractional differential-based denoising approach efficiently smooths Gaussian-noisy images at different noise levels. The denoising performance is measured using the peak signal to noise ratio (PSNR) of the denoised images, and the results show improved performance (higher PSNR values) compared with the standard Gaussian smoothing filter.
Abstract: In this paper, the problem of edge detection in digital images is considered. Edge detection based on morphological operators was applied to two sets of CT images (brain and chest). Three methods of edge detection, applying line morphological filters with multiple structuring elements in different directions, were used: a 3x3 filter for the first method, a 5x5 filter for the second, and a 7x7 filter for the third. The algorithm was applied to 13 images in the MATLAB environment. To evaluate the performance of these edge detection algorithms, standard deviation (SD) and peak signal to noise ratio (PSNR) were computed for all the CT images. The objective evaluation and comparison of the different edge detection methods show that high values of both standard deviation and PSNR were obtained for the edge-detected images.
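The morphological core of such methods is the morphological gradient, dilation minus erosion. The paper uses line structuring elements of several sizes and directions; for brevity, the sketch below uses a single 3x3 square element, which is an assumption, not the paper's configuration.

```python
# Morphological edge detection via the gradient (dilation - erosion)
# with a (2k+1)x(2k+1) square structuring element.
def dilate(img, k=1):
    h, w = len(img), len(img[0])
    return [[max(img[i][j] for i in range(max(r - k, 0), min(r + k + 1, h))
                           for j in range(max(c - k, 0), min(c + k + 1, w)))
             for c in range(w)] for r in range(h)]

def erode(img, k=1):
    h, w = len(img), len(img[0])
    return [[min(img[i][j] for i in range(max(r - k, 0), min(r + k + 1, h))
                           for j in range(max(c - k, 0), min(c + k + 1, w)))
             for c in range(w)] for r in range(h)]

def morph_gradient(img):
    d, e = dilate(img), erode(img)
    return [[dv - ev for dv, ev in zip(dr, er)] for dr, er in zip(d, e)]

# A bright square on a dark background: only its border should respond.
img = [[9 if 1 <= r <= 4 and 1 <= c <= 4 else 0 for c in range(6)]
       for r in range(6)]
grad = morph_gradient(img)
print(grad[0][0], grad[2][2])  # edge pixel vs. interior pixel
```

Interior and background pixels give zero gradient while boundary pixels respond strongly, which is exactly the edge map whose SD and PSNR the paper evaluates.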
Abstract: An image compression method has been developed using a fuzzy edge image and the basic Block Truncation Coding (BTC) algorithm. The fuzzy edge image has been validated against classical edge detectors, with the results of the well-known Canny edge detector as the reference, prior to its use in the proposed method. The
bit plane generated by the conventional BTC method is replaced with
the fuzzy bit plane generated by the logical OR operation between
the fuzzy edge image and the corresponding conventional BTC bit
plane. The input image is encoded with the block mean and standard
deviation and the fuzzy bit plane. The proposed method has been
tested with test images of 8 bits/pixel and size 512×512 and found to
be superior with better Peak Signal to Noise Ratio (PSNR) when
compared to the conventional BTC, and adaptive bit plane selection
BTC (ABTC) methods. The raggedness and jagged appearance, and
the ringing artifacts at sharp edges are greatly reduced in
reconstructed images by the proposed method with the fuzzy bit
plane.
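The conventional BTC step that the proposed method builds on can be sketched as follows: each block is encoded by its mean, standard deviation, and a bit plane, and the decoder reconstructs two levels that preserve the block mean and variance. (The proposal replaces this bit plane with one ORed with the fuzzy edge image; that step is not shown here.)

```python
# Basic Block Truncation Coding of one pixel block.
import math

def btc_encode(block):
    n = len(block)
    mean = sum(block) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in block) / n)
    bits = [1 if p >= mean else 0 for p in block]   # the bit plane
    return mean, std, bits

def btc_decode(mean, std, bits):
    n, q = len(bits), sum(bits)
    if q in (0, n):                  # flat block: single level
        return [mean] * n
    a = mean - std * math.sqrt(q / (n - q))         # level for 0-bits
    b = mean + std * math.sqrt((n - q) / q)         # level for 1-bits
    return [b if bit else a for bit in bits]

block = [12, 14, 200, 202, 13, 201, 15, 199]
mean, std, bits = btc_encode(block)
print([round(v) for v in btc_decode(mean, std, bits)])
```

The two reconstruction levels are chosen so that the decoded block has exactly the same mean (and variance) as the original, which is the defining property of BTC.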
Abstract: This paper presents an adaptive motion estimator that can be dynamically reconfigured with the best algorithm depending on variations in the nature of the video during the runtime of an application. The 4 Step Search (4SS) and the Gradient Search (GS) algorithms are integrated into the estimator for use with rapid and slow video sequences respectively. The Full Search Block Matching (FSBM) algorithm has also been integrated for use with video sequences that are not real-time oriented.
In order to efficiently reduce the computational cost while
achieving better visual quality with low cost power, the proposed
motion estimator is based on a Variable Block Size (VBS) scheme
that uses only the 16x16, 16x8, 8x16 and 8x8 modes.
Experimental results show that the adaptive motion estimator achieves better results in terms of Peak Signal to Noise Ratio (PSNR), computational cost, FPGA occupied area, and dissipated power relative to the most popular variable block size schemes presented in the literature.
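The FSBM component of such an estimator can be sketched as an exhaustive search for the motion vector minimizing the Sum of Absolute Differences (SAD) inside a search window; 4SS and GS visit only a subset of these candidates to cut the computational cost. The tiny frames and 2x2 block size below are illustrative.

```python
# Full-search block matching with a SAD cost.
def sad(cur, ref, bx, by, dx, dy, n):
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(n) for j in range(n))

def full_search(cur, ref, bx, by, n=2, w=2):
    best = None
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            # Skip candidates falling outside the reference frame.
            if not (0 <= by + dy and by + dy + n <= len(ref) and
                    0 <= bx + dx and bx + dx + n <= len(ref[0])):
                continue
            cost = sad(cur, ref, bx, by, dx, dy, n)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]

# Reference frame holds a 2x2 patch; in the current frame it has moved
# one pixel to the right.
ref = [[0] * 6 for _ in range(6)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 9
cur = [[0] * 6 for _ in range(6)]
cur[2][3] = cur[2][4] = cur[3][3] = cur[3][4] = 9
print(full_search(cur, ref, 3, 2))  # → (-1, 0)
```

The vector (-1, 0) says the block's content is found one pixel to the left in the reference frame, i.e. the object moved right, as constructed.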
Abstract: In high bitrate information hiding techniques, one bit is embedded within each 4x4 Discrete Cosine Transform (DCT) coefficient block by means of vector quantization, and the hidden bit can then be effectively extracted at the receiving end. In this paper, high bitrate information hiding algorithms are summarized and a video-in-video scheme is implemented. Experimental results show that a host video embedded with a large amount of auxiliary information suffers little decline in visual quality: the PSNR-Y of the host video degrades by only 0.22 dB on average, while the hidden information has a high survival rate and remains highly robust under H.264/AVC compression, with an average Bit Error Rate (BER) of 0.015%.
Abstract: In the framework of image compression by wavelet transforms, we propose a perceptual method that incorporates Human Visual System (HVS) characteristics in the quantization stage. Indeed, human eyes do not have equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF), and the visual artifacts at low bit rates are minimized. To evaluate our method, we use the Peak Signal to Noise Ratio (PSNR) and a new evaluation criterion that takes visual criteria into account. The experimental results illustrate that our technique improves image quality at the same compression ratio.
Abstract: Assessment of image quality traditionally requires the original image as a reference. Conventional assessment methods such as Mean Square Error (MSE) or Peak Signal to Noise Ratio (PSNR) are invalid when no reference is available. In this paper, we present a new No-Reference (NR) assessment of image quality based on blur and noise. Recent camera applications provide high quality images with the help of a digital Image Signal Processor (ISP). Since images taken by high-performance digital cameras have few blocking and ringing artifacts, we focus only on blur and noise to predict the
objective image quality. The experimental results show that the
proposed assessment method gives high correlation with subjective
Difference Mean Opinion Score (DMOS). Furthermore, the proposed
method provides very low computational load in spatial domain and
similar extraction of characteristics to human perceptional assessment.
Abstract: To model the human visual system (HVS) in the region of interest, we propose a new objective metric evaluation adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation setup filter implementation technique in the DWT domain, based especially on the point and region of fixation of the human eye. This model is then used to predict the visible divergences between an original and compressed image with respect to this region field and yields an adapted and local measure error by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images all of which have the same local peak signal to noise ratio (PSNR), but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
Abstract: In this paper we present simulation results for the
application of a bandwidth efficient algorithm (mapping algorithm)
to an image transmission system. This system considers three
different real valued transforms to generate energy compact
coefficients. First results are presented for gray scale and color image
transmission in the absence of noise. It is seen that the system
performs its best when discrete cosine transform is used. Also the
performance of the system is dominated more by the size of the
transform block rather than the number of coefficients transmitted or
the number of bits used to represent each coefficient. Similar results
are obtained in the presence of additive white Gaussian noise. The
varying values of the bit error rate have very little or no impact on
the performance of the algorithm. Optimum results are obtained for
the system considering 8x8 transform block and by transmitting 15
coefficients from each block using 8 bits.
Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up much
more storage than a typical text message and without compression
images would be extremely clumsy to store and distribute. The
amount of information required to store pictures on modern
computers is quite large in relation to the amount of bandwidth
commonly available to transmit them over the Internet and
applications. Image compression addresses the problem of reducing
the amount of data required to represent a digital image. Performance
of any image compression method can be evaluated by measuring the
root-mean-square-error & peak signal to noise ratio. The method of
image compression that will be analyzed in this paper is based on the
lossy JPEG image compression technique, the most popular
compression technique for color images. JPEG compression is able to
greatly reduce file size with minimal image degradation by throwing
away the least "important" information. In JPEG, both chroma components are downsampled simultaneously, but in this paper we compare the results when compression is performed by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the
chrominance blue is downsampled as compared to downsampling the
chrominance red in JPEG compression. But the peak signal to noise
ratio is more when the chrominance red is downsampled as compared
to downsampling the chrominance blue in JPEG compression. In
particular we will use the hats.jpg as a demonstration of JPEG
compression using low pass filter and demonstrate that the image is
compressed with barely any visual differences with both methods.
Abstract: In this paper, a watermarking algorithm that uses the wavelet transform with Multiple Description Coding (MDC) and Quantization Index Modulation (QIM) concepts is introduced. The paper also investigates the role of the Contourlet Transform (CT) versus the Wavelet Transform (WT) in providing robust image watermarking. Two measures are used in the comparison between the wavelet-based and the contourlet-based methods: Peak Signal to Noise Ratio (PSNR) and Normalized Cross-Correlation (NCC). Experimental results reveal that the introduced algorithm is robust against different attacks and achieves good results compared with the contourlet-based algorithm.
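The QIM component named in the abstract can be sketched on a single transform coefficient: the embedder quantizes the coefficient with one of two offset lattices depending on the bit, and the detector recovers the bit from whichever lattice the received coefficient is closer to. The step size `delta` is an illustrative parameter.

```python
# Quantization Index Modulation on one coefficient.
def qim_embed(coef, bit, delta=8.0):
    # Two interleaved quantizer lattices, offset by +/- delta/4.
    offset = delta / 4 if bit else -delta / 4
    return delta * round((coef - offset) / delta) + offset

def qim_extract(coef, delta=8.0):
    # Distance to the nearest point of each lattice decides the bit.
    d0 = abs(coef - qim_embed(coef, 0, delta))
    d1 = abs(coef - qim_embed(coef, 1, delta))
    return 1 if d1 < d0 else 0

marked = qim_embed(37.3, 1)
print(marked, qim_extract(marked))
```

Because the two lattices are delta/2 apart, the bit survives any perturbation smaller than delta/4, which is the source of QIM's robustness to the attacks the paper tests.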