Abstract: Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for the early detection of carcinoma cells in brain tissue. It is a form of optical tomography that reconstructs an image of human soft tissue using near-infrared light. It comprises two steps, a forward model and an inverse model. The forward model describes light propagation in a biological medium; the inverse model uses the scattered light to recover the optical parameters of the tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed by the standard regularization technique known as Levenberg-Marquardt regularization. Reconstruction algorithms, namely the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods, are used to reconstruct the image of human soft tissue for tumour detection. Of these, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as signal to noise ratio (SNR), contrast to noise ratio (CNR), relative error (RE) and the CPU time for reconstructing images are analyzed to assess performance.
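Levenberg-Marquardt regularization, mentioned above, damps the Gauss-Newton update so that ill-conditioned inverse steps stay stable. A minimal numpy sketch of one such update, using a toy identity-Jacobian residual rather than the actual DOT forward model (the residual, Jacobian and damping value here are illustrative assumptions):

```python
import numpy as np

def lm_step(x, residual, jac, damping):
    """One Levenberg-Marquardt update: x - (J^T J + damping*I)^(-1) J^T r."""
    r = residual(x)
    J = jac(x)
    H = J.T @ J + damping * np.eye(x.size)   # damped Gauss-Newton normal matrix
    return x - np.linalg.solve(H, J.T @ r)

# Toy residual: fit x to data d; the Jacobian is the identity.
d = np.array([1.0, 2.0, 3.0])
residual = lambda x: x - d
jac = lambda x: np.eye(d.size)
x = np.zeros(3)
for _ in range(20):
    x = lm_step(x, residual, jac, damping=0.1)
```

The damping term trades the fast convergence of Gauss-Newton against the stability of gradient descent, which is what makes it a standard choice for ill-posed inverse problems such as DOT.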
Abstract: The photoacoustic images are obtained from a custom-developed linear-array photoacoustic tomography system. Biological specimens are imitated by conducting phantom tests in order to retrieve a fully functional photoacoustic image. The acquired image undergoes active-region-based contour filtering to remove noise and accurately segment the object area for further processing. The universal back projection method is used as the image reconstruction algorithm. The active contour filtering is analyzed by evaluating its signal to noise ratio and comparing it with other filtering methods.
Abstract: Information security plays a major role in raising the standard of secure communication over global media. In this paper, we suggest a technique of encryption followed by insertion before transmission, implementing two different concepts to carry out these tasks. We use the two-point crossover technique of the genetic algorithm to facilitate the encryption process. For each uniquely identified row of pixels, different mathematical methodologies are applied under several checked conditions in order to identify all the parent pixels on which we perform the crossover operation. This is done by selecting two crossover points within the pixels, thereby producing the newly encrypted child pixels and hence the encrypted cover image. In the next phase, first- and second-order derivative operators are evaluated to increase security and robustness. The final phase reapplies the crossover procedure to form the final stego-image. The complexity of the system as a whole is large, discouraging third-party interference. Also, the embedding capacity is very high, so a larger amount of secret image information can be hidden. The imperceptibility of the obtained stego-image demonstrates the effectiveness of this approach.
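The two-point crossover operation described above can be sketched generically. The paper's pixel-pairing conditions are not reproduced here, so the parent rows and crossover points below are hypothetical:

```python
import numpy as np

def two_point_crossover(parent_a, parent_b, p1, p2):
    """Swap the gene segment between crossover points p1 and p2 (p2 exclusive)."""
    child_a, child_b = parent_a.copy(), parent_b.copy()
    child_a[p1:p2], child_b[p1:p2] = parent_b[p1:p2], parent_a[p1:p2]
    return child_a, child_b

# Two hypothetical parent pixel rows (8-bit gray values)
row_a = np.array([10, 20, 30, 40, 50], dtype=np.uint8)
row_b = np.array([60, 70, 80, 90, 100], dtype=np.uint8)
child_a, child_b = two_point_crossover(row_a, row_b, 1, 4)
# child_a -> [10, 70, 80, 90, 50]
```

Applying this to every selected pair of pixel rows yields the encrypted cover image; reversing the swaps with the same crossover points recovers the original.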
Abstract: Multiple Input Multiple Output (MIMO) systems are
wireless systems with multiple antenna elements at both ends of the
link. Wireless communication systems demand high data rate and
spectral efficiency with increased reliability. MIMO systems have
become popular for achieving these goals, because increased data rates are possible through spatial multiplexing and diversity. Spatial Multiplexing (SM) is used to achieve higher throughput than diversity. In this paper, we propose a Zero-Forcing (ZF) detection scheme combining Ordered Successive Interference Cancellation (OSIC) and Zero Forcing with Interference Cancellation (ZF-IC). The proposed method uses OSIC based on Signal to Noise Ratio (SNR) ordering to obtain an estimate of the last symbol; this estimate is then used as input to the ZF-IC stage. We analyze the Bit Error Rate
(BER) performance of the proposed MIMO system over Rayleigh
Fading Channel, using Binary Phase Shift Keying (BPSK)
modulation scheme. The results show better performance than the
previous methods.
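A hedged sketch of SNR-ordered ZF detection with successive interference cancellation (the generic OSIC idea only; the paper's specific ZF-IC stage that reuses the last detected symbol is not reproduced, and the channel matrix below is an arbitrary noiseless example):

```python
import numpy as np

def zf_osic_detect(H, y, symbols=(-1.0, 1.0)):
    """Zero-Forcing detection with SNR-ordered successive interference cancellation."""
    y = np.array(y, dtype=float)
    n_tx = H.shape[1]
    x_hat = np.zeros(n_tx)
    remaining = list(range(n_tx))
    while remaining:
        W = np.linalg.pinv(H[:, remaining])          # ZF filter for the remaining streams
        k = int(np.argmin(np.sum(W ** 2, axis=1)))   # smallest row norm -> highest post-ZF SNR
        z = W[k] @ y
        s = min(symbols, key=lambda c: abs(z - c))   # slice to the nearest BPSK symbol
        idx = remaining.pop(k)
        x_hat[idx] = s
        y = y - H[:, idx] * s                        # cancel the detected stream
    return x_hat

# Noiseless 2x2 example with BPSK symbols
H = np.array([[1.0, 0.5], [0.3, 1.2]])
x = np.array([1.0, -1.0])
x_hat = zf_osic_detect(H, H @ x)
```

Detecting the highest-SNR stream first and cancelling it reduces the noise enhancement that plain ZF suffers on the weaker streams.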
Abstract: This paper addresses the issue of resource allocation
in the emerging cognitive radio technology. Focusing on the Quality of Service (QoS) of Primary Users (PUs), a novel method is proposed for the resource allocation of Secondary Users (SUs). In this paper, we propose a unique utility function in a game-theoretic model of cognitive radio which can be maximized to increase the capacity of the Cognitive Radio Network (CRN) and to minimize interference. The utility function is formulated to cater to the needs of the PUs by observing the signal to noise ratio. The existence of a Nash equilibrium for the postulated game is established.
Abstract: Noise is one of the most challenging factors in medical imaging. Image denoising refers to the restoration of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise, including impulse noise, Poisson noise and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients (ADP): increasing the absorbed radiation, and consequently the ADP, enhances CT image quality. Noise reduction techniques that enhance image quality while exposing patients to no excess radiation are therefore a challenging problem in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transformations, Curvelet and Contourlet, and the Discrete Wavelet Transform (DWT) thresholding methods BayesShrink and AdaptShrink, compared against each other. We also propose a new threshold in the wavelet domain for both noise reduction and edge retention; the proposed method retains the significant modified coefficients, resulting in good visual quality. Evaluations were carried out using two criteria: peak signal to noise ratio (PSNR) and structural similarity (SSIM).
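BayesShrink, one of the thresholding methods compared above, sets the subband threshold to the noise variance divided by the estimated signal standard deviation. A minimal sketch on a synthetic sparse "subband" (the Laplacian signal model and noise level are assumptions; a real pipeline would apply this per DWT subband):

```python
import numpy as np

def bayes_shrink_threshold(coeffs, noise_sigma):
    """BayesShrink threshold T = sigma_n^2 / sigma_x for one wavelet subband."""
    sigma_y2 = np.mean(coeffs ** 2)                            # observed subband energy
    sigma_x = np.sqrt(max(sigma_y2 - noise_sigma ** 2, 1e-12)) # signal std estimate
    return noise_sigma ** 2 / sigma_x

def soft_threshold(coeffs, t):
    """Shrink every coefficient toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

rng = np.random.default_rng(0)
clean = rng.laplace(scale=1.0, size=4096)       # sparse-like subband coefficients
noisy = clean + rng.normal(scale=1.0, size=4096)
t = bayes_shrink_threshold(noisy, noise_sigma=1.0)
den = soft_threshold(noisy, t)
```

Soft thresholding zeroes the many small, noise-dominated coefficients while only slightly biasing the large ones that carry edges, which is why threshold choice governs the denoising-versus-edge-retention trade-off discussed above.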
Abstract: Multiple-input multiple-output (MIMO) radar has
received increasing attention in recent years. MIMO radar has many
advantages over conventional phased-array radar, such as improved target detection, resolution enhancement and interference suppression. In this paper, results are presented from a simulation study of MIMO uniformly-spaced linear array (ULA) antennas. The performance is investigated under varied parameters, including array size, pseudo-random (PN) sequence length, number of snapshots and signal to noise ratio (SNR). The results for MIMO are compared with a traditional array antenna.
Abstract: In this paper a novel color image compression
technique for efficient storage and delivery of data is proposed. The proposed technique starts with an RGB to YCbCr color transformation. Secondly, the Canny edge detection method is used to classify blocks into edge and non-edge blocks. Each color component (Y, Cb and Cr) is compressed step by step by a discrete cosine transform (DCT), quantization and adaptive arithmetic coding. Our technique is evaluated in terms of compression ratio, bits per pixel and peak signal to noise ratio, and produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The experimental results illustrate that the proposed technique is efficient and feasible in these terms.
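The DCT-and-quantization step in such block-based coders can be sketched for a single 8x8 block (uniform quantization with an assumed step size, not the paper's quantization tables or its arithmetic coder):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def compress_block(block, q):
    """2D DCT, uniform quantization with step q, then inverse DCT."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T          # forward 2D DCT
    quantized = np.round(coeffs / q)  # lossy step: integers for the entropy coder
    return C.T @ (quantized * q) @ C  # dequantize and invert

block = np.tile(np.arange(8, dtype=float), (8, 1))   # smooth horizontal-ramp test block
rec = compress_block(block, q=4.0)
```

Because the DCT compacts a smooth block's energy into a few coefficients, most quantized values are zero, which is what the subsequent adaptive arithmetic coding exploits.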
Abstract: A simple adaptive voice activity detector (VAD) is
implemented using Gabor and gammatone atomic decomposition of
speech for high Gaussian noise environments. Matching pursuit is
used for atomic decomposition, and is shown to achieve optimal
speech detection capability at high data compression rates for low
signal to noise ratios. The most active dictionary elements found by
matching pursuit are used for the signal reconstruction, so that the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, enabling matching pursuit's greedy heuristic of selecting the highest inner products to isolate high-energy speech components in high-noise environments. Gabor and gammatone atoms are both investigated with identical logarithmically spaced center frequencies and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms, with no significant statistical differences. It achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR and 98% accuracy at a 20 dB SNR, using a 30 dB SNR as a reference for voice activity.
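The matching-pursuit decomposition underlying this VAD can be sketched in its generic form (a toy orthonormal dictionary stands in for the Gabor/gammatone atoms):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the atom with the largest inner product."""
    residual = np.array(signal, dtype=float)
    recon = np.zeros_like(residual)
    picked = []
    for _ in range(n_atoms):
        scores = dictionary @ residual           # inner products with unit-norm atoms (rows)
        k = int(np.argmax(np.abs(scores)))
        recon += scores[k] * dictionary[k]       # add the best atom's contribution
        residual -= scores[k] * dictionary[k]    # and remove it from the residual
        picked.append(k)
    return recon, picked

# Toy dictionary of unit-norm atoms; the standard basis keeps the example checkable.
D = np.eye(4)
sig = np.array([0.0, 3.0, 0.0, 1.0])
recon, picked = matching_pursuit(sig, D, 2)
```

Because speech energy concentrates in a few atoms, reconstructing from only the most active ones both compresses the data and suppresses the spread-out Gaussian noise, as the abstract describes.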
Abstract: In this paper, we consider a coded transmission over a frequency-selective channel. We study analytically the convergence of a turbo-detector using a maximum a posteriori (MAP) equalizer and a MAP decoder. We demonstrate that the densities of the maximum likelihood (ML) messages exchanged during the iterations are e-symmetric and output-symmetric. Under the Gaussian approximation, this property allows a one-dimensional analysis of the turbo-detector. By deriving the analytical expression of the ML distributions under the Gaussian approximation, we confirm
that the bit error rate (BER) performance of the turbo-detector
converges to the BER performance of the coded additive white
Gaussian noise (AWGN) channel at high signal to noise ratio (SNR),
for any frequency selective channel.
Abstract: Speech enhancement is a long-standing problem with
numerous applications like teleconferencing, VoIP, hearing aids and
speech recognition. The motivation behind this research work is to
obtain a clean speech signal of higher quality by applying the optimal
noise cancellation technique. Real-time adaptive filtering algorithms
seem to be the best candidate among all categories of the speech
enhancement methods. In this paper, we propose a speech
enhancement method based on Recursive Least Squares (RLS)
adaptive filtering of speech signals. Experiments were performed on noisy data prepared by adding AWGN, Babble and Pink noise to clean speech samples at -5 dB, 0 dB, 5 dB and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal to Noise Ratio (SNR) and SNR loss. Based on this evaluation, the proposed RLS algorithm was found to be the better noise cancellation technique for speech signals.
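A standard RLS adaptive filter of the kind compared here can be sketched as follows (the filter order, forgetting factor and the noiseless system-identification setup are illustrative assumptions, not the paper's experimental configuration):

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=100.0):
    """Recursive Least Squares adaptive filter; returns per-sample output and final weights."""
    w = np.zeros(order)
    P = delta * np.eye(order)                   # inverse correlation matrix estimate
    buf = np.zeros(order)                       # most-recent-first tap buffer
    y = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.concatenate(([x[n]], buf[:-1]))
        k = P @ buf / (lam + buf @ P @ buf)     # gain vector
        y[n] = w @ buf
        e = d[n] - y[n]                         # a priori error
        w = w + k * e
        P = (P - np.outer(k, buf @ P)) / lam
    return y, w

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                  # reference input
h = np.array([0.5, -0.3, 0.2, 0.1])        # unknown FIR path to identify
d = np.convolve(x, h)[:2000]               # desired signal
y, w = rls_filter(x, d)
```

RLS converges much faster than (N)LMS at the price of O(order^2) work per sample, which is the trade-off behind the comparison reported above.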
Abstract: Ad hoc networks are the future of wireless technology, as users demand fast, accurate, error-free information. With this in mind, Bit Error Rate (BER) and power are optimized in this paper using the Genetic Algorithm (GA). The digital modulation techniques used are Binary Phase Shift Keying (BPSK), M-ary Phase Shift Keying (M-ary PSK) and Quadrature Amplitude Modulation (QAM). The work is implemented on Wireless Ad Hoc Networks (WLAN). We then analyze which modulation technique performs best in optimizing the BER and power of the WLAN.
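For reference, the closed-form BPSK BER expressions that such a BER/power optimization would evaluate (standard textbook results for AWGN and flat Rayleigh fading; the paper's GA itself is not sketched):

```python
import math

def q_function(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber_awgn(ebno_db):
    """BPSK bit error rate over AWGN: Q(sqrt(2 Eb/N0))."""
    ebno = 10 ** (ebno_db / 10.0)
    return q_function(math.sqrt(2.0 * ebno))

def bpsk_ber_rayleigh(ebno_db):
    """Average BPSK bit error rate over flat Rayleigh fading."""
    ebno = 10 ** (ebno_db / 10.0)
    return 0.5 * (1.0 - math.sqrt(ebno / (1.0 + ebno)))
```

At 0 dB Eb/N0 these give about 7.9% (AWGN) versus 14.6% (Rayleigh), illustrating the fading penalty a GA-based link optimizer has to work against.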
Abstract: In this paper, a Least Mean Square (LMS) adaptive noise reduction algorithm is proposed to enhance the speech signal from noisy speech. Here, the speech signal is enhanced by varying the step size as a function of the input signal. Objective and
subjective measures are made under various noises for the proposed
and existing algorithms. From the experimental results, it is seen that
the proposed LMS adaptive noise reduction algorithm reduces Mean
square Error (MSE) and Log Spectral Distance (LSD) as compared to
that of the earlier methods under various noise conditions with
different input SNR levels. In addition, the proposed algorithm
increases the Peak Signal to Noise Ratio (PSNR) and Segmental SNR
improvement (ΔSNRseg) values; improves the Mean Opinion Score
(MOS) as compared to that of the various existing LMS adaptive
noise reduction algorithms. From these experimental results, it is
observed that the proposed LMS adaptive noise reduction algorithm
reduces the speech distortion and residual noise as compared to that
of the existing methods.
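A generic variable-step-size LMS of the kind described, where the step size grows with the instantaneous error power, can be sketched as follows (the specific step-size rule and its constants are assumptions, not the paper's update):

```python
import numpy as np

def vss_lms(x, d, order=4, mu_min=0.001, mu_max=0.05, alpha=0.97, gamma=0.001):
    """LMS whose step size grows with the instantaneous error power (a common VSS heuristic)."""
    w = np.zeros(order)
    buf = np.zeros(order)          # most-recent-first tap buffer
    mu = mu_min
    y = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.concatenate(([x[n]], buf[:-1]))
        y[n] = w @ buf
        e = d[n] - y[n]
        mu = float(np.clip(alpha * mu + gamma * e * e, mu_min, mu_max))
        w = w + 2.0 * mu * e * buf  # LMS weight update with the adapted step
    return y, w

rng = np.random.default_rng(3)
x = rng.normal(size=4000)
h = np.array([0.5, -0.3, 0.2, 0.1])        # unknown path to identify
d = np.convolve(x, h)[:4000]
y, w = vss_lms(x, d)
```

A large step while the error is big gives fast initial convergence; shrinking the step as the error falls reduces steady-state misadjustment, which is the mechanism behind the MSE and residual-noise improvements reported above.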
Abstract: A Quad Tree Decomposition (QTD) based performance analysis of lossy and lossless compressed image data communication through a wireless sensor network is presented. Images have a considerably higher storage requirement than text. While transmitting multimedia content there is a chance of packets being dropped due to noise and interference. At the receiver end, packets that carry valuable information might be damaged or lost due to noise,
interference and congestion. To prevent valuable information from being dropped, various retransmission schemes have been proposed. In the proposed scheme, QTD is used; QTD is an image segmentation method that divides the image into homogeneous areas. The proposed scheme involves analysis of parameters such as compression ratio, peak signal to noise ratio, mean square error and bits per pixel in the compressed image, along with analysis of the difficulties encountered during data packet communication in Wireless Sensor Networks. Accordingly, this paper uses QTD to improve the compression ratio as well as visual quality; the algorithm is implemented in MATLAB 7.1 and the NS2 simulator.
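The QTD segmentation idea, splitting a block whenever it is not homogeneous, can be sketched as follows (the homogeneity test used here, intensity range against a threshold, is an assumed criterion):

```python
import numpy as np

def quadtree(img, threshold, x=0, y=0, size=None, leaves=None):
    """Split a square image into homogeneous blocks; recurse while a block's range exceeds threshold."""
    if size is None:
        size = img.shape[0]
        leaves = []
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= threshold:
        leaves.append((x, y, size))          # homogeneous: record the leaf block
    else:
        h = size // 2                        # otherwise split into four quadrants
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            quadtree(img, threshold, x + dx, y + dy, h, leaves)
    return leaves

img = np.zeros((8, 8))
img[:4, :4] = 100.0              # one bright homogeneous quadrant
leaves = quadtree(img, threshold=10)
```

Large homogeneous regions collapse into single leaves, which is where the compression-ratio gain comes from; only detailed regions are subdivided further.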
Abstract: This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine to bore an over-length workpiece. To verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal to noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are, in order, depth of cut, spindle speed, length of workpiece and feed rate. A confirmation test is performed on the optimal setting obtained from the Taguchi method, and the result is satisfactory.
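Since surface roughness is to be minimized, the Taguchi signal to noise ratio used here is the smaller-the-better form, SN = -10 log10(mean(y^2)). A minimal sketch with hypothetical roughness replicates for one L9 trial:

```python
import math

def sn_smaller_the_better(responses):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean of squared responses)."""
    return -10.0 * math.log10(sum(r * r for r in responses) / len(responses))

# Hypothetical surface-roughness replicates (micrometres) for one trial of the array
sn = sn_smaller_the_better([1.2, 1.4, 1.1])
```

The factor levels whose trials give the highest S/N (least negative, i.e. lowest mean-square roughness) are the ones the Taguchi analysis selects as optimal.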
Abstract: In this paper we propose an algorithm based on higher-order cumulants for blind impulse-response identification of radio-frequency channels and downlink (MC-CDMA) system equalization. To test its efficiency, we compare it with another algorithm proposed in the literature, considering a theoretical channel, the Proakis 'B' channel, and a practical frequency-selective fading channel, the Broadband Radio Access Network (BRAN C) channel normalized for (MC-CDMA) systems, excited by non-Gaussian sequences. In the (MC-CDMA) part, we use the
Minimum Mean Square Error (MMSE) equalizer after the channel
identification to correct the channel’s distortion. The simulation
results, in noisy environment and for different signal to noise ratio
(SNR), are presented to illustrate the accuracy of the proposed
algorithm.
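Higher-order-cumulant methods rest on statistics such as the zero-lag fourth-order cumulant, which vanishes for Gaussian signals but not for PSK-like sequences. A sketch of its sample estimate (the statistic only, not the paper's identification algorithm):

```python
import numpy as np

def fourth_order_cumulant(x):
    """Sample estimate of c4 = E[x^4] - 3(E[x^2])^2 for a zero-mean real signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x ** 4) - 3.0 * np.mean(x ** 2) ** 2

rng = np.random.default_rng(0)
gauss = rng.normal(size=200000)                  # Gaussian: c4 -> 0
binary = rng.choice([-1.0, 1.0], size=200000)    # non-Gaussian PSK-like sequence: c4 -> -2
```

It is exactly this non-zero cumulant of non-Gaussian excitation that makes blind channel identification from output statistics possible.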
Abstract: This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model to model the temporal transitions of each state of the first HMM. By modeling the temporal transitions, only those hypotheses with nonzero transition probabilities need to be tested. Thus, this method efficiently reduces the computation load, which is preferable in WSN applications. This paper integrates several techniques to optimize the detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are experimentally determined, while the other parameters are estimated using Expectation Maximization (EM). The HHMM is used to model the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach. The states in the HHMM represent various combinations of vehicles of different types. Owing to the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance cooperative classification of moving vehicles within a region monitored by a wireless sensor network. A fuzzy inference system weighs each local decision based on the signal to noise ratio of the acoustic signal for target detection and the signal to noise ratio of the radio signal for sensor communication. The spatial correlation among the observations of neighboring sensor nodes is efficiently utilized, as is the temporal correlation. Simulation results demonstrate the efficiency of this scheme.
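The weighted-majority-voting fusion step can be sketched in its crisp form (the fuzzy inference system that derives the SNR-based weights is not reproduced; the labels and weights below are hypothetical):

```python
def weighted_majority_vote(decisions, weights):
    """Combine local classifications, each weighted by its (e.g. SNR-derived) confidence."""
    tally = {}
    for label, w in zip(decisions, weights):
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)

# Three sensors: two moderately confident 'car' votes outweigh one stronger 'truck' vote.
label = weighted_majority_vote(['car', 'car', 'truck'], [0.4, 0.5, 0.6])
```

Weighting by acoustic and radio SNR lets reliable, well-connected sensors dominate the regional decision, which is the intent of the fuzzy scheme described above.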
Abstract: Determining the optimal machining parameters is important to reduce production cost and achieve the desired surface quality. This paper investigates the influence of cutting parameters on surface roughness and natural frequency in the turning of aluminum alloy AA2024. The experiments were performed on a lathe machine using two different cutting tools, made of AISI 5140 steel and a carbide cutting insert coated with TiC. The turning experiments were planned by the Taguchi method using an L9 orthogonal array. Three levels each of spindle speed, feed rate, depth of cut and tool overhang were chosen as cutting variables. The experimental data were analyzed using the signal to noise ratio and analysis of variance. The main effects are discussed, the percentage contributions of the various parameters affecting surface roughness and natural frequency are given, and the optimal cutting conditions are determined. Finally, the optimization of the cutting parameters by the Taguchi method was verified by confirmation experiments.
Abstract: This paper introduces an image denoising algorithm based on the generalized Srivastava-Owa fractional differential operator for removing Gaussian noise from digital images. The algorithm constructs n×n fractional masks. Experiments show that the fractional differential-based denoising algorithm efficiently smooths Gaussian-noisy images at different noise levels. The denoising performance is measured using the peak signal to noise ratio (PSNR) of the denoised images. The results show improved performance (higher PSNR values) compared with the standard Gaussian smoothing filter.
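The PSNR figure of merit used above is 10 log10(peak^2 / MSE). A minimal sketch (the flat test images are illustrative only):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((16, 16), 128.0)
noisy = ref + 4.0               # constant error of 4 gray levels -> MSE = 16
# psnr(ref, noisy) = 10*log10(255^2 / 16), about 36.1 dB
```

Higher PSNR means lower mean-square error against the clean reference, which is how the fractional-mask filter is ranked against the Gaussian smoothing baseline.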
Abstract: Coherent Self-Averaging (CSA) is a new method proposed in this work, applied to simulated event-related potential (ERP) signals to find the P300 wave, which is useful in brain-computer interface (BCI) systems. The CSA method cleans a signal of white noise in the time domain through successive averaging of a single signal. The method is compared with the traditional method, coherent (synchronized) averaging (CA), and shows optimal results in improving the signal to noise ratio (SNR). The CSA method is easy to implement, robust and applicable to any physiological time series contaminated with white noise.
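Conventional coherent averaging (CA), the baseline the CSA method is compared against, raises SNR by roughly 10 log10(N) dB when N time-locked repetitions are averaged; CSA's single-trial averaging is not sketched here. A small simulation of the CA baseline (the sinusoidal "evoked" component, noise level and trial count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256)
signal = np.sin(2 * np.pi * 5 * t)                        # deterministic evoked component
trials = signal + rng.normal(scale=2.0, size=(100, 256))  # 100 noisy time-locked repetitions
average = trials.mean(axis=0)                             # coherent average across trials

def snr_db(clean, observed):
    """SNR of an observation relative to the known clean component."""
    noise = observed - clean
    return 10.0 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

gain = snr_db(signal, average) - snr_db(signal, trials[0])
# averaging N = 100 trials should raise SNR by about 10*log10(100) = 20 dB
```

The deterministic P300-like component adds coherently while the white noise adds incoherently, which is the principle both CA and CSA exploit.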