Abstract: We address the problem of interference over all the channels in multiuser MIMO-OFDM systems. This paper contributes three beamforming strategies for multiuser multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM), in which the transmit and receive beamformers are obtained iteratively in closed-form stages. In the first case, the transmit (TX) beamformers are fixed and the receive (RX) beamformers are computed. One interference span per user is eliminated by projecting the transmit beamformers into the null space of the relevant channels. The residual interference is then removed by imposing an orthogonality condition on the RX beamformer of each user while maximizing the signal-to-noise ratio (SNR). The second case jointly optimizes the TX and RX beamformers through constrained SNR maximization, building on the results of the first case. The third case also performs joint TX-RX optimization, but uses both constrained SNR and signal-to-interference-plus-noise ratio (SINR) maximization. Simulation experiments with the standardized IEEE 802.11n channel model show that the proposed schemes offer rapid beamforming and improved error performance.
Abstract: Electric discharge machining (EDM) is one of the most widely used non-conventional manufacturing processes for shaping difficult-to-cut materials. The process yield of EDM, in terms of material removal rate, surface roughness, and tool wear rate, may be considerably improved by selecting the optimal combination(s) of process parameters. This paper employs the multi-response signal-to-noise (MRSN) ratio technique to find the optimal combination(s) of process parameters during EDM of Inconel 718. Three cases, viz. high cutting efficiency, high surface finish, and normal machining, have been considered, and the optimal combination of input parameters has been obtained for each case. Analysis of variance (ANOVA) has been employed to find the dominant parameter(s) in all three cases. Experimental verification of the obtained results has also been carried out. The MRSN ratio technique was found to be a simple and effective multi-objective optimization technique.
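The abstract does not reproduce the S/N formulas; a minimal Python sketch of the standard Taguchi signal-to-noise ratios and a weighted multi-response combination (the function names and min-max normalization are illustrative assumptions, not the paper's exact MRSN definition) might look like:

```python
import math

def sn_larger_the_better(values):
    # Taguchi S/N ratio for responses to maximize (e.g. material removal rate):
    # S/N = -10 * log10(mean(1 / y^2))
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / len(values))

def sn_smaller_the_better(values):
    # Taguchi S/N ratio for responses to minimize (e.g. roughness, tool wear):
    # S/N = -10 * log10(mean(y^2))
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

def mrsn(sn_ratios, weights):
    # Multi-response S/N sketch: weighted sum of min-max normalized S/N
    # ratios; the weights encode the case (cutting efficiency vs finish).
    lo, hi = min(sn_ratios), max(sn_ratios)
    normalized = [(s - lo) / (hi - lo) if hi > lo else 1.0 for s in sn_ratios]
    return sum(w * n for w, n in zip(weights, normalized))
```

For a "high cutting efficiency" case the weight on the material-removal-rate S/N would dominate; for "high surface finish" the roughness weight would.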
Abstract: Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single dialect region. Each speaker has 10 sentences; two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, yielding ~93% accuracy at 0 dB SNR.
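As a rough sketch of the classification stage described above (the matching-pursuit front end is omitted, and the function names and dictionary size are hypothetical), the atomic-index probabilities and minimum-Euclidean-distance matching could be implemented as:

```python
def index_probabilities(atom_indices, dict_size):
    # Histogram of atomic dictionary indices, normalized to probabilities.
    counts = [0] * dict_size
    for i in atom_indices:
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def classify(test_probs, train_probs_by_speaker):
    # Nearest-template classification: pick the speaker whose training
    # probability vector has the lowest Euclidean distance to the test vector.
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return min(train_probs_by_speaker,
               key=lambda spk: dist(test_probs, train_probs_by_speaker[spk]))
```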
Abstract: Medical digital images usually have low resolution because of the nature of their acquisition. Therefore, this paper focuses on zooming these images to obtain a better level of information, required for the purpose of medical diagnosis. For this purpose, a strategy for selecting pixels in the zooming operation is proposed. It is based on the principle of an analog clock and utilizes a combination of point and neighborhood image processing. In this approach, the hour hand of the clock covers the portion of the image to be processed. For alignment, the center of the clock points at the middle pixel of the selected portion of the image. The minute hand is longer and is used to gain information about pixels of the surrounding area, called the neighborhood pixels region. This information is used to zoom the selected portion of the image. The proposed algorithm is implemented and its performance is evaluated for many medical images obtained from various sources such as X-ray, Computerized Tomography (CT) scan, and Magnetic Resonance Imaging (MRI). However, for illustration and simplicity, the results obtained from a CT-scanned image of a head are presented. The performance of the algorithm is evaluated in comparison with various traditional algorithms in terms of Peak Signal-to-Noise Ratio (PSNR), maximum error, SSIM index, mutual information, and processing time. From the results, the proposed algorithm is found to give better performance than the traditional algorithms.
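Since the evaluation above relies on PSNR, a minimal self-contained PSNR computation over flattened grayscale images (the 255 peak value assumes 8-bit pixels) is:

```python
import math

def psnr(original, processed, max_val=255.0):
    # Peak Signal-to-Noise Ratio between two equal-sized grayscale images
    # given as flat lists of pixel values; higher PSNR means a closer match.
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```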
Abstract: With the rapid development of computer technology,
the design of computers and keyboards has moved toward
slimness. The change in mobile input devices directly influences
users’ behavior. Although multi-touch applications allow entering
text through a virtual keyboard, the performance, feedback, and
comfort of the technology are inferior to those of a traditional keyboard,
and while manufacturers launch mobile touch keyboards and
projection keyboards, the performance has not been satisfying.
Therefore, this study discussed the design factors of slim
pressure-sensitive keyboards. The factors were evaluated with an
objective (accuracy and speed) and a subjective evaluation
(operability, recognition, feedback, and difficulty) depending on the
shape (circle, rectangle, and L-shaped), thickness (flat, 3mm, and
6mm), and force (35±10g, 60±10g, and 85±10g) of the keyboard.
Moreover, MANOVA and Taguchi methods (regarding
signal-to-noise ratios) were conducted to find the optimal level of each
design factor. The research participants were divided into two groups
according to their typing speed (30 words/minute). Considering the
multitude of variables and levels, the experiments were implemented
using the fractional factorial design. A representative model of the
research samples was established for input task testing. The findings
of this study showed that participants with low typing speed primarily
relied on vision to recognize the keys, and those with high typing
speed relied on tactile feedback that was affected by the thickness and
force of the keys. In the objective and subjective evaluations, the
combination of keyboard design factors likely to yield higher
performance and satisfaction (L-shaped, 3 mm, and 60±10 g) was
identified as the optimal combination. The learning curve was analyzed
to make a comparison with a traditional standard keyboard to
investigate the influence of user experience on keyboard operation.
The research results indicated that the optimal combination provided
input performance inferior to that of a standard keyboard. The results could serve
as a reference for the development of related products in industry and
for comprehensive application to touch devices and input
interfaces that interact with people.
Abstract: This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the structure of the original signal. The provided MATLAB simulation and analysis of the results of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that it is worth exploring and that, with some further adjustments and improvements, it can become notably powerful.
Abstract: Digital images are widely used in computer
applications. To store or transmit the uncompressed images
requires considerable storage capacity and transmission bandwidth.
Image compression is a means to perform transmission or storage of
visual data in the most economical way. This paper explains how
images can be encoded for transmission in a multiplexed
time-frequency domain channel. Multiplexing involves packing
signals together whose representations are compact in the working
domain. In order to optimize transmission resources each 4 × 4
pixel block of the image is transformed by a suitable polynomial
approximation, into a minimal number of coefficients. Using fewer
than 4 × 4 coefficients per block saves a significant amount of
transmitted information, but some information is lost. Different
approximations for image transformation have been evaluated:
polynomial representation (Vandermonde matrix), least squares with
gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev
polynomials, and singular value decomposition (SVD). Results have
been compared in terms of nominal compression rate (NCR),
compression ratio (CR) and peak signal-to-noise ratio (PSNR)
in order to minimize the error function defined as the difference
between the original pixel gray levels and the approximated
polynomial output. The polynomial coefficients have later been
encoded and used to generate chirps at a target rate of about two
chirps per 4 × 4 pixel block, and then submitted to a transmission
multiplexing operation in the time-frequency domain.
Abstract: Chatter vibrations, occurring during cutting process,
cause vibration between the cutting tool and workpiece, which
deteriorates surface roughness and reduces tool life. The purpose of
this study is to investigate the influence of cutting parameters and
tool construction on surface roughness and vibration in turning of
aluminum alloy AA2024. A new design of cutting tool is proposed,
which is filled up with epoxy granite in order to improve damping
capacity of the tool. Experiments were performed on a lathe using a
carbide cutting insert coated with TiC and two different cutting tools
made of AISI 5140 steel. A Taguchi L9 orthogonal array was applied to
design the experiment and to optimize cutting conditions. With the help
of the signal-to-noise ratio and analysis of variance, the optimal cutting
condition and the effect of the cutting parameters on surface
roughness and vibration were determined. Effectiveness of Taguchi
method was verified by a confirmation test. It was revealed that the
new cutting tool filled with epoxy granite reduced vibration and
surface roughness due to the high damping properties of epoxy granite
in the toolholder.
Abstract: The Adaptive Line Enhancer (ALE) is widely used for
enhancing narrowband signals corrupted by broadband noise. In this
paper, we propose novel ALE methods to improve the enhancing
capability. The proposed methods are motivated by the fact that the
output of the ALE is a fine estimate of the desired narrowband signal
with the broadband noise component suppressed. The proposed
methods preprocess the input signal using an ALE filter to generate a
refined input signal. Thus the proposed ALE is driven by an input signal
with a higher signal-to-noise ratio (SNR). The analysis and simulation
results are presented to demonstrate that the proposed ALE has better
performance than conventional ALEs.
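A minimal sketch of a conventional delay-and-LMS adaptive line enhancer (the delay, tap count, and step size are illustrative; the paper's cascaded preprocessing stage is not reproduced) is:

```python
def ale(x, delay=1, taps=8, mu=0.01):
    # Adaptive Line Enhancer: an LMS filter predicts the current sample
    # from delayed input; the prediction retains the correlated narrowband
    # component while suppressing broadband (uncorrelated) noise.
    w = [0.0] * taps
    out = []
    for n in range(len(x)):
        # delayed reference vector (zero-padded at the start)
        ref = [x[n - delay - k] if n - delay - k >= 0 else 0.0
               for k in range(taps)]
        y = sum(wi * ri for wi, ri in zip(w, ref))    # narrowband estimate
        e = x[n] - y                                  # broadband residual
        w = [wi + 2 * mu * e * ri for wi, ri in zip(w, ref)]  # LMS update
        out.append(y)
    return out
```

Driving a second ALE stage with this output is the kind of "refined input" cascade the abstract describes.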
Abstract: The present work analyses different parameters of end
milling to minimize the surface roughness for AISI D2 steel. D2 Steel
is generally used for stamping or forming dies, punches, forming
rolls, knives, slitters, shear blades, tools, scrap choppers, tyre
shredders etc. Surface roughness is one of the main indices that
determines the quality of machined products and is influenced by
various cutting parameters. In machining operations, achieving
desired surface quality by optimization of machining parameters, is a
challenging job. For mating components, surface roughness becomes
even more important, since these quality characteristics are highly
correlated and are expected to be influenced, directly or indirectly,
by the process parameters and their interactions with the process
environment. In this work, the effects of selected process parameters
on surface roughness and subsequent setting of parameters with the
levels have been accomplished by Taguchi’s parameter design
approach. The experiments have been performed as per the
combination of levels of different process parameters suggested by
L9 orthogonal array. End milling of AISI D2 steel with a carbide tool
has been investigated experimentally by varying feed, speed, and depth
of cut, and the surface roughness has been measured using a surface
roughness tester. Analyses of variance have been performed for the mean
and the signal-to-noise ratio to estimate the contribution of the
different process parameters to the process.
Abstract: Microarray technology is widely used in the study
of disease diagnosis using gene expression levels. The main
shortcoming of gene expression data is that it includes thousands of
genes and a small number of samples. Abundant methods and
techniques have been proposed for tumor classification using
microarray gene expression data. Feature or gene selection methods
can be used to mine the genes that are directly involved in
classification and to eliminate irrelevant genes. In this paper
statistical measures like T-Statistics, Signal-to-Noise Ratio (SNR)
and F-Statistics are used to rank the genes. The ranked genes are used
for further classification. Particle Swarm Optimization (PSO)
algorithm and Shuffled Frog Leaping (SFL) algorithm are used to
find the significant genes from the top-m ranked genes. The Naïve
Bayes Classifier (NBC) is used to classify the samples based on the
significant genes. The proposed work is applied to Lung and Ovarian
datasets. The experimental results show that the proposed method
achieves 100% accuracy on the datasets considered, and the results
are compared with previous works.
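The SNR ranking statistic mentioned above is commonly computed per gene as the class-mean difference over the sum of class standard deviations; a small sketch under that assumption (the paper's exact definition may differ) is:

```python
def snr_score(class0, class1):
    # SNR gene-ranking statistic: |mu0 - mu1| / (sigma0 + sigma1),
    # computed on one gene's expression values in the two classes.
    def mean(v):
        return sum(v) / len(v)
    def std(v):
        m = mean(v)
        return (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5
    denom = std(class0) + std(class1)
    return abs(mean(class0) - mean(class1)) / denom if denom else float("inf")

def rank_genes(expr0, expr1):
    # expr0/expr1: per-gene lists of expression vectors for each class.
    # Returns gene indices sorted by descending SNR score; the top-m
    # genes would then feed the PSO/SFL selection stage.
    scores = [snr_score(g0, g1) for g0, g1 in zip(expr0, expr1)]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```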
Abstract: Different order modulations combined with different
coding schemes, allow sending more bits per symbol, thus achieving
higher throughputs and better spectral efficiencies. However, it must
also be noted that when using a modulation technique such as 64-
QAM with less overhead bits, better signal-to-noise ratios (SNRs) are
needed to overcome any Inter symbol Interference (ISI) and maintain
a certain bit error ratio (BER). The use of adaptive modulation allows
wireless technologies to yield higher throughputs while also
covering long distances. The aim of this paper is to implement the
Adaptive Modulation and Coding (AMC) features of the WiMAX
PHY in MATLAB and to analyze the performance of the system in
different channel conditions (AWGN, Rayleigh and Rician fading
channel) with channel estimation and blind equalization. Simulation
results have demonstrated that increasing the modulation order
increases both the throughput and the BER. These results
reveal a trade-off among modulation order, FFT length, throughput,
BER value, and spectral efficiency. The BER changes gradually for the
AWGN channel and arbitrarily for the Rayleigh and Rician fading
channels.
Abstract: Cooperative communication provides transmit diversity even when, due to size constraints, mobile units cannot accommodate multiple antennas. A versatile cooperation method called coded cooperation has been developed, in which cooperation is implemented through channel coding with a view to controlling the errors inherent in wireless communication. In this work we evaluate the performance of coded cooperation in a flat Rayleigh fading environment using a concept known as the pairwise error probability (PEP). We derive the PEP for a flat fading scenario in coded cooperation and then relate it to the signal-to-noise ratio (SNR) of the users in the network. Results show that an increase in the SNR leads to a decrease in the PEP. We also carried out simulations to validate the result.
Abstract: Information in the nervous system is coded as firing patterns of electrical signals called action potentials, or spikes, so an essential step in the analysis of neural mechanisms is the detection of action potentials embedded in the neural data. Several methods have been proposed in the literature for this purpose. In this paper a novel method based on empirical mode decomposition (EMD) has been developed. EMD is a decomposition method that extracts oscillations with different frequency ranges from a waveform. The method is adaptive, and no a priori knowledge about the data or parameter adjustment is needed. The results for simulated data indicate that the proposed method is comparable with wavelet-based methods for spike detection. For neural signals with a signal-to-noise ratio near 3, the proposed method is capable of detecting more than 95% of action potentials accurately.
Abstract: There is a growing interest in the use of ultrasonic speckle tracking for biomedical image formation of tissue deformation. Speckle tracking is angle-independent and has the ability to differentiate soft tissue into benign and malignant regions. In this paper a simulation model for dynamic ultrasound scatterers is presented. The model combines Field-II ultrasonic scatterers and FEM (ANSYS-11) nodes to represent regional tissue deformation. A performance evaluation is presented on axial displacement and strain field estimation of a uniformly elastic model, using speckle tracking based on 1-D cross-correlation of optimally segmented pre- and post-deformation frames. The optimum correlation window length is investigated in terms of the highest signal-to-noise ratio (SNR) for a selected region of interest of a smoothed displacement field. Finally, gradient-based strain fields of both the smoothed and non-smoothed displacement fields are compared. Simulation results from the model are shown to compare favorably with FEM results.
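A minimal sketch of displacement estimation by 1-D normalized cross-correlation between pre- and post-deformation segments (window segmentation and sub-sample interpolation are omitted; names and the lag search range are illustrative):

```python
def estimate_shift(pre, post, max_lag):
    # Speckle-tracking sketch: the axial displacement of a segment is the
    # lag maximizing normalized cross-correlation between the
    # pre-deformation and post-deformation RF segments.
    def ncc(a, b):
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da and db else 0.0
    best_lag, best_val = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        v = ncc(pre[lag:], post) if lag >= 0 else ncc(pre, post[-lag:])
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag
```

A gradient of such per-window shifts along depth gives the strain field the abstract compares.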
Abstract: We present a hardware-oriented method for real-time
measurement of an object's position in video. The targeted application
area is light spots used as references for robotic navigation. Different
algorithms for dynamic thresholding are explored in combination
with component labeling and Center Of Gravity (COG) for highest
possible precision versus Signal-to-Noise Ratio (SNR). This method
was developed with low hardware cost in focus, requiring only one
convolution operation for preprocessing of the data.
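The Center Of Gravity step can be sketched as an intensity-weighted centroid over thresholded pixels (the dynamic thresholding strategies the paper compares are not reproduced; a fixed threshold argument stands in for them):

```python
def center_of_gravity(image, threshold):
    # Sub-pixel light-spot position: intensity-weighted centroid (COG)
    # over the pixels above a (dynamically chosen) threshold.
    sx = sy = sw = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                w = v - threshold   # subtract the background level
                sx += w * x
                sy += w * y
                sw += w
    if sw == 0:
        return None                 # no spot above threshold
    return (sx / sw, sy / sw)
```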
Abstract: In this work we present a solution for DAGC (Digital
Automatic Gain Control) in WLAN receivers compatible with the IEEE
802.11a/g standards. Those standards define communication in the
5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing
(OFDM) modulation scheme. The WLAN transceiver that we have used
enables gain control over a Low Noise Amplifier (LNA) and a
Variable Gain Amplifier (VGA). The control over those signals is
performed in our digital baseband processor using a dedicated
hardware block, the DAGC. The DAGC is used to automatically control
the VGA and LNA in order to achieve a better signal-to-noise ratio,
decrease the FER (Frame Error Rate), and hold the average power of
the baseband signal close to the desired set point. The DAGC function
in the baseband processor is performed in a few steps: measuring the
power levels of baseband samples of an RF signal, accumulating the
differences between the measured power level and the actual gain
setting, adjusting a gain factor of the accumulation, and applying
the adjusted gain factor to the baseband values. Based on the
measurement results of the RSSI signal's dependence on input power,
we have concluded that this digital AGC can be implemented by
applying a simple linearization of the RSSI. This solution is very
simple but also effective, and it reduces the complexity and power
consumption of the DAGC. This DAGC has been implemented and tested
both in FPGA and in ASIC as part of our WLAN baseband processor.
Finally, we have integrated this circuit in a compact WLAN PCMCIA
board based on MAC and baseband ASIC chips designed by us.
Abstract: Removing noise from processed images is very important. Noise should be removed in such a way that important image information is preserved. A decision-based nonlinear algorithm for the elimination of band lines, drop lines, marks, band loss, and impulses in images is presented in this paper. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and evaluation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. However, the restricted window size renders the median operation less effective whenever noise is excessive; in that case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio Improved (SNRI), Percentage Of Noise Attenuated (PONA), and Percentage Of Spoiled Pixels (POSP), and compared with standard algorithms already in use; the improved performance of the proposed algorithm is presented. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms otherwise required for the removal of different artifacts.
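A simplified sketch of the decision-based switching idea (detect extreme-valued pixels, replace them by the median of uncorrupted neighbours, and fall back to the mean when the window is mostly corrupted; the thresholds and switching rule here are illustrative assumptions, not the paper's exact algorithm):

```python
def decision_based_filter(img, low=0, high=255, noise_switch=0.5):
    # Only pixels at the extreme values (assumed corrupted) are replaced;
    # clean pixels pass through untouched, preserving edges and details.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if low < img[y][x] < high:
                continue  # pixel judged uncorrupted
            window = [img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))]
            good = [v for v in window if low < v < high]
            if len(good) >= noise_switch * len(window):
                good.sort()
                out[y][x] = good[len(good) // 2]    # median of clean pixels
            elif good:
                out[y][x] = sum(good) // len(good)  # mean when noise is excessive
            # else: no reliable neighbours; leave the pixel unchanged
    return out
```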
Abstract: This paper presents an evaluation for a wavelet-based
digital watermarking technique used in estimating the quality of
video sequences transmitted over Additive White Gaussian Noise
(AWGN) channel in terms of a classical objective metric, such as
Peak Signal-to-Noise Ratio (PSNR), without the need for the original
video. In this method, a watermark is embedded into the Discrete
Wavelet Transform (DWT) domain of the original video frames
using a quantization method. The degradation of the extracted
watermark can be used to estimate the video quality in terms of
PSNR with good accuracy. We calculated PSNR for video frames
contaminated with AWGN and compared the values with those
estimated using the Watermarking-DWT based approach. It is found
that the calculated and estimated quality measures of the video
frames are highly correlated, suggesting that this method can provide
a good quality measure for video frames transmitted over an AWGN
channel without the need for the original video.
Abstract: For about two decades scientists have been
developing techniques for enhancing the quality of medical images
using the Fourier transform, DWT (Discrete Wavelet Transform), PDE
models, etc. A Gabor wavelet on a hexagonally sampled grid of the
images is proposed in this work. This method has optimal
approximation-theoretic performance for a good-quality image. The
computational cost is considerably low compared to similar processing
in the rectangular domain. As X-ray images contain light-scattered
pixels, instead of a unique sigma, a sigma parameter of 0.5 to 3 is
found to satisfy most image interpolation requirements in terms of a
high Peak Signal-to-Noise Ratio (PSNR), a lower Mean Squared Error
(MSE), and better image quality by adopting a windowing technique.