Abstract: In this paper we propose a method that improves the efficiency of video coding. Our method combines an adaptive GOP (group of pictures) structure with shot-cut detection. We have analyzed different approaches to shot-cut detection with the aim of choosing the most appropriate one. The next step is to align GOP boundaries with the positions of the detected cuts during the encoding process. Finally, the efficiency of the proposed method is confirmed by simulations, and the obtained results are compared with fixed GOP structures of sizes 4, 8, 12, 16, 32, 64, and 128, and with a GOP structure spanning the entire video. The proposed method achieves a bit-rate gain of 0.37% to 50.59% while providing a PSNR (Peak Signal-to-Noise Ratio) gain of 0.26% to 1.33% in comparison with the simulated fixed GOP structures.
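As a concrete illustration of the detection step, the following is a minimal sketch (in Python) of a common baseline shot-cut detector, not necessarily the approach selected in the paper: it flags a cut when the histogram difference between consecutive grayscale frames exceeds a threshold. The bin count and threshold are assumptions.

    import numpy as np

    def detect_cuts(frames, bins=64, threshold=0.4):
        """Flag shot cuts via the normalized histogram difference
        between consecutive grayscale frames (a common baseline,
        not the paper's specific detector)."""
        cuts, prev_hist = [], None
        for i, frame in enumerate(frames):
            hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
            hist = hist / hist.sum()  # normalize to a distribution
            if prev_hist is not None:
                # L1 distance lies in [0, 2]; a large jump marks a cut
                if np.abs(hist - prev_hist).sum() > threshold:
                    cuts.append(i)
            prev_hist = hist
        return cuts

An adaptive-GOP encoder would then force an intra frame (start a new GOP) at each returned index instead of keeping a fixed GOP length.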
Abstract: Tumor classification is a key area of research in the field of bioinformatics. Microarray technology is commonly used in the study of disease diagnosis using gene expression levels. The main drawback of gene expression data is that it contains thousands of genes and very few samples. Feature selection methods are used to select the informative genes from the microarray, and they considerably improve the classification accuracy. In the proposed method, a Genetic Algorithm (GA) is used for effective feature selection. Informative genes are identified based on their T-Statistics, Signal-to-Noise Ratio (SNR), and F-Test values. The initial candidate solutions of the GA are obtained from the top-m informative genes. The classification accuracy of the k-Nearest Neighbor (kNN) method is used as the fitness function for the GA. In this work, kNN and Support Vector Machine (SVM) are used as the classifiers. The experimental results show that the proposed approach is suitable for effective feature selection. With the help of the selected genes, the GA-kNN method achieves 100% accuracy in 4 out of 10 datasets and the GA-SVM method in 5 out of 10. GA combined with kNN or SVM is thus demonstrated to be an accurate method for microarray-based tumor classification.
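To make the fitness step concrete, below is a minimal sketch of a kNN-accuracy fitness function for a binary gene-selection chromosome, written with scikit-learn; the number of folds and neighbors are assumptions, and the GA loop itself (selection, crossover, mutation) is omitted.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def fitness(chromosome, X, y, k=3):
        """GA fitness: cross-validated kNN accuracy on the genes
        selected by a binary chromosome (1 = gene kept)."""
        selected = np.flatnonzero(chromosome)
        if selected.size == 0:
            return 0.0  # an empty gene subset cannot classify
        clf = KNeighborsClassifier(n_neighbors=k)
        return cross_val_score(clf, X[:, selected], y, cv=5).mean()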
Abstract: Photoplethysmography is a simple measurement of the variation in blood volume in tissue. It detects the pulse signal of the heartbeat as well as the low-frequency signal of vasoconstriction and vasodilation. Transmission-type measurement is limited to a few specific positions, for example the index finger, that have a short path length for light. Reflectance-type measurement, by contrast, can be conveniently applied to most parts of the body surface. This study analyzed the factors that determine the quality of the reflectance photoplethysmograph signal, including the emitter-detector distance, wavelength, light intensity, and optical properties of skin tissue. Light-emitting diodes (LEDs) with four different visible wavelengths were used as the light emitters, and a phototransistor was used as the light detector. A micro translation stage adjusted the emitter-detector distance from 2 mm to 15 mm. The reflectance photoplethysmograph signals were measured at different sites. The optimal emitter-detector distance was chosen to provide a large dynamic range for low-frequency drift without signal saturation, together with a high perfusion index. Among the four wavelengths, yellowish-green (571 nm) light with an emitter-detector distance of 2 mm is the most suitable for obtaining a steady and reliable reflectance photoplethysmograph signal.
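Since the perfusion index drives the choice of distance, it may help to recall its standard definition: the pulsatile (AC) amplitude of the PPG waveform relative to its static (DC) level. A minimal sketch follows; the percentile-based peak-to-peak estimate is an assumption, not the authors' procedure.

    import numpy as np

    def perfusion_index(ppg):
        """Perfusion index of a PPG segment: pulsatile (AC)
        amplitude over the static (DC) level, in percent."""
        dc = np.mean(ppg)
        # robust peak-to-peak estimate of the pulsatile component
        ac = np.percentile(ppg, 95) - np.percentile(ppg, 5)
        return 100.0 * ac / dc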
Abstract: In wavelet regression, choosing the threshold value is a crucial issue. A too-large value cuts too many coefficients, resulting in oversmoothing. Conversely, a too-small threshold value allows many coefficients to be included in the reconstruction, giving a wiggly estimate, i.e., undersmoothing. The proper choice of threshold is therefore a careful balance between these principles. This paper gives a very brief introduction to some threshold selection methods: Universal, SURE, EBayes, two-fold cross-validation, and level-dependent cross-validation. A simulation study over a variety of sample sizes, test functions, and signal-to-noise ratios is conducted to compare their numerical performance under three different noise structures. For Gaussian noise, EBayes performs best in all cases for all test functions, while two-fold cross-validation provides the best results in the case of long-tailed noise. For large signal-to-noise ratios, level-dependent cross-validation works well in the correlated-noise case. As expected, increasing either the sample size or the signal-to-noise ratio increases estimation efficiency.
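Of the compared methods, the Universal threshold is the simplest to state: t = sigma * sqrt(2 log n), with the noise level sigma estimated from the finest-scale detail coefficients. A minimal sketch using PyWavelets follows; the wavelet family and decomposition level are illustrative choices.

    import numpy as np
    import pywt  # PyWavelets

    def universal_denoise(y, wavelet='db4', level=4):
        """Soft-threshold wavelet denoising with the universal
        threshold t = sigma * sqrt(2 log n) (VisuShrink)."""
        coeffs = pywt.wavedec(y, wavelet, level=level)
        # noise level from the finest detail coefficients (MAD)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(len(y)))
        coeffs[1:] = [pywt.threshold(c, t, mode='soft')
                      for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)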
Abstract: To compress 2-D images, improve their bit-error performance, and enhance them, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into account the neighborhood relationships of pixels in 2-D images. Each noisy bit-plane value of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA), and a Turbo decoder, with an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values in each pixel neighborhood. The scheme yields highly satisfactory results in both Bit Error Rate (BER) and image-enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to a traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used, and the data rate is increased by up to N-1 times by simply choosing a subset of the bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals, and image compression.
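The bit-plane partitioning that IC-TS starts from can be sketched in a few lines of NumPy; this shows only the decomposition and reassembly, not the Turbo coding or the ICIPA iterations, and the 8-bit depth is an assumption.

    import numpy as np

    def to_bit_planes(img, n_bits=8):
        """Split an integer grayscale image into binary bit planes
        (index 0 = least significant bit)."""
        return [((img >> b) & 1).astype(np.uint8) for b in range(n_bits)]

    def from_bit_planes(planes):
        """Reassemble the image; planes[b] must hold bit b. Zeroing
        the low planes trades resolution for compression."""
        img = np.zeros(planes[0].shape, dtype=np.uint16)
        for b, p in enumerate(planes):
            img |= p.astype(np.uint16) << b
        return img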
Abstract: This paper presents a study of the application of Taguchi design to optimize surface quality in a damper-inserted end milling operation. Maintaining good surface quality usually involves additional manufacturing cost or loss of productivity. The Taguchi design is an efficient and effective experimental method in which a response variable can be optimized, given various factors, using fewer resources than a factorial design. This study included spindle speed, feed rate, and depth of cut as control factors, along with the use of different tools of the same specification, which introduced tool-condition and dimensional variability. An L9(3^4) orthogonal array was used; ANOVA was carried out to identify the significant factors affecting surface roughness, and the optimal cutting combination was determined by seeking the best surface roughness (response) and signal-to-noise ratio. Finally, confirmation tests verified that the Taguchi design was successful in optimizing the milling parameters for surface roughness.
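For surface roughness, the Taguchi signal-to-noise ratio is normally the "smaller-the-better" form, S/N = -10 log10(mean(y^2)). A minimal sketch (the replicate values are made up for illustration):

    import numpy as np

    def sn_smaller_the_better(y):
        """Taguchi S/N ratio for a smaller-the-better response
        such as surface roughness: -10 * log10(mean(y^2))."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # e.g., three roughness replicates (um) from one L9 run
    print(sn_smaller_the_better([0.82, 0.79, 0.85]))

The optimal level of each factor is then the one with the highest mean S/N across the runs in which it appears.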
Abstract: The paper presents a study of the synthetic transmit aperture method applying Golay coded transmission to medical ultrasound imaging. A longer coded excitation increases the total energy of the transmitted signal without increasing the peak pressure, so the signal-to-noise ratio and penetration depth are improved while high ultrasound image resolution is maintained. In this work, a 128-element linear transducer array with 0.3 mm inter-element spacing was used, excited by a single cycle and by 8- and 16-bit Golay coded sequences at a nominal frequency of 4 MHz. A single-element transmit aperture was used to generate a spherical wave covering the full image region, and all elements received the echo signals. A comparison of 2D ultrasound images of a wire phantom and of a tissue-mimicking phantom is presented to demonstrate the benefits of the coded transmission. The results were obtained using a synthetic aperture algorithm with transmit and receive signal correction based on a single-element directivity function.
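The property that makes Golay codes attractive here is that the autocorrelations of a complementary pair sum to a perfect delta, so the received echoes can be compressed without range sidelobes. A minimal sketch of the standard recursive construction and a check of this property (length 16 matches one of the sequences used):

    import numpy as np

    def golay_pair(n_bits):
        """Complementary Golay pair of length n_bits (a power of
        two), built by the standard doubling recursion."""
        a, b = np.array([1.0]), np.array([1.0])
        while len(a) < n_bits:
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(16)
    r = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
    assert np.isclose(r[len(a) - 1], 2 * len(a))       # main lobe = 2N
    assert np.allclose(np.delete(r, len(a) - 1), 0.0)  # zero sidelobes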
Abstract: In diversity-rich environments, such as Ultra-Wideband (UWB) applications, the a priori determination of the number of strong diversity branches is difficult because of the considerably large number of diversity paths, which are characterized by a variety of power delay profiles (PDPs). Several Rake implementations have been proposed in the past to reduce the number of estimated and combined paths. To this end, we introduce two adaptive Rake receivers that combine a subset of the resolvable paths, simultaneously considering the quality of both the total combining output signal-to-noise ratio (SNR) and the individual SNR of each path. These schemes achieve better adaptation to channel conditions than other known receivers, without further increasing the complexity. Their performance is evaluated in different practical UWB channels whose models are based on extensive propagation measurements. The proposed receivers trade off power consumption, complexity, and the performance gain from additional paths, resulting in important savings in power and computational resources.
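As a sketch of the general idea (not the exact selection rules proposed in the paper), the snippet below keeps the strongest resolvable paths, applying both an individual-SNR floor and a stopping rule on the marginal gain in the combined maximal-ratio-combining (MRC) output SNR; both thresholds are assumptions.

    import numpy as np

    def select_paths(branch_snrs, min_path_snr_db=-10.0, rel_gain_db=0.1):
        """Adaptive Rake finger selection (generic sketch): keep the
        strongest paths, dropping those below a per-path SNR floor or
        adding less than rel_gain_db to the combined MRC output SNR."""
        snrs = np.sort(np.asarray(branch_snrs, dtype=float))[::-1]
        snrs = snrs[10 * np.log10(snrs) >= min_path_snr_db]
        selected, total = [], 0.0
        for s in snrs:
            new_total = total + s  # MRC output SNR = sum of branch SNRs
            if total > 0 and 10 * np.log10(new_total / total) < rel_gain_db:
                break  # negligible improvement in total output SNR
            selected.append(s)
            total = new_total
        return selected, total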
Abstract: In this study, a classification-based video super-resolution method using an artificial neural network (ANN) is proposed to enhance low-resolution (LR) frames to high-resolution (HR) frames. The proposed method consists of four main steps: classification, motion-trace volume collection, temporal adjustment, and ANN prediction. A classifier is designed based on the edge properties of a pixel in the LR frame to identify the spatial information. To exploit the spatio-temporal information, a motion-trace volume is collected using motion estimation, which compensates for complex object motion in the LR frames. In addition, a temporal adjustment process is employed on the volume to reduce unnecessary temporal features. Finally, an ANN is applied to each class to learn the complicated spatio-temporal relationship between LR and HR frames. Simulation results show that the proposed method successfully improves both peak signal-to-noise ratio and perceptual quality.
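The abstract does not detail the edge classifier; as an indication of the kind of spatial classification involved, here is a minimal sketch that labels each LR pixel as flat or as one of four quantized edge orientations using Sobel gradients. The threshold and class count are assumptions.

    import numpy as np
    from scipy.ndimage import sobel

    def classify_pixels(lr_frame, edge_thresh=30.0):
        """Label each pixel 0 (flat) or 1..4 (edge orientation
        quantized to 45-degree bins) from Sobel gradients."""
        gx = sobel(lr_frame.astype(float), axis=1)
        gy = sobel(lr_frame.astype(float), axis=0)
        mag = np.hypot(gx, gy)
        ori = (np.round(np.arctan2(gy, gx) / (np.pi / 4)) % 4).astype(int)
        return np.where(mag < edge_thresh, 0, 1 + ori)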
Abstract: In this paper, residue number arithmetic is used in a direct-sequence spread-spectrum system; the system is evaluated, and its bit-error probability is compared to that of a non-residue-number system. The effects of channel bandwidth, PN sequences, multipath, and modulation scheme are studied. A MATLAB program is developed to measure the signal-to-noise ratio (SNR) and the bit-error probability for the various schemes.
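For readers unfamiliar with residue number systems, a minimal sketch of encoding an integer as residues and recovering it via the Chinese Remainder Theorem follows; the moduli are an illustrative choice, not those of the paper (whose implementation is in MATLAB).

    from math import prod

    MODULI = (7, 11, 13)  # pairwise coprime; dynamic range M = 1001

    def rns_encode(x):
        """Represent x by its residues modulo each modulus."""
        return tuple(x % m for m in MODULI)

    def rns_decode(residues):
        """Recover x from its residues via the CRT."""
        M = prod(MODULI)
        x = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
        return x % M

    assert rns_decode(rns_encode(500)) == 500  # any x in [0, M) round-trips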
Abstract: Automated operations based on voice commands will become increasingly important in many applications, including robotics and maintenance operations. However, voice-command recognition rates drop considerably in non-stationary and chaotic noise environments. In this paper, we aim to significantly improve speech recognition rates under non-stationary noise. First, 298 Navy acronyms were selected for automatic speech recognition. Data sets were collected under four types of noisy environment: factory, Buccaneer jet, babble noise in a canteen, and destroyer. Within each noisy environment, four levels (5 dB, 15 dB, 25 dB, and clean) of Signal-to-Noise Ratio (SNR) were introduced to corrupt the speech. Second, a new algorithm to estimate speech and no-speech regions was developed, implemented, and evaluated. Third, extensive simulations were carried out. It was found that the combination of the new algorithm, a proper selection of language model, and customized training of the speech recognizer on clean speech yielded very high recognition rates, between 80% and 90% for the four noisy conditions. Fourth, extensive comparative studies were also carried out.
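The abstract does not specify the speech/no-speech algorithm itself; as a baseline for comparison, here is a minimal energy-based detector in which a frame counts as speech when its energy exceeds an estimated noise floor by a margin. Frame length, percentile, and margin are assumptions.

    import numpy as np

    def speech_regions(x, fs, frame_ms=25, margin_db=6.0):
        """Baseline energy-based speech/no-speech detection (not the
        paper's algorithm): returns a boolean mask per frame."""
        n = int(fs * frame_ms / 1000)
        frames = x[:len(x) // n * n].reshape(-1, n)
        e_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
        noise_floor = np.percentile(e_db, 10)  # quietest frames ~ noise
        return e_db > noise_floor + margin_db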
Abstract: The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied to the casting of aluminium alloys. A good surface finish with the required tolerances and dimensional accuracy can be achieved by optimizing controllable process parameters such as solidification time, molten-metal temperature, filling time, injection pressure, and plunger velocity. Moreover, selecting optimum process parameters also minimizes pressure die casting defects such as porosity, insufficient spread of molten material, and flash. Therefore, a pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of parameter levels, have been determined by Taguchi's parameter design approach. The experiments were performed according to the combinations of process-parameter levels suggested by an L18 orthogonal array. Analyses of variance were performed on the mean and the signal-to-noise ratio to estimate the percent contribution of each process parameter. A confidence interval was also estimated at the 95% confidence level, and three confirmation experiments were performed to validate the optimum levels of the different parameters. Overall, a 2.352% reduction in defects was observed with the suggested optimum process parameters.
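The percent contribution reported from such an ANOVA is a factor's between-level sum of squares as a share of the total sum of squares; a minimal sketch follows (omitting the error-term correction some Taguchi texts apply).

    import numpy as np

    def percent_contribution(response, factor_levels):
        """Percent contribution of one factor in a Taguchi/ANOVA
        table: SS_factor / SS_total * 100."""
        y = np.asarray(response, dtype=float)
        lv = np.asarray(factor_levels)
        grand = y.mean()
        ss_total = np.sum((y - grand) ** 2)
        ss_factor = sum(np.sum(lv == v) * (y[lv == v].mean() - grand) ** 2
                        for v in np.unique(lv))
        return 100.0 * ss_factor / ss_total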
Abstract: A special case of floating-point data representation is the block floating-point format, where a block of operands is forced to share a joint exponent term. This paper deals with the finite-wordlength properties of this data format. The theoretical errors associated with the error model for the block floating-point quantization process are investigated with the help of error distribution functions. A fast and easy approximation formula for calculating the signal-to-noise ratio in quantization to block floating-point format is derived. This representation is found to be a useful compromise between fixed-point and floating-point formats due to its acceptable numerical error properties over a wide dynamic range.
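A minimal sketch of block floating-point quantization and an empirical SNR measurement follows; the 8-bit mantissa and round-to-nearest scheme are assumptions, and the paper's closed-form SNR approximation is not reproduced here.

    import numpy as np

    def bfp_quantize(block, mantissa_bits=8):
        """Quantize a block to block floating point: one shared
        (joint) exponent, round-to-nearest fixed-point mantissas."""
        exp = np.floor(np.log2(np.max(np.abs(block)) + 1e-300)) + 1
        scale = 2.0 ** (mantissa_bits - 1) / 2.0 ** exp
        return np.round(block * scale) / scale

    x = np.random.randn(64)
    q = bfp_quantize(x)
    snr_db = 10 * np.log10(np.sum(x ** 2) / np.sum((x - q) ** 2))
    print(f"measured quantization SNR: {snr_db:.1f} dB")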
Abstract: This paper presents two new efficient algorithms for contour approximation. The proposed algorithms are compared with the Ramer (good quality), Triangle (faster), and Trapezoid (fastest) methods, which are briefly described. The Cartesian coordinates of an input contour are processed in such a manner that the contour is finally represented by a set of selected vertices of its edge. The paper outlines the main idea of the analyzed procedures for contour compression. For comparison, the mean square error and signal-to-noise ratio criteria are used. The computational time of the analyzed methods is estimated from the number of numerical operations. Experimental results are reported in terms of image quality, compression ratio, and speed. The main advantage of the proposed algorithms is the small number of arithmetic operations compared to the existing algorithms.
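Of the reference methods, Ramer's algorithm is the classic: recursively keep the vertex farthest from the chord between the endpoints whenever its distance exceeds a tolerance. A minimal sketch for an open polyline follows (a closed contour would first be split at two extreme points).

    import numpy as np

    def ramer(points, tol):
        """Ramer polyline approximation. points: (N, 2) array of
        contour vertices; returns the retained vertices."""
        p, q = points[0], points[-1]
        dx, dy = q - p
        # perpendicular distance of every vertex from the chord p-q
        d = np.abs(dx * (points[:, 1] - p[1]) - dy * (points[:, 0] - p[0]))
        d = d / (np.hypot(dx, dy) + 1e-12)
        i = int(np.argmax(d))
        if d[i] <= tol:
            return [p, q]  # chord is a good enough approximation
        left = ramer(points[:i + 1], tol)
        return left[:-1] + ramer(points[i:], tol)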
Abstract: An array antenna system with innovative signal processing can improve the resolution of source direction-of-arrival (DoA) estimation. High-resolution techniques take advantage of array antenna structures to better process the incoming waves, and they can also identify the directions of multiple targets. This paper investigates the performance of two DoA estimation algorithms, Capon and MUSIC, on a uniform linear array (ULA). The simulation results show that for both the Capon and MUSIC algorithms, the resolution of the DoA estimates improves as the number of snapshots, the number of array elements, the signal-to-noise ratio, and the separation angle θ between the two sources increase.
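A minimal sketch of the MUSIC pseudospectrum for a ULA follows; half-wavelength element spacing and the scan grid are assumptions. Peaks of the returned spectrum indicate the estimated DoAs, and the same sample covariance matrix also feeds the Capon estimator.

    import numpy as np

    def music_spectrum(X, n_sources, d=0.5):
        """MUSIC pseudospectrum. X: (elements x snapshots) data,
        d: element spacing in wavelengths."""
        M, n_snap = X.shape
        R = X @ X.conj().T / n_snap            # sample covariance
        _, vecs = np.linalg.eigh(R)            # eigenvalues ascending
        En = vecs[:, :M - n_sources]           # noise subspace
        angles = np.linspace(-90.0, 90.0, 361)
        p = np.empty_like(angles)
        for idx, th in enumerate(np.radians(angles)):
            a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))
            p[idx] = 1.0 / np.real(a.conj() @ En @ (En.conj().T @ a))
        return angles, p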
Abstract: This paper presents an investigation of the power penalties imposed by four-wave mixing (FWM) on G.652 (Single-Mode Fiber, SMF), G.653 (Dispersion-Shifted Fiber, DSF), and G.655 (Non-Zero Dispersion-Shifted Fiber, NZDSF) compliant fibers, considering the DWDM grids suggested by ITU-T Recommendations G.692 and G.694.1, with uniform channel spacings of 100, 50, 25, and 12.5 GHz. The mathematical/numerical model assumes undepleted pumping and shows very clearly the deleterious effect of FWM on the performance of DWDM systems, as measured by the signal-to-noise ratio (SNR). The results make it evident that non-uniform channel spacing is practically mandatory for WDM systems based on DSF fibers.
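The vulnerability of uniform grids can be seen by enumerating the FWM products f_i + f_j - f_k (with k different from i and j): with equal spacing the product always lands on channel index i + j - k, so a large number of products fall exactly on live channels. A minimal sketch of this counting argument (not the paper's full undepleted-pump model):

    def fwm_products_on_grid(n_ch):
        """Count FWM products f_i + f_j - f_k (k != i, j) landing on
        each channel of a uniformly spaced grid of n_ch channels."""
        hits = [0] * n_ch
        for i in range(n_ch):
            for j in range(n_ch):
                for k in range(n_ch):
                    if k in (i, j):
                        continue
                    m = i + j - k  # index of the mixing product
                    if 0 <= m < n_ch:
                        hits[m] += 1
        return hits

    print(fwm_products_on_grid(8))  # center channels collect the most hits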
Abstract: In this paper we propose a family of algorithms based on 3rd- and 4th-order cumulants for blind single-input single-output (SISO) Non-Minimum Phase (NMP) Finite Impulse Response (FIR) channel estimation driven by a non-Gaussian signal. The input signal is the signal used in 10GBASE-T (IEEE 802.3an-2006): a Tomlinson-Harashima Precoded (THP) version of random Pulse-Amplitude Modulation with 16 discrete levels (PAM-16). The proposed algorithms are tested on three non-minimum-phase channels for different Signal-to-Noise Ratios (SNRs) and different input data lengths. Numerical simulation results are presented to illustrate the performance of the proposed algorithms.
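The building blocks of such algorithms are sample higher-order cumulants, which vanish for Gaussian data; this is what lets them recover non-minimum-phase channels that second-order statistics cannot. A minimal sketch of the 2nd-, 3rd-, and 4th-order auto-cumulant estimates (the cross-cumulants actually used by the algorithms are omitted):

    import numpy as np

    def cumulants(x):
        """Sample cumulants of a sequence (mean removed): c2 is the
        variance; c3 and c4 are zero for Gaussian data."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        c2 = np.mean(x ** 2)
        c3 = np.mean(x ** 3)
        c4 = np.mean(x ** 4) - 3.0 * c2 ** 2
        return c2, c3, c4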
Abstract: The aim of this research is to evaluate surface roughness and to develop a multiple regression model for surface roughness as a function of cutting parameters during the turning of flame-hardened medium-carbon steel with TiN-Al2O3-TiCN coated inserts. An experimental plan and the signal-to-noise ratio (S/N) were used to relate the influence of the turning parameters to the workpiece surface finish, following the Taguchi methodology. The effects of the turning parameters were studied using the analysis of variance (ANOVA) method. The evaluated parameters were feed, cutting speed, and depth of cut. The most significant interaction among the considered turning parameters was found to be between depth of cut and feed. The average surface roughness (Ra) obtained with the TiN-Al2O3-TiCN coated inserts was about 2.44 μm, with a minimum value of 0.74 μm. In addition, the regression model was able to predict surface roughness values that agreed with the experimental values within reasonable limits.
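A minimal sketch of fitting such a multiple regression model by least squares follows; the purely linear form Ra = b0 + b1*speed + b2*feed + b3*doc is an assumption, since the abstract does not state the model's terms.

    import numpy as np

    def fit_roughness_model(speed, feed, doc, ra):
        """Least-squares fit of Ra = b0 + b1*speed + b2*feed + b3*doc
        (illustrative linear form). Returns the coefficients b0..b3."""
        A = np.column_stack([np.ones(len(speed)), speed, feed, doc])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(ra, float), rcond=None)
        return coeffs  # predict with A @ coeffs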
Abstract: In this paper, the problem of estimating the time delay between two spatially separated noisy sinusoidal signals by system-identification modeling is addressed. The system is assumed to be perturbed by both input and output additive white Gaussian noise. The presence of input noise introduces bias into the time-delay estimates. Normally, the solution requires a priori knowledge of the input-output noise variance ratio. We utilize a self-tuned filter cascaded with the time-delay estimator, thus making the delay estimates robust to input noise. Simulation results are presented to confirm the superiority of the proposed approach at low input signal-to-noise ratios.
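For context, the classical estimator such schemes improve upon is the cross-correlation time-delay estimate, which becomes biased when the reference channel is itself noisy; a minimal sketch of that baseline (not the proposed self-tuned-filter cascade):

    import numpy as np

    def delay_estimate(x, y, fs):
        """Baseline time-delay estimate: the lag maximizing the
        cross-correlation between sensor signals x and y."""
        r = np.correlate(y, x, mode='full')
        lag = int(np.argmax(r)) - (len(x) - 1)  # lag in samples
        return lag / fs  # delay in seconds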
Abstract: In this paper we present an enhanced noise-reduction method for robust speech recognition that combines an Adaptive Gain Equalizer with non-linear spectral subtraction. In the Adaptive Gain Equalizer (AGE) method, the input signal is divided into a number of subbands that are individually weighted in the time domain according to an estimate of the short-time Signal-to-Noise Ratio (SNR) in each subband at every time instant. Rather than suppressing the noise, the method focuses on enhancing the speech. Analysis under various noise conditions for speech recognition showed that the AGE algorithm has an obvious failing point at an SNR of -5 dB, with inadequate noise suppression below this point. This work proposes the implementation of AGE coupled with non-linear spectral subtraction (AGE-NSS) for robust speech recognition. The experimental results show that AGE-NSS outperforms AGE when the SNR drops below the -5 dB level.
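As a sketch of the spectral-subtraction half of such a combination, here is generic over-subtraction with a spectral floor applied to one windowed frame; it is not the exact non-linear rule used in AGE-NSS, and alpha and beta are assumptions.

    import numpy as np

    def spectral_subtract(frame, noise_mag, alpha=2.0, beta=0.01):
        """Over-subtract alpha times the estimated noise magnitude
        spectrum, flooring at beta of the noisy magnitude."""
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        clean = np.maximum(mag - alpha * noise_mag, beta * mag)
        return np.fft.irfft(clean * np.exp(1j * phase), n=len(frame))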