Abstract: Noise cancellation is an essential component of many DSP applications. Changes in real-time signals are rapid and swift. In noise cancellation, a reference signal that approximates the noise corrupting the original information signal is obtained and then subtracted from the noise-bearing signal to yield a noise-free signal. This approximation of the noise signal is obtained through adaptive filters, which are self-adjusting. Because the changes in real-time signals are abrupt, this calls for an adaptive algorithm that converges fast and is stable. Least mean square (LMS) and normalized LMS (NLMS) are two widely used algorithms because of their computational and implementation simplicity, but their convergence rates are slow. Adaptive averaging filters (AFA) are also used because they converge quickly, but they are less stable. This paper provides a comparative study of the LMS, NLMS, AFA and new enhanced average adaptive (Average NLMS-ANLMS) filters for a noise cancelling application using speech signals.
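The LMS and NLMS weight updates compared above can be sketched in a few lines of Python. This is an illustrative implementation only, not the paper's code; the filter order and step size are arbitrary choices:

```python
import numpy as np

def adaptive_cancel(d, x, order=8, mu=0.1, eps=1e-8, normalized=False):
    """Cancel noise from d (signal + noise) using the noise reference x.

    LMS update:  w <- w + mu * e[n] * u
    NLMS update: w <- w + (mu / (eps + u.u)) * e[n] * u
    Returns the error signal e, which is the cleaned output.
    """
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]   # newest reference sample first
        y = w @ u                          # current noise estimate
        e[n] = d[n] - y                    # cleaned sample
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e[n] * u               # adapt the filter weights
    return e
```

The only difference between the two algorithms is the step size: NLMS divides it by the instantaneous input power, which is what makes its convergence less sensitive to the input signal level.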
Abstract: In general dynamic analyses, the lower-mode response is of interest; the higher modes of spatially discretized equations generally do not represent real behavior and do not affect the global response much. Some implicit algorithms are therefore introduced to filter out the high-frequency modes using intentional numerical error. The objective of this study is to introduce the P-method and the PC α-method and to compare them with the dissipation method and the Newmark method through stability analysis and a numerical example. The PC α-method gives better accuracy than the other methods because it is based on the α-method and inherits the superior properties of the implicit α-method. In finite element analysis, the PC α-method is more useful than the other methods because it is an explicit scheme that achieves second-order accuracy and numerical damping simultaneously.
Abstract: In this paper, we seek the optimum multiwavelet for compression of electrocardiogram (ECG) signals and then select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. To assess the performance of the different multiwavelets in compressing ECG signals, in addition to factors known in the compression literature such as Compression Ratio (CR), Percent Root-mean-square Difference (PRD), Distortion (D) and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, which studies the morphological relation between the reconstructed and the original ECG signal, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show that the Cardinal Balanced Multiwavelet (cardbal2), with the identity (Id) prefiltering method, is the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
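The distortion measures named above have standard definitions that can be computed directly. The following is a generic sketch of PRD, SNR and CC (the exact normalizations used in the paper may differ):

```python
import numpy as np

def prd(orig, recon):
    """Percent Root-mean-square Difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((orig - recon) ** 2) / np.sum(orig ** 2))

def snr_db(orig, recon):
    """Signal-to-reconstruction-Noise Ratio in dB."""
    return 10.0 * np.log10(np.sum(orig ** 2) / np.sum((orig - recon) ** 2))

def cross_corr(orig, recon):
    """Normalized cross-correlation: morphological similarity in [-1, 1]."""
    o = orig - orig.mean()
    r = recon - recon.mean()
    return float(np.dot(o, r) / np.sqrt(np.dot(o, o) * np.dot(r, r)))
```

Note that with these definitions PRD and SNR are two views of the same error energy: SNR = 20 log10(100 / PRD).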
Abstract: The ideal sinc filter, which ignores the noise statistics, is often applied to generate an arbitrary sample of a bandlimited signal from uniformly sampled data. In this article, an optimal interpolator is proposed; it reaches a minimum mean square error (MMSE) at its output in the presence of noise. The resulting interpolator is thus a Wiener filter, and both the optimal infinite impulse response (IIR) and finite impulse response (FIR) filters are presented. The mean square errors (MSEs) of the interpolator for impulse responses of different lengths are obtained by computer simulations; the results show that the MSEs of the proposed interpolators with a reasonable length are improved by about 0.4 dB under flat power spectra in a noisy environment with a signal-to-noise power ratio (SNR) of 10 dB. As expected, the results also demonstrate improved MSEs for various fractional delays of the optimal interpolator compared with the ideal sinc filter of a fixed-length impulse response.
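The baseline against which the Wiener design is compared, a truncated ideal sinc interpolator for a fractional sample position, can be sketched as follows (tap count and test signal are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sinc_interp(x, t, length=32):
    """Estimate x(t) at a fractional position t (in samples) from uniform
    samples x[n], using a truncated ideal sinc kernel (noise ignored)."""
    n0 = int(np.floor(t))
    taps = np.arange(n0 - length // 2 + 1, n0 + length // 2 + 1)
    taps = taps[(taps >= 0) & (taps < len(x))]   # clip at signal edges
    return float(np.dot(x[taps], np.sinc(t - taps)))
```

Truncating the infinite sinc kernel is what introduces the residual error that, together with noise, the MMSE interpolator is designed to minimize.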
Abstract: The proper operation of common active filters depends on accurate identification of the system distortion. Using a suitable method for current injection and reactive power compensation also increases filter performance. Accordingly, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by applying the o–d–q reference frame to the load current and eliminating the DC part of the d–q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete time domain. The proposed method has been tested using a MATLAB model for a nonlinear load (with Total Harmonic Distortion = 20%). The simulation results indicate that the proposed filter causes a sinusoidal current (THD = 0.15%) to flow through the source. In addition, the results show that the filter tracks the reference current accurately.
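The d–q harmonic identification step can be sketched as follows. This is an illustrative NumPy version under simplifying assumptions: a balanced load with no zero sequence, the mean used as a stand-in for DC removal, and the predictive controller itself omitted:

```python
import numpy as np

def dq_harmonics(ia, ib, ic, theta):
    """Extract the harmonic content of a three-phase current by removing
    the DC part of its d-q components (synchronous reference frame).
    theta is the fundamental angle (rad) at each sample."""
    # amplitude-invariant abc -> d-q (Park) transformation
    d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3)
                 + ic*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3)
                  + ic*np.sin(theta + 2*np.pi/3))
    # the fundamental maps to DC in d-q; subtract it to keep harmonics only
    dh, qh = d - d.mean(), q - q.mean()
    # inverse Park: d-q harmonics back to abc reference currents
    ia_h = dh*np.cos(theta) - qh*np.sin(theta)
    ib_h = dh*np.cos(theta - 2*np.pi/3) - qh*np.sin(theta - 2*np.pi/3)
    ic_h = dh*np.cos(theta + 2*np.pi/3) - qh*np.sin(theta + 2*np.pi/3)
    return ia_h, ib_h, ic_h
```

The key property exploited here is that the fundamental component becomes a constant in the synchronous frame, so removing the DC part of d and q isolates exactly the harmonic content to be compensated.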
Abstract: Image restoration involves the elimination of noise; filtering techniques have been adopted for restoring images over the last five decades. In this paper, we consider the problem of restoring images degraded by a blur function and corrupted by random noise. A method for reducing additive noise in images by explicit analysis of local image statistics is introduced and compared to other noise reduction methods. The proposed method, which makes use of an a priori noise model, has been evaluated on various types of images. Bayesian algorithms and image processing techniques are described and substantiated with experiments in MATLAB.
Abstract: Human amniotic membrane (HAM) is a useful
biological material for the reconstruction of damaged ocular surface.
The processing and preservation of HAM are critical to protect patients undergoing amniotic membrane transplantation (AMT) from cross-infection. For HAM preparation, a human placenta is obtained after an elective cesarean delivery. Before collection, the donor is screened for seronegativity for HCV, HBsAg, HIV and syphilis. After collection, the placenta is washed in balanced salt solution (BSS) in a sterile environment. The amniotic membrane is then separated from the placenta and the chorion while keeping the preparation in BSS. Scraping of the HAM is then carried out manually until all debris is removed and a clear, transparent membrane is acquired. Nitrocellulose membrane filters are then placed on the stromal side of the HAM and cut around the edges, with a little membrane folded towards the other side to make it easy to separate during surgery. The HAM is finally stored in a 1:1 solution of glycerine and Dulbecco's Modified Eagle Medium (DMEM) containing antibiotics. The capped Borosil vials containing HAM are kept at -80°C until use. At the time of surgery, a vial is thawed to room temperature and opened under sterile operating theatre conditions.
Abstract: Many factors affect the success of Machine Learning
(ML) on a given task. The representation and quality of the instance data are first and foremost. If much irrelevant and redundant information is present, or the data are noisy and unreliable, then knowledge discovery during the training phase is more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, etc. The product of data pre-processing is the final training set. It would be convenient if a single sequence of data pre-processing algorithms had the best performance on every data set, but this is not the case. We therefore present the best-known algorithms for each step of data pre-processing so that one can achieve the best performance for a given data set.
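The steps listed above can be sketched as a toy pipeline. This is illustrative only; a real pipeline would typically use a library such as scikit-learn, and the variance threshold here is an arbitrary assumption:

```python
import numpy as np

def preprocess(X, var_threshold=1e-8):
    """Toy pre-processing chain: impute missing values, drop
    (near-)constant features, and z-score normalize the rest."""
    X = X.astype(float).copy()
    # cleaning: replace NaNs with the column mean
    col_mean = np.nanmean(X, axis=0)
    nan_r, nan_c = np.where(np.isnan(X))
    X[nan_r, nan_c] = col_mean[nan_c]
    # feature selection: drop features with (near-)zero variance
    std = X.std(axis=0)
    keep = std > var_threshold
    # normalization: zero mean, unit variance per retained feature
    X = (X[:, keep] - X[:, keep].mean(axis=0)) / std[keep]
    return X, keep
```

The output of such a chain is the final training set; the point of the survey above is that the best choice at each step depends on the data set.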
Abstract: In this paper, a design methodology for implementing a low-power, high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter, reducing the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture, which is analyzed with the Microwind CAD tool to synthesize low-complexity, high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. The experimental results show that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained using efficient general multipliers.
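The core idea of replacing a constant multiplier with shifts and adds can be illustrated with a hypothetical coefficient (0.40625 is our example, not one of the paper's filter coefficients):

```python
def mul_0p40625(x):
    """Multiply an integer by 0.40625 = 13/32 using only shifts and adds:
    13/32 = 1/4 + 1/8 + 1/32  ->  (x >> 2) + (x >> 3) + (x >> 5).
    Exact whenever x is a multiple of 32."""
    return (x >> 2) + (x >> 3) + (x >> 5)
```

An equivalent canonical-signed-digit form, 1/2 - 1/8 + 1/32, trades one of the additions for a subtraction; optimization algorithms like those mentioned above search for the decomposition that minimizes the number of adder/subtractor blocks.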
Abstract: Nowadays, we face network threats that cause enormous damage to the Internet community day by day. In this situation, more and more people try to protect their networks using traditional mechanisms such as firewalls, Intrusion Detection Systems, etc. Among these, the honeypot is a versatile tool for the security practitioner: a system that is meant to be attacked or interacted with in order to gather information about attackers, their motives and their tools. In this paper, we describe the usefulness of low-interaction and high-interaction honeypots and compare them. We then propose a hybrid honeypot architecture that combines low- and high-interaction honeypots to mitigate their drawbacks. In this architecture, the low-interaction honeypot is used as a traffic filter. Activities such as port scanning can be effectively detected by the low-interaction honeypot and stopped there. Traffic that cannot be handled by the low-interaction honeypot is handed over to the high-interaction honeypot. In this case, the low-interaction honeypot acts as a proxy, whereas the high-interaction honeypot offers the optimal level of realism. To prevent the high-interaction honeypot from infection, a containment environment (VMware) is used.
Abstract: Most systems deal with time-varying signals. Power efficiency can be achieved by adapting the system activity to the input signal variations. In this context, an adaptive-rate filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order to follow the local variations of the input signal, thus correlating the processing activity with the signal variations. Interpolation is required in the proposed technique, and a drastic reduction in the interpolation error is achieved by exploiting symmetry during the interpolation process. The processing error of the proposed technique is calculated. The computational complexity of the proposed filtering technique is deduced and compared to that of the classical one. The results promise a significant gain in computational efficiency and hence in power consumption.
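Level-crossing sampling, the mechanism behind the adaptive rate, can be sketched in a simple send-on-delta form (an illustrative variant with uniformly spaced levels; the paper's exact scheme, interpolation and order adaptation are not reproduced here):

```python
import numpy as np

def level_crossing_sample(x, delta=0.1):
    """Keep a sample index whenever the signal moves into a different
    quantization band of width delta (level-crossing sampling)."""
    idx = [0]
    last = x[0]
    for n in range(1, len(x)):
        # a crossing occurred if we moved into a different delta-band
        if np.floor(x[n] / delta) != np.floor(last / delta):
            idx.append(n)
            last = x[n]
    return np.asarray(idx)
```

The effect is exactly the adaptation described above: rapidly varying segments produce many samples, while flat segments produce almost none, so processing activity tracks signal activity.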
Abstract: In this paper a simple watermarking method for
color images is proposed. The proposed method is based on
watermark embedding for the histograms of the HSV planes
using visual cryptography watermarking. The method has been shown to be robust to various image processing operations such as filtering, compression, and additive noise, and to various geometrical attacks such as rotation, scaling, cropping, flipping, and shearing.
Abstract: The aim of this research is to develop a fast and reliable surveillance system based on a personal digital assistant (PDA) device, extending to the PDA the moving-object detection capability already available on personal computers, and to compare the performance of the Background Subtraction (BS) and Temporal Frame Differencing (TFD) techniques to determine which is more suitable for the PDA platform. To reduce noise and prepare frames for the moving-object detection stage, each frame is first converted to a gray-scale representation and then smoothed using a Gaussian low-pass filter. Two moving-object detection schemes, BS and TFD, have been analyzed. The background frame is updated using an Infinite Impulse Response (IIR) filter so that it adapts to varying illumination conditions and geometry settings. To reduce the effect of noise pixels resulting from frame differencing, the morphological filters erosion and dilation are applied. In this research, it has been found that the TFD technique is more suitable than BS for motion detection in terms of speed: on average, TFD is approximately 170 ms faster than the BS technique.
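The two detection schemes can be sketched in a few lines (illustrative NumPy, with an arbitrary threshold and learning rate, on already smoothed gray-scale frames):

```python
import numpy as np

def tfd_mask(prev, curr, thresh=25):
    """Temporal Frame Differencing: flag pixels whose gray-scale change
    between consecutive frames exceeds a threshold."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def update_background(bg, frame, alpha=0.05):
    """First-order IIR (running-average) background update for the BS
    scheme, letting the background adapt to slow illumination changes."""
    return (1 - alpha) * bg + alpha * frame
```

TFD needs only the previous frame and one subtraction per pixel, whereas BS must additionally maintain the IIR background model, which is consistent with the speed difference reported above.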
Abstract: Research interest in Brain Computer Interfaces (BCI) has recently increased. Functional Near Infrared Spectroscopy (fNIRs) is one of the latest technologies that utilize light in the near-infrared range to determine brain activity. Because near-infrared technology allows the design of safe, portable, wearable, non-invasive and wireless monitoring systems, fNIRs monitoring of brain hemodynamics can be of value in helping to understand brain tasks. In this paper, we present results of fNIRs signal analysis indicating that there exist distinct patterns of hemodynamic responses that can identify brain tasks, toward developing a BCI. We applied two different mathematical tools separately: wavelet analysis for preprocessing, as signal filtering and feature extraction, and neural networks as a classification module for recognizing brain tasks. We also discuss and compare our proposal with other methods; it performs better, with an average classification accuracy of 99.9%.
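The wavelet preprocessing stage can be illustrated with the simplest case, a one-level Haar transform. This is only a generic sketch, since the wavelet family actually used in the paper is not restated here:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: returns the smoothed
    approximation and the detail (high-frequency) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass: local averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass: local differences
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar level (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

In a BCI pipeline of this kind, the approximation coefficients act as a denoised signal and the subband energies serve as compact features for the neural network classifier.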
Abstract: A new fast correlation algorithm for calibrating the
wavelength of Optical Spectrum Analyzers (OSAs) was introduced
in [1]. The minima of acetylene gas spectra were measured and
correlated with stored theoretical data [2]. This makes it possible to find the correct wavelength calibration data using a noisy reference spectrum. First tests showed good algorithmic performance for gas line spectra with high noise. In this article, extensive performance tests were made
to validate the noise resistance of this algorithm. The filter and
correlation parameters of the algorithm were optimized for improved
noise performance. With these parameters the performance of this
wavelength calibration was simulated to predict the resulting
wavelength error in real OSA systems. Long term simulations were
made to evaluate the performance of the algorithm over the lifetime
of a real OSA.
Abstract: Speckled images arise when coherent microwave,
optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar
systems, and medical ultrasound systems. Speckle noise is a form of object or target induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted
by speckle noise is complicated by the nature of the noise and is not
as straightforward as detection and estimation in additive noise. In
this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The
motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series
of Laguerre weighted exponential functions, resulting in a doubly
stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form.
It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise, as demonstrated by the Central Limit Theorem.
Abstract: This paper describes a new method for extracting the fetal heart rate (fHR) and the fetal heart rate variability (fHRV) signal non-invasively using abdominal maternal electrocardiogram (mECG) recordings. The extraction is based on the fundamental frequency (Fourier) theorem. The fundamental frequency of the mother's electrocardiogram signal (fo-m) is calculated directly from the abdominal signal. The heart rate of the fetus is usually higher than that of the mother; as a result, the fundamental frequency of the fetal electrocardiogram signal (fo-f) is higher than that of the mother's (fo-f > fo-m). Notch filters to suppress the mother's higher harmonics were designed; then a bandpass filter to target fo-f and reject fo-m was implemented. Although the bandpass filter will pass some other frequencies (harmonics), we show in this study that those harmonics are actually carried on fo-f, and thus have no impact on the evaluation of the beat-to-beat changes (RR intervals). The oscillations of the time-domain extracted signal represent the RR intervals. We also show that zero-to-zero evaluation of the periods is more accurate than peak-to-peak evaluation. The method is evaluated both on simulated signals and on abdominal recordings obtained at different gestational ages.
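The zero-to-zero period evaluation can be sketched as follows (an illustrative implementation: upward zero crossings of the extracted oscillation, with linear interpolation for sub-sample crossing times):

```python
import numpy as np

def zero_crossing_periods(x, fs):
    """Estimate beat-to-beat intervals (in seconds) from the times of
    upward zero crossings of an oscillating extracted signal."""
    s = np.sign(x)
    up = np.where((s[:-1] <= 0) & (s[1:] > 0))[0]
    # linear interpolation between the bracketing samples gives a
    # sub-sample estimate of each crossing time
    t = up + (-x[up]) / (x[up + 1] - x[up])
    return np.diff(t) / fs
```

One reason zero crossings can beat peak detection is visible here: near a crossing the signal is approximately linear and steep, so interpolation is well conditioned, whereas a peak is flat and its location shifts easily under noise.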
Abstract: This paper addresses the design of a predictive networked controller with adaptation to the communication delay. The networked control system contains random delays from sensor to controller and from controller to actuator. The proposed predictive controller includes an adaptation loop which decreases the influence of the communication delay on the control performance. The predictive controller also contains a filter which improves the robustness of the control system. The performance of the proposed adaptive predictive controller is demonstrated by simulation results, in comparison with a PI controller and a predictive controller with constant delay.
Abstract: This paper deals with the optimal design of two-channel recursive parallelogram quadrature mirror filter (PQMF) banks. The analysis and synthesis filters of the PQMF bank are composed of two-dimensional (2-D) recursive digital all-pass filters (DAFs) with nonsymmetric half-plane (NSHP) support region. The design problem can be facilitated by using the 2-D doubly complementary half-band (DC-HB) property possessed by the analysis and synthesis filters. For finding the coefficients of the 2-D recursive NSHP DAFs, we appropriately formulate the design problem to result in an optimization problem that can be solved by using a weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The designed 2-D recursive PQMF bank achieves perfect magnitude response and possesses satisfactory phase response without requiring extra phase equalizer. Simulation results are also provided for illustration and comparison.
Abstract: In this paper, the application of the Mode Matching
(MM) method in the case of photonic crystal waveguide
discontinuities is presented. The structure under consideration is
divided into a number of cells, each of which supports a number of guided and evanescent modes. These modes can be calculated numerically
by an alternative formulation of the plane wave expansion method
for each frequency. A matrix equation is then formed relating the
modal amplitudes at the beginning and at the end of the structure.
The theory is highly efficient and accurate and can be applied to
study the transmission sensitivity of photonic crystal devices due to
fabrication tolerances. The accuracy of the MM method is compared
to the Finite Difference Frequency Domain (FDFD) and the Adjoint
Variable Method (AVM) and good agreement is observed.