Abstract: Infrared focal plane array (IRFPA) sensors, owing to
their high sensitivity, high frame rate and simple structure, have
become the most widely used detectors in military applications.
However, they suffer from a common problem called fixed pattern
noise (FPN), which severely degrades image quality and limits
infrared imaging applications. It is therefore necessary to perform
non-uniformity correction (NUC) on IR images. Non-uniformity
correction algorithms fall into two main categories:
calibration-based and scene-based. Since both have shortcomings, a
novel non-uniformity correction algorithm based on non-linear
fitting is proposed, which combines the advantages of the two
approaches. Experimental results show that the proposed algorithm
achieves good NUC performance with a lower non-uniformity ratio.
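As background for the calibration-based category, the classical two-point correction can be sketched as follows. This is an illustrative sketch of the standard linear method, not the paper's non-linear fit; the array size, scene level and calibration temperatures are made up:

```python
import numpy as np

def two_point_nuc(raw, low_ref, high_ref, t_low, t_high):
    """Classical two-point NUC: per-pixel gain/offset from two uniform
    blackbody calibration frames at levels t_low and t_high."""
    gain = (t_high - t_low) / (high_ref - low_ref)
    offset = t_low - gain * low_ref
    return gain * raw + offset

# Synthetic FPN: each pixel has its own (unknown) gain and offset
rng = np.random.default_rng(0)
true_gain = 1.0 + 0.1 * rng.standard_normal((4, 4))
true_off = 5.0 * rng.standard_normal((4, 4))
scene = 50.0                                  # uniform scene level
raw = true_gain * scene + true_off            # frame with fixed pattern noise
low = true_gain * 20.0 + true_off             # calibration frame at level 20
high = true_gain * 80.0 + true_off            # calibration frame at level 80
corrected = two_point_nuc(raw, low, high, 20.0, 80.0)
print(np.allclose(corrected, scene))          # FPN removed for linear pixels
```

The limitation that motivates scene-based and non-linear methods is visible here: the correction is exact only while each pixel's response stays linear and stable between calibrations.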
Abstract: For the past couple of decades, weak signal detection
has been of crucial importance in various engineering and
scientific applications, among them wireless communication, radar,
aerospace engineering and control systems. Weak signal detection
usually requires a phase-sensitive detector and a demodulation
module to detect and analyze the signal. This article gives a
preamble to an intrusion detection system that can effectively
detect a weak signal within a multiplexed signal. By carefully
inspecting and analyzing the respective signal, the system can
successfully indicate any peripheral intrusion. The intrusion
detection system (IDS) is a comprehensive and simple approach to
detecting and analyzing any signal that is weakened and garbled by
a low signal-to-noise ratio (SNR). This approach is of significant
importance in applications such as peripheral security systems.
Abstract: Eigenvector methods are gaining increasing acceptance in the area of spectrum estimation. This paper presents a successful attempt at testing and evaluating the performance of two of the most popular types of subspace techniques in determining the parameters of multiexponential signals with real decay constants buried in noise. In particular, the MUSIC (Multiple Signal Classification) and minimum-norm techniques are examined. It is shown that these methods perform almost equally well on multiexponential signals, with MUSIC displaying better-defined peaks.
Abstract: In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation noise conditions. In the first case, we assume that some sensors in the network have a high observation noise variance (noisy sensors). In the second case, a different observation noise variance is assumed for each sensor, which is closer to real scenarios. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present in the network, the performance of conventional distributed adaptive estimation algorithms such as the incremental distributed least mean square (IDLMS) algorithm decreases drastically, and that detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter of each sensor is adjusted according to its observation noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
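A toy sketch of the incremental (ring) LMS idea with noise-dependent step sizes may help fix ideas. The specific step-size rule and all parameter values below are our assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 2.0])             # unknown parameter vector
n_sensors = 8
noise_var = rng.uniform(0.01, 0.5, n_sensors)   # heterogeneous observation noise

# Hypothetical noise-aware step sizes: noisier sensors adapt more cautiously
mu = 0.05 / (1.0 + noise_var / noise_var.min())

w = np.zeros_like(w_true)
for _ in range(200):                  # repeated passes around the ring
    for k in range(n_sensors):        # estimate handed from sensor to sensor
        u = rng.standard_normal(w_true.size)              # regressor at sensor k
        d = u @ w_true + np.sqrt(noise_var[k]) * rng.standard_normal()
        w = w + mu[k] * u * (d - u @ w)                   # local LMS update
print(np.round(w, 2))                 # close to w_true
```

The point of the per-sensor step size is that a sensor with large observation noise contributes less adaptation per visit, which limits the damage noisy sensors do to the shared estimate.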
Abstract: Data rate, tolerable bit error rate or frame error rate,
and range and coverage are the key performance requirements of a
communication link. In this paper the performance of an MFSK link
is analyzed in terms of bit error rate, number of errors and total
amount of data processed. In the proposed communication link model,
implemented using the MATLAB block set, an improvement in BER is
observed. The parameters that affect the BER, and that enable it to
be kept low in an M-ary communication system, are also identified.
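For orientation, the standard closed-form error probability of noncoherent orthogonal M-FSK in AWGN shows how the BER depends on M and Eb/N0. This textbook formula is given for reference only; the paper's results come from a simulated MATLAB link:

```python
import math

def mfsk_ser(M, ebn0_db):
    """Symbol error probability of noncoherent orthogonal M-FSK in AWGN
    (standard closed form)."""
    esn0 = math.log2(M) * 10 ** (ebn0_db / 10)    # Es/N0 from Eb/N0
    return sum((-1) ** (n + 1) * math.comb(M - 1, n) / (n + 1)
               * math.exp(-n * esn0 / (n + 1)) for n in range(1, M))

def mfsk_ber(M, ebn0_db):
    """Bit error probability for orthogonal signalling."""
    return M / (2 * (M - 1)) * mfsk_ser(M, ebn0_db)

# At a fixed Eb/N0, BER falls as M grows, at the cost of bandwidth
for M in (2, 4, 8, 16):
    print(M, mfsk_ber(M, 8.0))
```

This is one of the parameters the abstract alludes to: increasing M trades occupied bandwidth for energy efficiency, keeping the BER low at a given Eb/N0.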
Abstract: Noise level has critical effects on the diagnostic
performance of the signal-averaged electrocardiogram (SAECG),
because the true starting and end points of the QRS complex can be
masked by the residual noise and are sensitive to the noise level.
Several studies and commercial machines have used a fixed number of
heart beats (typically between 200 and 600) or a predefined noise
level (typically between 0.3 and 1.0 μV) in each of the X, Y and Z
leads to perform SAECG analysis. However, the different criteria or
methods used to perform SAECG cause discrepancies in the noise
levels among study subjects. According to the recommendations of
the 1991 ESC, AHA and ACC Task Force Consensus Document on the use
of SAECG, the determination of onset and offset is closely related
to the mean and standard deviation of the noise sample. Hence this
study performs SAECG using consistent root-mean-square (RMS) noise
levels among study subjects and analyzes the noise level effects on
SAECG. It also evaluates the differences between normal subjects
and chronic renal failure (CRF) patients in the time-domain SAECG
parameters.
The study subjects comprised 50 normal Taiwanese subjects and 20
CRF patients. During signal-averaged processing, different RMS
noise levels were applied to evaluate their effects on three
time-domain parameters: (1) filtered total QRS duration (fQRSD),
(2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3)
duration of the low-amplitude signals below 40 μV (LAS40). The
results demonstrate that reducing the RMS noise level increases
fQRSD and LAS40, decreases RMS40, and further increases the
differences in fQRSD and RMS40 between normal subjects and CRF
patients. The SAECG may also become abnormal as the RMS noise level
is reduced. In conclusion, it is essential to establish diagnostic
criteria for SAECG using consistent RMS noise levels in order to
reduce noise level effects.
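The link between beat count and residual noise is the familiar 1/√N reduction of uncorrelated noise under averaging; a small synthetic sketch (the beat template and noise level are illustrative, not SAECG data):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)
beat = np.exp(-((t - 0.5) ** 2) / 0.002)      # crude synthetic QRS template

def rms_residual(n_beats, sigma=1.0):
    """RMS residual noise left after averaging n_beats noisy beats."""
    avg = np.mean([beat + sigma * rng.standard_normal(t.size)
                   for _ in range(n_beats)], axis=0)
    return np.sqrt(np.mean((avg - beat) ** 2))

# Quadrupling the number of averaged beats roughly halves the RMS noise
print(rms_residual(100) / rms_residual(400))   # ~ 2
```

This is why a fixed beat count and a fixed noise target are not interchangeable: subjects with different raw noise levels reach different residual RMS noise after the same number of beats.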
Abstract: The performance of the time-reversal MUSIC algorithm degrades dramatically in the presence of strong noise and multiple scattering (i.e., when scatterers are close to each other), because the number of scatterers is then estimated in error. The present paper provides a new approach to alleviate this problem using an information-theoretic criterion referred to as minimum description length (MDL). The merits of the novel approach are confirmed by numerical examples. The results indicate that time-reversal MUSIC yields accurate estimates of the target locations even with considerable noise and multiple scattering in the received signals.
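The standard MDL criterion for estimating the number of sources from sorted covariance eigenvalues can be sketched generically as follows (this is the classical Wax-Kailath form, not the paper's specific time-reversal formulation; the test eigenvalues are made up):

```python
import numpy as np

def mdl_num_sources(eigvals, n_snapshots):
    """Wax-Kailath MDL estimate of the number of sources from the sorted
    eigenvalues of the sample covariance matrix."""
    lam = np.sort(np.asarray(eigvals, float))[::-1]
    p, N = len(lam), n_snapshots
    crit = []
    for k in range(p):                       # candidate number of sources
        tail = lam[k:]                       # presumed noise eigenvalues
        gm = np.exp(np.mean(np.log(tail)))   # geometric mean
        am = np.mean(tail)                   # arithmetic mean
        crit.append(-N * (p - k) * np.log(gm / am)
                    + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(crit))

# Two dominant "signal" eigenvalues above a flat noise floor
print(mdl_num_sources([9.0, 5.0, 1.02, 0.99, 1.0, 0.98], 200))  # → 2
```

The criterion balances how well the trailing eigenvalues look equal (the likelihood term) against model complexity (the penalty term), so no subjective threshold on the eigenvalue gap is needed.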
Abstract: The information revealed by derivatives can help to
better characterize digital near-end crosstalk signatures with the
ultimate goal of identifying the specific aggressor signal.
Unfortunately, derivatives tend to be very sensitive to even low
levels of noise. In this work we approximated the derivatives of both
quiet and noisy digital signals using a wavelet-based technique. The
results are presented for Gaussian digital edges, IBIS Model digital
edges, and digital edges in oscilloscope data captured from an actual
printed circuit board. Tradeoffs between accuracy and noise
immunity are presented. The results show that the wavelet technique
can produce first derivative approximations that are accurate to
within 5% or better, even under noisy conditions. The wavelet
technique can be used to calculate the derivative of a digital signal
edge when conventional methods fail.
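A generic smoothed-derivative estimator in the spirit of this approach convolves the signal with the derivative of a smoothing kernel. The sketch below uses a derivative-of-Gaussian kernel as a stand-in, since the paper's exact wavelet is not reproduced here; the edge model and noise level are illustrative:

```python
import numpy as np

def smoothed_derivative(x, dt, sigma=10.0):
    """First-derivative estimate via convolution with a derivative-of-
    Gaussian kernel (sigma in samples); smoothing trades bandwidth for
    noise immunity."""
    n = np.arange(-4 * sigma, 4 * sigma + 1)
    dg = -n * np.exp(-(n ** 2) / (2 * sigma ** 2))  # derivative-of-Gaussian
    dg /= np.sum(-n * dg)          # normalize: exact slope on a linear ramp
    return np.convolve(x, dg, mode="same") / dt

t = np.arange(0.0, 10.0, 0.01)
edge = np.tanh(t - 5.0)            # idealized digital edge, max slope 1.0
noisy = edge + 0.05 * np.random.default_rng(3).standard_normal(t.size)
d_est = smoothed_derivative(noisy, 0.01)
print(d_est[t.size // 2])          # close to the true slope of 1.0
```

The accuracy/noise-immunity trade-off mentioned above lives in `sigma`: a wider kernel suppresses more noise but biases the peak derivative of fast edges downward.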
Abstract: This paper presents two new efficient algorithms for
contour approximation. The proposed algorithms are compared with
the Ramer (good quality), Triangle (faster) and Trapezoid (fastest)
methods, which are briefly described. The Cartesian coordinates of
an input contour are processed so that the contour is finally
represented by a set of selected vertices of its edge. The main
idea of the analyzed procedures for contour compression is
presented. For comparison, the mean square error and
signal-to-noise ratio criteria are used. The computational time of
the analyzed methods is estimated as a function of the number of
numerical operations. Experimental results are reported in terms of
image quality, compression ratio and speed. The main advantage of
the analyzed algorithms is the small number of arithmetic
operations compared to existing algorithms.
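The Ramer method used as the quality baseline can be sketched recursively: keep the interior point farthest from the chord if its distance exceeds a tolerance, and recurse on both halves. A minimal sketch; the tolerance and test contour are illustrative:

```python
import numpy as np

def ramer(points, eps):
    """Ramer polyline approximation: split at the point farthest from
    the chord whenever its perpendicular distance exceeds eps."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    a, b = pts[0], pts[-1]
    ab = b - a
    # Perpendicular distance of each interior point to the chord a-b
    d = np.abs(ab[0] * (pts[1:-1, 1] - a[1]) - ab[1] * (pts[1:-1, 0] - a[0]))
    d = d / (np.hypot(ab[0], ab[1]) + 1e-12)
    i = int(np.argmax(d)) + 1
    if d[i - 1] > eps:
        # Recurse on both halves; drop the duplicated split vertex
        return ramer(pts[:i + 1], eps)[:-1] + ramer(pts[i:], eps)
    return [pts[0].tolist(), pts[-1].tolist()]

# A right-angle contour with small jitter: only the three corners survive
contour = [[0, 0], [1, 0.01], [2, -0.01], [3, 0], [3, 1], [3, 2]]
print(ramer(contour, 0.1))  # → [[0.0, 0.0], [3.0, 0.0], [3.0, 2.0]]
```

The recursion is what makes Ramer accurate but comparatively slow, which is the trade-off the faster Triangle and Trapezoid methods address.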
Abstract: The aim of this study was to remove the two principal
noises that disturb the surface electromyography (diaphragm)
signal: the electrocardiogram (ECG) artefact and the power-line
interference artefact. The proposed algorithm is based on a new
Least Mean Square (LMS) Widrow adaptive structure. Such structures
require a reference signal that is correlated with the noise
contaminating the signal. The noise references are extracted as
follows: for the power-line interference, a noise reference is
mathematically constructed from two cosine functions at 50 Hz (the
fundamental) and 150 Hz (the third harmonic); for the ECG artefact,
the reference is estimated with a matching pursuit technique
combined with an LMS structure. Both removal procedures are
achieved without the use of supplementary electrodes. These
filtering techniques are validated on real recordings of the
surface diaphragm electromyography signal. The performance of the
proposed methods is compared with previously published results.
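The power-line part of the scheme can be sketched as adaptive noise cancellation with mathematically constructed references. The sketch below uses sine/cosine pairs at 50 Hz and 150 Hz so the adaptive weights can absorb the unknown phases; the signal model, amplitudes and step size are illustrative assumptions:

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(4)
emg = 0.5 * rng.standard_normal(t.size)          # stand-in for the true EMG
hum = (0.8 * np.cos(2 * np.pi * 50 * t + 0.7)
       + 0.3 * np.cos(2 * np.pi * 150 * t + 1.2))
x = emg + hum                                    # contaminated recording

# Mathematically constructed references at 50 Hz and 150 Hz
refs = np.vstack([np.cos(2 * np.pi * 50 * t), np.sin(2 * np.pi * 50 * t),
                  np.cos(2 * np.pi * 150 * t), np.sin(2 * np.pi * 150 * t)])

mu, w = 0.01, np.zeros(4)
out = np.empty_like(x)
for n in range(t.size):                          # sample-by-sample LMS update
    r = refs[:, n]
    e = x[n] - w @ r                             # error = hum-free estimate
    w += 2 * mu * e * r
    out[n] = e

# Residual hum power drops far below the input hum power
print(np.var(x - emg) / np.var(out - emg))
```

Because the references are synthesized rather than measured, no supplementary electrode is needed for the power-line canceller, matching the abstract's claim.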
Abstract: Self-compacting concrete (SCC), a new kind of high
performance concrete (HPC), was first developed in Japan in 1986.
The development of SCC has made the casting of densely reinforced
and mass concrete convenient and has minimized noise. Fresh
self-compacting concrete flows into the formwork and around
obstructions under its own weight to fill it completely and
self-compact (without any need for vibration), without segregation
or blocking. Eliminating the need for compaction leads to better
quality concrete and a substantial improvement in working
conditions. SCC mixes generally have a much higher content of fine
fillers, including cement, and produce concrete of excessively high
compressive strength, which restricts their field of application to
special concretes only. Using SCC mixes in general concrete
construction practice requires low-cost materials to make the
concrete inexpensive.
Rice husk ash (RHA) has been used as a highly reactive pozzolanic
material to improve the microstructure of the interfacial
transition zone (ITZ) between the cement paste and the aggregate in
self-compacting concrete. Mechanical experiments on RHA-blended
Portland cement concretes revealed that, in addition to the
pozzolanic reactivity of RHA (chemical aspect), the particle
grading (physical aspect) of the cement and RHA mixtures also
exerts a significant influence on the blending efficiency.
The scope of this research was to determine the usefulness of rice
husk ash (RHA) in the development of economical self-compacting
concrete (SCC). The cost of materials is decreased by reducing the
cement content, replacing part of the cement with a waste material
such as rice husk ash.
This paper presents a study on the development of the mechanical
properties, up to 180 days, of self-compacting and ordinary
concretes with rice husk ash (RHA) from a rice paddy milling
industry in Rasht (Iran). Two replacement percentages of cement by
RHA (10% and 20%) and two water/cementitious material ratios (0.40
and 0.35) were used for both the self-compacting and normal
concrete specimens. The results are compared with those of the
self-compacting concrete without RHA in terms of compressive
strength, flexural strength and modulus of elasticity. It is
concluded that RHA has a positive effect on the mechanical
properties at ages beyond 60 days. Based on the results, the
self-compacting concrete specimens show higher values than the
normal concrete specimens in all tests except modulus of
elasticity, and the specimens with 20% replacement of cement by RHA
show the best performance.
Abstract: This paper presents the transient population dynamics of phase singularities in the 2D Beeler-Reuter model. Two stochastic models are examined: (i) the master equation approach with linear transition rates (i.e., λ(n, t) = λ(t)n and μ(n, t) = μ(t)n) and (ii) the nonlinear Langevin equation approach with multiplicative noise. The exact general solution of the master equation with an arbitrary time-dependent transition rate is given, as is the exact solution of the mean-field equation for the nonlinear Langevin equation. It is demonstrated that the transient population dynamics is successfully captured by a generalized logistic equation with a fractional higher-order nonlinear term. The necessity of introducing a time-dependent transition rate into the master equation approach, in order to incorporate the effect of nonlinearity, is also demonstrated.
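For the linear rates λ(n, t) = λ(t)n and μ(n, t) = μ(t)n quoted above, the mean of the master equation obeys a closed equation whose solution is a standard textbook result, sketched here for orientation:

```latex
\frac{d\langle n\rangle}{dt} = \bigl(\lambda(t) - \mu(t)\bigr)\,\langle n\rangle,
\qquad
\langle n(t)\rangle = n_0 \exp\!\left(\int_0^t \bigl(\lambda(s) - \mu(s)\bigr)\,ds\right).
```

Because this mean growth is purely exponential for any time-dependent λ(t), μ(t), saturation behavior such as the generalized logistic dynamics cannot come from the linear-rate master equation alone, which is consistent with the paper's point about needing time-dependent rates to mimic nonlinearity.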
Abstract: This paper addresses the problem of multisensor data
fusion under non-Gaussian channel noise. The advanced M-estimates
are known to be a robust solution, at the cost of some accuracy. In
order to improve the estimation accuracy while maintaining
equivalent robustness, a two-stage robust fusion algorithm is
proposed that first performs a preliminary rejection of outliers
and then an optimal linear fusion. Numerical experiments show that
the proposed algorithm is equivalent to the M-estimates in the case
of uncorrelated local estimates and significantly outperforms the
M-estimates when the local estimates are correlated.
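One way to picture the two-stage idea (an illustrative sketch only, not the paper's exact estimator): reject local estimates that sit far from a robust consensus, then fuse the survivors by inverse-variance weighting:

```python
import numpy as np

def two_stage_fusion(estimates, variances, k=3.0):
    """Two-stage robust fusion sketch: (1) reject estimates farther than
    k robust standard deviations from the median, (2) optimally fuse the
    survivors by inverse-variance weighting (uncorrelated estimates
    assumed)."""
    z = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    med = np.median(z)
    mad = np.median(np.abs(z - med)) / 0.6745    # robust scale estimate
    keep = np.abs(z - med) <= k * mad            # stage 1: outlier rejection
    w = 1.0 / v[keep]                            # stage 2: linear fusion
    return np.sum(w * z[keep]) / np.sum(w)

# Five sensors agree near 10; one is hit by impulsive channel noise
est = [10.1, 9.9, 10.0, 10.2, 9.8, 42.0]
var = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
print(two_stage_fusion(est, var))  # ≈ 10.0, the outlier is discarded
```

After the rejection stage, the remaining estimates are approximately Gaussian, so the linear fusion stage recovers the accuracy that a pure M-estimate trades away.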
Abstract: The general idea behind the filter is to average a pixel
using other pixel values from its neighborhood, while
simultaneously taking care of important image structures such as
edges. The main concern of the proposed filter is to distinguish
between variations of the captured digital image that are due to
noise and those that are due to image structure. Edges give an
image its appearance of depth and sharpness; a loss of edges makes
the image appear blurred or unfocused. However, noise smoothing and
edge enhancement are traditionally conflicting tasks. Since most
noise filtering behaves like a low-pass filter, blurring of edges
and loss of detail are a natural consequence, and techniques that
remedy this inherent conflict often generate new noise during
enhancement.
In this work a new fuzzy filter is presented for the noise
reduction of images corrupted with additive noise. The filter
consists of three stages: (1) fuzzy sets are defined in the input
space to compute a fuzzy derivative in eight different directions,
(2) a set of IF-THEN rules is constructed to perform fuzzy
smoothing according to the contributions of neighboring pixel
values, and (3) fuzzy sets are defined in the output space to
obtain the filtered image with its edges preserved.
Experimental results demonstrate the feasibility of the proposed
approach on two-dimensional objects.
Abstract: Multi-user interference (MUI) is the main cause of system deterioration in the Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. MUI increases with the number of simultaneous users, resulting in a higher bit-error probability and limiting the maximum number of simultaneous users. In addition, phase induced intensity noise (PIIN), which originates from the spontaneous emission of the broadband source in the presence of MUI, severely limits the system performance and should be addressed as well. Since MUI is caused by the interference of simultaneous users, it is desirable to keep the MUI value as small as possible. In this paper, an extensive study of the system performance, characterized by the reduction of MUI and PIIN, is carried out. Vector Combinatorial (VC) code families are adopted as signature sequences for the performance analysis and a comparison with reported codes is performed. The results show that, as the received power increases, the PIIN noise of all the codes increases linearly. They also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and hence the bit error probability. A comparison between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
Abstract: A new code for spectral-amplitude coding optical
code-division multiple-access systems, called the Random Diagonal
(RD) code, is proposed. This code is constructed using a code
segment and a data segment. One of its important properties is that
the cross-correlation at the data segment is always zero, which
means that phase-induced intensity noise (PIIN) is reduced. For the
performance analysis, the effects of phase-induced intensity noise,
shot noise and thermal noise are considered simultaneously.
Bit-error-rate (BER) performance is compared with the Hadamard and
Modified Frequency Hopping (MFH) codes. It is shown that a system
using the new code matrices not only suppresses PIIN but also
allows a larger number of active users compared with the other
codes. Simulation results show that, for point-to-point
transmission with three encoded channels, the RD code has better
BER performance than the other codes; at 0 dBm the PIIN noise is
10⁻¹⁰ and 10⁻¹¹ for RD and MFH, respectively.
Abstract: Image enhancement is one of the most important and challenging preprocessing steps in almost all applications of image processing. Various methods, such as the median filter and the α-trimmed mean filter, have been suggested, and the α-trimmed mean filter has been shown to be a modification of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise: in spite of their simplicity, they achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we suggest a new ε-filter that utilizes the α-trimmed mean. We argue that this new method gives better outcomes than previous ones, and the experimental results confirm this claim.
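One possible reading of the combined filter (a sketch; the window size, ε and α below are illustrative assumptions, not the paper's parameters): average only the neighbors that lie within ε of the center sample, and replace the plain moving average by an α-trimmed mean:

```python
import numpy as np

def eps_trimmed_filter(x, half=3, eps=20.0, alpha=0.2):
    """Epsilon-filter whose averaging step is an alpha-trimmed mean:
    only neighbors within eps of the center sample are pooled, and the
    extremes among them are trimmed before averaging."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for n in range(half, len(x) - half):
        win = x[n - half:n + half + 1]
        sel = np.sort(win[np.abs(win - x[n]) <= eps])  # epsilon gate
        t = int(alpha * len(sel))                      # trimmed per side
        if len(sel) > 2 * t:
            y[n] = sel[t:len(sel) - t].mean()          # alpha-trimmed mean
    return y

rng = np.random.default_rng(5)
step = np.where(np.arange(200) < 100, 0.0, 100.0)      # sharp edge at 100
noisy = step + 3.0 * rng.standard_normal(200)
out = eps_trimmed_filter(noisy)
# Noise shrinks on the flat part while the edge is not smeared across
print(np.std(out[10:80] - step[10:80]) < np.std(noisy[10:80] - step[10:80]))
```

The ε gate is what preserves the edge (samples on the other side differ by far more than ε and are excluded), while the trimming hardens the average against the occasional large noise sample inside the gate.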
Abstract: Face recognition is a technique to automatically
identify or verify individuals. It receives great attention in
identification, authentication, security and many more applications.
Diverse methods have been proposed for this purpose and many
comparative studies have been performed. However, researchers have
not reached a unified conclusion. In this paper, we report an
extensive quantitative accuracy analysis of the four most widely used face
recognition algorithms: Principal Component Analysis (PCA),
Independent Component Analysis (ICA), Linear Discriminant
Analysis (LDA) and Support Vector Machine (SVM) using AT&T,
Sheffield and Bangladeshi people face databases under diverse
situations such as illumination, alignment and pose variations.
Abstract: Neighborhood Rough Sets (NRS) have proven to be an
efficient tool for heterogeneous attribute reduction. However, most
research has focused on complete and noiseless data. In fact, most
information systems are noisy, i.e., filled with incomplete and
inconsistent data. In this paper, we introduce a generalized
neighborhood rough set model, called VPTNRS, to deal with the
problem of heterogeneous attribute reduction in noisy systems. We
generalize the classical NRS model with a tolerance neighborhood
relation and probability theory. Furthermore, we use the
neighborhood dependency to evaluate the significance of a subset of
heterogeneous attributes and construct a forward greedy algorithm
for attribute reduction based on it. Experimental results show that
the model deals efficiently with noisy data.
Abstract: An array antenna system with innovative signal
processing can improve the resolution of source direction of
arrival (DoA) estimation. High-resolution techniques take advantage
of the array antenna structure to better process the incoming
waves, and they are also able to identify the directions of
multiple targets. This paper investigates the performance of two
DoA estimation algorithms, Capon and MUSIC, on a uniform linear
array (ULA). The simulation results show that, for both the Capon
and MUSIC algorithms, the resolution of the DoA estimates improves
as the number of snapshots, the number of array elements, the
signal-to-noise ratio and the separation angle θ between the two
sources increase.
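The MUSIC estimator on a half-wavelength ULA can be sketched as follows (a textbook formulation; the array size, snapshot count, SNR and source angles are illustrative, not the paper's simulation settings):

```python
import numpy as np

def music_spectrum(X, n_src, scan_deg):
    """MUSIC pseudospectrum for a half-wavelength-spaced ULA."""
    M, N = X.shape
    R = X @ X.conj().T / N                       # sample covariance matrix
    _, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :M - n_src]                        # noise-subspace eigenvectors
    th = np.deg2rad(scan_deg)
    A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(th)))  # steering
    return M / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

# Two uncorrelated sources at -10 and 20 degrees, 8-element ULA
rng = np.random.default_rng(6)
M, N = 8, 200
doas = np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) * 0.2
X = A @ S + noise

scan = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(X, 2, scan)
est1 = scan[np.argmax(np.where(scan < 5, P, 0))]   # peak on the left half
est2 = scan[np.argmax(np.where(scan >= 5, P, 0))]  # peak on the right half
print(est1, est2)   # peaks close to -10 and 20 degrees
```

The dependence reported in the abstract is visible in this setup: more snapshots sharpen the sample covariance, more elements enlarge the noise subspace, and higher SNR deepens the null of the steering vectors against it, all of which narrow the pseudospectrum peaks.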