Abstract: Current image-based individual human recognition
methods, such as fingerprint, face, or iris biometrics, generally
require a cooperative subject, views from certain angles, and
physical contact or close proximity. These methods cannot
reliably recognize non-cooperating individuals at a distance in the
real world under changing environmental conditions. Gait, which
concerns recognizing individuals by the way they walk, is a relatively
new biometric without these disadvantages. Because gait is an
inherent characteristic of an individual, it is difficult to imitate or
conceal, which makes it useful in visual surveillance.
In this paper, an efficient gait recognition system for human
identification is proposed that extracts two features: the width
vector of the binary silhouette and the MPEG-7 region-based shape
descriptors. In the proposed method, foreground objects
(i.e., humans and other moving objects) are extracted by estimating
the background with a Gaussian Mixture Model (GMM), and
subsequently a median filtering operation is performed to remove
noise from the background-subtracted image. A moving target
classification algorithm, based on shape and boundary information,
is used to separate human beings (i.e., pedestrians) from other
foreground objects (e.g., vehicles).
Subsequently, the width vector of the outer contour of the binary
silhouette and the MPEG-7 Angular Radial Transform coefficients are
taken as the feature vector. Next, Principal Component Analysis (PCA)
is applied to the feature vector to reduce its dimensionality.
The resulting feature vectors are used to train a Hidden Markov
Model (HMM) for the identification of individuals. The proposed
system is evaluated on gait sequences, and the experimental
results show the efficacy of the proposed algorithm.
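The background-subtraction and median-filtering steps above can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: a single Gaussian per pixel stands in for the full GMM, and all parameter names (`alpha`, `k`) are illustrative.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Per-pixel Gaussian background model (a single-Gaussian
    simplification of the GMM): pixels more than k sigma from the
    background mean are marked foreground; background pixels update
    the running mean and variance."""
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    new_mean = np.where(fg, mean, (1 - alpha) * mean + alpha * frame)
    new_var = np.where(fg, var, (1 - alpha) * var + alpha * (frame - new_mean) ** 2)
    return fg, new_mean, new_var

def median3x3(mask):
    """3x3 median filter used to clean isolated noise in the foreground mask."""
    h, w = mask.shape
    p = np.pad(mask.astype(float), 1, mode='edge')
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)
```

An isolated foreground pixel caused by noise survives the Gaussian test but is removed by the median filter, which is exactly the role the filter plays in the pipeline.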
Abstract: It is known that reducing the harmonic spectra of a drive
also reduces its acoustic noise. Hence, this paper presents a new
random switching strategy, implemented on a DSP TMS320F2812, to decrease
the harmonic spectra of a single-phase switched reluctance motor. The
proposed method combines a random turn-on/turn-off angle
technique with a random pulse width modulation technique. A
harmonic spread factor (HSF) is used to evaluate the random
modulation scheme. Experimental results confirm the effectiveness of
the new method: the harmonic intensity of the output voltage is
lower for the proposed method than for
conventional methods.
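The spread-evaluation idea can be sketched as below. This is an illustrative numpy stand-in, not the paper's drive code: the HSF is taken here as one common definition (the standard deviation of the harmonic magnitudes), and the jittered pulse train only mimics the spectrum-spreading effect of random PWM.

```python
import numpy as np

def harmonic_spread_factor(spectrum):
    """Harmonic spread factor, taken here as the standard deviation of
    the harmonic magnitudes: a well-spread spectrum gives a small HSF,
    energy concentrated in a few harmonics gives a large one."""
    h = np.asarray(spectrum, dtype=float)
    return float(np.sqrt(np.mean((h - h.mean()) ** 2)))

def pwm_spectrum(duty, periods, samples_per_period, jitter=0, rng=None):
    """Magnitude spectrum of a PWM pulse train; a nonzero `jitter`
    randomizes each switching period, mimicking random PWM."""
    if rng is None:
        rng = np.random.default_rng(0)
    signal = []
    for _ in range(periods):
        n = samples_per_period
        if jitter:
            n += int(rng.integers(-jitter, jitter + 1))
        on = int(duty * n)
        signal.extend([1.0] * on + [0.0] * (n - on))
    return np.abs(np.fft.rfft(signal))
```

Comparing `harmonic_spread_factor` on the fixed and jittered spectra illustrates how randomized switching redistributes harmonic energy.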
Abstract: We present a new method to reconstruct a temporally
coherent 3D animation from single or multi-view RGB-D video data
using unbiased feature point sampling. Given RGB-D video data, in
form of a 3D point cloud sequence, our method first extracts feature
points using both color and depth information. In the subsequent
steps, these feature points are used to match two 3D point clouds in
consecutive frames independent of their resolution. Our new motion-
vector-based dynamic alignment method then fully reconstructs
a spatio-temporally coherent 3D animation. We perform extensive
quantitative validation using novel error functions to analyze the
results. We show that, despite the limiting factors of temporal and
spatial noise associated with RGB-D data, it is possible to
faithfully reconstruct a temporally coherent 3D animation from
RGB-D video data.
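The frame-to-frame matching step can be sketched as a nearest-neighbour pairing of feature points. This is a minimal illustration under assumed conventions (points stored as rows of xyz + rgb values, a hypothetical `max_dist` rejection threshold), not the paper's method.

```python
import numpy as np

def match_feature_points(pts_a, pts_b, max_dist=0.1):
    """Resolution-independent matching of two feature point sets
    (N x 6 arrays of xyz + rgb): each point of frame A is paired with
    its nearest point of frame B in the joint color-depth space, and
    pairs farther apart than max_dist are rejected. The returned motion
    vectors are the xyz displacements of the accepted pairs."""
    matches, motion = [], []
    for i, p in enumerate(pts_a):
        d = np.linalg.norm(pts_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            matches.append((i, j))
            motion.append(pts_b[j, :3] - p[:3])
    return matches, np.array(motion)
```

Because matching is point-to-point, it does not require the two clouds to have the same resolution, which is the property the abstract emphasizes.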
Abstract: This paper evaluates the performance of an adaptive noise
cancelling (ANC) based target detection algorithm on a set of real test
data supplied by the Defence Evaluation and Research Agency (DERA,
UK) for a multi-target wideband active sonar echolocation system. The
proposed hybrid algorithm combines an adaptive ANC
neuro-fuzzy scheme in the first stage with an iterative
optimum target motion estimation (TME) scheme in the second. The
neuro-fuzzy scheme is based on the adaptive noise cancelling concept,
with an ANFIS (adaptive neuro-fuzzy inference system) core processor
providing an effectively fine-tuned signal. The resultant output is then
fed to the optimum TME scheme, which is composed of two-gauge
trimmed-mean (TM) levelization, discrete wavelet denoising
(WDeN), and an optimal continuous wavelet transform (CWT) for
further denoising and target identification. Its aim is to recover the
contact signals in an effective and efficient manner and then determine
the Doppler motion (radial range, velocity, and acceleration) at very
low signal-to-noise ratio (SNR). Quantitative results show that
the hybrid algorithm has excellent performance in predicting target
Doppler motion across various target strengths, with a maximum
false detection rate of 1.5%.
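The trimmed-mean levelization stage can be illustrated with a short sketch. This is a generic trimmed mean, assuming (illustratively) that "two-gauge" refers to trimming the same fraction from both tails; the paper's exact gauges are not given in the abstract.

```python
import numpy as np

def trimmed_mean(x, gauge=0.1):
    """Two-sided trimmed mean: drop the lowest and highest `gauge`
    fraction of the samples before averaging, which suppresses the
    impulsive outliers typical of low-SNR sonar returns."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(gauge * len(x))
    return x[k:len(x) - k].mean() if len(x) > 2 * k else x.mean()
```

A single large outlier that would dominate a plain mean is discarded before averaging, which is the levelization effect used ahead of the wavelet stages.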
Abstract: This paper introduces new algorithms for fuzzy clustering
of relational data: FCLARANS, a fuzzy relative of the CLARANS
algorithm, and FCMRANS, a fuzzy c-medoids algorithm based on
randomized search. Unlike the existing fuzzy c-medoids algorithm
(FCMdd), in which the within-cluster dissimilarity of each cluster is
minimized in each iteration by recomputing new medoids given the
current memberships, FCLARANS minimizes the same objective function
as FCMdd by changing the current medoids in such a way
that the sum of the within-cluster dissimilarities is minimized.
Computing new medoids may be affected by noise, because outliers
may enter the computation of the medoids, whereas the choice of medoids
in FCLARANS is dictated by the location of a predominant fraction of
points inside a cluster and is therefore less sensitive to the
presence of outliers. In FCMRANS, the step of computing new
medoids in FCMdd is modified to be based on randomized search.
Furthermore, a new initialization procedure is developed that adds
randomness to the initialization procedure used with FCMdd. Both
FCLARANS and FCMRANS are compared with the robust and
linearized version of fuzzy c-medoids (RFCMdd). Experimental
results on different samples of the Reuters-21578 and 20 Newsgroups
(20NG) corpora and on generated datasets with noise show that FCLARANS is
more robust than both RFCMdd and FCMRANS. Finally, both
FCMRANS and FCLARANS are more efficient than RFCMdd, and their
outputs are almost the same as those of RFCMdd in terms of
classification rate.
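The randomized medoid-swap idea can be sketched as follows. This is a minimal numpy illustration of one FCLARANS-style step (fuzzifier m = 2, standard FCMdd memberships and objective), not the paper's full algorithm or its neighborhood-sampling schedule.

```python
import numpy as np

def fuzzy_memberships(dist, medoids, m=2.0):
    """Standard FCMdd membership update: inverse-distance weighting of
    each point to each medoid with fuzzifier m."""
    d = dist[:, medoids] + 1e-12
    inv = d ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def objective(dist, medoids, m=2.0):
    """Sum of fuzzy within-cluster dissimilarities (the FCMdd objective)."""
    u = fuzzy_memberships(dist, medoids, m)
    return float((u ** m * dist[:, medoids]).sum())

def randomized_swap_step(dist, medoids, rng, m=2.0):
    """One FCLARANS-style step: try replacing a random medoid with a
    random non-medoid, keeping the swap only if the objective drops.
    Because medoids stay actual data points, outliers cannot pull them
    off the data as a recomputed centroid could be pulled."""
    cand = medoids.copy()
    i = rng.integers(len(cand))
    choices = np.setdiff1d(np.arange(dist.shape[0]), cand)
    cand[i] = rng.choice(choices)
    return cand if objective(dist, cand, m) < objective(dist, medoids, m) else medoids
```

By construction the objective never increases across steps, which is the monotonicity the swap-acceptance rule guarantees.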
Abstract: The purpose of this study was to evaluate and
compare new indices based on the discrete wavelet transform
with other spectral parameters proposed in the literature, such
as mean average voltage, median frequency, and ratios between
spectral moments, applied to estimate acute exercise-induced
changes in power output, i.e., to assess peripheral muscle
fatigue during a dynamic fatiguing protocol. Fifteen trained
subjects performed 5 sets of 10 leg presses, with 2
minutes of rest between sets. Surface electromyography was
recorded from the vastus medialis (VM) muscle. Several surface
electromyographic parameters were compared for detecting
peripheral muscle fatigue: mean average voltage
(MAV), median spectral frequency (Fmed), the Dimitrov spectral
index of muscle fatigue (FInsm5), and five further
parameters obtained from the discrete wavelet transform
(DWT) as ratios between different scales. The new wavelet
indices achieved the best Pearson correlation
coefficients with power output changes during acute dynamic
contractions. Their regressions were significantly different
from those of MAV and Fmed. Moreover, they showed the
highest robustness in the presence of additive white Gaussian
noise at different signal-to-noise ratios (SNRs). Therefore,
peripheral impairments assessed by sEMG wavelet indices
may be a relevant factor in the loss of power output
after a dynamic high-load fatiguing task.
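The Dimitrov index compared above can be sketched directly from its definition as a ratio of spectral moments (order -1 over order 5). This is a minimal numpy estimate using a raw periodogram; windowing and detrending choices from the study are omitted.

```python
import numpy as np

def dimitrov_finsm5(signal, fs):
    """Dimitrov spectral fatigue index FInsm5: the spectral moment of
    order -1 divided by the moment of order 5 of the EMG power
    spectrum. The index rises as spectral content shifts toward low
    frequencies with fatigue."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    f = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    f, psd = f[1:], psd[1:]          # drop DC to avoid the f**-1 blow-up
    return float((psd / f).sum() / (psd * f ** 5).sum())
```

A signal whose energy sits at lower frequencies yields a larger FInsm5, which is the fatigue-tracking behaviour the study exploits.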
Abstract: In this work, we improve a previously developed
segmentation scheme aimed at extracting edge information from
speckled images using a maximum likelihood edge detector. The
scheme was based on finding a threshold for the probability density
function of a new kernel defined as the arithmetic mean-to-geometric
mean ratio field over a circular neighborhood set and, in a general
context, is founded on a likelihood random field model (LRFM). The
segmentation algorithm was applied to discriminated speckle areas
obtained using simple elliptic discriminant functions based on
measures of the signal-to-noise ratio with fractional order moments.
A rigorous stochastic analysis was used to derive an exact expression
for the cumulative distribution function of the random field. Based
on this, an accurate probability
of error was derived and the performance of the scheme was
analysed. The improved segmentation scheme performed well for
both simulated and real images and showed superior results to those
previously obtained using the original LRFM scheme and standard
edge detection methods. In particular, the false alarm probability was
markedly lower than that of the original LRFM method with
oversegmentation artifacts virtually eliminated. The importance of
this work lies in the development of a stochastic-based segmentation,
allowing an accurate quantification of the probability of false
detection. Non-visual quantification of misclassification in speckled
medical ultrasound images is relatively new and is of interest to
clinicians.
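The core kernel of the scheme, the arithmetic-to-geometric mean ratio field, can be sketched as below. This is an illustrative numpy version using a square neighborhood in place of the paper's circular set; the window radius is an arbitrary choice.

```python
import numpy as np

def am_gm_ratio_field(img, radius=2):
    """Arithmetic-to-geometric mean ratio over a square neighborhood
    around each pixel. By the AM-GM inequality the ratio is >= 1; it is
    close to 1 in homogeneous speckle and rises across edges, which is
    what makes it usable for thresholding-based edge detection."""
    img = np.asarray(img, dtype=float) + 1e-12
    h, w = img.shape
    p = np.pad(img, radius, mode='reflect')
    lp = np.log(p)
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            lwin = lp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = win.mean() / np.exp(lwin.mean())
    return out
```

On a flat region the field is exactly 1, while a step edge inside the window pushes the ratio above 1, giving the quantity the likelihood threshold is applied to.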
Abstract: In this study, workplace environmental monitoring
systems were established using USN (Ubiquitous Sensor Networks)
and LabVIEW. Although existing direct sampling methods yield
accurate values at the time of measurement, they make
continuous management and supervision difficult and are costly
to use. Consequently, the efficiency and reliability of workplace
management by supervisors are relatively low with those methods.
In this study, systems were established so that information on
workplace environmental factors such as temperature, humidity, and
noise is measured and transmitted to a PC in real time, enabling
supervisors to monitor workplaces through LabVIEW on the PC.
When an accident occurs in a workplace, supervisors can
respond immediately through the monitoring system, which
enables integrated workplace management and the prevention of
safety accidents. By introducing these monitoring systems, safety
accidents due to harmful environmental factors in workplaces can be
prevented, and the systems will also be helpful in finding
correlations between safety accidents and occupational diseases
by comparing and linking the databases established by the monitoring
system with existing statistical data.
Abstract: This paper studies the possibility of successfully
implementing the hollow roller concept in order to minimize the inertial
mass of large bearings, with major benefits in reduced
material consumption, increased power efficiency (in wind power
station applications), increased durability and service life of large
bearing systems, reduced operating noise, resistance to
vibration, a significant reduction of abrasion losses, and a
lower working temperature. For this purpose, an original solution
was developed that reduces the mass, inertial forces,
and moments of large bearings by using hollow rollers. The
research was carried out using finite element analysis
in SolidWorks/Nastran software. The possibility of rapidly
switching the manufacturing system between solid and
hollow cylindrical rollers is also studied.
Abstract: Research shows that the application of probabilistic-statistical methods is unfounded at the early stages of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited, and uncertain. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), built on the basis of statistical fuzzy data, are trained with high accuracy. To make the model of GTE technical condition more adequate, the dynamics of changes in the skewness and kurtosis coefficients are analysed. The analysis of these changes shows how the distributions of GTE operating and output parameters behave, and the parameters of the multiple linear and non-linear generalised models are estimated in the presence of measurement noise by a new recursive Least Squares Method (LSM). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the technical condition of a new in-service aviation engine was estimated.
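The recursive Least Squares Method referred to above can be sketched in its standard textbook form. This is a generic RLS update, not the paper's specific fuzzy variant; the forgetting factor `lam` and the prior scale of `P` are illustrative.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares update: theta holds the regression
    coefficients, P the scaled inverse information matrix, x the
    regressor vector, y the new measurement, lam a forgetting factor
    (lam = 1 means no forgetting)."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    gain = P @ x / (lam + float(x.T @ P @ x))       # Kalman-style gain
    err = y - float(x.T @ theta.reshape(-1, 1))     # prediction error
    theta = theta + (gain * err).ravel()
    P = (P - gain @ x.T @ P) / lam
    return theta, P
```

Fed noiseless samples of a linear model, the recursion converges to the model's true coefficients, which is the estimation behaviour the abstract relies on for identifying the regression equations.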
Abstract: The article deals with the technical support of intracranial single-unit activity measurement. The parameters of the whole measuring set were tested in order to ensure optimal conditions for extracellular single-unit recording. Metal microelectrodes for measuring single-unit activity were tested during animal experiments. From the signals recorded during these experiments, requirements for the measuring set parameters were defined. The impedance parameters of the metal microelectrodes were measured, and the frequency-gain characteristics and intrinsic noise properties of the preamplifier and amplifier were verified. The measurement and description of extracellular single-unit activity could help in the prognosis of brain tissue damage recovery.
Abstract: An attractor neural network on the small-world topology
is studied. A learning pattern is presented to the network; then
a stimulus carrying local information is applied to the neurons, and
the retrieval of block-like structure is investigated. Synaptic noise
decreases the memory capability. The change of stability from local
to global attractors is shown to depend on the long-range character
of the network connectivity.
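The setup can be sketched as a Hopfield-style network on a Watts-Strogatz-like topology. This is an illustrative single-pattern toy model, not the paper's network: parameters (ring neighbours `k`, rewiring probability `p`, update count) are arbitrary, and synchronous sign updates stand in for the paper's dynamics.

```python
import numpy as np

def small_world_adjacency(n, k, p, rng):
    """Small-world topology: a ring lattice with k neighbours on each
    side, each link rewired to a random node with probability p."""
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < p:            # rewire this link
                j = int(rng.integers(n))
            if j != i:
                A[i, j] = A[j, i] = True
    return A

def hopfield_retrieve(pattern, A, stimulus, steps=20):
    """Attractor dynamics on the given topology: Hebbian weights from
    one stored pattern, synchronous sign updates from the stimulus."""
    W = np.outer(pattern, pattern) * A
    s = stimulus.astype(float).copy()
    for _ in range(steps):
        f = W @ s
        s = np.where(f == 0, s, np.sign(f))  # keep state on zero field
    return s
```

The stored pattern is an exact fixed point of these dynamics, and a stimulus corrupted on a local block of neurons is largely corrected, mirroring the block-retrieval experiment the abstract describes.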
Abstract: Prostate cancer is one of the most frequent cancers in men and a major cause of mortality in most countries. Many diagnostic and treatment procedures for prostate disease require accurate detection of prostate boundaries in transrectal ultrasound (TRUS) images. This is a challenging and difficult task due to weak prostate boundaries, speckle noise, and the short range of gray levels. In this paper, a novel method for automatic prostate segmentation in TRUS images is presented. The method involves preprocessing (edge-preserving noise reduction and smoothing) and prostate segmentation. Speckle reduction is achieved using a stick filter, and a top-hat transform is implemented for smoothing. A feed-forward neural network and local binary patterns are used together to find a point inside the prostate object. Finally, the prostate boundary is extracted from the inside point using an active contour algorithm. A number of experiments were conducted to validate this method, and the results show that the new algorithm extracts the prostate boundary with an MSE of less than 4.6% relative to the boundary provided manually by physicians.
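The top-hat step can be illustrated with a minimal numpy sketch of grayscale morphology. A flat square structuring element is assumed for simplicity; the paper's element shape and size are not stated in the abstract.

```python
import numpy as np

def _window_op(img, size, op):
    """Apply a min/max reduction over a size x size sliding window."""
    h, w = img.shape
    r = size // 2
    p = np.pad(img, r, mode='edge')
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(size) for j in range(size)])
    return op(stack, axis=0)

def top_hat(img, size=3):
    """White top-hat transform: the image minus its morphological
    opening (erosion followed by dilation). It keeps bright details
    smaller than the structuring element and flattens the smooth
    background, which is why it works as a smoothing preprocessor."""
    img = np.asarray(img, dtype=float)
    opened = _window_op(_window_op(img, size, np.min), size, np.max)
    return img - opened
```

A constant background maps to zero while a small bright spot survives unchanged, separating fine structure from the smooth intensity field.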
Abstract: The objective of the present communication is to
develop new genuine exponentiated mean codeword lengths and to
study deeply the problem of correspondence between well known
measures of entropy and mean codeword lengths. With the help of
some standard measures of entropy, we have illustrated such a
correspondence. In the literature, we usually come across many
inequalities that are frequently used in information theory.
Keeping this idea in mind, we have developed such inequalities via
a coding theory approach.
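As a concrete instance of such a correspondence (not the paper's new lengths, whose exact form the abstract does not give), the Kraft inequality and Campbell's exponentiated mean codeword length with its Renyi-entropy bounds can be written as:

```latex
% Kraft inequality for a uniquely decodable code over a D-ary alphabet
\sum_{i=1}^{n} D^{-l_i} \le 1 .
% Campbell's exponentiated mean codeword length of order t > 0
L(t) = \frac{1}{t}\,\log_D \sum_{i=1}^{n} p_i\, D^{t\, l_i},
% which for optimal codes is bounded by the Renyi entropy of order
% \alpha = 1/(1+t):
H_{\alpha}(P) \le L(t) < H_{\alpha}(P) + 1 .
```

As t tends to 0, L(t) reduces to the ordinary mean codeword length and the bounds recover Shannon's noiseless coding theorem.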
Abstract: A high-frequency low-power sinusoidal quadrature
oscillator is presented through the use of two 2nd-order low-pass
current-mirror (CM)-based filters, a 1st-order CM low-pass filter and
a CM bilinear transfer function. The technique is relatively simple,
based on (i) the inherent time constants of current mirrors, i.e. the
internal capacitances and the transconductance of a diode-connected
NMOS, and (ii) a simple negative resistance RN formed by the resistor
load RL of a current mirror. Neither external capacitances nor inductances
are required. As a particular example, a 1.9-GHz, 0.45-mW, 2-V
CMOS low-pass-filter-based all-current-mirror sinusoidal quadrature
oscillator is demonstrated. The oscillation frequency (f0) is 1.9 GHz
and is current-tunable over a range of 370 MHz, or 21.6%. The
power consumption is approximately 0.45 mW. The amplitude
matching and the quadrature phase matching are better than 0.05 dB
and 0.15°, respectively. The total harmonic distortion (THD) is less
than 0.3%. At a 2-MHz offset from the 1.9-GHz carrier, the carrier-to-noise
ratio (CNR) is 90.01 dBc/Hz, while the normalized carrier-to-noise
ratio figure of merit (CNRnorm) is 153.03 dBc/Hz. The
ratio of the oscillation frequency (f0) to the unity-gain frequency (fT)
of a transistor is 0.25. Comparisons to other approaches are also
included.
Abstract: By employing BS (Base Station) cooperation we can
increase substantially the spectral efficiency and capacity of cellular
systems. The signals received at each BS are sent to a central unit that
performs the separation of the different MTs (Mobile Terminals) sharing
the same physical channel. However, those signals need to be sampled
and quantized accurately while keeping the backhaul communication
requirements low.
In this paper we consider the optimization of the quantizers for BS
cooperation systems. Four different quantizer types are analyzed and
optimized to allow better SQNR (Signal-to-Quantization Noise
Ratio) and BER (Bit Error Rate) performance.
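The SQNR trade-off at the heart of the quantizer optimization can be sketched with the simplest of the quantizer types, a uniform mid-rise quantizer. This is an illustrative baseline, not one of the paper's four optimized designs.

```python
import numpy as np

def uniform_quantize(x, bits, full_scale=1.0):
    """Mid-rise uniform quantizer with 2**bits levels across
    +/- full_scale; values outside the range saturate."""
    step = 2.0 * full_scale / 2 ** bits
    levels = np.floor(np.asarray(x, dtype=float) / step) + 0.5
    top = 2 ** bits / 2 - 0.5
    return np.clip(levels, -top, top) * step

def sqnr_db(x, xq):
    """Signal-to-quantization-noise ratio in dB."""
    return float(10.0 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2)))
```

For a near-full-scale sine, the measured SQNR tracks the familiar 6.02b + 1.76 dB rule, and each extra bit of backhaul resolution buys roughly 6 dB, which is exactly the accuracy-versus-backhaul trade the paper optimizes.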
Abstract: This paper presents a new method of analog fault diagnosis based on back-propagation neural networks (BPNNs), using wavelet decomposition and fractal dimension as preprocessors. The proposed method can detect and identify faulty components in an analog electronic circuit with tolerances by analyzing its impulse response. Using wavelet decomposition to preprocess the impulse response drastically de-noises the inputs to the neural network. A second preprocessing step based on the fractal dimension extracts unique features, which are then fed to the neural network as inputs for further classification. A comparison of our work with [1] and [6], which also employ back-propagation (BP) neural networks, reveals that our system requires a much smaller network and performs significantly better in fault diagnosis of analog circuits, thanks to the proposed preprocessing techniques.
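The two preprocessors can be sketched in a few lines. These are illustrative choices, not necessarily the paper's: a single Haar DWT level stands in for the wavelet decomposition, and the Katz estimator is one simple choice of fractal-dimension feature.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation (smooth) and detail
    (noise-carrying) coefficients of the impulse response."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def katz_fd(x):
    """Katz fractal dimension of a waveform: log of the number of steps
    over log of the waveform's diameter-to-length ratio. A straight
    line gives FD = 1; irregular responses give larger values."""
    x = np.asarray(x, dtype=float)
    dists = np.sqrt(1.0 + np.diff(x) ** 2)
    L = dists.sum()
    d = np.max(np.sqrt(np.arange(1, len(x)) ** 2 + (x[1:] - x[0]) ** 2))
    n = len(dists)
    return float(np.log10(n) / (np.log10(n) + np.log10(d / L)))
```

Keeping the approximation coefficients implements the de-noising, and the scalar fractal dimension compresses the response into a compact feature, which is why the downstream network can be much smaller.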
Abstract: The median filter is widely used to remove impulse noise
without blurring sharp edges. However, when the noise level increases,
or when thin edges are present, the median filter may perform poorly.
This paper proposes a new filter that detects edges along four possible
directions and then replaces each noise-corrupted pixel with an
estimated noise-free edge median value. Simulations show that the
proposed multi-stage directional median filter provides excellent
suppression of impulse noise in all situations.
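The directional idea can be sketched as follows. This is a minimal single-stage stand-in for the paper's multi-stage detector: the medians along the four directions are computed and the one closest to the original pixel is kept, which removes isolated impulses while preserving thin lines that a full 3x3 median would erase.

```python
import numpy as np

DIRECTIONS = {  # 3-pixel offsets along the four principal edge directions
    'horizontal': [(0, -1), (0, 0), (0, 1)],
    'vertical':   [(-1, 0), (0, 0), (1, 0)],
    'diag_main':  [(-1, -1), (0, 0), (1, 1)],
    'diag_anti':  [(-1, 1), (0, 0), (1, -1)],
}

def directional_median(img):
    """For each pixel, take the median along each of four directions and
    keep the directional median closest to the original value. An
    impulse is outvoted in every direction; a pixel on a thin edge keeps
    the median taken along that edge's direction."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    out = img.copy()
    for i in range(h):
        for j in range(w):
            meds = [np.median([p[i + 1 + di, j + 1 + dj] for di, dj in offs])
                    for offs in DIRECTIONS.values()]
            out[i, j] = min(meds, key=lambda m: abs(m - img[i, j]))
    return out
```

An isolated impulse is replaced by the background value, while a one-pixel-wide line survives because its own direction supplies a consistent median.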
Abstract: This paper presents a source extraction system that can extract only target signals, using constraints on source localization, in on-line systems. The proposed system is a method for enhancing a target signal while suppressing other interfering signals. Its performance is superior to that of other methods, and the extraction of the target source is comparatively complete. The method has a beamforming concept and uses an improved time-frequency (TF) mask-based BSS algorithm to separate a target signal from multiple noise sources. The target sources are assumed to be in front, and the test data were recorded in a reverberant room. The experimental results of the proposed method were evaluated by the PESQ scores of real-recorded sentences and showed noticeable speech enhancement.
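The TF-masking principle the algorithm builds on can be sketched with its simplest member, the ideal binary mask. This is an illustrative oracle version (it assumes separate target and noise references, which a real BSS system must estimate), not the paper's improved mask.

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Plain framewise FFT with a Hann window."""
    w = np.hanning(win)
    starts = range(0, len(x) - win + 1, hop)
    return np.array([np.fft.rfft(x[s:s + win] * w) for s in starts])

def ideal_binary_mask(target_ref, noise_ref, mixture):
    """Ideal-binary-mask separation: keep a time-frequency cell of the
    mixture when the target reference dominates the noise reference
    there, and zero it otherwise."""
    T, N, M = stft(target_ref), stft(noise_ref), stft(mixture)
    mask = (np.abs(T) > np.abs(N)).astype(float)
    return mask * M
```

Because speech and interference rarely occupy the same time-frequency cells, zeroing the noise-dominated cells recovers mostly the target, which is the sparsity assumption behind TF-mask BSS.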
Abstract: In this paper, we present a generic approach to the problem of blindly estimating the parameters of linear and convolutional error-correcting codes. In a non-cooperative context, an adversary has access only to the noisy transmission he has intercepted; the interceptor has no knowledge of the parameters used by the legitimate users. So, before gaining access to the information, he first has to blindly estimate the parameters of the error-correcting code used in the communication. The presented approach has the main advantage that the problem of reconstructing such codes can be expressed in a very simple way. This allows us to derive theoretical bounds on the complexity of the reconstruction process, as well as bounds on the estimation rate. We show that some classical reconstruction techniques are optimal, and we also explain why some of them have theoretical complexities greater than those observed experimentally.
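A classical ingredient of such blind reconstruction, hinted at by the "very simple" problem formulation, is rank computation over GF(2): stacking intercepted codewords of an (n, k) linear code as rows gives a matrix of rank k < n, revealing the code's dimension. The sketch below is an illustrative noiseless toy (generator matrix and code chosen for the example), not the paper's estimator.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = (np.asarray(M) % 2).astype(np.uint8)
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c
        rank += 1
    return rank

# Toy (4, 2) linear code: every codeword is a GF(2) combination of
# the two generator rows, so stacked codewords have rank 2, not 4.
G = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1]])
msgs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
codewords = msgs @ G % 2
```

Sliced at the wrong block length, the intercepted bit matrix is typically full-rank; the rank deficiency appearing only at the true length n is what lets the parameters be estimated blindly.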