Abstract: This paper describes reactive neural control used to
generate phototaxis and obstacle avoidance behavior of walking
machines. It utilizes discrete-time neurodynamics and consists of
two main neural modules: neural preprocessing and modular neural
control. The neural preprocessing network acts as a sensory fusion
unit. It filters sensory noise and shapes sensory data to drive the
corresponding reactive behavior. On the other hand, modular neural
control based on a central pattern generator is applied for locomotion
of walking machines. It coordinates leg movements and can generate
omnidirectional walking. As a result, through a sensorimotor loop
this reactive neural controller enables the machines to explore a
dynamic environment, avoiding obstacles, turning toward a light
source, and then stopping near it.
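As an illustration of the control idea, a discrete-time two-neuron oscillator of the kind commonly used as a neural CPG can be sketched as follows (the gain and frequency parameters here are illustrative assumptions, not the paper's values):

```python
import math

# Minimal discrete-time two-neuron oscillator, a common CPG building
# block. The weights follow a scaled rotation matrix; alpha > 1 makes
# the origin unstable and tanh saturation bounds the activity, giving
# a stable quasi-periodic oscillation. alpha and phi are assumptions.
alpha, phi = 1.2, 0.1 * math.pi
w = [[alpha * math.cos(phi),  alpha * math.sin(phi)],
     [-alpha * math.sin(phi), alpha * math.cos(phi)]]

o = [0.1, 0.1]          # small initial activation to start the oscillation
trajectory = []
for _ in range(200):
    o = [math.tanh(w[0][0] * o[0] + w[0][1] * o[1]),
         math.tanh(w[1][0] * o[0] + w[1][1] * o[1])]
    trajectory.append(tuple(o))
# the two outputs oscillate with a fixed phase shift and can drive
# coordinated leg joints
```

The two phase-shifted periodic outputs can then be post-processed to drive the leg joints for omnidirectional walking.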
Abstract: Methods for organizing web data into groups in order
to analyze web-based hypertext data and facilitate data availability
are very important given the sheer number of documents available
online. The task of clustering web-based document structures
therefore has many applications, e.g., improving information
retrieval on the web, better understanding user navigation behavior,
improving the servicing of web users' requests, and increasing web
information accessibility.
In this paper we investigate a new approach for clustering web-based
hypertexts on the basis of their graph structures. The hypertexts
are represented as so-called generalized trees, which are more
general than the usual directed rooted trees, e.g., DOM trees. As
an important preprocessing step we measure the structural similarity
between the generalized trees using a similarity measure d. Then,
we apply agglomerative clustering to the obtained similarity matrix
in order to create clusters of hypertext graph patterns representing
navigation structures. We evaluate our approach on a data set of
hypertext structures and obtain good results in Web Structure
Mining. Furthermore, we outline the application of our approach
to Web Usage Mining as future work.
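As a minimal sketch of the clustering step, agglomerative (here single-linkage) clustering can be run directly on a precomputed structural-distance matrix; the 4x4 toy distances and the stopping threshold below are illustrative assumptions:

```python
# Toy structural distances d between four hypertext graphs (assumed
# values); documents 0,1 and 2,3 are structurally similar pairs.
d = {(0, 1): 0.1, (0, 2): 0.8, (0, 3): 0.9,
     (1, 2): 0.7, (1, 3): 0.85, (2, 3): 0.15}

def dist(a, b):
    return d[(a, b)] if (a, b) in d else d[(b, a)]

def linkage(ci, cj):
    # single linkage: minimum pairwise distance between cluster members
    return min(dist(a, b) for a in ci for b in cj)

clusters = [{i} for i in range(4)]
threshold = 0.5  # stop merging once the closest pair is farther than this

while len(clusters) > 1:
    i, j = min(((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
               key=lambda p: linkage(clusters[p[0]], clusters[p[1]]))
    if linkage(clusters[i], clusters[j]) > threshold:
        break
    clusters[i] |= clusters[j]
    del clusters[j]
# clusters now groups the structurally similar hypertext graphs
```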
Abstract: Iris recognition is the most accurate, fastest, and
least invasive biometric technology compared to other techniques
based on, for example, fingerprint, face, retina, hand geometry,
voice, or signature patterns. The system developed in this study
has the potential to play a key role in areas of high-risk security
and can provide organizations with a fast and secure means of
granting access to such areas only to authorized personnel. The
aim of this paper is to perform iris region detection and
localization of the inner and outer iris boundaries. The system
was implemented on the Windows platform using the Visual C#
programming language, an easy and efficient tool for image
processing that achieves good performance. In particular, the
system includes two main parts. The first preprocesses the iris
images using Canny edge detection, segments the iris region from
the rest of the image, and determines the location of the iris
boundaries by applying the Hough transform. The proposed system
was tested on 756 iris images from 60 eyes of the CASIA iris
database.
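The boundary-localization step can be sketched as a minimal circular Hough transform over a binary edge map; a synthetic single-circle edge image stands in for a Canny-detected iris here, and the image size, radii range, and angular sampling are illustrative assumptions:

```python
import numpy as np

# Synthetic 64x64 edge map with one circle standing in for an iris
# boundary produced by Canny edge detection.
size, cx, cy, r_true = 64, 32, 30, 12
yy, xx = np.mgrid[0:size, 0:size]
edges = np.abs(np.hypot(xx - cx, yy - cy) - r_true) < 0.5

# Circular Hough transform: every edge pixel votes for all candidate
# centres lying at distance r from it, for each candidate radius r.
radii = list(range(8, 17))
thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
acc = np.zeros((len(radii), size, size), dtype=np.int32)
ys, xs = np.nonzero(edges)
for k, r in enumerate(radii):
    a = np.rint(xs[:, None] - r * np.cos(thetas)[None, :]).astype(int)
    b = np.rint(ys[:, None] - r * np.sin(thetas)[None, :]).astype(int)
    ok = (a >= 0) & (a < size) & (b >= 0) & (b < size)
    np.add.at(acc[k], (b[ok], a[ok]), 1)

# The accumulator peak gives the circle centre and radius.
k, by, bx = np.unravel_index(acc.argmax(), acc.shape)
best_r = radii[k]
```

Running the same voting twice, over the radius ranges expected for the pupil and the limbus, would localize the inner and outer iris boundaries.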
Abstract: The paper presents a method for multivariate time
series forecasting using Independent Component Analysis (ICA) as a preprocessing tool. The idea of this approach is to perform the forecasting in the space of independent components (sources), and then to transform the results back to the original time series
space. The forecasting can be done separately, with a different
method for each component, depending on its time structure. The
paper also reviews the main algorithms for independent component
analysis in the case of instantaneous mixture models, using second-
and higher-order statistics. The method has been applied in
simulation to an artificial multivariate time series with five
components, generated from three sources and a randomly generated
mixing matrix.
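The forecast-in-source-space idea can be sketched as follows; here the mixing matrix is known by construction and a simple AR(1) fit stands in for whatever per-component forecasting method is chosen (the sources, matrix size, and horizon are illustrative assumptions; in practice the unmixing matrix is estimated by an ICA algorithm):

```python
import numpy as np

# Artificial multivariate series: 5 observed components generated
# from 3 sources through a random mixing matrix A.
rng = np.random.default_rng(0)
t = np.arange(300)
S = np.vstack([np.sin(0.1 * t), np.cos(0.05 * t), rng.standard_normal(300)])
A = rng.standard_normal((5, 3))
X = A @ S                                  # observed time series (5 x 300)

# Unmix into sources; W is known by construction here, while in
# practice it would be estimated with an ICA algorithm.
W = np.linalg.pinv(A)
S_hat = W @ X

def ar1_forecast(s, horizon):
    # least-squares AR(1) fit, a stand-in per-component forecaster
    phi = np.dot(s[1:], s[:-1]) / np.dot(s[:-1], s[:-1])
    out, last = [], s[-1]
    for _ in range(horizon):
        last = phi * last
        out.append(last)
    return np.array(out)

horizon = 10
S_fc = np.vstack([ar1_forecast(s, horizon) for s in S_hat])
X_fc = A @ S_fc                            # forecasts back in the original space
```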
Abstract: A new target detection technique is presented in this
paper for the identification of small boats in coastal surveillance. The
proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene to separate any
objects present in the scene from the background. The preprocessing
step results in an image having only the foreground objects, such as
boats, trees and other cluttered regions, and hence reduces the search
region for the correlation step significantly. The processed image
is then fed to the shifted phase-encoded fringe-adjusted joint
transform correlator (SPFJTC), which produces a single, delta-like
correlation peak for each potential target present in the input scene. A
post-processing step involves using a peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results are presented to show that the
proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.
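The PCR decision in the post-processing step can be sketched as the ratio of the correlation peak to the mean of the surrounding clutter; the synthetic correlation plane, the peak-exclusion window, and the decision threshold are illustrative assumptions:

```python
import numpy as np

# Synthetic correlation plane: low clutter plus one delta-like peak,
# standing in for an SPFJTC output for an authorized target.
rng = np.random.default_rng(1)
corr = np.abs(rng.normal(0.0, 0.05, (64, 64)))
corr[20, 40] = 1.0

peak = corr.max()
py, px = np.unravel_index(corr.argmax(), corr.shape)

# Exclude a small window around the peak, then average the clutter.
mask = np.ones_like(corr, dtype=bool)
mask[max(py - 2, 0):py + 3, max(px - 2, 0):px + 3] = False
pcr = peak / corr[mask].mean()

authorized = pcr > 10.0   # assumed decision threshold
```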
Abstract: This paper proposes new hybrid approaches for face
recognition. The Gabor wavelet representation of face images is an
effective approach for both facial action recognition and face
identification. Performing dimensionality reduction and linear
discriminant analysis on the downsampled Gabor wavelet faces can
increase the discriminative ability. The nearest feature space is
extended to various similarity measures. In our experiments, the
proposed Gabor wavelet faces combined with the extended nearest
feature space classifier show very good performance, achieving a
maximum correct recognition rate of 93% on the ORL data set without
any preprocessing step.
Abstract: Liver segmentation is the first significant step in
liver diagnosis from computed tomography images. It separates the
liver structure from the other abdominal organs. Sophisticated
filtering techniques are indispensable for proper segmentation. In
this paper, we employ 3D anisotropic diffusion as a preprocessing
step. While removing image noise, this technique preserves the
significant parts of the image, typically edges, lines, or other
details that are important for the interpretation of the image.
The segmentation task is done by thresholding with automatic
threshold-value selection, and finally false liver regions are
eliminated using 3D connected-component analysis. The results show
that by employing 3D anisotropic filtering, better liver
segmentation can be achieved even though a simple segmentation
technique is used.
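The preprocessing idea can be sketched with a Perona-Malik style diffusion step, shown in 2D for brevity (the paper uses a 3D variant); kappa, the time step, and the iteration count are illustrative choices:

```python
import numpy as np

def anisotropic_diffusion(img, steps=20, kappa=0.1, dt=0.2):
    """Explicit 4-neighbour anisotropic diffusion (Perona-Malik style)."""
    u = img.astype(float).copy()
    def g(d):
        # edge-stopping function: small where gradients are large,
        # so edges diffuse less than noise
        return np.exp(-(d / kappa) ** 2)
    for _ in range(steps):
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step image: the noise should be removed, the edge preserved.
rng = np.random.default_rng(2)
step_img = np.zeros((32, 32))
step_img[:, 16:] = 1.0
noisy = step_img + rng.normal(0, 0.05, step_img.shape)
smoothed = anisotropic_diffusion(noisy)
```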
Abstract: In this paper we propose a segmentation approach based
on the vector quantization technique. We use Kekre's fast codebook
generation algorithm for segmenting low-altitude aerial images.
This is used as a preprocessing step to form segmented homogeneous
regions. Adjacent regions are then merged using color similarity
and volume difference criteria. Experiments performed on real
aerial images of varied nature demonstrate that this approach
results in neither over-segmentation nor under-segmentation. Vector
quantization seems to give far better results than the conventional
on-the-fly watershed algorithm.
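The vector-quantization preprocessing can be sketched as follows; a generalized-Lloyd (k-means style) codebook update stands in here for Kekre's fast codebook generation algorithm, and the two-colour synthetic image, codebook size, and initialization are illustrative assumptions:

```python
import numpy as np

# Synthetic two-colour aerial "image": left half dark, right half red.
rng = np.random.default_rng(3)
img = np.zeros((20, 20, 3))
img[:, 10:] = [0.9, 0.1, 0.1]
img += rng.normal(0, 0.02, img.shape)

pixels = img.reshape(-1, 3)
# deterministic initial codebook: one pixel from each half of the image
codebook = np.vstack([pixels[0], pixels[-1]])
for _ in range(10):
    # assign each pixel to its nearest codevector ...
    d2 = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(1)
    # ... then move each codevector to the mean of its pixels
    codebook = np.vstack([pixels[labels == k].mean(0) for k in range(2)])

# Each codebook index now labels one homogeneous region.
segments = labels.reshape(20, 20)
```

Adjacent regions sharing a codebook label would then be merged by the colour-similarity and volume-difference criteria.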
Abstract: Heavy rainfall greatly affects the aerodynamic performance of aircraft. Many aircraft accidents have been caused by the degradation of aerodynamic efficiency in heavy rain. In this paper we study the effects of heavy rain on the aerodynamic efficiency of the NACA 64-210 and NACA 0012 airfoils. For our analysis, a CFD method and a preprocessing grid generator are used as the main analytical tools, and rain is simulated via the Discrete Phase Model (DPM) of the two-phase flow approach. Raindrops are assumed to be non-interacting, non-deforming, non-evaporating, and non-spinning spheres. Both airfoil sections exhibited a significant reduction in lift and an increase in drag for a given lift condition in simulated rain. The most significant difference between the two airfoils was the sensitivity of the NACA 64-210 to liquid water content (LWC), whereas the performance losses of the NACA 0012 in the rain environment are not a function of LWC. It is expected that the quantitative information gained in this paper will be useful to the operational airline industry, and that greater effort, such as small-scale and full-scale flight tests, should be put in this direction to further improve aviation safety.
Abstract: This paper proposes new enhancement models to the
methods of nonlinear anisotropic diffusion to greatly reduce speckle
and preserve image features in medical ultrasound images. By
incorporating local physical characteristics of the image, in this case
scatterer density, in addition to the gradient, into existing
tensor-based image diffusion methods, we were able to greatly improve the
performance of the existing filtering methods, namely edge
enhancing (EE) and coherence enhancing (CE) diffusion. The new
enhancement methods were tested using various ultrasound images,
including phantom and some clinical images, to determine the
amount of speckle reduction, edge, and coherence enhancements.
Scatterer density weighted nonlinear anisotropic diffusion
(SDWNAD) for ultrasound images consistently outperformed its
traditional tensor-based counterparts that use gradient only to weight
the diffusivity function. SDWNAD is shown to greatly reduce
speckle noise while preserving image features such as edges,
orientation coherence, and scatterer density. SDWNAD's superior
performance over nonlinear coherent diffusion (NCD), speckle
reducing anisotropic diffusion (SRAD), the adaptive weighted median
filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with
contrast enhancement (WSCE) makes it an ideal preprocessing step
for automatic segmentation in ultrasound imaging.
Abstract: The effectiveness of Artificial Neural Network (ANN)
and Support Vector Machine (SVM) classifiers for fault diagnosis
of rolling element bearings is presented in this paper. The
characteristic features of vibration signals from a rotating
driveline, run both in its normal condition and with faults
introduced, were used
as input to the ANN and SVM classifiers. Simple statistical
features such as the standard deviation, skewness, and kurtosis of
the time-domain vibration signal segments, along with the signal
peak and the peak of the power spectral density (PSD), are used as
the classifier inputs. The effect of preprocessing the vibration
signal with the Discrete Wavelet Transform (DWT) prior to feature
extraction is also studied. The experimental results show that the
SVM classifier identifies the bearing condition better than the
ANN, and that DWT preprocessing of the vibration signal enhances
the effectiveness of both classifiers.
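The time-domain feature extraction described above can be sketched as follows; the synthetic vibration segment is an illustrative assumption:

```python
import numpy as np

# Synthetic vibration segment: a 50 Hz tone plus noise stands in for
# a bearing vibration signal.
rng = np.random.default_rng(4)
t = np.linspace(0, 0.1, 1000)
segment = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 0.1, 1000)

def features(x):
    # simple statistical features of a time-domain segment
    mu, sd = x.mean(), x.std()
    z = (x - mu) / sd
    return {
        "std": sd,
        "skewness": (z ** 3).mean(),
        "kurtosis": (z ** 4).mean() - 3.0,  # excess kurtosis
        "peak": np.abs(x).max(),
    }

feats = features(segment)
# feature vectors like this, one per segment, form the classifier input
```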
Abstract: In this paper, we propose a practical digital music matching system that is robust to variation in sound quality. The proposed system is subdivided into two parts: client and server. The client part consists of the input, preprocessing, and feature extraction modules. The preprocessing module, including the music onset module, corrects the gap that occurs on the time axis between identical songs in different formats. The proposed method uses delta-grouped Mel frequency cepstral coefficients (MFCCs) to extract music features that are robust to changes in sound quality. According to the number of sound quality formats (SQFs) used, a music server is constructed with a feature database (FD) that contains different sub feature databases (SFDs). When the proposed system receives a music file, the selection module selects an appropriate SFD from the feature database; the selected SFD is subsequently used by the matching module. In this study, we used 3,000 queries for matching experiments in three cases with different FDs. In each case, we used 1,000 queries constructed by mixing 8 SQFs and 125 songs. The success rate of music matching improved from 88.6% when using a single SFD to 93.2% when using quadruple SFDs. This experiment shows that the proposed method is robust to various sound qualities.
Abstract: Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, not rarely with inconclusive and often with contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods: principal component analysis, linear discriminant analysis, and independent component analysis, and compares them on equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
Abstract: A large amount of valuable information is available in
plain text clinical reports. New techniques and technologies are
applied to extract information from these reports. In this study, we
developed a domain-based software system to transform 600
Otorhinolaryngology discharge notes to a structured form for
extracting clinical data from the discharge notes. In order to
decrease the system's processing time, the discharge notes were
transformed into a data table after preprocessing. Several word
lists were constituted to identify common sections in the discharge
notes, including patient history, age, problems, and diagnosis. An
n-gram method was used to discover term co-occurrences within each
section. Using this method, a dataset of concept candidates was
generated for the validation step, and then the Predictive Apriori
algorithm for Association Rule Mining (ARM) was applied to validate
the candidate concepts.
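The n-gram candidate generation can be sketched in a few lines; the sample sentence is invented:

```python
# Generate word n-gram candidates from a (fictional) section of a
# discharge note; such candidates would later be validated with
# association rule mining.
text = "patient reports chronic otitis media with effusion"
tokens = text.split()

def ngrams(tokens, n):
    # all contiguous runs of n words
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

candidates = ngrams(tokens, 2) + ngrams(tokens, 3)
# multiword terms such as "otitis media" appear among the candidates
```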
Abstract: In this study, an investigation of digestive diseases has been carried out in which sound acts as the detection medium. Following preprocessing, the extracted signal is registered in the cepstrum domain. After classification of the digestive diseases, the system selects random samples based on their features and generates the nonstationary, long-term signals of interest via the inverse transform in the cepstral domain, which are presented in digital and sonic form as the output. This structure is updatable; in other words, upon receiving a new signal, the corresponding disease classification is updated in the feature domain.
Abstract: Nowadays, hand vein recognition has attracted increasing attention in biometric identification systems. Generally, the hand vein image is acquired with low contrast and irregular illumination. Accordingly, with good preprocessing of the hand vein image, features can be extracted easily even with simple binarization. In this paper, an approach is proposed to improve the quality of the hand vein image. First, a brief survey of existing enhancement methods is given. Then a Radon-like features method is applied to preprocess the hand vein image. Finally, experimental results show that the proposed method is effective and reliable in improving hand vein images.
Abstract: A sequential decision problem, based on the task of identifying the species of trees given acoustic echo data collected from them, is considered with well-known stochastic classifiers, including single and mixture Gaussian models. Echoes are processed with a preprocessing stage based on a model of mammalian cochlear filtering, using a new discrete low-pass filter characteristic. Stopping-time performance of the sequential decision process is evaluated and compared. It is observed that the new low-pass filter processing results in faster sequential decisions.
Abstract: In text categorization, the most widely used method
for document representation is based on word frequency vectors,
called the VSM (Vector Space Model). This representation is based
only on the words in the documents and therefore loses any "word
context" information found in the document. In this article we
compare the classical method of document representation with a
method called the Suffix Tree Document Model (STDM), which
represents documents in the suffix tree format. For the STDM model
we propose a new approach to document representation and a new
formula for computing the similarity between two documents.
Specifically, we propose to build the suffix tree for only two
documents at a time. This approach is faster, has lower memory
consumption, and uses the entire document representation without
requiring methods for disposing of nodes. We also propose a
formula for computing the similarity between documents, which
substantially improves clustering quality. This representation
method was validated using HAC (Hierarchical Agglomerative
Clustering). In this context we also experiment with the influence
of stemming in the document preprocessing step and highlight the
difference between similarity and dissimilarity measures in
finding "closer" documents.
Abstract: Image enhancement is the most important and challenging preprocessing step for almost all applications of image processing. Various methods, such as the median filter and the α-trimmed mean filter, have been suggested. It has been shown that the α-trimmed mean filter is a modification of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise; in spite of their simplicity, they achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we suggest a new ε-filter which utilizes the α-trimmed mean. We argue that this new method gives better outcomes than previous ones, and the experimental results confirm this claim.
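The α-trimmed mean at the heart of the proposed filter can be sketched as follows (the window contents and α are illustrative):

```python
import numpy as np

def alpha_trimmed_mean(window, alpha):
    # sort the window, drop the alpha smallest and alpha largest
    # samples, and average the rest; alpha = 0 gives the plain mean,
    # and the maximal alpha approaches the median
    s = np.sort(np.asarray(window, dtype=float).ravel())
    return s[alpha:len(s) - alpha].mean()

window = [3, 5, 4, 200, 4, 5, 3, 4, 5]    # one impulse-noise outlier
plain = alpha_trimmed_mean(window, 0)     # pulled far up by the outlier
trimmed = alpha_trimmed_mean(window, 2)   # outlier rejected
```

In the proposed filter this trimmed mean replaces the moving average inside the ε-filter window.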
Abstract: Smoothing, or filtering, of data is the first
preprocessing step for noise suppression in many applications
involving data analysis. The moving average is the most popular
method of smoothing data; its generalization led to the development
of the Savitzky-Golay filter. Many window smoothing methods were
developed by convolving the data with different window functions
for different applications; the most widely used window functions
are the Gaussian and Kaiser windows. Function approximation of the
data by polynomial regression, Fourier expansion, or wavelet
expansion also yields smoothed data. Wavelets also smooth the data
to a great extent by thresholding the wavelet coefficients. Almost
all smoothing methods destroy peaks and flatten them as the support
of the window is increased. In certain applications it is desirable
to retain peaks while smoothing the data as much as possible. In
this paper we present a methodology called peak-wise smoothing that
will smooth the data to any desired level without losing the major
peak features.
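The local polynomial regression idea behind Savitzky-Golay smoothing, mentioned above, can be sketched directly (window length, polynomial degree, and the test signal are illustrative choices; a moving average is the degree-0 special case):

```python
import numpy as np

def savgol_smooth(y, window=7, degree=2):
    # fit a low-degree polynomial to each window and keep its value at
    # the window centre (local polynomial regression); edge samples
    # without a full window are left unchanged
    half = window // 2
    x = np.arange(-half, half + 1)
    out = np.asarray(y, dtype=float).copy()
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x, y[i - half:i + half + 1], degree)
        out[i] = np.polyval(coeffs, 0.0)
    return out

# Noisy sine: smoothing should pull the samples back toward the curve.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
noisy = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, 200)
smooth = savgol_smooth(noisy)
```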