Abstract: Today, multimedia data is mostly transmitted and
processed in compressed form. Because of the decoding procedure and
the filtering required for edge detection, the feature extraction process
of the MPEG-7 Edge Histogram Descriptor is time-consuming and
computationally expensive. To improve the efficiency of compressed-image
retrieval, this paper proposes a new edge histogram generation
algorithm that operates in the DCT domain. Using the edge information
provided by only two AC coefficients of each DCT block, edge
directions and strengths are obtained directly in the DCT domain. The
experimental results demonstrate that our system performs well
in terms of retrieval efficiency and effectiveness.
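The core idea above, reading edge orientation out of the first two AC coefficients of an 8x8 DCT block, can be sketched as follows. This is an illustrative approximation, not the paper's actual algorithm; the coefficient-to-direction mapping is an assumption.

```python
import numpy as np

def dct_block_edge(dct_block):
    """Estimate edge strength and direction of one 8x8 DCT block from
    its first two AC coefficients (illustrative; mapping is assumed)."""
    v = dct_block[0, 1]  # AC(0,1) responds mainly to vertical edges
    h = dct_block[1, 0]  # AC(1,0) responds mainly to horizontal edges
    strength = float(np.hypot(h, v))
    direction = float(np.degrees(np.arctan2(h, v)) % 180.0)
    return strength, direction
```

Because no inverse DCT or spatial filtering is needed, such a per-block test is far cheaper than decoding the image and running an edge detector.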
Abstract: State-of-the-art methods for secondary structure prediction (Porter, Psi-PRED, SAM-T99sec, Sable) and solvent accessibility prediction (Sable, ACCpro) use evolutionary profiles represented by the position-specific scoring matrix (PSSM). It has been demonstrated that evolutionary profiles are the most important features in the feature space for these predictions. Unfortunately, applying the PSSM matrix leads to high-dimensional feature spaces that may create problems with parameter optimization and generalization. Several recently published studies suggested that applying feature extraction to the PSSM matrix may improve secondary structure predictions. However, none of the top-performing methods considered here utilizes dimensionality reduction to improve generalization. In the present study, we used simple and fast feature selection methods (t-statistics, information gain) that allow us to decrease the dimensionality of the PSSM matrix by 75% and to improve generalization in secondary structure prediction compared to the Sable server.
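A t-statistic filter of the kind named above can be sketched as a ranking of features by their two-class separation; this is a generic sketch, not the study's exact procedure, and the 25% retention fraction mirrors the 75% reduction reported.

```python
import numpy as np

def t_statistic_select(X, y, keep_frac=0.25):
    """Rank features by a two-class t-statistic and keep the top
    fraction (illustrative filter-style feature selection)."""
    a, b = X[y == 0], X[y == 1]
    num = np.abs(a.mean(axis=0) - b.mean(axis=0))
    den = np.sqrt(a.var(axis=0, ddof=1) / len(a)
                  + b.var(axis=0, ddof=1) / len(b)) + 1e-12
    t = num / den
    k = max(1, int(keep_frac * X.shape[1]))
    return np.argsort(t)[::-1][:k]  # indices of the retained features
```

Filters like this are attractive here precisely because they are fast and do not require retraining the predictor during selection.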
Abstract: A noteworthy point in the advancement of Brain Machine Interface (BMI) research is the ability to accurately extract features of brain signals and to classify them into targeted control actions with the simplest possible procedures, since the expected beneficiaries are disabled persons. In this paper, a new feature extraction method combining adaptive band-pass filters and adaptive autoregressive (AAR) modelling is proposed and applied to the classification of right and left motor imagery signals extracted from the brain. The adaptive band-pass filter improves the characterization of the autocorrelation functions of the AAR models, as it enhances and strengthens the EEG signal, which is noisy and stochastic in nature. Experimental results on the Graz BCI data set show that, with the proposed feature extraction method, LDA and SVM classifiers outperform other AAR approaches of the BCI 2003 competition in terms of mutual information (the competition criterion) and misclassification rate.
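The AAR features above are time-varying autoregressive coefficients; as a simplified stand-in, an ordinary AR fit by least squares over a window illustrates what the coefficient vector looks like (the adaptive, sample-by-sample update of true AAR is omitted here).

```python
import numpy as np

def ar_features(x, order=6):
    """Fit AR coefficients of a 1-D signal by least squares; a static
    stand-in for the adaptive AR (AAR) features described above."""
    # Column k holds the signal delayed by (k+1) samples.
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1]
                         for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

In a BCI pipeline, one such coefficient vector per channel and time window would form the feature vector handed to the LDA or SVM classifier.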
Abstract: The myoelectric signal (MES) is one of the biosignals
used to help humans control equipment. Recent approaches
to MES classification for controlling prosthetic devices with pattern
recognition techniques have revealed two problems: first, the classification
performance of the system starts to degrade as the number of
motion classes increases; second, the additional methods introduced to
solve the first problem increase the computational cost of a
multifunction myoelectric control system. In an effort to solve these
problems and to achieve a design feasible for real-time implementation
with high overall accuracy, this paper presents a new method for
feature extraction in MES recognition systems. The method extracts
features using the Wavelet Packet Transform (WPT) applied to the MES
from multiple channels, and then employs the Fuzzy c-means (FCM)
algorithm to generate a measure that judges the suitability of features
for classification. Finally, Principal Component Analysis (PCA) is
used to reduce the size of the data before computing the
classification accuracy with a multilayer perceptron neural network.
The proposed system produces powerful classification results (99%
accuracy) using only a small portion of the original feature set.
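The PCA reduction step named above can be sketched with a plain SVD projection; this is the standard technique, not code from the paper.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples (rows of X) onto the top principal components,
    as in the PCA size-reduction step described above (sketch)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The reduced matrix then feeds the multilayer perceptron with far fewer inputs than the raw WPT feature set.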
Abstract: In this study, the problem of discriminating between interictal epileptic and non-epileptic pathological EEG cases that present episodic loss of consciousness is investigated. We verify the accuracy of the autocross-correlated coefficients feature extraction method, which was introduced and studied in a previous study. For this purpose we used, on the one hand, a suitably constructed supervised LVQ1 artificial neural network and, on the other, a cross-correlation technique. To reinforce this verification we used a statistical procedure based on a chi-square test. The classification and statistical results show that the proposed feature extraction is an accurate method for discriminating between interictal and non-interictal EEG events; in particular, the classification procedure showed that the LVQ neural method is superior to the cross-correlation one.
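The LVQ1 rule used by the classifier above is compact enough to state directly; this is the textbook update, with prototype initialization and the learning-rate schedule left out.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1 update: pull the nearest prototype toward sample x if
    its class label matches y, push it away otherwise (sketch)."""
    i = int(np.argmin(((prototypes - x) ** 2).sum(axis=1)))
    sign = 1.0 if proto_labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return i
```

Repeating this step over the training set moves the prototypes toward class-typical feature vectors, after which classification is nearest-prototype lookup.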
Abstract: As a result of the daily workflow in the design
development departments of companies, databases containing huge
numbers of 3D geometric models are generated. Depending on the
given problem, engineers create CAD drawings based on their design
ideas and evaluate the performance of the resulting design, e.g. by
computational simulations. Usually, new geometries are built either
by utilizing and modifying sets of existing components or by adding
single newly designed parts to a more complex design.
The present paper addresses the two facets of acquiring
components from large design databases automatically and providing
a reasonable overview of the parts to the engineer. A unified
framework based on the topographic non-negative matrix
factorization (TNMF) is proposed which solves both aspects
simultaneously. First, on a given database meaningful components
are extracted into a parts-based representation in an unsupervised
manner. Second, the extracted components are organized and
visualized on square-lattice 2D maps. It is shown on the example of
turbine-like geometries that these maps efficiently provide a
well-structured overview of the database content and, at the same time,
define a measure of spatial similarity allowing easy access to and
reuse of components in the process of design development.
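Plain non-negative matrix factorization, the basis of the TNMF framework above, can be sketched with multiplicative updates; the topographic (map-organizing) constraint of TNMF is not modeled here.

```python
import numpy as np

def nmf(V, r, n_iter=300, seed=0):
    """Plain NMF V ~ W @ H with multiplicative updates; a simplified
    stand-in for the topographic NMF (TNMF) described above."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

With geometries flattened into the columns of V, the columns of W play the role of the extracted parts and H gives the per-geometry activations.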
Abstract: The recognition of characters depends greatly upon the features used. Several features of handwritten Arabic characters are selected and discussed. An off-line recognition system based on the selected features was built, then trained and tested with realistic samples of handwritten Arabic characters. The importance and accuracy of the selected features are evaluated. Recognition based on the selected features gives average accuracies of 88% and 70% for numbers and letters, respectively. Further improvements are achieved by using feature weights based on insights gained from the accuracies of the individual features.
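The accuracy-based feature weighting mentioned above amounts to a weighted vote over per-feature decisions; this is a generic sketch of that idea, with all names illustrative.

```python
def weighted_vote(per_feature_preds, weights):
    """Combine per-feature class predictions using accuracy-derived
    weights (sketch of the feature-weighting idea above)."""
    scores = {}
    for label, w in zip(per_feature_preds, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

A single highly reliable feature can thus outvote two weaker ones, which is what raises accuracy beyond the unweighted scheme.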
Abstract: Many matching algorithms with different characteristics have been introduced in recent years. For real-time systems these algorithms are usually based on minutiae features. In this paper we introduce a novel approach to feature extraction in which the extracted features are independent of shift and rotation of the fingerprint, while at the same time the matching operation is performed more easily and with higher speed and accuracy. In this new approach, a reference point and a reference orientation are first determined for each fingerprint, and the features are then converted into polar coordinates based on this information. Owing to its high speed and accuracy, the small volume of extracted features, and the simplicity of the matching operation, this approach is well suited to real-time applications.
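The conversion step above can be sketched directly: each minutia (x, y, angle) is re-expressed relative to the reference point and orientation, which removes translation and rotation. How the reference point itself is found is not shown.

```python
import math

def to_polar(minutiae, ref_xy, ref_angle):
    """Express minutiae (x, y, theta) in polar coordinates relative to
    a reference point and orientation, making the features shift- and
    rotation-invariant (illustrative sketch)."""
    rx, ry = ref_xy
    out = []
    for x, y, theta in minutiae:
        r = math.hypot(x - rx, y - ry)
        phi = (math.atan2(y - ry, x - rx) - ref_angle) % (2 * math.pi)
        out.append((r, phi, (theta - ref_angle) % (2 * math.pi)))
    return out
```

Two impressions of the same finger then yield near-identical (r, phi, theta) tuples regardless of placement, so matching reduces to comparing small tuple sets.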
Abstract: In this paper a one-dimensional Self-Organizing Map
(SOM) algorithm for feature selection is presented. The
algorithm is based on a first classification of the input dataset in a
similarity space. From this classification, a set of
positive and negative features is computed for each class; this set of
features is the result of the procedure. The procedure is evaluated on an
in-house dataset from a Knowledge Discovery from Text (KDT)
application and on a set of publicly available datasets used in
international feature selection competitions. These datasets come
from KDT applications, drug discovery, and other applications.
The correct classifications available for the training
and validation datasets are used to optimize the parameters for positive
and negative feature extraction. The process becomes feasible for
large and sparse datasets, such as those obtained in KDT applications,
by using compression techniques to store the similarity matrix
together with speed-up techniques for the Kohonen algorithm that
exploit the sparsity of the input matrix. These improvements,
combined with grid computing, make it feasible to apply the
methodology to massive datasets.
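A tiny one-dimensional Kohonen map, the core structure used above, can be sketched as follows; the sparsity and compression optimizations the abstract relies on are omitted.

```python
import numpy as np

def som_1d(X, n_units=5, n_iter=200, lr=0.5, seed=0):
    """Train a small one-dimensional SOM on the rows of X
    (illustrative sketch of the Kohonen algorithm)."""
    rng = np.random.default_rng(seed)
    W = X[rng.integers(0, len(X), n_units)].astype(float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))
        for j in range(n_units):
            # Gaussian neighborhood on the 1-D lattice of units.
            h = np.exp(-((j - bmu) ** 2) / 2.0)
            W[j] += lr * (1 - t / n_iter) * h * (x - W[j])
    return W
```

After training, each unit summarizes one region of the similarity space, and the per-class positive/negative features are read off from the resulting partition.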
Abstract: Content-Based Image Retrieval has been a major area
of research in recent years. Efficient image retrieval with high
precision requires an approach that combines both the color and the
texture features of the image. In this paper we propose
a method for enhancing the capabilities of texture-based feature
extraction and further demonstrate the use of these enhanced texture
features in Texture-Based Color Image Retrieval.
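As one concrete example of texture-based features of the kind discussed above, gray-level co-occurrence statistics (contrast, energy) can be computed as follows; this is a standard construction, not the paper's enhanced method.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast and energy from a gray-level co-occurrence matrix over
    horizontal neighbor pairs (sketch of texture feature extraction)."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()
    idx = np.arange(levels)
    contrast = (p * (idx[:, None] - idx[None, :]) ** 2).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```

In retrieval, such texture descriptors are concatenated with color descriptors and compared by a distance measure to rank database images.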
Abstract: The applications of VLSI circuits in high-performance
computing, telecommunications, and consumer
electronics have been expanding progressively, and at a very rapid
pace. This paper describes a new model for partitioning a circuit
using DBSCAN and a fuzzy ARTMAP neural network. The first step
is feature extraction, for which we make use of the
DBSCAN algorithm. The second step is classification, performed by
a fuzzy ARTMAP neural network. The performance of
both approaches is compared using benchmark data provided by the
MCNC standard cell placement benchmark netlists. Analysis of the
experimental results shows that the fuzzy ARTMAP with
DBSCAN model achieves better performance than fuzzy
ARTMAP alone in recognizing sub-circuits with the lowest amount of
interconnections between them; the recognition rate using fuzzy
ARTMAP with DBSCAN is 97.7%, higher than that of fuzzy
ARTMAP alone.
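A minimal DBSCAN, the clustering step named above, can be sketched directly; this is the textbook algorithm over a dense distance matrix, with -1 marking noise, not the paper's tuned implementation.

```python
import numpy as np

def dbscan(X, eps=1.0, min_pts=3):
    """Minimal DBSCAN over rows of X (illustrative sketch).
    Returns integer cluster labels; -1 marks noise points."""
    n = len(X)
    labels = np.full(n, -1)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    cluster = 0
    for p in range(n):
        if labels[p] != -1:
            continue
        seeds = list(np.flatnonzero(dist[p] <= eps))
        if len(seeds) < min_pts:
            continue  # not a core point (may be absorbed later)
        labels[p] = cluster
        while seeds:
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster
                nbrs = np.flatnonzero(dist[q] <= eps)
                if len(nbrs) >= min_pts:  # q is core: expand from it
                    seeds.extend(nbrs)
        cluster += 1
    return labels
```

Applied to netlist-derived feature vectors, the resulting clusters would correspond to candidate sub-circuits handed to the fuzzy ARTMAP stage.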
Abstract: Image clustering is the process of grouping images
based on their similarity. Image clustering usually uses color
components, texture, edges, shape, or a mixture of two of these
components. This research aims to explore image clustering using color
composition. For this purpose, three main
components must be considered: the color space, the image
representation (feature extraction), and the clustering method itself. We
explore which combination of these factors produces the
best clustering results by combining various techniques from the
three components. The color spaces are RGB, HSV, and L*a*b*;
the image representations are the Histogram and the Gaussian
Mixture Model (GMM); and the clustering methods are K-Means
and Agglomerative Hierarchical Clustering. The
experimental results show that the GMM representation combines
better with the RGB and L*a*b* color spaces, whereas the Histogram
combines better with HSV. The experiments also show that K-Means
outperforms Agglomerative Hierarchical Clustering for image clustering.
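The histogram representation compared above can be sketched as a joint color histogram; bin count and normalization are illustrative choices.

```python
import numpy as np

def color_histogram(img, bins=4):
    """Normalized joint color histogram of an (H, W, 3) image with
    channel values in [0, 1] (sketch of the histogram representation)."""
    q = np.clip((img * bins).astype(int), 0, bins - 1)
    # Flatten the three quantized channels into one joint bin index.
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3)
    return hist / hist.sum()
```

Each image then becomes a fixed-length vector that K-Means or agglomerative clustering can group directly.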
Abstract: The recognition of human faces, especially those with
different orientations, is a challenging and important problem in image
analysis and classification. This paper proposes an effective scheme
for rotation-invariant face recognition using combined Log-Polar
Transform and Discrete Cosine Transform features. Rotation-invariant
feature extraction for a given face image involves applying the
log-polar transform to eliminate the rotation effect, producing a
row-shifted log-polar image. The discrete cosine transform is then applied
to eliminate the row-shift effect and to generate a low-dimensional
feature vector. A PSO-based feature selection algorithm searches
the feature vector space for the optimal feature subset;
evolution is driven by a fitness function defined in terms of
maximizing the between-class separation (scatter index).
Experimental results on the ORL face database, using test
sets of images with different orientations, show that the
proposed system outperforms other face recognition methods. The
overall recognition rate for the rotated test images is 97%,
demonstrating that the extracted feature vector is an effective
rotation-invariant feature set with a minimal number of selected features.
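The log-polar resampling step above, which turns an image rotation into a row shift, can be sketched with nearest-neighbor sampling; grid sizes and the radius spacing are illustrative choices.

```python
import numpy as np

def log_polar(img, n_r=32, n_theta=32):
    """Nearest-neighbor log-polar resampling of a 2-D image about its
    center; a rotation of the input becomes a cyclic row shift of the
    output (illustrative sketch)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    out = np.zeros((n_theta, n_r))
    for i in range(n_theta):
        theta = 2 * np.pi * i / n_theta
        for j in range(n_r):
            r = r_max ** (j / (n_r - 1)) - 1  # log-spaced radii
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out
```

Because the DCT magnitude is insensitive to such cyclic shifts, applying the DCT to this map yields the rotation-tolerant features the scheme relies on.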
Abstract: In this paper a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient while giving better recognition results. In pattern recognition, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve as face image resolution increases and level off once a certain resolution is reached. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the face image because of its computational speed and feature extraction potential. A subset of DCT coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off is obtained between the decimation factor, the number of DCT coefficients retained, and the recognition rate at minimum computation. Preprocessing of the image increases its robustness against variations in pose and illumination. The model has been tested on several databases, including the ORL database, the Yale database, and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution; (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying pose, intensity, and illumination.
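The retained-coefficient idea above, keeping low-to-mid-frequency DCT coefficients, can be sketched with a separable 2-D DCT and a zigzag ordering; the number kept is an illustrative parameter, not the paper's tuned value.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II via separable matrix multiplication."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C @ block @ C.T

def low_mid_dct_features(img, keep=10):
    """Keep the first `keep` DCT coefficients in zigzag (low-to-mid
    frequency) order, as in the coefficient-retention idea above."""
    d = dct2(img)
    n = img.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda ij: (ij[0] + ij[1], ij))
    return np.array([d[i, j] for (i, j) in order[:keep]])
```

Sweeping `keep` against the decimation factor is precisely the trade-off study the abstract describes.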
Abstract: In face recognition, feature extraction techniques
attempt to find an appropriate representation of the data. However,
when the feature dimension is larger than the sample size, performance
degrades. Hence, we propose a method called
Normalization Discriminant Independent Component Analysis
(NDICA). The input data are regularized to obtain the most
reliable features and then processed using Independent
Component Analysis (ICA). The proposed method is evaluated on
three face databases: Olivetti Research Ltd (ORL), Face Recognition
Technology (FERET), and the Face Recognition Grand Challenge
(FRGC). NDICA shows its effectiveness compared with other
unsupervised and supervised techniques.
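A regularized whitening step of the kind that typically precedes ICA can be sketched as follows; this is a generic normalization, not NDICA's specific regularization.

```python
import numpy as np

def regularized_whiten(X, eps=1e-2):
    """Whiten samples (rows of X) with eigenvalue regularization, the
    kind of normalization applied before ICA (illustrative sketch)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    # eps floors tiny eigenvalues, which is what stabilizes the
    # small-sample, high-dimension case the abstract describes.
    W = vecs / np.sqrt(vals + eps)
    return Xc @ W
```

Without the eps floor, near-zero eigenvalues (inevitable when dimension exceeds sample size) would blow up the whitening and destroy generalization.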
Abstract: A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are appropriate. As the classes in hyperspectral images are assumed to have different textures, textural classification is pursued. Run Length feature extraction is employed together with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for forty of the selected bands, and from the GLRLMs the Run Length features of individual pixels are computed. Principal Components are calculated for another forty bands, and Independent Components for the remaining forty. As Principal and Independent Components have the ability to represent the textural content of pixels, they are treated as features. The Run Length features, Principal Components, and Independent Components together form the Combined Features used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. The results are validated against ground truth and the accuracies are calculated.
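The GLRLM computation above can be sketched for the horizontal direction as follows; run-length features (short-run emphasis and the like) are then statistics of this matrix.

```python
import numpy as np

def glrlm(img, levels, max_run):
    """Gray Level Run Length Matrix for horizontal runs of an integer
    gray-level image (illustrative sketch). M[g, r-1] counts runs of
    level g with length r (lengths capped at max_run)."""
    M = np.zeros((levels, max_run), dtype=int)
    for row in img:
        run = 1
        for a, b in zip(row[:-1], row[1:]):
            if b == a:
                run += 1
            else:
                M[a, min(run, max_run) - 1] += 1
                run = 1
        M[row[-1], min(run, max_run) - 1] += 1  # close the final run
    return M
```

Textured regions produce many short runs while smooth regions produce few long ones, which is why these counts discriminate the land-cover classes.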
Abstract: Natural outdoor scene classification is an active and
promising research area around the globe. In this study, the
classification is carried out in two phases. In the first phase,
features are extracted from the images by wavelet decomposition
and stored in a database as feature vectors. In the second
phase, neural classifiers, namely the back-propagation neural network
(BPNN) and the resilient back-propagation neural network (RPNN), are
employed for scene classification. Four hundred color images
from the MIT database are considered, belonging to two classes,
forest and street. A comparative study of the performance of the
two neural classifiers, BPNN and RPNN, is carried out for an
increasing number of test samples. RPNN shows better classification
results than BPNN on the larger test sets.
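A one-level Haar decomposition, the simplest instance of the wavelet feature extraction described above, can be sketched as follows; the actual wavelet family and depth used in the study are not specified in the abstract.

```python
import numpy as np

def haar_features(img):
    """One-level 2-D Haar decomposition of an even-sized grayscale
    image; the mean energies of the four sub-bands form a simple
    wavelet feature vector (illustrative sketch)."""
    a = (img[0::2, :] + img[1::2, :]) / 2  # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2  # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return np.array([(b ** 2).mean() for b in (LL, LH, HL, HH)])
```

Forest and street scenes differ strongly in directional detail energy (LH vs HL), which is what makes such sub-band statistics usable class features.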
Abstract: This paper presents an ESN-based Arabic phoneme
recognition system trained with supervised, forced, and combined
supervised/forced learning algorithms. Mel-Frequency
Cepstrum Coefficients (MFCCs) and Linear Predictive Coding (LPC)
are used and compared as the input feature extraction
techniques. The system is evaluated using 6 speakers from the King
Abdulaziz Arabic Phonetics Database (KAPD) for the Saudi Arabian
dialect and 34 speakers from the Center for Spoken Language
Understanding (CSLU2002) database, covering speakers with different
dialects from 12 Arabic countries. Results for the KAPD and
CSLU2002 Arabic databases show phoneme recognition
performances of 72.31% and 38.20%, respectively.
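Both MFCC and LPC extraction share the same front end: pre-emphasis, framing, and windowing. That common step can be sketched as follows; the frame length, hop, and pre-emphasis factor are conventional values, not the paper's settings.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128, preemph=0.97):
    """Pre-emphasize a speech signal, cut it into overlapping frames,
    and apply a Hamming window: the common front end before MFCC or
    LPC analysis (illustrative sketch)."""
    y = np.append(x[0], x[1:] - preemph * x[:-1])  # pre-emphasis
    n = 1 + (len(y) - frame_len) // hop
    frames = np.stack([y[i * hop : i * hop + frame_len] for i in range(n)])
    return frames * np.hamming(frame_len)
```

Per-frame MFCC or LPC vectors computed from this output then form the input sequence driven through the echo state network.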
Abstract: In this paper we present a detailed study of bio-medical
images and tag them with basic extracted features (e.g.
color, pixel value). Classification is performed using a nearest-neighbor
classifier with various distance measures, as well as the
automatic combination of classifier results. This process selects a
subset of relevant features from a group of image features; it
also helps to acquire a better understanding of the image by
describing which features are important. Accuracy can be
improved by increasing the number of features selected. Various
classifiers have been applied to medical images, such as the
Support Vector Machine (SVM), used here for classifying
bacterial types. The Ant Colony Optimization method is used to obtain
optimal results, as it has high approximation capability and much
faster convergence. Texture features are extracted with a method based
on Gabor wavelets.
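The nearest-neighbor classifier with interchangeable distance measures described above can be sketched directly; the two metrics shown are illustrative choices.

```python
import numpy as np

def nn_classify(x, X_train, y_train, metric="euclidean"):
    """1-nearest-neighbor classification of feature vector x under a
    chosen distance measure (illustrative sketch)."""
    diff = X_train - x
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=1))
    elif metric == "manhattan":
        d = np.abs(diff).sum(axis=1)
    else:
        raise ValueError(f"unknown metric: {metric}")
    return y_train[np.argmin(d)]
```

Running the same classifier under several metrics and combining the resulting labels is the kind of classifier combination the abstract refers to.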
Abstract: Clusters of microcalcifications in mammograms are an
important sign of breast cancer. This paper presents a complete
Computer Aided Detection (CAD) scheme for the automatic detection of
clustered microcalcifications in digital mammograms. The proposed
system, MammoScan μCaD, consists of three main steps. First,
all potential microcalcifications are detected using a method for
feature extraction, VarMet, and adaptive thresholding; this also
produces a number of false detections. The goal of the second step,
Classifier level 1, is to remove everything but microcalcifications.
The last step, Classifier level 2, uses learned dictionaries and sparse
representations as a texture classification technique to distinguish
single, benign microcalcifications from clustered microcalcifications,
and to remove some remaining false detections. The system
is trained and tested on true digital data from Stavanger University
Hospital, and the results are evaluated by radiologists. The overall
results are promising, with a sensitivity above 90% and a low false
detection rate (approximately one unwanted detection per image, or
0.3 false detections per image).
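The adaptive thresholding step above can be sketched as a local mean-plus-k-sigma test, which flags pixels that are bright relative to their neighborhood; VarMet itself is not described in the abstract, so the window size and k are assumptions.

```python
import numpy as np

def adaptive_threshold(img, win=3, k=1.5):
    """Flag pixels exceeding their local mean by k local standard
    deviations, a simple form of adaptive thresholding (sketch)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    r = win // 2
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            mask[i, j] = img[i, j] > patch.mean() + k * patch.std()
    return mask
```

Because microcalcifications appear as small bright spots against locally varying tissue, a local statistic like this detects them where a single global threshold would fail, at the cost of the false detections the two classifier levels then remove.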