Abstract: This paper presents features that characterize power quality disturbances, extracted from recorded voltage waveforms using the wavelet transform. The discrete wavelet transform is used to detect and analyze power quality disturbances, including sags, swells, outages and transients. A power system network was simulated with the Electromagnetic Transients Program, and voltage waveforms containing different power quality disturbances were obtained at strategic points for analysis. The wavelet transform was then chosen to perform feature extraction; its outputs are the wavelet coefficients representing the power quality disturbance signal. Wavelet coefficients at different levels reveal time-localized information about the variation of the signal.
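The time localization that the wavelet coefficients provide can be illustrated with a minimal pure-Python Haar DWT. This is only a sketch under assumptions: the paper does not specify its mother wavelet or decomposition depth, and the waveform below is a synthetic sine with an artificial voltage sag.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; the detail
    coefficients localize abrupt changes such as a sag onset in time.
    """
    assert len(signal) % 2 == 0
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# Synthetic 50 Hz waveform; a crude "sag" halves the amplitude for a while.
wave = [math.sin(2 * math.pi * 50 * t / 1600) for t in range(64)]
wave_sag = [v * (0.5 if 25 <= i < 40 else 1.0) for i, v in enumerate(wave)]
_, d_clean = haar_dwt(wave)
_, d_sag = haar_dwt(wave_sag)
```

The detail band of the disturbed waveform shows a spike at the sag onset that the clean waveform lacks, which is the time-localizing behavior the abstract describes.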
Abstract: Lutein is a dietary oxycarotenoid found to reduce the risk of Age-related Macular Degeneration (AMD). Supercritical fluid extraction of lutein esters from marigold petals was carried out and found to be much more effective than conventional solvent extraction. Saponification of the pre-concentrated lutein esters to produce free lutein was studied; the product contained about 88% total carotenoids (UV-VIS spectrophotometry) and 90.7% lutein (HPLC). The lipase-catalyzed hydrolysis of lutein esters in conventional media was also investigated. The optimal temperature, pH, enzyme concentration and water activity were found to be 50°C, 7, 15% and 0.33 respectively, and the activity loss of the lipase was about 25% after eight re-uses at 50°C over 12 days. However, lipase-catalyzed hydrolysis of lutein esters in conventional media resulted in poor conversions (16.4%).
Abstract: In modern human-computer interaction (HCI) systems, emotion recognition is becoming an essential capability. The quest for effective and reliable emotion recognition in HCI has resulted in a need for better face detection, feature extraction and classification. In this paper we present the results of a feature space analysis after briefly explaining our fully automatic vision-based emotion recognition method. We demonstrate the compactness of the feature space and show how the 2d/3d based method yields superior features for emotion classification. We also show that feature normalization creates a largely person-independent feature space; as a consequence, the classifier architecture has only a minor influence on the classification result, which is illustrated with confusion matrices. For this purpose, advanced classification algorithms such as Support Vector Machines and Artificial Neural Networks are employed, as well as the simple k-Nearest Neighbor classifier.
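The claim that classifier architecture matters little in a compact, normalized feature space is easiest to see with the simplest of the classifiers mentioned. A minimal k-Nearest Neighbor sketch over a hypothetical 2-D feature space (the coordinates and emotion labels are illustrative, not the paper's data):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training
    points (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy normalized feature space: two well-separated emotion clusters.
train = [((0.1, 0.2), "happy"), ((0.0, 0.3), "happy"), ((0.2, 0.1), "happy"),
         ((0.9, 0.8), "angry"), ((1.0, 0.9), "angry"), ((0.8, 1.0), "angry")]
print(knn_predict(train, (0.15, 0.15)))  # → happy
```

When clusters are this well separated, an SVM or neural network would draw essentially the same decision boundary, which is the point the confusion matrices in the paper make.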
Abstract: Content-based music retrieval generally involves analyzing, searching and retrieving music based on low- or high-level features of a song, which are normally used to represent artists, songs or music genres. Identifying them normally involves feature extraction and classification tasks. Theoretically, the more features analyzed, the better the classification accuracy that can be achieved, but at the cost of longer execution time. A technique for selecting significant features is therefore important, as it reduces the dimensionality of the features used in classification and contributes to accuracy. An Artificial Immune System (AIS) approach is investigated and applied in the classification task. A bio-inspired audio content-based retrieval framework (B-ACRF) is proposed at the end of this paper; it addresses issues that need further consideration in music retrieval performance.
Abstract: This paper presents a new approach to the problem of recognizing machine-printed Arabic text. Because of the difficulty of recognizing cursive Arabic words, the text has to be normalized and segmented before the recognition stage. The new scheme for recognizing Arabic characters relies on a classifier built from multiple parallel neural networks and operates in two phases: the first phase categorizes the input character into one of eight groups, and the second classifies it into one of the Arabic character classes within that group. The system achieved a high recognition rate.
Abstract: Previously, harmonic parameters (HPs) have been selected as features extracted from EEG signals for automatic sleep scoring. However, in previous studies only one set of HPs was used, extracted directly from the whole EEG epoch.
In this study, two different transformations were applied to extract HPs from EEG signals: the Hilbert-Huang transform (HHT) and the wavelet transform (WT). The EEG signals were decomposed by the two transformations, and features were extracted from the different components. Twelve parameters (four sets of HPs) were extracted, some of which are highly diverse among the different stages. Afterward, the HPs from the two transformations were used to build a rough sleep-stage scoring model using an SVM classifier. The performance of this model is about 78% using the features obtained by our proposed extractions. Our results suggest that these features may be useful for automatic sleep-stage scoring.
Abstract: Automatic extraction of event information from social text streams (emails, social network sites, blogs, etc.) is a vital requirement for many applications, such as event planning and management systems and security applications. The key information components needed from event-related text are the event title, location, participants, date and time. Emails are quite distinct from other social text streams in layout, format and conversation style, and are the most commonly used communication channel for broadcasting and planning events; we have therefore chosen emails as our dataset. In our work, we have employed two statistical NLP methods, Finite State Machines (FSM) and Hidden Markov Models (HMM), for the extraction of event-related contextual information. An application has been developed that compares the two methods on the event extraction task. It comprises two modules, one for each method, and works for both bulk and direct user input. The results are evaluated using precision, recall and F-score. Experiments show that both methods produce high performance and accuracy; however, HMM performed better on title extraction, while FSM proved better for venue, date and time.
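A minimal sketch of the HMM side of such a comparison: Viterbi decoding assigns each token a hidden label such as TITLE or DATE. The states, probabilities and observation classes below are illustrative assumptions, not the paper's trained model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state sequence for an observation sequence."""
    # table[t][s] = (best probability of a path ending in state s at
    #                time t, predecessor state on that path)
    table = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = table[-1]
        table.append({s: max(((prev[p][0] * trans_p[p][s] * emit_p[s][o], p)
                              for p in states), key=lambda t: t[0])
                      for s in states})
    # Backtrack from the best final state.
    best = max(states, key=lambda s: table[-1][s][0])
    path = [best]
    for t in range(len(table) - 1, 0, -1):
        path.append(table[t][path[-1]][1])
    return path[::-1]

# Illustrative model: ordinary tokens are emitted as "word",
# weekday-like tokens as "dayname".
states = ("TITLE", "DATE")
start = {"TITLE": 0.7, "DATE": 0.3}
trans = {"TITLE": {"TITLE": 0.6, "DATE": 0.4},
         "DATE":  {"TITLE": 0.2, "DATE": 0.8}}
emit = {"TITLE": {"word": 0.9, "dayname": 0.1},
        "DATE":  {"word": 0.2, "dayname": 0.8}}
print(viterbi(["word", "word", "dayname"], states, start, trans, emit))
# → ['TITLE', 'TITLE', 'DATE']
```

In a real system the transition and emission probabilities would be estimated from annotated email data rather than set by hand.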
Abstract: Road maps are used in numerous daily activities, but constructing and updating a road map whenever changes occur is laborious. At Universiti Malaysia Sarawak, research on Automatic Road Extraction (ARE) was carried out to ease the difficulties of updating road maps. The research started with satellite imagery (SI), in short the ARE-SI project. A Hybrid Simple Colour Space Segmentation & Edge Detection (Hybrid SCSS-EDGE) algorithm was developed to extract roads automatically from satellite images. To extract the road network accurately, the satellite image must be analyzed prior to the extraction process: the characteristics of its elements are analyzed and the relationships among them determined. In this study, road regions are extracted based on colour space elements and the edge details of roads, and an edge detection method is applied to further filter out non-road regions. The extracted road regions are validated using a segmentation method. These results are valuable for building road maps and detecting changes in an existing road database. The proposed Hybrid SCSS-EDGE algorithm performs these tasks fully automatically: the user only needs to input a high-resolution satellite image and wait for the result. Moreover, the system works on complex road networks and generates the extraction result in seconds.
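The edge-detection stage that filters out non-road regions could, for example, use a 3x3 Sobel operator. The Hybrid SCSS-EDGE algorithm itself is not spelled out in the abstract, so this is only a generic sketch on a synthetic grayscale patch with one sharp vertical boundary standing in for a road edge:

```python
def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels. `img` is a list of
    rows of grayscale values; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Synthetic patch: dark region next to a bright region (a hard edge).
img = [[0] * 4 + [255] * 4 for _ in range(6)]
mag = sobel_magnitude(img)
```

The magnitude map is near zero in flat regions and large along the boundary, so thresholding it yields the edge details used to reject non-road pixels.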
Abstract: Mel Frequency Cepstral Coefficient (MFCC) features are widely used as acoustic features for speech recognition as well as speaker recognition. In the MFCC feature representation, the Mel frequency scale is used to obtain high resolution in the low-frequency region and low resolution in the high-frequency region. This kind of processing is good for obtaining stable phonetic information, but it is not suitable for speaker features located in the high-frequency regions. Speaker-specific information, which is non-uniformly distributed in the high frequencies, is equally important for speaker recognition. Based on this fact, we propose an admissible wavelet packet based filter structure for speaker identification. The multiresolution capabilities of the wavelet packet transform are used to derive the new features. The proposed scheme differs from previous wavelet-based work mainly in the design of the filter structure: unlike the others, the proposed filter structure does not follow the Mel scale. Closed-set speaker identification experiments performed on the TIMIT database show improved identification performance compared to other commonly used Mel-scale-based filter structures using wavelets.
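For reference, the Mel warping that the proposed filter structure deliberately abandons is the standard mapping m = 2595 log10(1 + f/700). A short sketch shows how Mel-uniform spacing allocates narrow bands at low frequencies and wide bands at high frequencies, which is exactly the resolution trade-off the abstract criticizes for speaker cues:

```python
import math

def hz_to_mel(f):
    """Standard Mel mapping used by MFCC filter banks."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(f_lo, f_hi, n_filters):
    """Edge frequencies of a Mel-spaced filter bank: uniform in Mel,
    hence dense at low Hz and sparse at high Hz."""
    m_lo, m_hi = hz_to_mel(f_lo), hz_to_mel(f_hi)
    step = (m_hi - m_lo) / (n_filters + 1)
    return [mel_to_hz(m_lo + i * step) for i in range(n_filters + 2)]

edges = mel_band_edges(0.0, 8000.0, 20)
```

The first band here spans only a few hundred Hz while the last spans well over a kilohertz; a wavelet packet tree can instead be pruned to keep fine resolution in the high-frequency region.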
Abstract: The most reliable and accurate description of the actual behavior of a software system is its source code. However, not all questions about the system can be answered directly by resorting to this repository of information. The reverse engineering methodology aims at the extraction of abstract, goal-oriented "views" of the system, able to summarize relevant properties of the computation performed by the program. Focusing on reverse engineering, we model C++ files by designing a translator.
Abstract: Graphene-metal contact resistance limits the performance of graphene-based electrical devices. In this work, we have fabricated both graphene field-effect transistors (GFET) and transfer length measurement (TLM) test devices with titanium contacts. The purpose of this work is to compare the contact resistances that can be numerically extracted from the GFETs and measured from the TLM structures. We also provide a brief review of the work done in the field to solve the contact resistance problem.
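The TLM extraction the abstract refers to is, at its core, a linear fit: total device resistance grows linearly with channel length, the intercept equals twice the contact resistance, and the slope times the width gives the sheet resistance. A sketch with synthetic numbers (the abstract does not report the actual device values, so Rc = 150 Ω and Rsheet = 500 Ω/sq below are illustrative assumptions):

```python
def tlm_extract(lengths_um, resistances_ohm, width_um):
    """Least-squares line through R_total vs. channel length L:
    R_total = 2*Rc + Rsheet * L / W, so intercept/2 = Rc and
    slope * W = Rsheet."""
    n = len(lengths_um)
    mx = sum(lengths_um) / n
    my = sum(resistances_ohm) / n
    sxx = sum((x - mx) ** 2 for x in lengths_um)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(lengths_um, resistances_ohm))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / 2.0, slope * width_um  # (Rc, Rsheet)

# Synthetic TLM data: Rc = 150 ohm per contact, Rsheet = 500 ohm/sq,
# channel width W = 10 um.
lengths = [2.0, 4.0, 8.0, 16.0]
rs = [2 * 150.0 + 500.0 * l / 10.0 for l in lengths]
rc, rsh = tlm_extract(lengths, rs, 10.0)
```

With measured (noisy) data the same fit applies; the scatter of the points about the line indicates how reliable the extracted contact resistance is.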
Abstract: This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS-complex geometrical feature extraction method as well as a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, each of which is divided into eight polar sectors. The curve length of each excerpted segment is then calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) network and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The new algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), and the discrimination power of the classifier in isolating the different beat types of each record was assessed; an average accuracy of Acc=98.51% was obtained. The proposed method was also applied to six arrhythmia classes (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average accuracy of Acc=95.6% was achieved. To evaluate the quality of the new hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
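The per-sector feature described above is essentially an arc-length sum. A minimal sketch of such a curve-length computation on a sampled waveform (the unit time step and the signals are illustrative; the paper computes this per polar sector of each QRS "virtual image"):

```python
import math

def curve_length(samples, dt=1.0):
    """Arc length of a sampled waveform: sum of straight-line segment
    lengths between consecutive samples, with time step dt."""
    return sum(math.hypot(dt, samples[i + 1] - samples[i])
               for i in range(len(samples) - 1))

# A flat segment has the minimal curve length; a QRS-like spike with
# the same number of samples is longer, so the feature separates them.
flat = [0.0] * 9
spike = [0.0, 0.0, 0.0, 2.0, -1.0, 0.0, 0.0, 0.0, 0.0]
```

Because the length grows with both the amplitude and the sharpness of the excursion, eight such numbers per QRS region give a compact geometrical signature of the beat.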
Abstract: Proper management of residues originating from industrial activities is considered one of the serious challenges faced by industrial societies due to their potential hazards to the environment. Common disposal methods for industrial solid wastes (ISWs) encompass various combinations of individual management options, i.e. recycling, incineration, composting and sanitary landfilling. The procedure used to evaluate and nominate the best practical methods should be based on environmental, technical, economic and social assessments. In this paper an environmental-technical assessment model is developed using the analytic network process (ANP) to facilitate decision making for ISWs generated in Gilan province, Iran. Using the results of surveys performed on industrial units located in Gilan, the various groups of solid wastes in the research area were characterized, and four different ISW management scenarios were studied. The evaluation was conducted using the above-mentioned model in the Super Decisions software (version 2.0.8) environment. The results indicate that the best ISW management scenario for Gilan province consists of recycling the metal industries' residues, composting the putrescible portion of the ISWs, combusting paper, wood, fabric and polymeric wastes with energy extraction in the incineration plant, and finally landfilling the rest of the waste stream together with materials rejected by the recycling and compost production plants and ashes from the incineration unit.
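Numerically, ANP ranks alternatives by raising the weighted supermatrix to successive powers until it converges; each column of the limit matrix then holds the final priorities. A sketch with an illustrative 3x3 column-stochastic supermatrix (the entries are assumptions for demonstration, not the paper's pairwise-comparison data):

```python
def limit_matrix(W, tol=1e-9, max_iter=100):
    """Repeatedly square a column-stochastic supermatrix until it
    converges; the columns of the result give the limiting priorities."""
    n = len(W)
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    M = W
    for _ in range(max_iter):
        M2 = matmul(M, M)
        if max(abs(M2[i][j] - M[i][j])
               for i in range(n) for j in range(n)) < tol:
            return M2
        M = M2
    return M

# Illustrative supermatrix over three scenarios; columns sum to 1.
W = [[0.6, 0.3, 0.2],
     [0.3, 0.5, 0.3],
     [0.1, 0.2, 0.5]]
L = limit_matrix(W)
```

For a primitive stochastic matrix like this one, every column of the limit matrix is the same priority vector, so the ranking of scenarios can be read off any column.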
Abstract: A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for a generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This paper proposes a new set of features using a complementary filter bank structure which improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature set outperforms baseline MFCC significantly. This proposition is validated by experiments conducted on two different kinds of public databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with Gaussian Mixture Models (GMM) as the classifier for various model orders.
Abstract: This paper presents a system for discovering association rules from collections of unstructured documents, called EART (Extract Association Rules from Text). The EART system treats only text, not images or figures. EART discovers association rules amongst keywords labeling the collection of textual documents. The main characteristic of EART is that it integrates XML technology (to transform unstructured documents into structured documents) with an Information Retrieval scheme (TF-IDF) and a Data Mining technique for association rule extraction. EART relies on word features to extract association rules and consists of four phases: a structuring phase, an indexing phase, a text mining phase and a visualization phase. Our work is based on analyzing the keywords in the extracted association rules, distinguishing keywords that co-occur in one sentence of the original text from keywords that appear in the text without co-occurring in one sentence. Experiments were conducted on a collection of scientific documents selected from MEDLINE related to the outbreak of the H5N1 avian influenza virus.
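The support/confidence filtering behind association-rule extraction can be sketched directly on keyword sets, one set per document or sentence. The keywords and thresholds below are illustrative, not EART's actual configuration:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_conf=0.7):
    """Pairwise association rules over keyword sets, scored by
    support (fraction of transactions containing both keywords) and
    confidence (support of the pair / support of the antecedent)."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n
    rules = []
    for a, b in combinations(items, 2):
        for lhs, rhs in (({a}, {b}), ({b}, {a})):
            s = support(lhs | rhs)
            if s >= min_support:
                conf = s / support(lhs)
                if conf >= min_conf:
                    rules.append((tuple(lhs)[0], tuple(rhs)[0], s, conf))
    return rules

# Toy keyword sets, loosely themed on the MEDLINE collection.
docs = [{"h5n1", "outbreak", "poultry"},
        {"h5n1", "outbreak"},
        {"h5n1", "vaccine"},
        {"outbreak", "poultry"}]
rules = mine_rules(docs)
```

Here only "poultry → outbreak" survives both thresholds: every document mentioning poultry also mentions outbreak (confidence 1.0), while the reverse rule fails the confidence cut.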
Abstract: Date palm (Phoenix dactylifera L.) seeds are a waste stream considered a major problem for the food industry. They contain potentially useful protein (10-15% of the whole date's weight). Global production, industrialisation and utilisation of dates are increasing steadily. The worldwide production of date palm fruit increased from 1.8 million tons in 1961 to 6.9 million tons in 2005; thus, almost 800,000 tonnes of date palm seeds from global production are not currently used [1]. The current study was carried out to convert date palm seeds into a useful protein powder. Compositional analysis showed that the seeds contained 5.64% protein and 8.14% fat. We used several laboratory-scale methods to extract proteins from the seeds to produce a high-protein powder. These methods included simple acid or alkali extraction, with or without ultrafiltration, and phenol/trichloroacetic acid with acetone precipitation (Ph/TCA method). The highest protein content (68%) was obtained by the Ph/TCA method with a material yield of 44%, whereas alkali extraction alone gave the lowest protein content, 8%, with a yield of 32%.
Abstract: In this paper, we propose an approach to unsupervised segmentation with fuzzy connectedness. Valid seeds are first specified by an unsupervised method based on scale space theory. A region is then extracted for each seed with a relative object extraction method based on fuzzy connectedness. Afterwards, regions are merged according to the values of an introduced measure between them. Theorems and propositions are provided to show that the measure is a reasonable criterion for merging. Experimental results of our method on a synthetic image, a color image and a large number of MR images are reported.
Abstract: Extraction of edge-end-pixels is an important step for the edge linking process to achieve edge-based image segmentation. This paper presents an algorithm to extract edge-end pixels together with their directional sensitivities as an augmentation to the currently available mathematical models. The algorithm is implemented in the Java environment because of its inherent compatibility with web interfaces since its main use is envisaged to be for remote image analysis on a virtual instrumentation platform.
Abstract: Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models; they have nevertheless been important problems in image, signal and vision computing in recent years. In this paper, a system is proposed and tested that assigns people in 2D images to one of four body types (tall fat, short fat, tall thin and short thin). The body type is determined from the human body region extracted from the image, and the system also measures the dimensions of the body, such as its length and width, and reports them in the output.
Abstract: A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly, and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-like fuzzy inference system using a hybrid learning algorithm that combines gradient descent with the least-mean-square algorithm. The proposed neurofuzzy system has the advantages of determining the number of rules automatically, reducing the number of rules, decreasing computational time, learning faster and consuming less memory. The authors also investigate how neurofuzzy techniques can be applied in control theory to design a fuzzy controller for linear and nonlinear dynamic systems modelled from a set of input/output data. Simulation analysis is carried out on a wide range of processes, including online identification of nonlinear components in a control system and a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are simulated under the Matlab/Simulink environment. The combination is also illustrated by modeling the relationship between automobile trips and demographic factors.
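The least-mean-square half of such a hybrid learning rule updates the linear consequent parameters one sample at a time. A minimal sketch identifying a linear map from input/output data (the target system, step size and data are illustrative assumptions, not the paper's benchmark processes):

```python
import random

def lms_identify(xs, ys, n_taps=2, mu=0.05, epochs=50):
    """Least-mean-square updates: for each sample, nudge the weights
    along the instantaneous error gradient err * x."""
    w = [0.0] * n_taps
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            yhat = sum(wi * xi for wi, xi in zip(w, x))
            err = y - yhat
            w = [wi + mu * err * xi for wi, xi in zip(w, x)]
    return w

# Identify the toy linear system y = 2*x1 - 0.5*x2 from samples.
random.seed(0)
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
ys = [2.0 * a - 0.5 * b for a, b in xs]
w = lms_identify(xs, ys)
```

In the full neurofuzzy scheme this LMS step would estimate each rule's Sugeno consequent parameters, while gradient descent tunes the nonlinear membership-function parameters.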