Abstract: In this paper, three different approaches to person
verification and identification, i.e. by means of fingerprint, face and
voice recognition, are studied. Face recognition uses parts-based
representation methods and a manifold learning approach. The
assessment criterion is recognition accuracy. The techniques under
investigation are: a) Local Non-negative Matrix Factorization
(LNMF); b) Independent Components Analysis (ICA); c) NMF with
sparse constraints (NMFsc); d) Locality Preserving Projections
(Laplacianfaces). Fingerprint recognition was approached by classical
minutiae (characteristic ridge patterns) matching, through image
segmentation using a structural approach and a neural network as the
decision block. As to voice/speaker recognition, mel-cepstral and
delta-delta mel-cepstral analyses were used as the main methods, in
order to construct a supervised, speaker-dependent voice recognition
system. The final decision (e.g. "accept/reject" for a verification
task) is taken by a majority voting technique applied to the
three biometrics. The preliminary results, obtained for medium-sized
databases of fingerprints, faces and voice recordings, indicate the
feasibility of our approach, with an overall recognition accuracy (about
92%) that would permit the use of our system in a future complex
biometric card.
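As an illustration of the fusion step, here is a minimal sketch of the two-out-of-three majority vote over the subsystem verdicts; the per-modality matchers are placeholders, not the authors' implementations:

```python
# Minimal sketch of the decision-fusion step: each biometric subsystem emits
# an accept/reject verdict and the final decision is the majority vote.
def majority_vote(decisions):
    """decisions: list of booleans (True = accept) from the three biometrics."""
    return sum(decisions) >= 2  # at least 2 of 3 subsystems must accept

# Hypothetical per-modality verdicts for one verification attempt:
fingerprint_ok, face_ok, voice_ok = True, False, True
print("accept" if majority_vote([fingerprint_ok, face_ok, voice_ok]) else "reject")
```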
Abstract: In this paper, a self-starting two-step continuous block
hybrid formula (CBHF) with four off-step points is developed using
collocation and interpolation procedures. The CBHF is then used to
produce multiple numerical integrators of uniform order, which are
assembled into a single block matrix equation. These equations are
applied simultaneously to provide the approximate solution of stiff
ordinary differential equations. The order of accuracy and the stability
of the block method are discussed, and its accuracy is established
numerically.
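For context, a continuous two-step hybrid formula of this kind typically takes the following general collocation/interpolation shape; this is a sketch of the form only, and the paper's specific coefficients and off-step placements are not reproduced here:

```latex
% General shape of a two-step continuous hybrid formula with off-step points,
% as commonly derived by collocation and interpolation:
\[
  y(x) \;=\; \sum_{j} \alpha_j(x)\, y_{n+j}
        \;+\; h \sum_{i} \beta_{i}(x)\, f\bigl(x_{n+i},\, y_{n+i}\bigr),
\]
% where j runs over the grid points $\{0,1,2\}$ and i additionally over the
% four off-step points $c_1,\dots,c_4 \in (0,2)$ (left unspecified here).
% Evaluating $y(x)$ at the block points yields the multiple integrators that
% are assembled into the single block matrix equation.
```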
Abstract: High Strength Concrete (HSC) is defined as concrete
that meets a special combination of performance and uniformity
requirements that cannot be achieved routinely using conventional
constituents and normal mixing, placing, and curing procedures. It is
a highly complex material, which makes modeling its behavior a very
difficult task. This paper aims to show the possible applicability of
Neural Networks (NNs) to predicting the slump of High Strength
Concrete (HSC). Neural Network models are constructed, trained and
tested using the available test data of 349 different concrete mix
designs of High Strength Concrete (HSC) gathered from a particular
Ready Mix Concrete (RMC) batching plant. The most versatile
Neural Network model is selected to predict the slump in concrete.
The data used in the Neural Network models are arranged in a format
of eight input parameters that cover the Cement, Fly Ash, Sand,
Coarse Aggregate (10 mm), Coarse Aggregate (20 mm), Water,
Super-Plasticizer and Water/Binder ratio. Furthermore, to test the
accuracy for predicting slump in concrete, the final selected model is
further used on the data of 40 different concrete mix designs of
High Strength Concrete (HSC) taken from another batching plant.
The results are compared on the basis of error function (or
performance function).
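A minimal sketch of this kind of model, using scikit-learn's MLPRegressor with the eight listed inputs; the data, network size and error function here are placeholders, not the paper's settings:

```python
# A minimal sketch (not the authors' exact network) of training an MLP to
# predict slump from the eight mix-design inputs listed above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

FEATURES = ["cement", "fly_ash", "sand", "coarse_agg_10mm",
            "coarse_agg_20mm", "water", "superplasticizer", "w_b_ratio"]

rng = np.random.default_rng(0)
X_train = rng.random((349, len(FEATURES)))   # placeholder for the RMC plant data
y_train = rng.random(349) * 200              # placeholder slump values (mm)

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# The abstract's held-out test: 40 mixes from another plant, scored by an
# error (performance) function such as mean squared error.
X_test = rng.random((40, len(FEATURES)))
y_test = rng.random(40) * 200
mse = np.mean((model.predict(scaler.transform(X_test)) - y_test) ** 2)
print(f"MSE on the 40 held-out mixes: {mse:.2f}")
```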
Abstract: 98% of the energy needed in Taiwan is
imported. The prices of petroleum and electricity have been
increasing. In addition, facility capacity, amount of electricity
generation, amount of electricity consumption and number of Taiwan
Power Company customers have continued to increase. For these
reasons, energy conservation has become an important topic. In the
past, linear regression was used to establish power consumption
models for chillers. In this study, grey prediction is used to evaluate
the power consumption of a chiller so as to lower the total power
consumption at peak-load (so that the relevant power providers do not
need to keep on increasing their power generation capacity and facility
capacity).
In grey prediction, only a few numerical values (at least four)
are needed to establish the power consumption
models for chillers. If the part-load ratio (PLR), the temperatures of supply chilled-water
and return chilled-water, and the temperatures of supply cooling-water
and return cooling-water are taken into consideration, quite accurate
results (with the accuracy close to 99% for short-term predictions)
may be obtained. Through such methods, we can predict whether the
power consumption at peak-load will exceed the contract power
capacity signed between the customer and Taiwan Power
Company. If the peak-load consumption exceeds the contracted
capacity, the temperature of the supply chilled-water may be adjusted
so as to reduce the PLR and hence lower the power consumption.
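For reference, here is a sketch of the standard GM(1,1) grey model underlying such predictions, which indeed requires only four or more data points; the readings below are placeholders, not measured chiller data:

```python
# Minimal sketch of the standard GM(1,1) grey model applied to a power series.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 (>= 4 points) and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    n = len(x0)
    def x1_hat(k):                             # fitted accumulated series
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    ks = np.arange(n, n + steps)
    return x1_hat(ks) - x1_hat(ks - 1)         # de-accumulate to forecasts

power_kw = [412.0, 405.0, 398.0, 391.0]        # placeholder chiller readings
print(gm11_forecast(power_kw, steps=2))
```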
Abstract: The two-dimensional gel electrophoresis method
(2-DE) is widely used in Proteomics to separate thousands of proteins
in a sample. By comparing the protein expression levels of proteins in
a normal sample with those in a diseased one, it is possible to identify
a meaningful set of marker proteins for the targeted disease. The major
shortcomings of this approach involve inherent noise and irregular
geometric distortions of spots observed in 2-DE images. Various
experimental conditions can be the major causes of these problems. In
the protein analysis of samples, these problems eventually lead to
incorrect conclusions. In order to minimize the influence of these
problems, this paper proposes a partition-based pair extension method
that performs spot-matching on a set of gel images multiple times and
retains the more reliable mappings, which improves the
accuracy of gel image analysis. The improved accuracy of the
proposed method is analyzed through various experiments on real
2-DE images of human liver tissues.
Abstract: Using Dynamic Bayesian Networks (DBNs) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring interactions among genes. Averaging over a collection of models is preferable to relying on a single high-scoring model when predicting a network. In this paper, two kinds of model searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but no comparative study has established which approach is better for DBN models. Different types of experiments have been carried out to provide a benchmark test for these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while offering comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through experiments on synthetic data as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size and system complexity.
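Here is a sketch of the two search strategies being compared, over a network structure encoded as an edge set; `score` stands in for a DBN scoring metric (e.g. BDe or BIC) and is an assumption, and the temperature parameter gestures at the simulated annealing variant:

```python
# Sketch of the two search strategies compared above. `score(edges)` stands
# in for a DBN scoring metric; its implementation and the data are assumptions.
import math, random

def propose(edges, n_genes):
    """Flip one random edge (add if absent, delete if present)."""
    e = (random.randrange(n_genes), random.randrange(n_genes))
    new = set(edges)
    new.symmetric_difference_update({e})
    return new

def mcmc_search(score, n_genes, iters=10000, temperature=1.0):
    """Metropolis-Hastings over structures; averaging the visited models,
    rather than keeping only the single best, is the point made above."""
    edges, s = set(), score(set())
    visited = []
    for _ in range(iters):
        cand = propose(edges, n_genes)
        s_cand = score(cand)
        if math.log(random.random()) < (s_cand - s) / temperature:
            edges, s = cand, s_cand            # accept (scores are log-scores)
        visited.append(frozenset(edges))
    return visited                              # sample of structures to average

def greedy_search_with_restarts(score, n_genes, restarts=10, iters=1000):
    best, best_s = set(), score(set())
    for _ in range(restarts):
        edges, s = set(), score(set())
        for _ in range(iters):
            cand = propose(edges, n_genes)
            if score(cand) > s:                 # hill-climbing: only improve
                edges, s = cand, score(cand)
        if s > best_s:
            best, best_s = edges, s
    return best                                 # single high-scoring model
```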
Abstract: Microarray data profile gene expression on a whole-genome
scale and therefore provide a good way to study associations
between gene expression and the occurrence or progression of cancer.
More and more researchers have realized that microarray data are helpful
for predicting cancer samples. However, the dimension of the gene
expression data is much larger than the sample size, which makes this
task very difficult. Therefore, identifying the significant genes
associated with cancer has become an urgent, popular and hard research
topic. Many feature selection algorithms proposed in
the past focus on improving cancer predictive accuracy at the
expense of ignoring the correlations between features. In this
work, a novel framework (named SGS) is presented for stable gene
selection and efficient cancer prediction. The proposed framework
first performs a clustering algorithm to find gene groups in which
the genes have high correlation coefficients, then
selects the significant genes within each group with the Bayesian Lasso and
important gene groups with the group Lasso, and finally builds a prediction
model on the shrunken gene space with an efficient classification
algorithm (such as SVM, 1NN or regression). Experimental
results on real-world data show that the proposed framework often
outperforms existing feature selection and prediction methods
such as SAM, IG and Lasso-type prediction models.
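A heavily simplified sketch of the SGS pipeline using scikit-learn and SciPy; plain Lasso stands in for the paper's Bayesian Lasso and group Lasso, and the data are random placeholders:

```python
# Simplified sketch of SGS: cluster correlated genes, select genes within
# groups, then classify on the shrunken gene space.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import Lasso
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))        # 60 samples, 500 genes (placeholder)
y = rng.integers(0, 2, 60)                # cancer / normal labels (placeholder)

# 1) Group genes by correlation (1 - |r| as the distance between genes).
dist = 1.0 - np.abs(np.corrcoef(X.T))
groups = fcluster(linkage(dist[np.triu_indices(500, k=1)], method="average"),
                  t=50, criterion="maxclust")

# 2) Within each group, keep genes with nonzero sparse-regression weights.
selected = []
for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    coef = Lasso(alpha=0.01).fit(X[:, idx], y).coef_
    selected.extend(idx[coef != 0])

# 3) Classify on the shrunken gene space (SVM, as one of the cited options).
clf = SVC(kernel="linear").fit(X[:, selected], y)
print(f"{len(selected)} genes selected; train acc = {clf.score(X[:, selected], y):.2f}")
```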
Abstract: Names are important in many societies, even in technologically oriented ones which use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes such as the identification of people and genealogical research. On the other hand, name variation can be a major problem in the identification of and search for people, e.g. for web search or security reasons. Name matching presumes a priori that a recorded name written in one alphabet reflects either the phonetic identity of two samples or some transcription error in copying a previously recorded name. We add to this the assumption that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to further reduce mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matching based on different types of algorithms, e.g. composite and hybrid methods, and allows us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
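A small illustration of such phonetic and fuzzy matching using the `jellyfish` library; NYSIIS is one of the codes named above, while Soundex and edit distance stand in here for families (Phonex, LIG2) not available in this library:

```python
# Illustration of phonetic and fuzzy name matching with the jellyfish library.
import jellyfish

pairs = [("Smith", "Smyth"), ("Catherine", "Kathryn"), ("Hansen", "Hanson")]

for a, b in pairs:
    phonetic_match = jellyfish.nysiis(a) == jellyfish.nysiis(b)
    edit_dist = jellyfish.levenshtein_distance(a, b)
    print(f"{a:>9} ~ {b:<8} NYSIIS match: {phonetic_match}  "
          f"Soundex: {jellyfish.soundex(a)}/{jellyfish.soundex(b)}  "
          f"edit distance: {edit_dist}")
```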
Abstract: Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the small available sample size. Therefore, it is important to determine a set of relevant genes that classify the data well. Traditional gene selection methods often select the top-ranked genes according to their discriminatory power, but these genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a Genetic Algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes providing maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those found in the literature in terms of both classification accuracy and the number of genes selected.
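A sketch of the wrapper stage, with a GA fitness that trades off multiclass-SVM accuracy against the number of genes kept; the 0.9/0.1 weighting, the GA operators and the data are assumptions, not the paper's values:

```python
# Sketch of a GA wrapper whose fitness rewards high multiclass-SVM accuracy
# while penalizing the number of genes kept.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 200))          # placeholder: 80 samples, 200 genes
y = rng.integers(0, 4, 80)                  # placeholder multiclass labels

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    return 0.9 * acc + 0.1 * (1 - mask.sum() / mask.size)   # fewer genes = better

def ga(pop_size=20, generations=30, p_mut=0.01):
    pop = rng.random((pop_size, X.shape[1])) < 0.05          # sparse start
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]    # truncation selection
        cut = rng.integers(1, X.shape[1], pop_size)
        kids = np.array([np.concatenate([parents[rng.integers(len(parents))][:c],
                                         parents[rng.integers(len(parents))][c:]])
                         for c in cut])                       # one-point crossover
        pop = kids ^ (rng.random(kids.shape) < p_mut)         # bit-flip mutation
    return max(pop, key=fitness)

best = ga()
print(f"{best.sum()} genes selected, fitness = {fitness(best):.3f}")
```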
Abstract: The features of the HIV genome vary over a wide range
because the genome is highly heterogeneous. Hence, the infection ability of the virus changes depending on the chemokine receptor it uses:
R5 and X4 HIV viruses use the CCR5 and CXCR4 coreceptors respectively, while R5X4 viruses can utilize both coreceptors. Recently, bioinformatics studies have attempted to
classify R5X4 viruses based on the coreceptor usage of the HIV genome.
The aim of this study is to develop an optimal Multilayer
Perceptron (MLP) for high classification accuracy on HIV sub-type viruses. To accomplish this, the number of units in the hidden layer
was incremented one by one, from one up to a fixed maximum. The statistical data of the R5X4, R5 and X4 viruses were preprocessed by
signal processing methods. Accessible residues of these virus sequences were extracted and modeled by an Auto-Regressive (AR)
model, because the residue profiles are long and differ in length from each other. Finally, the preprocessed dataset was used to train MLPs with various numbers of hidden units to identify R5X4
viruses. Furthermore, ROC analysis was used to determine the optimal MLP structure.
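A sketch of the feature pipeline described: variable-length residue profiles are reduced to fixed-length AR coefficients that feed an MLP; the signal values, AR order and network size are placeholders, not the paper's settings:

```python
# Sketch: variable-length numeric residue profiles -> fixed-length AR
# coefficients -> MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def ar_coefficients(signal, order=6):
    """Fit AR(order) by least squares; returns a fixed-length feature vector."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
# Placeholder "accessible residue" signals of differing lengths, one per sequence:
signals = [rng.standard_normal(rng.integers(80, 200)) for _ in range(120)]
labels = rng.integers(0, 3, 120)            # placeholder: R5 / X4 / R5X4

features = np.array([ar_coefficients(s) for s in signals])
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(features, labels)
print(f"train accuracy: {mlp.score(features, labels):.2f}")
```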
Abstract: This report aims to utilize existing and future Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing Wireless Local Area Network (MIMO-OFDM WLAN) system characteristics, such as multiple subcarriers, multiple antennas, and channel estimation, for indoor location estimation based on the Direction of Arrival (DOA) and Radio Signal Strength Indication (RSSI) methods. A hybrid of the DOA and RSSI methods is also evaluated. Experimental results show that location estimation accuracy can be increased by minimizing the multipath fading effect. This is done by using multiple subcarrier frequencies over wideband frequencies to estimate one location. The proposed methods are analyzed in both a wide indoor environment and a typical room-sized office. In the experiments, WLAN terminal locations are estimated by measuring multiple subcarriers from arrays of three dipole antennas at access points (APs). This research demonstrates highly accurate, robust, hardware-free add-on software for indoor location estimation based on a MIMO-OFDM WLAN system.
Abstract: The manufacture of large-scale precision aerospace
components using CNC requires a highly effective maintenance
strategy to ensure that the required accuracy can be achieved over
many hours of production. This paper reviews a strategy for a
maintenance management system based on Failure Mode Avoidance,
which uses advanced techniques and technologies to underpin a
predictive maintenance strategy. It is shown how condition
monitoring (CM) is important to predict potential failures in high
precision machining facilities and achieve intelligent and integrated
maintenance management. There are two distinct ways in which CM
can be applied. One is to monitor key process parameters and
observe trends which may indicate a gradual deterioration of
accuracy in the product. The other is to use CM techniques to
monitor high-status machine parameters, enabling trends to be
observed and corrected before machine failure and
downtime occur.
It is concluded that the key to developing a flexible and intelligent
maintenance framework in any precision manufacturing operation is
the ability to reliably and routinely evaluate machine tool condition
using condition monitoring techniques within a framework of Failure
Mode Avoidance.
Abstract: The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied for the casting of aluminium alloys. Good surface finish with the required tolerances and dimensional accuracy can be achieved by optimizing controllable process parameters such as solidification time, molten temperature, filling time, injection pressure and plunger velocity. Moreover, by selecting optimum process parameters, pressure die casting defects such as porosity, insufficient spread of molten material and flash are also minimized. Therefore, a pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects and the subsequent setting of the parameter levels have been accomplished using Taguchi's parameter design approach. The experiments have been performed as per the combinations of levels of the different process parameters suggested by an L18 orthogonal array. Analyses of variance have been performed for the mean and the signal-to-noise ratio to estimate the percent contribution of the different process parameters. A confidence interval has also been estimated at the 95% confidence level, and three confirmation experiments have been performed to validate the optimum levels of the different parameters. An overall 2.352% reduction in defects has been observed with the suggested optimum process parameters.
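For reference, here is the smaller-the-better signal-to-noise ratio conventionally used in Taguchi analysis when a response such as defects is to be minimized; the trial values below are hypothetical, not the paper's L18 measurements:

```python
# The smaller-the-better signal-to-noise ratio used in Taguchi analysis:
# S/N = -10 * log10( (1/n) * sum(y_i^2) ).
import math

def sn_smaller_the_better(observations):
    n = len(observations)
    return -10.0 * math.log10(sum(y * y for y in observations) / n)

defects_per_trial = [3.0, 4.0, 2.0]            # hypothetical repeated runs
print(f"S/N = {sn_smaller_the_better(defects_per_trial):.2f} dB")
```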
Abstract: In the past decade, artificial neural networks (ANNs)
have been regarded as an instrument for problem-solving and
decision-making; indeed, they have already delivered substantial
efficiency and effectiveness improvements in industries and businesses.
In this paper, Back-Propagation neural Networks (BPNs) are
modularized to demonstrate the performance of the collaborative
forecasting (CF) function of a Collaborative Planning, Forecasting and
Replenishment (CPFR®) system. CPFR maintains the balance between
sufficient product supply and the necessary customer demand in a
Supply and Demand Chain (SDC). Several classical standard BPNs are
grouped, combined and exploited for the easy implementation of
the proposed modular ANN framework based on the topology of an
SDC. Each individual BPN is applied as a modular tool to perform the
task of forecasting SKU (Stock-Keeping Unit) levels that are
managed and supervised at a POS (point of sale), a wholesaler, and a
manufacturer in an SDC. The proposed modular BPN-based CF
system is exemplified and experimentally verified using numerous
datasets from the simulated SDC. The experimental results show that a
complex CF problem can be divided into a group of simpler
sub-problems based on the individual trading partners
distributed over the SDC, and that its SKU forecasting accuracy is
satisfactory when the forecast values are compared to the original
simulated SDC data. The primary task in implementing autonomous CF
involves the study of supervised ANN learning methodology, which
aims at making "knowledgeable" decisions for the best SKU sales plan
and stock management.
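A minimal sketch of the modular idea: one small network per trading partner, each forecasting its own SKU level from lagged values; the simulated series and lag window are assumptions, not the paper's setup:

```python
# Sketch: one back-propagation network per trading partner (POS, wholesaler,
# manufacturer), each forecasting its own SKU level from lagged demand.
import numpy as np
from sklearn.neural_network import MLPRegressor

LAGS = 4
rng = np.random.default_rng(0)

def lagged(series, lags=LAGS):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

modules = {}
for partner in ["pos", "wholesaler", "manufacturer"]:
    sku_series = 100 + rng.standard_normal(200).cumsum()   # placeholder SDC data
    X, y = lagged(sku_series)
    modules[partner] = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                                    random_state=0).fit(X, y)

# Each module forecasts its next SKU level independently, mirroring the
# decomposition of one complex CF problem into simpler sub-problems:
for partner, net in modules.items():
    recent = 100 + rng.standard_normal(LAGS)               # placeholder last window
    print(partner, float(net.predict(recent.reshape(1, -1))[0]))
```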
Abstract: In this paper, a neural tree (NT) classifier having a
simple perceptron at each node is considered. A new concept for
making a balanced tree is applied in the learning algorithm of the
tree. At each node, if the perceptron's classification is inaccurate and
unbalanced, it is replaced by a new perceptron that separates
the training set in such a way that almost equal numbers of patterns
fall into each class. Moreover, each perceptron is trained only
on the classes present at its node, ignoring other
classes. Splitting nodes are introduced into the neural tree architecture
to divide the training set when the current perceptron node repeats
the same classification of the parent node. A new error function based
on the depth of the tree is introduced to reduce the computational
time for the training of a perceptron. Experiments are performed to
assess the method's efficiency, and encouraging results are obtained in
terms of accuracy and computational cost.
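A compact sketch of the basic grow-and-route structure of such a neural tree; the balancing rule, splitting nodes and depth-based error function of the paper are omitted, and a depth cap stands in for them:

```python
# Sketch of a neural tree: each node holds a perceptron trained only on the
# classes present there, and the data are split by its predictions.
import numpy as np
from sklearn.linear_model import Perceptron

class NeuralTreeNode:
    def __init__(self, depth=0, max_depth=5):
        self.depth, self.max_depth = depth, max_depth
        self.model, self.children, self.label = None, {}, None

    def fit(self, X, y):
        classes = np.unique(y)
        if len(classes) == 1 or self.depth >= self.max_depth:
            self.label = np.bincount(y).argmax()        # leaf: majority class
            return self
        self.model = Perceptron(max_iter=200).fit(X, y) # trained on local classes only
        pred = self.model.predict(X)
        for c in np.unique(pred):                       # one child per predicted class
            mask = pred == c
            self.children[c] = NeuralTreeNode(self.depth + 1, self.max_depth).fit(
                X[mask], y[mask])
        return self

    def predict_one(self, x):
        if self.label is not None:
            return self.label
        c = self.model.predict(x.reshape(1, -1))[0]
        child = self.children.get(c)
        return child.predict_one(x) if child else c

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4)); y = rng.integers(0, 3, 300)
tree = NeuralTreeNode().fit(X, y)
print(sum(tree.predict_one(x) == t for x, t in zip(X, y)) / len(y))
```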
Abstract: Wireless location refers to determining the mobile station (MS) location in a wireless cellular communication system. When few base stations (BSs) are available for location purposes, or when measurements carry large errors in non-line-of-sight (NLOS) environments, it is necessary to integrate all available heterogeneous measurements to achieve high location accuracy. This paper presents hybrid schemes that combine time of arrival (TOA) measurements at three BSs and angle of arrival (AOA) information at the serving BS to give a location estimate of the MS. The proposed schemes mitigate the NLOS effect simply by taking the weighted sum of the intersections between the three TOA circles and the AOA line, without requiring a priori information about the NLOS error. Simulation results show that the proposed methods achieve better accuracy when compared with the Taylor series algorithm (TSA) and the hybrid lines of position algorithm (HLOP).
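A geometric sketch of the core idea: intersect each TOA circle with the AOA line from the serving BS, then take a weighted sum of the intersections. The inverse-range weights and all coordinates below are assumptions; the paper's exact weighting is not reproduced here:

```python
# Sketch: weighted sum of the intersections of TOA circles with the AOA line.
import numpy as np

def circle_line_intersections(center, r, p0, direction):
    """Points where line p0 + t*direction meets circle |p - center| = r."""
    d = direction / np.linalg.norm(direction)
    f = p0 - center
    b = 2 * f.dot(d)
    disc = b * b - 4 * (f.dot(f) - r * r)
    if disc < 0:
        return []                                   # NLOS-inflated miss
    ts = [(-b - np.sqrt(disc)) / 2, (-b + np.sqrt(disc)) / 2]
    return [p0 + t * d for t in ts if t > 0]        # keep points along the bearing

bs = [np.array([0.0, 0.0]), np.array([2000.0, 0.0]), np.array([1000.0, 1800.0])]
toa_ranges = [950.0, 1400.0, 1100.0]                # placeholder TOA ranges (m)
aoa_deg = 42.0                                      # placeholder AOA at serving BS
bearing = np.array([np.cos(np.radians(aoa_deg)), np.sin(np.radians(aoa_deg))])

points, weights = [], []
for center, r in zip(bs, toa_ranges):
    for p in circle_line_intersections(center, r, bs[0], bearing):
        points.append(p)
        weights.append(1.0 / r)                     # shorter range = higher trust
estimate = np.average(points, axis=0, weights=weights)
print(f"MS estimate: {estimate}")
```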
Abstract: Although face recognition seems an easy task for
humans, automatic face recognition is much more challenging
due to variations in time, illumination and pose. In this paper, the
influence of time-lapse on visible and thermal images is examined.
Orthogonal moment invariants are used as a feature extractor to
analyze the effect of time-lapse on thermal and visible images and the
results are compared with conventional Principal Component
Analysis (PCA). A new triangle square ratio criterion is employed
instead of Euclidean distance to enhance the performance of nearest
neighbor classifier. The results of this study indicate that the ideal
feature vectors can be represented with high discrimination power
due to the global characteristic of orthogonal moment invariants.
Moreover, the effect of time-lapse is reduced, enhancing
the accuracy of face recognition considerably in comparison with
PCA. Furthermore, our experimental results based on moment
invariants and the triangle square ratio criterion show that the proposed
approach achieves a recognition rate that is, on average, 13.6% higher
than that of PCA.
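An illustrative sketch of the feature-extraction step using Zernike moments (one family of orthogonal moment invariants) via the `mahotas` library; plain Euclidean nearest neighbor stands in for the paper's triangle square ratio criterion, and the images are random placeholders:

```python
# Sketch: orthogonal moment invariants (Zernike moments) as face features,
# matched by nearest neighbor.
import mahotas
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.random((10, 64, 64))      # placeholder enrolled face images
probe = rng.random((64, 64))            # placeholder probe image

def moment_features(img, radius=32, degree=8):
    return mahotas.features.zernike_moments(img, radius, degree=degree)

gallery_feats = np.array([moment_features(im) for im in gallery])
probe_feat = moment_features(probe)

# Nearest neighbor in the moment-invariant feature space:
dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
print(f"matched gallery identity: {int(np.argmin(dists))}")
```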
Abstract: Dengue fever is prevalent in Malaysia with numerous
cases including mortality recorded over the years. Public education
on the prevention of the disease through various means has been
carried out besides the enforcement of legal means to eradicate
Aedes mosquitoes, the dengue vector breeding ground. Hence, other
means need to be explored, such as predicting the seasonal peak
period of the dengue outbreak and identifying related climate factors
contributing to the increase in the number of mosquitoes. A simulation
model can be employed for this purpose. In this study, we created a
system dynamics simulation to predict the spread of dengue
outbreaks in Hulu Langat, Selangor, Malaysia. The prototype was
developed using the STELLA 9.1.2 software. The main data inputs are
rainfall, temperature and dengue cases. Data analysis from the graphs
showed that dengue cases can be predicted accurately using the
two main variables, rainfall and temperature. However, the model
will be further tested over a longer time period to ensure its
accuracy, reliability and efficiency as a prediction tool for dengue
outbreak.
Abstract: In this study, a Multi-Layer Perceptron (MLP) with the Back-Propagation learning algorithm is used for the effective diagnosis of Parkinson's disease (PD), a challenging problem for the medical community. Typically characterized by tremor, PD occurs due to the loss of dopamine in the brain's thalamic region, which results in involuntary or oscillatory movement in the body. A feature selection algorithm is applied along with biomedical test values to diagnose Parkinson's disease. Clinical diagnosis is done mostly based on a doctor's expertise and experience, but cases of wrong diagnosis and treatment are still reported. Patients are asked to take a number of tests for diagnosis, and in many cases not all the tests contribute towards an effective diagnosis of the disease. Our work classifies the presence of Parkinson's disease with a reduced number of attributes. Originally, 22 attributes are involved in the classification. We use Information Gain to determine which attributes to keep, reducing the number of tests that need to be taken by patients. An artificial neural network is used to classify the patients' diagnoses. The 22 attributes are reduced to 16. The accuracy is 82.051% on the training data set and 83.333% on the validation data set.
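A sketch of the described pipeline with scikit-learn: rank the 22 attributes by mutual information (a close stand-in for information gain), keep the top 16, and train an MLP; the data are random placeholders, not the Parkinson's dataset:

```python
# Sketch: information-gain-style feature selection (mutual information as a
# stand-in) followed by a back-propagation MLP classifier.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((195, 22))          # placeholder: 195 voice recordings
y = rng.integers(0, 2, 195)                 # placeholder PD / healthy labels

# Keep the 16 most informative of the 22 attributes:
gain = mutual_info_classif(X, y, random_state=0)
top16 = np.argsort(gain)[-16:]

X_tr, X_val, y_tr, y_val = train_test_split(X[:, top16], y, test_size=0.3,
                                            random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"train acc: {mlp.score(X_tr, y_tr):.3f}  val acc: {mlp.score(X_val, y_val):.3f}")
```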