Abstract: In this paper, the implementation of a rule-based
intuitive reasoner is presented. The implementation included two
parts: the rule induction module and the intuitive reasoner. A large
weather database was acquired as the data source. Twelve weather
variables from those data were chosen as the "target variables"
whose values were predicted by the intuitive reasoner. A "complex"
situation was simulated by making only subsets of the data available
to the rule induction module. As a result, the rules induced were
based on incomplete information with variable levels of certainty.
The certainty level was modeled by a metric called "Strength of
Belief", which was assigned to each rule or datum as ancillary
information about the confidence in its accuracy. Two techniques
were employed to induce rules from the data subsets: decision trees
for the discrete target variables and multi-polynomial regression for
the continuous ones. The intuitive reasoner was tested
for its ability to use the induced rules to predict the classes of the
discrete target variables and the values of the continuous target
variables. The intuitive reasoner implemented two types of
reasoning: fast and broad, where, by analogy to human thought, the
former corresponds to fast decision making and the latter to deeper
contemplation. For reference, a weather data analysis approach
which had been applied on similar tasks was adopted to analyze the
complete database and create predictive models for the same 12
target variables. The values predicted by the intuitive reasoner and
the reference approach were compared with actual data. The intuitive
reasoner reached near-100% accuracy for two continuous target
variables. For the discrete target variables, the intuitive reasoner
predicted at least 70% as accurately as the reference reasoner. Since
the intuitive reasoner operated on rules derived from only about 10%
of the total data, it demonstrated potential advantages over
conventional methods in dealing with sparse data sets.
Abstract: General requirements for knowledge representation in
the form of logic rules, applicable to design and control of industrial
processes, are formulated. Characteristic behavior of decision trees
(DTs) and rough sets theory (RST) in rules extraction from recorded
data is discussed and illustrated with simple examples. The
significance of the models' drawbacks was evaluated using simulated
and industrial data sets. It is concluded that the performance of DTs
may be considerably poorer than that of RST in several important
aspects, particularly when not only a general characterization of a
problem is required but also detailed, precise rules are needed,
tailored to the actual, specific problems to be solved.
Abstract: Steel surface defect detection is essentially a pattern
recognition problem. Support Vector Machines (SVMs) are known as
among the most suitable classifiers for this application. In this
paper, we introduce a more accurate classification method by using
SVMs as our final classifier of the inspection system. In this scheme,
the multiclass classification task is performed using the
"one-against-one" method, and different kernels are utilized for each
pair of classes in the multiclass classification of the different defects.
In the proposed system, a decision tree is employed in the first
stage for two-class classification of the steel surfaces into "defect" and
"non-defect", in order to decrease the time complexity. Based on
the experimental results, generated from over one thousand images,
the proposed multiclass classification scheme is more accurate than
the conventional methods, and the overall system yields sufficient
performance to meet the requirements of steel manufacturing.
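The pairwise voting scheme described above can be sketched as follows. This is a minimal, library-free illustration: the simple one-feature threshold classifiers are hypothetical stand-ins for the per-pair SVMs with different kernels, and all data and names are invented for the example; only the one-against-one voting machinery itself is what the abstract describes.

```python
from itertools import combinations
from collections import Counter

def make_pair_classifier(xs, ys, a, b):
    """Hypothetical stand-in for a per-pair SVM: a 1-D threshold rule
    separating classes a and b at the midpoint of their feature means."""
    mean_a = sum(x for x, y in zip(xs, ys) if y == a) / ys.count(a)
    mean_b = sum(x for x, y in zip(xs, ys) if y == b) / ys.count(b)
    thresh = (mean_a + mean_b) / 2
    lo, hi = (a, b) if mean_a < mean_b else (b, a)
    return lambda x: lo if x < thresh else hi

def train_one_vs_one(xs, ys):
    """Train one binary classifier per unordered pair of classes."""
    models = {}
    for a, b in combinations(sorted(set(ys)), 2):
        pair = [(x, y) for x, y in zip(xs, ys) if y in (a, b)]
        px, py = [x for x, _ in pair], [y for _, y in pair]
        models[(a, b)] = make_pair_classifier(px, py, a, b)
    return models

def predict_one_vs_one(models, x):
    # Each pairwise classifier casts one vote; the majority class wins.
    votes = Counter(clf(x) for clf in models.values())
    return votes.most_common(1)[0][0]

# Invented toy defect data: one feature, three defect classes.
xs = [0.1, 0.2, 0.3, 1.0, 1.1, 1.2, 2.0, 2.1, 2.2]
ys = ["scratch", "scratch", "scratch", "pit", "pit", "pit",
      "scale", "scale", "scale"]
models = train_one_vs_one(xs, ys)
print(predict_one_vs_one(models, 1.05))  # → pit
```

With k classes this trains k(k-1)/2 classifiers, which is what allows a different kernel to be tuned for each pair.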
Abstract: Recently, information security has become a key issue
in information technology, as computer systems are exposed to an
increasing number of security threats. A
variety of intrusion detection systems (IDS) have been employed for
protecting computers and networks from malicious network-based or
host-based attacks, using approaches ranging from traditional
statistical methods to new data mining techniques, over the last
decades. However, today's commercially available intrusion detection
systems are signature-based and not capable of detecting unknown
attacks. In this paper, we present a new learning algorithm for an
anomaly-based network intrusion detection system that uses a decision
tree algorithm to distinguish attacks from normal behavior and
identify different types of
intrusions. Experimental results on the KDD99 benchmark network
intrusion detection dataset demonstrate that the proposed learning
algorithm achieved a 98% detection rate (DR) in comparison with
other existing methods.
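The detection rate reported above can be computed as sketched below, assuming the common definition of DR as the fraction of actual attack records that the system flags as attacks. The label names loosely follow KDD99 attack categories but the data here are invented for illustration.

```python
def detection_rate(y_true, y_pred, attack_labels):
    """DR = attacks flagged as attacks / total actual attacks."""
    attacks = [(t, p) for t, p in zip(y_true, y_pred) if t in attack_labels]
    if not attacks:
        return 0.0
    detected = sum(1 for _, p in attacks if p in attack_labels)
    return detected / len(attacks)

# Invented toy predictions over six connection records.
y_true = ["normal", "dos", "probe", "dos", "normal", "r2l"]
y_pred = ["normal", "dos", "probe", "normal", "normal", "r2l"]
print(detection_rate(y_true, y_pred, {"dos", "probe", "r2l", "u2r"}))  # → 0.75
```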
Abstract: This paper presents a methodology based on machine
learning approaches for a short-term rain forecasting system. Decision
Tree, Artificial Neural Network (ANN), and Support Vector Machine
(SVM) were applied to develop classification and prediction models
for rainfall forecasts. The goals of this presentation are to
demonstrate (1) how feature selection can be used to identify the
relationships between rainfall occurrences and other weather
conditions and (2) what models can be developed and deployed for
predicting accurate rainfall estimates to support the decisions to
launch the cloud seeding operations in the northeastern part of
Thailand. Datasets were collected during 2004-2006 from the
Chalermprakiat Royal Rain Making Research Center at Hua Hin,
Prachuap Khiri khan, the Chalermprakiat Royal Rain Making
Research Center at Pimai, Nakhon Ratchasima and Thai
Meteorological Department (TMD). A total of 179 records with 57
features were merged and matched by unique date. There are three
main parts in this work. Firstly, a decision tree induction algorithm
(C4.5) was used to classify the rain status into either rain or no-rain.
The overall accuracy of the classification tree reaches 94.41% with the
five-fold cross validation. The C4.5 algorithm was also used to
classify the rain amount into three classes: no-rain (0-0.1 mm),
few-rain (0.1-10 mm), and moderate-rain (>10 mm), and the overall
accuracy of the classification tree reaches 62.57%. Secondly, an ANN
was applied to predict the rainfall amount and the root mean square
error (RMSE) was used to measure the training and testing errors of
the ANN. It is found that the ANN yields a lower RMSE at 0.171 for
daily rainfall estimates, when compared to next-day and next-2-day
estimation. Thirdly, the ANN and SVM techniques were also used to
classify the rain amount into three classes as no-rain, few-rain, and
moderate-rain as above. The results achieved 68.15% and 69.10%
overall accuracy in same-day prediction for the ANN and SVM
models, respectively. The obtained results illustrate the comparison
of the predictive power of different methods for rainfall estimation.
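The RMSE used above to score the rainfall estimates is the standard formula; a minimal sketch with invented rainfall values:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Invented daily rainfall amounts (mm) and model estimates.
actual = [0.0, 2.5, 10.2, 0.1]
predicted = [0.1, 2.3, 10.4, 0.0]
print(round(rmse(actual, predicted), 3))  # → 0.158
```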
Abstract: Many supervised induction algorithms require discrete
data, even though real data often come in a mix of discrete
and continuous formats. Quality discretization of continuous
attributes is an important problem that affects the speed,
accuracy, and understandability of the induction models. Usually,
discretization and other types of statistical processes are applied
to subsets of the population as the entire population is practically
inaccessible. For this reason we argue that the discretization
performed on a sample of the population is only an estimate of
the entire population. Most of the existing discretization methods
partition the attribute range into two or several intervals using
a single or a set of cut points. In this paper, we introduce a
technique by using resampling (such as bootstrap) to generate
a set of candidate discretization points and thus improve the
discretization quality by providing a better estimate of
the entire population. Thus, the goal of this paper is to observe
whether the resampling technique can lead to better discretization
points, which opens up a new paradigm for the construction of
soft decision trees.
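The resampling idea can be sketched as follows: each bootstrap resample yields one candidate cut point, and aggregating the candidates gives a more stable estimate of the population cut than a single-sample split would. The class-conditional-midpoint splitting criterion below is a simple stand-in for illustration, not the paper's method.

```python
import random
import statistics

def bootstrap_cut_points(values, labels, n_resamples=200, seed=42):
    """Estimate a binary cut point for a continuous attribute by
    bootstrap resampling: one candidate cut per resample, aggregated."""
    rng = random.Random(seed)
    data = list(zip(values, labels))
    candidates = []
    for _ in range(n_resamples):
        sample = [rng.choice(data) for _ in data]  # sample with replacement
        c0 = [v for v, y in sample if y == 0]
        c1 = [v for v, y in sample if y == 1]
        if c0 and c1:  # skip degenerate resamples missing a class
            candidates.append((statistics.mean(c0) + statistics.mean(c1)) / 2)
    return statistics.mean(candidates), candidates

# Invented attribute with two well-separated classes.
values = [1.0, 1.2, 1.4, 1.6, 3.0, 3.2, 3.4, 3.6]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
cut, cands = bootstrap_cut_points(values, labels)
print(round(cut, 2))  # close to the true midpoint 2.3
```

The spread of `cands` also indicates how uncertain the cut point is, which is the information a soft decision tree could exploit.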
Abstract: In this paper a combined feature selection method is
proposed which takes advantage of sample domain filtering,
resampling and feature subset evaluation methods to reduce
dimensions of huge datasets and select reliable features. This method
utilizes both feature space and sample domain to improve the process
of feature selection and uses a combination of Chi squared with
Consistency attribute evaluation methods to seek reliable features.
This method consists of two phases. The first phase filters and
resamples the sample domain and the second phase adopts a hybrid
procedure to find the optimal feature space by applying Chi squared,
Consistency subset evaluation methods and genetic search.
Experiments on various sized datasets from UCI Repository of
Machine Learning databases show that the performance of five
classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best First
Decision Tree and JRIP) improves simultaneously and the
classification error for these classifiers decreases considerably. The
experiments also show that this method outperforms other feature
selection methods.
Abstract: Recommender systems are usually regarded as an
important marketing tool in e-commerce. They use important
information about users to facilitate accurate recommendation. The
information includes user context such as location, time and interest
for personalization of mobile users. We can easily collect information
about location and time because mobile devices communicate with the
base station of the service provider. However, information about user
interest cannot be easily collected because user interest cannot be
captured automatically without the user's approval. User interest is
usually represented as a need. In this study, we classify needs into two
types according to prior research. This study investigates the
usefulness of data mining techniques for classifying user need type for
recommendation systems. We employ several data mining techniques
including artificial neural networks, decision trees, case-based
reasoning, and multivariate discriminant analysis. Experimental
results show that the CHAID algorithm outperforms other models for
classifying user need type. This study performs the McNemar test to
examine the statistical significance of the differences of classification
results. The results of the McNemar test also show that CHAID performs
better than the other models with statistical significance.
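The McNemar test compares two classifiers using only the cases on which they disagree. A minimal sketch with the usual continuity-corrected chi-square statistic follows; the disagreement counts are invented for illustration, not taken from the study.

```python
def mcnemar(b, c):
    """McNemar chi-square statistic with continuity correction.

    b: cases model A classified correctly and model B incorrectly
    c: cases model B classified correctly and model A incorrectly
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical disagreement counts between CHAID and a competing model.
stat = mcnemar(b=40, c=18)
print(round(stat, 3))  # → 7.603, above the 3.841 chi-square(1 df) cutoff at alpha=0.05
```

Because the statistic (7.603) exceeds the 5% critical value of the chi-square distribution with one degree of freedom, the hypothetical difference would be statistically significant.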
Abstract: This paper investigates the issue of building decision
trees from data with imprecise class values where imprecision is
encoded in the form of possibility distributions. The Information
Affinity similarity measure is introduced into the well-known gain
ratio criterion in order to assess the homogeneity of a set of
possibility distributions representing instances' classes belonging to
a given training partition. For the experimental study, we proposed an
information-affinity-based performance criterion, which we used to
show the performance of the approach on well-known benchmarks.
Abstract: Data Mining aims at discovering knowledge out of
data and presenting it in a form that is easily comprehensible to
humans. One useful application in Egypt is cancer management,
especially the management of Acute Lymphoblastic Leukemia (ALL),
which is the most common type of cancer in children.
This paper discusses the process of designing a prototype that can
help in the management of childhood ALL, which has a great
significance in the health care field. Besides, it has a social impact
on decreasing the rate of infection in children in Egypt. It also
provides valuable information about the distribution and
segmentation of ALL in Egypt, which may be linked to the possible
risk factors.
Undirected Knowledge Discovery is used since, in the case of this
research project, there is no target field as the data provided is
mainly subjective. This is done in order to quantify the subjective
variables. Therefore, the computer will be asked to identify
significant patterns in the provided medical data about ALL. This
may be achieved through collecting the data necessary for the
system, determining the data mining technique to be used for the
system, and choosing the most suitable implementation tool for the
domain.
The research makes use of a data mining tool, Clementine, to apply
the decision tree technique. We feed it with data extracted from
real-life cases taken from specialized Cancer Institutes. Relevant
medical cases details such as patient medical history and diagnosis
are analyzed, classified, and clustered in order to improve the disease
management.
Abstract: The γ-turns play important roles in protein folding and
molecular recognition. The prediction and analysis of γ-turn types are
important for both protein structure predictions and better
understanding the characteristics of different γ-turn types. This study
proposed a physicochemical property-based decision tree (PPDT)
method to interpretably predict γ-turn types. In addition to the good
prediction performance of PPDT, three simple, human-interpretable
IF-THEN rules are extracted from the decision tree
constructed by PPDT. The identified informative physicochemical
properties and concise rules provide a simple way for discriminating
and understanding γ-turn types.
Abstract: Leo Breiman's Random Forests (RF) is a recent
development in tree-based classifiers and has quickly proven to be one of
the most important algorithms in the machine learning literature. It
has shown robust and improved classification results on standard
data sets. Ensemble learning algorithms such as AdaBoost and
Bagging have been in active research and shown improvements in
classification results for several benchmarking data sets with mainly
decision trees as their base classifiers. In this paper we experiment
with applying these meta-learning techniques to random forests. We
examine the behavior of ensembles of random forests on standard
data sets from the UCI repository. We compare the
original random forest algorithm with their ensemble counterparts
and discuss the results.
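Bagging, one of the meta-learning techniques mentioned above, can be sketched as follows. To keep the sketch self-contained, the base learner is a trivial one-feature threshold "stump" standing in for a random forest; only the resample-then-vote ensemble machinery is what the abstract describes, and all data are invented.

```python
import random
from collections import Counter

def train_stump(data):
    """Fit a one-feature threshold classifier (stand-in base learner)."""
    c0 = [x for x, y in data if y == 0]
    c1 = [x for x, y in data if y == 1]
    thresh = (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2
    flip = sum(c0) / len(c0) > sum(c1) / len(c1)
    return lambda x: int((x > thresh) != flip)

def bagging(data, n_models=25, seed=0):
    """Train each base model on a bootstrap resample of the data."""
    rng = random.Random(seed)
    models = []
    while len(models) < n_models:
        sample = [rng.choice(data) for _ in data]
        if len({y for _, y in sample}) == 2:  # need both classes to fit
            models.append(train_stump(sample))
    return models

def predict(ensemble, x):
    # Majority vote over the bagged base models.
    return Counter(m(x) for m in ensemble).most_common(1)[0][0]

data = [(0.5, 0), (0.8, 0), (1.1, 0), (2.6, 1), (2.9, 1), (3.3, 1)]
ensemble = bagging(data)
print(predict(ensemble, 3.0), predict(ensemble, 0.6))  # → 1 0
```

Substituting a random forest for `train_stump` yields the "ensemble of random forests" setup the paper experiments with.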
Abstract: The belief decision tree (BDT) approach is a decision
tree in an uncertain environment where the uncertainty is represented
through the Transferable Belief Model (TBM), one interpretation
of the belief function theory. The uncertainty can appear either in
the actual class of training objects or attribute values of objects to
classify. In this paper, we develop a post-pruning method of belief
decision trees in order to reduce size and improve classification
accuracy on unseen cases. The pruning of decision trees has received
considerable attention in the area of machine learning.
Abstract: We propose a technique to identify road traffic
congestion levels from the velocity of mobile sensors with high accuracy
and consistency with motorists' judgments. The data collection utilized
a GPS device, a webcam, and an opinion survey. Human perceptions
were used to rate the traffic congestion levels into three levels: light,
heavy, and jam. Then the ratings and velocity were fed into a
decision tree learning model (J48). We successfully extracted vehicle
movement patterns to feed into the learning model using a sliding
windows technique. The parameters capturing the vehicle movement
patterns and the window size were heuristically optimized. The
model achieved accuracy as high as 99.68%. By implementing the
model on the existing traffic report systems, the reports will cover
comprehensive areas. The proposed method can be applied to any
part of the world.
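The sliding-windows feature extraction can be sketched as follows. The window size, step, feature set, and the 5 km/h stop threshold here are all hypothetical choices for illustration, since the paper states that its parameters were tuned heuristically.

```python
import statistics

def sliding_window_features(velocities, window=5, step=1):
    """Summarize each window of consecutive GPS velocity readings with
    simple statistics, producing feature rows for a tree learner like J48."""
    rows = []
    for i in range(0, len(velocities) - window + 1, step):
        w = velocities[i:i + window]
        rows.append({
            "mean_v": statistics.mean(w),
            "min_v": min(w),
            "stdev_v": statistics.pstdev(w),
            "stops": sum(1 for v in w if v < 5),  # readings below 5 km/h
        })
    return rows

# Invented km/h readings from a GPS trace passing through a jam.
velocities = [42, 38, 7, 3, 2, 4, 15, 30]
features = sliding_window_features(velocities, window=4)
print(len(features), features[0]["stops"])  # → 5 1
```

Each feature row would then be paired with the human congestion rating for that time span before training.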
Abstract: Random Forests are a powerful classification technique, consisting of a collection of decision trees. One useful feature of Random Forests is the ability to determine the importance of each variable in predicting the outcome. This is done by permuting each variable and computing the change in prediction accuracy before and after the permutation. This variable importance calculation is similar to a one-factor-at-a-time experiment and is therefore inefficient. In this paper, we use a regular fractional factorial design to determine which variables to permute. Based on the results of the trials in the experiment, we calculate the individual importance of the variables with improved precision over the standard method. The method is illustrated with a study of student attrition at Monash University.
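The standard one-factor-at-a-time permutation importance that the paper improves upon can be sketched as follows; the model and data below are invented for illustration (a rule that looks only at feature 0, with feature 1 as noise).

```python
import random

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=1):
    """One-factor-at-a-time importance: shuffle one column and measure
    the drop in accuracy relative to the unpermuted data."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(base - accuracy(model, Xp, y))
    return importances

# Hypothetical model: only feature 0 matters, feature 1 is noise.
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 0.9], [0.2, 0.1], [0.9, 0.4], [0.8, 0.7], [0.3, 0.2], [0.7, 0.6]]
y = [0, 0, 1, 1, 0, 1]
imp = permutation_importance(model, X, y, n_features=2)
print(imp[1])  # permuting the noise feature never changes predictions → 0.0
```

Because each trial permutes exactly one variable, p variables cost p separate evaluations; the paper's fractional factorial design permutes several variables per trial to estimate the same importances from fewer, more informative runs.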
Abstract: This paper discusses the designing of knowledge
integration of clinical information extracted from distributed medical
ontologies in order to improve a machine learning-based multi-label
coding assignment system. The proposed approach is
implemented using a decision tree technique of machine learning
on university hospital data for patients with Coronary Heart
Disease (CHD). The preliminary results obtained show a satisfactory
finding that the use of medical ontologies improves the overall
system performance.
Abstract: Feature selection is gaining importance due to its contribution to saving classification cost in terms of time and computational load. One of the ways to find essential features is via the decision tree, which acts as an intermediate feature-space inducer for choosing essential features. In decision tree-based feature selection, some studies use the decision tree as a feature ranker with a direct threshold measure, while others retain the decision tree but utilize a pruning condition that acts as a threshold mechanism to choose features. This paper proposes a threshold measure using the Manhattan Hierarchical Cluster distance to be utilized in feature ranking in order to choose relevant features as part of the feature selection process. The result is promising, and this method can be improved in the future by including test cases with a higher number of attributes.
Abstract: The aim of this paper is to identify the most suitable
model for churn prediction based on three different techniques. The
paper identifies the variables that affect churn with reference to
customer complaints data and provides a comparative analysis of
neural networks, regression trees and regression in their capabilities
of predicting customer churn.
Abstract: In this paper, we present a new learning algorithm for
anomaly-based network intrusion detection using an improved
self-adaptive naïve Bayesian tree (NBTree), which induces a hybrid of
decision tree and naïve Bayesian classifier. The proposed approach
balances the detection rates across different attack types and keeps
the false positives at an acceptable level in intrusion detection. On
a complex, dynamic, and large intrusion detection dataset, the
detection accuracy of the naïve Bayesian classifier does not scale up
as well as that of a decision tree. It has been successfully
demonstrated in other problem domains that the naïve Bayesian tree
improves classification rates on large datasets. In a naïve Bayesian
tree, nodes contain and split attributes as in regular decision trees,
but the leaves contain naïve Bayesian
classifiers. The experimental results on KDD99 benchmark network
intrusion detection dataset demonstrate that this new approach scales
up the detection rates for different attack types and reduces false
positives in network intrusion detection.
Abstract: Recently, the issue of machine condition monitoring
and fault diagnosis as part of a maintenance system has attracted
global attention due to the potential advantages to be gained from reduced
maintenance costs, improved productivity and increased machine
availability. The aim of this work is to investigate the effectiveness
of a new fault diagnosis method based on power spectral density
(PSD) of vibration signals in combination with decision trees and
fuzzy inference system (FIS). To this end, a series of studies was
conducted on an external gear hydraulic pump. After a test under
normal condition, a number of different machine defect conditions
were introduced for three working levels of pump speed (1000, 1500,
and 2000 rpm), corresponding to (i) Journal-bearing with inner face
wear (BIFW), (ii) Gear with tooth face wear (GTFW), and (iii)
Journal-bearing with inner face wear plus Gear with tooth face wear
(B&GW). The features of PSD values of vibration signal were
extracted using descriptive statistical parameters. The J48 algorithm
was used as a feature selection procedure to select pertinent features
from the data set. The output of the J48 algorithm was employed to produce the
crisp if-then rule and membership function sets. The structure of FIS
classifier was then defined based on the crisp sets. In order to
evaluate the proposed PSD-J48-FIS model, the data sets obtained
from vibration signals of the pump were used. Results showed that
the total classification accuracies for the 1000, 1500, and 2000 rpm
conditions were 96.42%, 100%, and 96.42%, respectively. The results
indicate that the combined PSD-J48-FIS model has the potential for
fault diagnosis of hydraulic pumps.
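The extraction of descriptive statistical parameters from PSD values can be sketched as follows; the particular statistics chosen here (mean, standard deviation, extrema, skewness) are a hypothetical example set, and the PSD values are invented.

```python
import statistics

def psd_features(psd_values):
    """Summarize PSD values of a vibration signal with descriptive
    statistical parameters (a hypothetical feature set for illustration)."""
    m = statistics.mean(psd_values)
    sd = statistics.pstdev(psd_values)
    return {
        "mean": m,
        "stdev": sd,
        "min": min(psd_values),
        "max": max(psd_values),
        # Population skewness: third standardized moment.
        "skewness": (sum((v - m) ** 3 for v in psd_values)
                     / (len(psd_values) * sd ** 3)) if sd else 0.0,
    }

# Invented PSD magnitudes for one vibration record.
psd = [0.12, 0.18, 0.25, 0.40, 0.95, 0.33, 0.21, 0.15]
feats = psd_features(psd)
print(feats["max"], round(feats["mean"], 4))
```

Rows of such features, one per vibration record, would then be fed to J48 for feature selection and rule generation as described above.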