Abstract: Counting cell colonies is a long and laborious process that depends on the judgment and skill of the operator, and that judgment can vary with fatigue. Moreover, because the activity is time-consuming, it can limit the number of dishes usable in each experiment. For these reasons, an automatic cell colony counting system is needed. This article introduces a new automatic counting system based on processing digital images of cell colonies grown on Petri dishes. The system relies mainly on region-growing algorithms to recognize the regions of interest (ROIs) in the image and on a Sanger neural network to characterize those regions. The best final classification is provided by a Feed-Forward Neural Network (FF-NN) and compared with the K-Nearest Neighbour (K-NN) classifier and a Linear Discriminant Function (LDF). Preliminary results are presented.
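The Sanger network mentioned above performs a PCA-like characterization of the regions. A minimal sketch of Sanger's learning rule (the generalized Hebbian algorithm) in pure Python, run on illustrative 2-D data; the data, learning rate and epoch count are assumptions for the example, not values from the paper:

```python
def sanger_step(W, x, lr):
    """One update of Sanger's rule (generalized Hebbian algorithm):
    dW[i][j] = lr * y[i] * (x[j] - sum_{k<=i} W[k][j] * y[k])."""
    y = [sum(wi[j] * x[j] for j in range(len(x))) for wi in W]
    for i in range(len(W)):
        for j in range(len(x)):
            recon = sum(W[k][j] * y[k] for k in range(i + 1))
            W[i][j] += lr * y[i] * (x[j] - recon)
    return W

# Illustrative 2-D data with most variance along the first axis.
data = [(1.0, 0.1), (-1.0, -0.1), (0.8, -0.05), (-0.9, 0.06)]
W = [[0.3, 0.7]]  # one output unit: learns the first principal direction
for _ in range(400):
    for x in data:
        sanger_step(W, x, 0.05)
```

For a single output unit the rule reduces to Oja's rule, and the weight vector converges to the unit-norm first principal direction of the data.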
Abstract: One of the main problems in a steel strip manufacturing line is the breakage of any of the welds made between the steel coils that are joined to produce the continuous strip being processed. A weld breakage stops the manufacturing line for several hours: the damage caused by the breakage must be repaired, and after the repair the line must go through a restart process before production can resume. To minimize this problem, a human operator must inspect each weld visually and manually in order to avoid its breakage during the manufacturing process. The work presented in this paper is based on Bayesian decision theory and presents an approach to detect defective steel strip welds in real time. The approach quantifies the tradeoffs between the various classification decisions using probabilities and the costs that accompany such decisions.
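The tradeoff quantification this abstract describes is the classical Bayes minimum-risk rule. A minimal sketch, assuming a hypothetical cost matrix (not the paper's) in which accepting a defective weld, which risks a line breakage, is far costlier than a false alarm:

```python
def min_risk_action(posteriors, cost):
    """Pick the action a_i minimising the conditional risk
    R(a_i | x) = sum_j cost[i][j] * P(C_j | x)."""
    risks = [sum(c * p for c, p in zip(row, posteriors)) for row in cost]
    return min(range(len(risks)), key=risks.__getitem__)

# Hypothetical costs, for illustration only:
#            P(sound|x)  P(defective|x)
COST = [
    [0.0, 10.0],  # action 0: accept the weld
    [1.0,  0.0],  # action 1: reject the weld
]
```

For example, `min_risk_action([0.7, 0.3], COST)` returns action 1 (reject): even though the weld is probably sound, the expected cost of accepting (3.0) exceeds that of rejecting (0.7).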
Abstract: An Intrusion Detection System (IDS) plays a significant role in network security. It detects and identifies intrusion behavior or intrusion attempts in a computer system by monitoring and analyzing network packets in real time. In recent years, intelligent algorithms applied in intrusion detection systems have attracted increasing attention with the rapid growth of network security concerns. An IDS deals with a huge amount of data containing irrelevant and redundant features, which cause slow training and testing, higher resource consumption and poor detection rates. Since the amount of audit data that an IDS needs to examine is very large even for a small network, classification by hand is impossible. Hence, the primary objective of this review is to survey the techniques, applied prior to the classification process, that suit IDS data.
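Among the pre-classification techniques such a review typically covers is the filtering of redundant features. A minimal sketch of a correlation-based redundancy filter; the 0.95 threshold and the greedy keep-first strategy are illustrative choices, not taken from any specific IDS method:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def drop_redundant(columns, threshold=0.95):
    """Keep a feature column only if it is not highly correlated
    with a column that was already kept; returns kept indices."""
    kept = []
    for i, col in enumerate(columns):
        if all(abs(pearson(col, columns[j])) < threshold for j in kept):
            kept.append(i)
    return kept
```

Removing near-duplicate features this way is one cheap answer to the slow-training and resource-consumption problems the review highlights.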
Abstract: The protection of parallel transmission lines has been a challenging task due to mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying different patterns of the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying the fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed fault diagnosis scheme is able to classify all the faults on the parallel transmission line rapidly and correctly.
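The time-frequency stage above can be illustrated with a single-level Haar decomposition, whose detail band responds to abrupt transients. The Haar basis and the step signal here are illustrative stand-ins for the mother wavelet and fault signals actually used in the paper:

```python
def haar_level(signal):
    """Single-level Haar DWT: pairwise sums (approximation) and
    pairwise differences (detail), both scaled by 1/sqrt(2)."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# Illustrative "fault" signal: flat, then an abrupt step.
sig = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0]
a, d = haar_level(sig)  # the detail band spikes only at the step
```

The detail coefficients are zero on the flat segments and spike only at the step, which is how wavelet analysis localizes a fault-generated transient in time.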
Abstract: This paper presents a new version of the SVM mixture algorithm initially proposed by Kwok for classification and regression problems. For both cases, a slight modification of the mixture model leads to a standard SVM training problem and to the existence of an exact solution, and allows the direct use of well-known decomposition and working-set selection algorithms. Only the regression case is considered in this paper, but classification has been addressed in a very similar way. The method has been successfully applied to engine pollutant emission modeling.
Abstract: Due to a high unemployment rate among local people
and a high reliance on expatriate workers, the governments in the
Gulf Co-operation Council (GCC) countries have been implementing
programmes of localisation (replacing foreign workers with GCC
nationals). These programmes have been successful in the public
sector but much less so in the private sector. However, there are now
insufficient jobs for locals in the public sector and the onus to provide
employment has fallen on the private sector. This paper concerns a study, still in progress (certain elements are complete but not the whole study), investigating the effective implementation of localisation policies in four- and five-star hotels in the Kingdom of Saudi Arabia (KSA) and the United Arab Emirates (UAE). The purpose of the paper is to identify the research gap, to present the need for the research, and to explain how the research was conducted.
Studies of localisation in the GCC countries are under-represented
in scholarly literature. Currently, the hotel sectors in KSA and UAE
play an important part in the countries’ economies. However, the
total proportion of Saudis working in the hotel sector in KSA is
slightly under 8%, and in the UAE, the hotel sector remains highly
reliant on expatriates. There is therefore a need for research on
strategies to enhance the implementation of the localisation policies
in general and in the hotel sector in particular.
Further, despite the importance of the hotel sector to their
economies, there remains a dearth of research into the
implementation of localisation policies in this sector. Indeed, as far as
the researchers are aware, there is no study examining localisation in
the hotel sector in KSA, and few in the UAE. This represents a
considerable research gap.
Regarding how the research was carried out, a multiple case study
strategy was used. The four- and five-star hotel sector in KSA is one
of the cases, while the four- and five-star hotel sector in the UAE is
the other case. Four- and five-star hotels in KSA and the UAE were
chosen as these countries have the longest established localisation
policies of all the GCC states and there are more hotels of these
classifications in these countries than in any of the other Gulf
countries. A literature review was carried out to underpin the
research. The empirical data were gathered in three phases. In order
to gain a pre-understanding of the issues pertaining to the research
context, Phase I involved eight unstructured interviews with officials
from the Saudi Commission for Tourism and Antiquities (three
interviewees); the Saudi Human Resources Development Fund (one);
the Abu Dhabi Tourism and Culture Authority (three); and the Abu
Dhabi Development Fund (one).
In Phase II, a questionnaire was administered to 24 managers and
24 employees in four- and five-star hotels in each country to obtain
their beliefs, attitudes, opinions, preferences and practices concerning
localisation.
Unstructured interviews were carried out in Phase III with six
managers in each country in order to allow them to express opinions
that may not have been explored in sufficient depth in the
questionnaire. The interviews in Phases I and III were analysed using thematic analysis, and SPSS will be used to analyse the questionnaire data.
It is recommended that future research be undertaken on a larger
scale, with a larger sample taken from all over KSA and the UAE
rather than from only four cities (i.e., Riyadh and Jeddah in KSA and
Abu Dhabi and Sharjah in the UAE), as was the case in this research.
Abstract: Automatic currency note recognition invariably depends on the characteristics of a particular country's currency notes, and the extraction of features directly affects the recognition ability. Sri Lanka has not previously been involved in any research or implementation of this kind. The proposed system, “SLCRec”, offers a solution focused on minimizing the false rejection of notes. Sri Lankan currency notes undergo severe changes in image quality during usage. Hence, a special linear transformation function is adopted to wipe out noise patterns from the backgrounds without affecting the notes' characteristic images, and to restore the images of interest. The transformation maps the original gray-scale range into a smaller range of 0 to 125. Applying edge detection after the transformation provided better robustness to noise and a fair representation of edges for new as well as old, damaged notes. A three-layer back-propagation neural network is presented with the number of edges detected in row order of the notes, and classification is performed over four classes of interest: the 100, 500, 1000 and 2000 rupee notes. The experiments showed good classification results and proved that the proposed methodology is capable of separating the classes properly under varying image conditions.
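The gray-scale compression step above maps the original range into 0-125. A minimal per-pixel sketch; the clipping bounds are an assumption, since the abstract does not give the exact transformation:

```python
def compress_gray(pixel, lo=0, hi=255, new_max=125):
    """Linearly map a gray level from [lo, hi] into [0, new_max],
    clipping out-of-range inputs first (clipping is assumed here)."""
    pixel = min(max(pixel, lo), hi)
    return round((pixel - lo) * new_max / (hi - lo))
```

Compressing the dynamic range this way pushes faint background noise toward a narrow band of low values, so the subsequent edge detection responds mainly to the notes' characteristic images.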
Abstract: This paper presents an approach to early breast cancer diagnosis employing a combination of artificial neural networks (ANN) and multiwavelet-packet-based subband image decomposition. Since microcalcifications correspond to high-frequency components of the image spectrum, their detection is achieved by decomposing the mammograms into different frequency subbands and reconstructing the mammograms from the subbands containing only high frequencies. For this approach we employed different types of multiwavelet packets and used the result as the input to a neural network for classification. The proposed methodology is tested using the Nijmegen and Mammographic Image Analysis Society (MIAS) mammographic databases and images collected from local hospitals. Results are presented as receiver operating characteristic (ROC) performance and are quantified by the area under the ROC curve.
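The area under the ROC curve used above equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal Mann-Whitney-style computation, shown with hypothetical detector scores:

```python
def auc(scores_pos, scores_neg):
    """AUC = P(score of a positive > score of a negative),
    counting ties as 1/2 (the Mann-Whitney U formulation)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A perfect detector scores every positive above every negative (AUC 1.0); a random one averages 0.5.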
Abstract: It has often been said that the strength of any country resides in the strength of its industrial sector, and progress in industrial society has been accomplished through the creation of new technologies. Developments have been facilitated by the increasing availability of advanced manufacturing technology (AMT); at the same time, the implementation of AMT requires careful planning at all levels of the organization to ensure that it achieves the intended goals. The justification and implementation of AMT involve decisions that are crucial for practitioners, bearing on the survival of businesses in today's uncertain manufacturing world. This paper assists industrial managers in considering all the criteria important to successful AMT implementation when purchasing new technology. It also classifies the tangible benefits of a technology, which are evaluated by addressing both cost and time dimensions, and the intangible benefits, which are evaluated by addressing technological, strategic, social and human issues, in order to identify and create awareness of the essential elements in the AMT implementation process and the actions necessary before implementing AMT.
Abstract: Globalization, and the increasingly tight competition among companies that comes with it, has increased the importance of making well-timed decisions. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. At the same time, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Applying well-organized tools that bring past experience to bear on new cases therefore helps in making proper managerial decisions. Case-based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbour (K-NN) is employed to provide suggestions, adopted for a given product in the middle-of-life phase, for better decision making. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A department-store dataset, covering various products collected over two years, is used, and a k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with a classical case-based reasoning algorithm that has no special process for feature selection, with the CBR-PCA algorithm based on a filter approach to feature selection, and with an artificial neural network. The results indicate that the predictive performance of the model, compared with the two CBR algorithms, is more effective in this specific case.
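The retrieval-and-weighting core of such a CBR model can be sketched as distance-weighted K-NN over past cases. The feature vectors, labels and k below are illustrative, not the store's actual data:

```python
def knn_weighted_vote(cases, query, k=3):
    """cases: list of (feature_vector, label) pairs. Retrieve the k
    nearest past cases and weight each vote by inverse distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(cases, key=lambda c: dist(c[0], query))[:k]
    votes = {}
    for vec, label in nearest:
        votes[label] = votes.get(label, 0.0) + 1.0 / (dist(vec, query) + 1e-9)
    return max(votes, key=votes.get)
```

In a full CBR cycle this retrieval step would be followed by adapting the retrieved solutions to the new case; the paper additionally selects which features enter `dist` via a genetic-algorithm wrapper.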
Abstract: A predictive clustering hybrid regression (pCHR) approach was developed and evaluated using a dataset from an H2-producing, sucrose-based bioreactor operated for 15 months. The aim was to model and predict the H2-production rate using the available information about the envirome and metabolome of the bioprocess. Self-organizing maps (SOM) and a Sammon map were used to visualize the dataset and to identify the main metabolic patterns and clusters in the bioprocess data. Three metabolic clusters were detected: acetate coupled with other metabolites, butyrate only, and transition phases. The developed pCHR model combines the principles of k-means clustering, kNN classification and regression techniques. The model performed well in modeling and predicting the H2-production rate, with mean square error values of 0.0014 and 0.0032, respectively.
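The cluster-then-regress combination behind pCHR can be sketched in one dimension: k-means partitions the operating data, each cluster gets a local predictive model, and a new sample is routed to its nearest centroid's model. Here the local model is just the cluster's mean target value, a deliberate simplification of the paper's regression stage, and all numbers are illustrative:

```python
def kmeans_1d(xs, centroids, iters=20):
    """Plain k-means on scalars."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for x in xs:
            i = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            groups[i].append(x)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids

def fit_predict(xs, ys, centroids, query):
    """Route `query` to its nearest centroid and predict with that
    cluster's local model (here: the cluster's mean target value)."""
    def nearest(x):
        return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        i = nearest(x)
        sums[i] = sums.get(i, 0.0) + y
        counts[i] = counts.get(i, 0) + 1
    i = nearest(query)
    return sums[i] / counts[i]

# Illustrative data: two operating regimes with different production rates.
xs = [0.1, 0.2, 0.3, 5.0, 5.1, 5.2]
ys = [1.0, 1.1, 0.9, 3.0, 3.2, 3.1]
cents = kmeans_1d(xs, [0.0, 4.0])
```

A query near the second regime is answered by that regime's local model only, which is the essence of predictive clustering.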
Abstract: Network security attacks are violations of the information security policy that have received much attention from the computational intelligence community in recent decades. Data mining has become a very useful technique for detecting network intrusions by extracting useful knowledge from large volumes of network data or logs. The naïve Bayesian classifier is one of the most popular data mining algorithms for classification, providing an optimal way to predict the class of an unknown example. It has been shown, however, that a single set of probabilities derived from the data is not sufficient to achieve a good classification rate. In this paper, we propose a new learning algorithm for mining network logs to detect network intrusions with a naïve Bayesian classifier: it first clusters the network logs into several groups based on the similarity of the logs, and then calculates the prior and conditional probabilities for each group. To classify a new log, the algorithm determines which cluster the log belongs to and then uses that cluster's probability set to classify it. We tested the performance of the proposed algorithm on the KDD99 benchmark network intrusion detection dataset, and the experimental results showed that it improves detection rates and reduces false positives for different types of network intrusions.
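The per-group probability sets can be sketched as follows: a categorical naïve Bayes trainer with Laplace smoothing, of which the paper's scheme would build one instance per log cluster. The toy logs and labels below are illustrative, not KDD99 records:

```python
def train_nb(rows, labels, alpha=1.0):
    """Categorical naive Bayes with Laplace smoothing: returns the
    priors P(c) and conditionals P(feature_i = v | c)."""
    classes = set(labels)
    priors = {c: labels.count(c) / len(labels) for c in classes}
    cond = {}
    for c in classes:
        subset = [r for r, l in zip(rows, labels) if l == c]
        for i in range(len(rows[0])):
            values = {r[i] for r in rows}
            for v in values:
                num = sum(1 for r in subset if r[i] == v) + alpha
                cond[(c, i, v)] = num / (len(subset) + alpha * len(values))
    return priors, cond

def classify_nb(priors, cond, row):
    """Pick the class maximising P(c) * prod_i P(row[i] | c)."""
    def score(c):
        s = priors[c]
        for i, v in enumerate(row):
            s *= cond.get((c, i, v), 1e-9)
        return s
    return max(priors, key=score)
```

In the clustered scheme, a new log would first be matched to its cluster and then scored with that cluster's `(priors, cond)` tables rather than with a single global table.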
Abstract: Classifying biomedical literature is a difficult and challenging task, especially when a large number of biomedical articles must be organized into a hierarchical structure. In this paper, we present an approach for classifying a collection of biomedical text abstracts downloaded from the Medline database with the help of ontology alignment. To accomplish our goal, we construct two types of hierarchies, the OHSUMED disease hierarchy and the Medline abstract disease hierarchies, from the OHSUMED dataset and the Medline abstracts, respectively. Then, we enrich the OHSUMED disease hierarchy before adapting it to the ontology alignment process for finding probable concepts or categories. Subsequently, we compute the cosine similarity between the vectors of the probable concepts (in the “enriched” OHSUMED disease hierarchy) and the vectors in the Medline abstract disease hierarchies. Finally, we assign a category to each new Medline abstract based on the similarity score. The experimental results show that the performance of our proposed approach for hierarchical classification is slightly better than that of multi-class flat classification.
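The similarity score above is the standard cosine measure between term-weight vectors; a minimal sketch with hypothetical vectors:

```python
def cosine(u, v):
    """cos(u, v) = (u . v) / (|u| |v|); 0.0 for a zero vector."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0
```

Identical term-weight vectors score 1.0, orthogonal ones (no shared terms) score 0.0, and the new abstract is assigned to the category whose concept vector scores highest.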
Abstract: This paper describes the process of recognizing and classifying brain images as normal or abnormal based on PSO-SVM. Image classification is becoming increasingly important for the medical diagnosis process. In the medical area, especially for diagnosis, the abnormality of the patient is classified, which plays a great role in helping doctors diagnose the patient according to the severity of the disease. In the case of DICOM images, optimal recognition and early detection of diseases are very difficult. Our work focuses on the recognition and classification of DICOM images based on a collective approach to digital image processing. For optimal recognition and classification, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and a Support Vector Machine (SVM) are used. The collective PSO-SVM approach gives high approximation capability and much faster convergence.
Abstract: In this paper we present a system for classifying videos by their frequency spectra. Many videos contain activities with repeating movements; sports videos, home improvement videos and videos showing mechanical motion are some examples. The motion in such videos usually repeats with a certain main frequency and several side frequencies. Transforming the repeating motion to the frequency domain via the FFT reveals these frequencies, and the average amplitudes of frequency intervals can be regarded as features of the cyclic motion. Hence, determining these features can help to classify videos with repeating movements. In this paper we explain how to compute frequency spectra for video clips and how to use them for classification. Our approach uses a series of image moments as a function, which is then transformed into the frequency domain.
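The spectrum-to-features step can be sketched with a naive DFT over a moment series; a real implementation would use an FFT, and the 16-sample tone and band edges below are illustrative assumptions:

```python
import cmath
import math

def dft_amplitudes(signal):
    """Naive DFT magnitudes |X[k]| / N for k = 0 .. N//2."""
    n = len(signal)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(signal))) / n
            for k in range(n // 2 + 1)]

def band_features(amps, edges):
    """Average amplitude over each frequency interval [edges[i], edges[i+1])."""
    return [sum(amps[lo:hi]) / (hi - lo) for lo, hi in zip(edges, edges[1:])]

# Illustrative image-moment series: a pure oscillation at bin 4 of 16 samples,
# standing in for a repeating movement with one main frequency.
moments = [math.cos(2 * math.pi * 4 * t / 16) for t in range(16)]
amps = dft_amplitudes(moments)
feats = band_features(amps, [0, 3, 6, 9])
```

The band containing the main motion frequency dominates the feature vector, which is exactly what lets a classifier separate different kinds of cyclic motion.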
Abstract: Monitoring tool flank wear without affecting throughput is considered a prudent method in production technology: the examination has to be done without disturbing the machining process. In this paper we propose a novel method for determining tool flank wear by observing the sound signals emitted during the turning process. The workpiece materials used were steel and aluminum, and the cutting insert was carbide. Two different cutting speeds were used; the feed rate and cutting depth were held constant while the flank wear varied. The sound emitted during turning by a fresh tool (0 mm flank wear), a slightly worn tool (0.2-0.25 mm flank wear) and a severely worn tool (0.4 mm and above flank wear) was recorded separately using a highly sensitive microphone. Singular Value Decomposition (SVD) was applied to these sound signals to extract the characteristic sound components. The results showed that an increase in tool flank wear correlates with an increase in the values of the SVD features produced from the sound signals for both materials. Hence it can be concluded that monitoring tool flank wear during turning using SVD features of the emitted sound signal, with Fuzzy C-means classification, is a promising and relatively simple method.
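The SVD features above track how the dominant singular values of a signal matrix grow with wear. The largest one can be sketched with power iteration on AᵀA; a library SVD would be used in practice, and the matrix here is illustrative:

```python
def top_singular_value(A, iters=100):
    """Largest singular value of A via power iteration on A^T A."""
    rows, cols = len(A), len(A[0])
    v = [1.0] * cols
    for _ in range(iters):
        av = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        w = [sum(A[i][j] * av[i] for i in range(rows)) for j in range(cols)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    av = [sum(A[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    return sum(x * x for x in av) ** 0.5
```

Applied to a matrix built from windowed sound frames, a value like this would serve as one wear-sensitive feature fed to the Fuzzy C-means stage.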
Abstract: The prediction of fault-prone modules provides one way to support software quality engineering. Clustering is used to determine the intrinsic grouping in a set of unlabeled data, and among the various clustering techniques available in the literature, the K-Means approach is the most widely used. This paper introduces a K-Means-based clustering approach for finding the fault proneness of object-oriented software systems. The contribution of this paper is that it uses metric values of the JEdit open-source software to generate rules for categorizing software modules as faulty or non-faulty, and thereafter performs an empirical validation. The results are measured in terms of prediction accuracy, probability of detection and probability of false alarms.
Abstract: Field Association (FA) terms are a limited set of discriminating terms that provide the knowledge needed to identify document fields; they are effective in document classification, similar-file retrieval and passage retrieval. The problem, however, lies in the lack of an effective method for automatically extracting relevant Arabic FA terms to build a comprehensive dictionary. Moreover, all previous studies are based on FA terms in English and Japanese, and the extension of FA terms to another language such as Arabic would definitely strengthen further research. This paper presents a new method for extracting Arabic FA terms from domain-specific corpora using part-of-speech (POS) pattern rules and corpora comparison. An experimental evaluation was carried out for 14 different fields using 251 MB of domain-specific corpora obtained from Arabic Wikipedia dumps and Alhyah news, selecting an average of 2,825 FA terms (single and compound) per field. From the experimental results, recall and precision are 84% and 79%, respectively. The method therefore selects a high number of relevant Arabic FA terms at high precision and recall.
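The 84% recall and 79% precision quoted above follow the standard set-overlap definitions; a minimal sketch over hypothetical extracted and gold term sets:

```python
def precision_recall(extracted, relevant):
    """precision = |extracted ∩ relevant| / |extracted|,
    recall    = |extracted ∩ relevant| / |relevant|."""
    hits = len(set(extracted) & set(relevant))
    return hits / len(extracted), hits / len(relevant)
```

Precision penalizes extracting spurious terms; recall penalizes missing relevant ones, so a good FA-term extractor must keep both high at once.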
Abstract: The evaluation of question answering systems is a major research area that needs much attention. Before the rise of domain-oriented question answering systems based on natural language understanding and reasoning, evaluation was never a problem, as information retrieval-based metrics were readily available for use. However, as question answering systems became more domain-specific, evaluation became a real issue. This is especially true when understanding and reasoning are required to cater for a wider variety of questions and, at the same time, to achieve higher-quality responses. The research in this paper discusses the inappropriateness of the existing measures for evaluating response quality and, in a later part, calls for new standard measures and brings forward the related considerations. As a short-term solution for evaluating the response quality of heterogeneous systems, and to demonstrate the challenges in evaluating systems of different natures, this research presents a black-box approach using observation, a classification scheme and a scoring mechanism to assess and rank three example systems (i.e., AnswerBus, START and NaLURI).
Abstract: In this paper, we present an innovative scheme for blindly extracting message bits from an image distorted by an attack. A Support Vector Machine (SVM) is used to nonlinearly classify the bits of the embedded message. Traditionally, a hard decoder is used under the assumption that the underlying model of the Discrete Cosine Transform (DCT) coefficients does not appreciably change. In the case of an attack, however, the distribution of the image coefficients is heavily altered: the distributions of the sufficient statistics at the receiving end corresponding to the antipodal signals overlap, and a simple hard decoder fails to classify them properly. We therefore treat the retrieval of the antipodal message signal as a binary classification problem, and machine learning techniques such as the SVM are used to retrieve the message when a certain specific class of attacks is most probable. To validate the SVM-based decoding scheme, we took Gaussian noise as a test case and generated a dataset using 125 images and 25 different keys. A polynomial-kernel SVM achieved 100 percent accuracy on the test data.