Abstract: The novelty of this research is the application of a new fault detection and isolation (FDI) technique for the supervision of sensor networks in transportation systems. In measurement systems, it is necessary to detect all types of faults and failures based on a predefined algorithm. Recent improvements in artificial neural networks (ANNs) have led to their use for FDI purposes. In this paper, new probabilistic neural network features for data approximation and data classification are considered for plausibility checks in temperature measurement. For this purpose, a two-phase FDI mechanism is employed for residual generation and residual evaluation.
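A minimal sketch of such a two-phase check is shown below; it is not the authors' implementation. Phase 1 approximates the expected sensor value from redundant sensors (GRNN-style kernel regression) and forms a residual; phase 2 classifies the residual with a Parzen-window, probabilistic-neural-network-style classifier. The data layout and kernel widths are assumptions.

```python
# Illustrative two-phase FDI sketch: residual generation, then residual evaluation.
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """Kernel (GRNN-style) regression used here for residual generation."""
    w = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2 * sigma ** 2))
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

def pnn_classify(R_train, labels, r, sigma=0.2):
    """Parzen-window class-conditional scores over residuals (0 = healthy, 1 = faulty)."""
    scores = []
    for c in np.unique(labels):
        d = R_train[labels == c] - r
        scores.append(np.mean(np.exp(-d ** 2 / (2 * sigma ** 2))))
    return int(np.argmax(scores))

# Residual = measured temperature minus value approximated from neighbouring sensors:
# fault_flag = pnn_classify(residual_history, residual_labels,
#                           measured - grnn_predict(X_hist, y_hist, x_now))
```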
Abstract: Imperfect knowledge cannot always be avoided. Imperfection may take several forms: uncertainty, imprecision and incompleteness. Among the methods for managing imperfect knowledge, fuzzy set-based techniques are prominent. The choice of a method to process data is linked to the choice of knowledge representation, which can be numerical, symbolic, logical or semantic, and it depends on the nature of the problem to be solved, for example decision support, which is addressed in this study. Fuzzy logic is used for its ability to manage imprecise knowledge, but it can also take advantage of the ability of neural networks to learn coefficients or functions. Such an association of methods is typical of so-called soft computing. In this study, a new method is used to manage the imprecision of collected knowledge related to an economic analysis of the construction industry in Turkey. Sudden changes in economic factors decrease the competitive strength of construction companies. A better evaluation of these changes from the viewpoint of the construction industry will have a positive influence on the decisions of companies engaged in construction.
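For orientation only, the sketch below shows the kind of fuzzy-logic machinery referred to: triangular membership functions and a single Mamdani-style rule. The indicators, breakpoints and rule are hypothetical, not the study's model.

```python
# Minimal fuzzy-logic sketch: membership functions plus one illustrative rule.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_degree(inflation_rate, exchange_volatility):
    # Rule: IF inflation is high AND exchange volatility is high THEN risk is high.
    inflation_high = tri(inflation_rate, 20.0, 40.0, 60.0)      # percent, hypothetical
    volatility_high = tri(exchange_volatility, 0.1, 0.3, 0.5)   # hypothetical scale
    return min(inflation_high, volatility_high)                 # AND via minimum

print(risk_degree(35.0, 0.25))  # partial membership in "high risk"
```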
Abstract: This paper describes a study of the geometrically nonlinear free vibration of thin circular functionally graded plates (CFGP) resting on Winkler elastic foundations. The material properties of the functionally graded composites examined here are assumed to be graded smoothly and continuously through the plate thickness according to a power law and are estimated using the rule of mixtures. The theoretical model is based on classical plate theory and the von Kármán geometric nonlinearity assumptions. A homogenization procedure (HP) is developed to reduce the problem considered here to that of isotropic homogeneous circular plates resting on a Winkler foundation. Hamilton's principle is applied and a multimode approach is derived to calculate the fundamental nonlinear frequency parameters, which are found to be in good agreement with published results. In addition, the influence of the foundation parameters on the nonlinear fundamental frequency is also analysed.
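A common power-law gradation of this kind, written here in its standard textbook form as an assumption (subscripts c and m denote the ceramic and metal constituents, h the plate thickness and n the gradient index), reads:

```latex
P(z) = \left(P_c - P_m\right)\left(\frac{z}{h} + \frac{1}{2}\right)^{n} + P_m,
\qquad -\frac{h}{2} \le z \le \frac{h}{2},
```

where P stands for a generic effective property (for example Young's modulus or density) obtained from the rule of mixtures.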
Abstract: Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation and classification of action potentials (AP) from noisy data trains. Current techniques in the field of 'unsupervised no-prior-knowledge' biosignal processing include energy operators, wavelet detection and adaptive thresholding. These tend to be biased towards larger AP waveforms, APs may be missed owing to deviations in spike shape and frequency, and correlated noise spectra can cause false detections. Such algorithms also tend to suffer from large computational expense.
A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, using a delayed copy of the AP to highlight discontinuities relative to background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems.
Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an Auto-Regressive Moving Average (ARMA) noise model has been constructed based upon the background noise of the recordings. Together with the classified APs, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNR).
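A minimal sketch of the delayed-copy idea (illustrative, not the authors' algorithm): embed the signal as pairs (x[t], x[t - tau]) and flag samples whose trajectory radius exceeds a noise-derived threshold. The delay tau and the multiplier k are assumptions.

```python
import numpy as np

def phase_space_detect(x, tau=3, k=5.0):
    """Return sample indices where the delay-embedded trajectory leaves the noise cloud."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[tau:], x[:-tau]                   # delayed-copy pairs (x[t], x[t - tau])
    radius = np.sqrt(x0 ** 2 + x1 ** 2)          # distance from the origin in phase space
    thresh = k * np.median(np.abs(x)) / 0.6745   # robust, MAD-style noise scale
    return np.nonzero(radius > thresh)[0] + tau

# Example on synthetic data: noise with one large excursion around sample 500.
sig = np.random.randn(1000) * 0.2
sig[500:505] += 3.0
print(phase_space_detect(sig)[:10])
```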
Abstract: Modern manufacturing facilities are large-scale, highly complex, and operate with a large number of variables under closed-loop control. Early and accurate fault detection and diagnosis for these plants can minimise downtime, increase the safety of plant operations, and reduce manufacturing costs. Fault detection and isolation is particularly complex in the case of faulty analog control systems. Analog control systems are not equipped with a monitoring function in which the process parameters are continually visualised. In this situation, it is very difficult to find the relationship between the importance of a fault and its consequences for product failure. In this paper we consider an approach to fault detection, and to the analysis of its effect on production quality, using adaptive centring and scaling in the pickling process of cold rolling. The fault appeared on one of the power units driving a rotary machine; this machine cannot track the reference speed given by another machine. The length of the metal loop then oscillates continuously, which affects product quality. Using a computerised data acquisition system, the main machine parameters have been monitored. The fault has been detected and isolated on the basis of an analysis of the monitored data. Normal and faulty situations have been reproduced by an artificial neural network (ANN) model implemented to simulate the normal and faulty status of the rotary machine. The correlation between the product quality, defined by an index, and the residual is used for quality classification.
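The sketch below illustrates the overall flow only (it is not the plant's actual system): an ANN reference model of the rotary-machine speed generates residuals, which are adaptively centred and scaled before being correlated with a product-quality index. Model size, the forgetting factor and the data names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def adaptive_scale(r, alpha=0.01, mu=0.0, var=1.0):
    """Exponentially weighted (adaptive) centring and scaling of a residual stream."""
    out = []
    for x in r:
        mu = (1 - alpha) * mu + alpha * x
        var = (1 - alpha) * var + alpha * (x - mu) ** 2
        out.append((x - mu) / np.sqrt(var + 1e-12))
    return np.array(out)

# model = MLPRegressor(hidden_layer_sizes=(20,)).fit(X_normal, speed_normal)
# residual = speed_measured - model.predict(X_measured)
# scaled = adaptive_scale(residual)
# print(np.corrcoef(scaled, quality_index)[0, 1])   # link residual to product quality
```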
Abstract: This study concerns an application of King Bhumibol Adulyadej's "Learn Wisely" (LW) concept in the instructional design and management process at the Faculty of Education, Suan Sunandha Rajabhat University. The concept suggests four strategies for true learning. Related literature and significant LW methods in teaching and learning are reviewed and then applied in designing a pedagogy learning module. The design was implemented in three classrooms with a total of 115 sophomore student teachers. After one semester of managing and adjusting the process by instructors and experts, using data collected from meeting minutes, assessments of learning management, and the students' satisfaction and learning achievement, it was found that the effective SSRU model of the LW instructional method comprises five steps.
Abstract: Segmentation is an important step in medical image analysis and classification for radiological evaluation or computer-aided diagnosis. Computer-aided diagnosis (CAD) of lung CT generally first segments the area of interest (the lung) and then analyses the segmented area for nodule detection in order to diagnose the disease. For a normal lung, segmentation can be performed by making use of the excellent contrast between air and the surrounding tissues. However, this approach fails when the lung is affected by high-density pathology. Dense pathologies are present in approximately a fifth of clinical scans, and for computer analysis such as the detection and quantification of abnormal areas it is vital that the entire lung region of the image is provided and that no part present in the original image is eradicated. In this paper we propose a lung segmentation technique which accurately segments the lung parenchyma from lung CT scan images. The algorithm was tested on 25 datasets of different patients received from Akron University, USA, and Aga Khan Medical University, Karachi, Pakistan.
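For context, a conventional contrast-based baseline of the kind that fails on dense pathologies is sketched below (it is not the proposed method). It assumes a 2-D CT slice in Hounsfield units and uses a simple air/tissue threshold with morphological clean-up.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import measure, morphology

def baseline_lung_mask(ct_slice, air_threshold=-400):
    """Contrast-based lung mask: threshold, keep non-border components, fill holes."""
    binary = ct_slice < air_threshold                      # air and lung are dark on CT
    binary = morphology.remove_small_objects(binary, min_size=500)
    labels = measure.label(binary)
    # Drop components touching the image border (the air surrounding the patient).
    border_labels = np.unique(np.concatenate([labels[0], labels[-1],
                                              labels[:, 0], labels[:, -1]]))
    mask = np.isin(labels, border_labels, invert=True) & binary
    return ndi.binary_fill_holes(mask)
```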
Abstract: Monitoring lightning electromagnetic pulses (sferics) and other terrestrial as well as extraterrestrial transient radiation signals is of considerable interest for practical and theoretical purposes in astro- and geophysics as well as meteorology. To manage a continuous flow of data, automation of the analysis and classification process is important. Features based on a combination of wavelet and statistical methods proved efficient for this task and serve as input to a radial basis function network that is trained to discriminate transient shapes ranging from pulse-like to wave-like. We concentrate on signals in the Very Low Frequency (VLF, 3-30 kHz) range in this paper, but the developed methods are independent of this specific choice.
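The sketch below shows a generic radial basis function network of the kind described (the wavelet/statistical feature extraction, the number of centres and the width heuristic are assumptions; X and y would come from the sferics pipeline).

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, y, n_centres=20):
    """Fit RBF centres by k-means and a linear readout by least squares."""
    km = KMeans(n_clusters=n_centres, n_init=10).fit(X)
    centres = km.cluster_centers_
    sigma = np.mean(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))
    H = np.exp(-np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2
               / (2 * sigma ** 2))                     # hidden-layer activations
    W, *_ = np.linalg.lstsq(H, y, rcond=None)          # linear readout weights
    return centres, sigma, W

def rbf_predict(X, centres, sigma, W):
    H = np.exp(-np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2
               / (2 * sigma ** 2))
    return H @ W   # continuous score from pulse-like to wave-like
```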
Abstract: Morgan's refinement calculus (MRC) is one of the well-known methods allowing the formality present in a program specification to be carried all the way to code. Object-Z (OZ), on the other hand, is an extension of Z adding support for classes and objects. There are a number of methods for obtaining code from OZ specifications, which can be categorized into refinement and animation methods. As far as we know, only one refinement method exists which refines OZ specifications into code. However, this method does not have fine-grained refinement rules and thus cannot be automated. Existing animation methods, in turn, do not present their mapping rules formally and do not support the mapping of several important constructs of OZ, such as all cases of operation expressions and most of the constructs in the global paragraph. In this paper, with the aim of providing an automatic path from OZ specifications to code, we propose an approach that maps OZ specifications into their counterparts in MRC in order to use the fine-grained refinement rules of MRC. In this way, having counterparts of our specifications in MRC, we can refine them into code automatically using MRC tools such as RED. Other advantages of our work are that the mapping rules are proposed formally, that the mapping of all important constructs of Object-Z is supported, and that dynamic instantiation of objects is considered, even though OZ itself does not cover this facility.
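For orientation only, and not taken from the paper, a canonical fine-grained MRC step of the kind being targeted is the assignment-introduction law, which turns a specification statement directly into code:

```latex
w:[\,pre,\ post\,] \;\sqsubseteq\; w := E
\quad\text{provided}\quad pre \Rightarrow post[w \backslash E],
\qquad\text{e.g.}\quad x:[\,\mathrm{true},\ x = 1\,] \;\sqsubseteq\; x := 1 .
```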
Abstract: Automatic reading of handwritten cheques is a computationally complex process that plays an important role in financial risk management. Machine vision and learning provide a viable solution to this problem. Research effort has mostly been focused on recognizing diverse pitches of cheques and demand drafts with an identical outline. However, most of these methods employ template matching to localize the pitches, and such schemes can fail when applied to the different types of outline maintained by banks. In this paper, the so-called outline problem is resolved by a cheque information tree (CIT), which generalizes the localizing method to extract active regions of entities. In addition, a weight-based density plot (WBDP) is used to isolate text entities and read complete pitches. Recognition is based on texture features using neural classifiers. The legal amount is subsequently recognized using both texture and perceptual features. A post-processing phase is invoked to detect incorrect readings by a Type-2 grammar using a Turing machine. The performance of the proposed system was evaluated using cheques and demand drafts of 22 different banks. The test data consist of a collection of 1540 leaves obtained from 10 different account holders from each bank. Results show that this approach can easily be deployed without significant design amendments.
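The sketch below illustrates the texture-features-plus-neural-classifier stage only; the actual features and network used in the paper are not specified here, so simple patch statistics and an off-the-shelf MLP stand in as placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def patch_texture_features(patch):
    """Crude texture descriptors for a grey-level image patch (placeholder set)."""
    g = patch.astype(float)
    g_rows, g_cols = np.gradient(g)            # gradients along rows and columns
    return np.array([g.mean(), g.std(),
                     np.abs(g_rows).mean(), np.abs(g_cols).mean(),
                     (g > g.mean()).mean()])   # ink-coverage ratio

# X = np.stack([patch_texture_features(p) for p in character_patches])
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
# predicted = clf.predict(np.stack([patch_texture_features(p) for p in new_patches]))
```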
Abstract: Dengue virus is transmitted from person to person through the bite of infected Aedes aegypti mosquitoes. DEN-1, DEN-2, DEN-3 and DEN-4 are the four serotypes of this virus. Infection with one of these four serotypes apparently produces permanent immunity to it, but only temporary cross-immunity to the others. The incubation periods of the dengue virus in humans and in mosquitoes are considered in this study. Dengue patients are classified into infected and infectious classes. Infectious humans can transmit the dengue virus to susceptible mosquitoes, but infected humans cannot. The transmission model of this disease is formulated. The human population is divided into susceptible, infected, infectious and recovered classes. The mosquito population is separated into susceptible, infected and infectious classes. Only infectious mosquitoes can transmit the dengue virus to susceptible humans. We analyze this model using dynamical analysis methods. The threshold condition for reducing outbreaks of this disease is discussed.
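One standard way to write a compartmental model with this structure is sketched below; the notation is assumed for illustration and is not copied from the paper. E denotes the infected (incubating, not yet infectious) class and I the infectious class; b is the biting rate, beta the transmission probabilities, alpha the reciprocals of the incubation periods, mu the death rates, r the human recovery rate and lambda_h the human birth rate, with subscripts h for humans and v for mosquitoes.

```latex
\begin{aligned}
\dot S_h &= \lambda_h N_h - \frac{b\,\beta_h}{N_h} S_h I_v - \mu_h S_h, &
\dot E_h &= \frac{b\,\beta_h}{N_h} S_h I_v - (\alpha_h + \mu_h) E_h,\\
\dot I_h &= \alpha_h E_h - (r + \mu_h) I_h, &
\dot R_h &= r I_h - \mu_h R_h,\\
\dot S_v &= \mu_v N_v - \frac{b\,\beta_v}{N_h} S_v I_h - \mu_v S_v, &
\dot E_v &= \frac{b\,\beta_v}{N_h} S_v I_h - (\alpha_v + \mu_v) E_v,\\
\dot I_v &= \alpha_v E_v - \mu_v I_v.
\end{aligned}
```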
Abstract: In this paper, a novel method for the recognition of musical instruments in polyphonic music is presented using an embedded hidden Markov model (EHMM). The EHMM is a doubly embedded HMM structure in which each state of the external HMM is an independent HMM. Classification is carried out for two different internal HMM structures, with GMMs used as likelihood estimators for the internal HMMs. The results are compared to those achieved by an artificial neural network with two hidden layers. Satisfactory classification accuracies were achieved both for solo instrument performances and for instrument combinations, which demonstrates that the new approach outperforms similar classification methods by exploiting the dynamics of the signal.
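A simplified sketch of the internal-HMM part only is given below: one GMM-HMM per instrument class, with classification by maximum log-likelihood. The external, embedded layer of the EHMM is not reproduced here, and the feature layout and model sizes are assumptions.

```python
from hmmlearn.hmm import GMMHMM

def train_models(features_per_class, n_states=4, n_mix=3):
    """features_per_class: dict class_name -> array of shape (n_frames, n_features)."""
    models = {}
    for name, X in features_per_class.items():
        m = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50)
        m.fit(X)                      # train one GMM-HMM per instrument class
        models[name] = m
    return models

def classify(models, X):
    """Return the class whose model gives the highest log-likelihood for X."""
    scores = {name: m.score(X) for name, m in models.items()}
    return max(scores, key=scores.get)
```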
Abstract: We have defined two suites of metrics which cover static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly, where complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance and Size criticalities, have been identified and quantified. The complexity and criticality metrics are combined to form a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application. They are useful for identifying super-components and for evaluating the degree of utilisation of the various components. In this paper both the static and the dynamic metrics are evaluated using Weyuker's set of properties. The result shows that the metrics provide a valid means to measure issues in component assembly. We relate our metrics suite to McCall's Quality Model and illustrate its impact on product quality and on the management of component-based product development.
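As a hedged illustration, the two complexity metrics are sketched below in the form commonly used in this line of work (the exact definitions in the paper may differ): packing density as constituents per component, and interaction density as actual over available interactions.

```python
def component_packing_density(n_constituents, n_components):
    """Constituents (e.g. lines of code, classes, sub-components) per component."""
    return n_constituents / n_components

def component_interaction_density(actual_interactions, available_interactions):
    """Fraction of the available component interactions that are actually used."""
    return actual_interactions / available_interactions

# Hypothetical example: an assembly of 12 components wrapping 480 constituents,
# where a component uses 6 of its 10 available interactions.
print(component_packing_density(480, 12))     # 40.0
print(component_interaction_density(6, 10))   # 0.6
```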
Abstract: Faced with social and health system capacity
constraints and rising and changing demand for welfare services,
governments and welfare providers are increasingly relying on
innovation to help support and enhance services. However, the
evidence reported by several studies indicates that the realization of
that potential is not an easy task. Innovations can be deemed
inherently complex to implement and operate, because many of them
involve a combination of technological and organizational renewal
within an environment featuring a diversity of stakeholders. Many
public welfare service innovations are markedly systemic in their
nature, which means that they emerge from, and must address, the
complex interplay between political, administrative, technological,
institutional and legal issues. This paper suggests that stakeholders
dealing with systemic innovation in welfare services must deal with
ambiguous and incomplete information in circumstances of
uncertainty. Employing a literature review methodology and a case
study, this paper identifies, categorizes and discusses different
aspects of the uncertainty of systemic innovation in public welfare
services, and argues that uncertainty can be classified into eight
categories: technological uncertainty, market uncertainty,
regulatory/institutional uncertainty, social/political uncertainty,
acceptance/legitimacy uncertainty, managerial uncertainty, timing
uncertainty and consequence uncertainty.
Abstract: Computer languages are usually lumped together into broad 'paradigms', leaving us in want of a finer classification of kinds of language. Theories distinguishing between 'genuine differences' in languages have been called for, and we propose that such differences can be observed through a notion of expressive mode. We outline this concept, propose how it could be operationalized, and indicate a possible context for the development of a corresponding theory. Finally, we consider a possible application in connection with the evaluation of language revision. We illustrate this with a case study, investigating possible revisions of the relational algebra in order to overcome weaknesses of the division operator in connection with universal queries.
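For reference, in its standard textbook form (not a proposal from the paper), the division operator that is problematic for universal queries can be expressed in terms of the basic relational operators as:

```latex
R \div S \;=\; \pi_{A}(R) \;-\; \pi_{A}\bigl((\pi_{A}(R) \times S) - R\bigr),
```

where A denotes the attributes of R that do not appear in S; its classic use is answering universal queries of the form "find the x related to every y in S".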
Abstract: As the Internet continues to grow at a rapid pace as
the primary medium for communications and commerce and as
telecommunication networks and systems continue to expand their
global reach, digital information has become the most popular and
important information resource and our dependence upon the
underlying cyber infrastructure has been increasing significantly.
Unfortunately, as our dependency has grown, so has the threat to the
cyber infrastructure from spammers, attackers and criminal
enterprises. In this paper, we propose a new machine learning based
network intrusion detection framework for cyber security. The
detection process of the framework consists of two stages: model
construction and intrusion detection. In the model construction stage,
a semi-supervised machine learning algorithm is applied to a
collected set of network audit data to generate a profile of normal
network behavior and in the intrusion detection stage, input network
events are analyzed and compared with the patterns gathered in the
profile, and some of them are then flagged as anomalies should these
events are sufficiently far from the expected normal behavior. The
proposed framework is particularly applicable to the situations where
there is only a small amount of labeled network training data
available, which is very typical in real world network environments.
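A conceptual sketch of the two-stage flow is shown below. The paper's specific semi-supervised algorithm is not reproduced; a one-class SVM stands in here as the "profile of normal network behavior", and the feature names are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

# Stage 1: build the normal-behaviour profile from collected network audit data.
# X_audit: numeric features per network event, e.g. duration, bytes, packet counts.
scaler = StandardScaler()
profile = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
# profile.fit(scaler.fit_transform(X_audit))

# Stage 2: score incoming events; those far from the profile are flagged.
# scores = profile.decision_function(scaler.transform(X_new))
# anomalies = np.where(scores < 0)[0]   # negative score = outside the normal region
```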
Abstract: Feature selection is gaining importance due to its contribution to reducing classification cost in terms of time and computational load. One way to search for essential features is via a decision tree, which acts as an intermediate feature-space inducer for choosing essential features. In decision-tree-based feature selection, some studies have used the decision tree as a feature ranker with a direct threshold measure, while others have retained the decision tree but used a pruning condition that acts as a threshold mechanism for choosing features. This paper proposes a threshold measure based on the Manhattan hierarchical cluster distance, to be used in feature ranking for choosing relevant features as part of the feature selection process. The results are promising, and the method can be improved in the future by including test cases with a higher number of attributes.
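A hedged sketch of the idea follows: rank features with a decision tree, then apply hierarchical clustering with the Manhattan (city-block) distance over the importance scores to separate relevant from irrelevant features. The paper's exact threshold derivation is its contribution; the two-cluster split used here is a stand-in.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from scipy.cluster.hierarchy import linkage, fcluster

def select_features(X, y):
    imp = DecisionTreeClassifier(random_state=0).fit(X, y).feature_importances_
    Z = linkage(imp.reshape(-1, 1), method="average", metric="cityblock")
    groups = fcluster(Z, t=2, criterion="maxclust")   # split importances into two clusters
    keep_group = groups[np.argmax(imp)]               # cluster containing the top feature
    return np.where(groups == keep_group)[0]          # indices of the kept features

# Example: selected = select_features(X_train, y_train); X_reduced = X_train[:, selected]
```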
Abstract: With the enormous growth of the web, users easily get lost in its rich hyper-structure. Developing user-friendly, automated tools that provide relevant information without redundant links is therefore a primary task for website owners. Most existing web mining algorithms have concentrated on finding frequent patterns while neglecting the less frequent ones that are likely to contain outlying data such as noise and irrelevant or redundant data. This paper proposes a new algorithm for mining web content by detecting redundant links in web documents using set-theoretic operations (classical mathematics) such as subset, union and intersection. The redundant links are then removed from the original web content to give the user the required information.
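A toy sketch of the set-theoretic idea is shown below (the URLs and the subset-based redundancy rule are illustrative, not the paper's exact criteria): link sets that are subsets of another document's link set are treated as redundancy candidates.

```python
doc_links = {
    "page_a": {"http://site/x", "http://site/y", "http://site/z"},
    "page_b": {"http://site/x", "http://site/y"},              # subset of page_a
    "page_c": {"http://site/z", "http://site/w"},
}

redundant = {
    page: links
    for page, links in doc_links.items()
    if any(links < other for other in doc_links.values())      # proper-subset test
}
common = set.intersection(*doc_links.values())                  # links shared by all pages
print(redundant)   # {'page_b': {...}} -- candidates for removal
print(common)      # shared links, another redundancy indicator
```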
Abstract: The weight-constrained shortest path problem (WCSPP) is one of the best-known basic problems in combinatorial optimization. Because of its importance in many areas of application, such as computer science, engineering and operations research, the WCSPP has been studied extensively. This paper mainly concentrates on reducing the total search space for finding the weight-constrained shortest path using an existing genetic algorithm (GA). For this purpose, controlled schemes for the genetic operators are adopted on a list chromosome representation. This approach gives a near-optimal solution in fewer generations than the classical GA technique. From further analysis of the matter, a new generalized schema theorem is also developed from the philosophy of Holland's theorem.
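The sketch below illustrates only how a list chromosome can be evaluated for the WCSPP (it is not the paper's controlled operator scheme): the fitness penalizes violation of the weight bound so selection steers the population back towards feasible paths. The edge data layout and penalty constant are assumptions.

```python
def path_fitness(path, cost, weight, W, penalty=1000.0):
    """path: list of node ids; cost/weight: dicts keyed by edge (u, v); W: weight bound."""
    edges = list(zip(path, path[1:]))
    if any(e not in cost for e in edges):        # broken chromosome: not a real path
        return float("inf")
    total_cost = sum(cost[e] for e in edges)
    total_weight = sum(weight[e] for e in edges)
    violation = max(0.0, total_weight - W)
    return total_cost + penalty * violation      # to be minimized by the GA

# Hypothetical edge data: cost = {(0, 1): 4, (1, 3): 2, ...}, weight likewise.
# Selection then favours cheap paths that also satisfy the weight bound W.
```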
Abstract: Classification of video sequences based on their content is a vital process for adaptation techniques. It helps decide which adaptation technique best fits the resource reduction requested by the client. In this paper we use the principal feature analysis algorithm to select a reduced subset of video features. The main idea is to select only one feature from each class of features, based on the similarities between the features within that class. Our results show that, using this feature reduction technique, the discarded source video features can be completely omitted from future classification of video sequences.
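A sketch of principal feature analysis in its standard formulation is given below (the number of retained components/clusters is an assumption): project each original feature onto the leading principal components, cluster those projections, and keep the single feature closest to each cluster centre.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def principal_feature_analysis(X, n_keep=5):
    pca = PCA(n_components=n_keep).fit(X)
    loadings = pca.components_.T                       # one row per original feature
    km = KMeans(n_clusters=n_keep, n_init=10).fit(loadings)
    selected = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(loadings[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])     # representative feature of the class
    return sorted(selected)

# Example: keep = principal_feature_analysis(video_feature_matrix); X_reduced = X[:, keep]
```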