Abstract: This paper gives an overview of how an OWL
ontology has been created to represent the template knowledge models
defined in CML that are provided by CommonKADS.
CommonKADS is a mature knowledge engineering methodology
which proposes the use of template knowledge models for knowledge
modelling. The aim of developing this ontology is to present the
template knowledge models in a knowledge representation language
that can be easily understood and shared within the knowledge
engineering community. Hence OWL is used, as it has become a
standard for ontologies and already has user-friendly tools for
viewing and editing.
Abstract: This paper proposes a framework for the development
of products comprising both hardware and software components. It
provides separation of hardware-dependent software, modifications of
the current product development process, and integration of software
modules with existing product configuration models and assembly
product structures. In order to identify the dependent software, the
framework considers product configuration modules and engineering
changes of the associated software and hardware components. In order
to support efficient integration of the two different hardware and
software development processes, a modified product development
process is proposed. The process integrates the dependent software
development into product development through the interchange of
specific product information. By using existing product data models in
Product Data Management (PDM), the framework represents software
as modules for product configurations and as software parts for the
product structure. The framework is applied to the development of a
robot system in order to show its effectiveness.
Abstract: The value of the overall oxygen transfer coefficient
(KLa), which is the best measure of oxygen transfer into water through
aeration, is obtained by a simple approach, which sufficiently
demonstrates the utility of the method in eliminating the discrepancies due
to an inaccurate assumption of the saturation dissolved oxygen
concentration. The rate of oxygen transfer depends on a number of
factors such as the intensity of turbulence, which in turn depends on the
speed of rotation, the size and number of blades, the diameter and
immersion depth of the rotor, and the size and shape of the aeration tank, as
well as on the physical, chemical, and biological characteristics of the water.
An attempt is made in this paper to correlate the overall oxygen
transfer coefficient (KLa) with the
influencing parameters mentioned above. It has been estimated that
the simulation equation developed predicts the values of KLa and
power with average standard errors of estimation of 0.0164 and
7.66 respectively and with R2 values of 0.979 and 0.989 respectively,
when compared with experimentally determined values. This model is
also compared with a model generated using
computational fluid dynamics (CFD), and the two models were
found to be in good agreement with each other.
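The reaeration relation underlying KLa estimation, dC/dt = KLa (Cs - C), can be linearized and fitted by least squares. The sketch below illustrates that standard procedure only; it is not the simulation equation developed in the paper, and the Cs, C0, and KLa values are invented for the example:

```python
import numpy as np

# Oxygen transfer model: dC/dt = KLa * (Cs - C), hence
# ln(Cs - C) = ln(Cs - C0) - KLa * t, so a line fit gives -KLa as slope.
# Synthetic dissolved-oxygen readings for illustration (assumed values).
Cs, C0, true_KLa = 9.1, 2.0, 0.25           # mg/L, mg/L, 1/min
t = np.linspace(0, 20, 21)                  # minutes
C = Cs - (Cs - C0) * np.exp(-true_KLa * t)  # ideal reaeration curve

slope, intercept = np.polyfit(t, np.log(Cs - C), 1)
KLa_est = -slope
print(round(KLa_est, 3))  # recovers 0.25 on noise-free data
```

With field data the same fit is applied to measured dissolved-oxygen values, and scatter about the line indicates how well the assumed saturation concentration Cs holds.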
Abstract: The Pulsed Compression Reactor promises to be a
compact, economical, and energy-efficient alternative to conventional
chemical reactors.
In this article, the production of synthesis gas using the Pulsed
Compression Reactor is investigated, both experimentally and
through simulations. The experiments are done by means of a
single-shot reactor, which replicates a representative single
reciprocation of the Pulsed Compression Reactor with close control
over the reactant composition, reactor temperature, and pressure and
temperature history. Simulations are done with a relatively simple
method, which uses different models for the chemistry and
thermodynamic properties of the species in the reactor. Simulation
results show very good agreement with the experimental data, and
give great insight into the reaction processes that occur within the
cycle.
Abstract: Throughput is an important measure of the performance of a production system. Analyzing and modeling production throughput is complex in today's dynamic production systems due to their uncertainties. The main reason is that uncertainties materialize when the production line faces changes in setup time, machinery breakdown, manufacturing lead time, and scrap. Besides, demand fluctuates from time to time for each product type. These uncertainties affect production performance. This paper proposes Bayesian inference for throughput modeling under five production uncertainties. The Bayesian model uses prior distributions related to previous information about the uncertainties, while likelihood distributions are associated with the observed data. The Gibbs sampling algorithm, a robust Markov chain Monte Carlo procedure, was employed to sample the unknown parameters and estimate the posterior means of the uncertainties. The Bayesian model was validated with respect to the convergence and efficiency of its outputs. The results showed that the proposed Bayesian models were capable of predicting the production throughput with an accuracy of 98.3%.
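The Gibbs sampling step can be illustrated on a toy problem. The sketch below is not the paper's five-uncertainty model; it assumes a conjugate normal likelihood with normal and gamma priors so that both full conditionals have closed forms:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=200)   # synthetic "throughput" observations
n = len(y)

# Illustrative priors: mu ~ N(0, 100), tau ~ Gamma(1, 1)
mu, tau = 0.0, 1.0
a0, b0, prior_prec = 1.0, 1.0, 1.0 / 100.0

samples = []
for it in range(3000):
    # Full conditional of mu given tau: normal
    prec = tau * n + prior_prec
    mu = rng.normal(tau * y.sum() / prec, np.sqrt(1.0 / prec))
    # Full conditional of tau given mu: gamma (scale = 1/rate)
    tau = rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * ((y - mu) ** 2).sum()))
    if it >= 500:                     # discard burn-in
        samples.append(mu)

post_mean = float(np.mean(samples))
print(post_mean)  # close to the true mean of 5.0
```

Alternating draws from the full conditionals in this way yields samples from the joint posterior, which is the mechanism the abstract relies on for estimating the posterior means of the uncertainties.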
Abstract: Smoothing or filtering of data is the first preprocessing step
for noise suppression in many applications involving data analysis.
The moving average is the most popular method of smoothing data;
a generalization of it led to the development of the Savitzky-Golay filter.
Many window smoothing methods were developed by convolving
the data with different window functions for different applications;
the most widely used window functions are the Gaussian and the Kaiser. Function
approximation of the data by polynomial regression, Fourier
expansion, or wavelet expansion also yields smoothed data. Wavelets
also smooth the data to a great extent through thresholding of the wavelet
coefficients. Almost all smoothing methods destroy the peaks and
flatten them when the support of the window is increased. In certain
applications it is desirable to retain peaks while smoothing the data
as much as possible. In this paper we present a methodology called
peak-wise smoothing that will smooth the data to any desired level
without losing the major peak features.
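As a point of reference for the methods surveyed above, SciPy exposes the Savitzky-Golay filter directly. The sketch below uses an invented noisy peak (the window length and polynomial order are arbitrary choices, not the paper's settings) to show how the local polynomial fit retains more of a peak than a plain moving average of the same width:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 201)
clean = np.exp(-(x / 0.05) ** 2)           # narrow peak at x = 0
noisy = clean + rng.normal(0, 0.05, x.size)

# Savitzky-Golay: local cubic polynomial fit in an 11-point window
sg = savgol_filter(noisy, window_length=11, polyorder=3)

# Plain 11-point moving average for comparison
ma = np.convolve(noisy, np.ones(11) / 11, mode="same")

# The polynomial fit retains much more of the peak height
print(sg.max(), ma.max())
```

This is exactly the trade-off the abstract describes: widening the window suppresses more noise but flattens the peak, which motivates a peak-aware smoothing scheme.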
Abstract: The model-based approach to user interface design
relies on developing separate models capturing various aspects about
users, tasks, application domain, presentation and dialog structures.
This paper presents a task modeling approach for user interface
design and aims at exploring mappings between task, domain and
presentation models. The basic idea of our approach is to identify
typical configurations in task and domain models and to investigate
how they relate each other. A special emphasis is put on applicationspecific
functions and mappings between domain objects and
operational task structures. In this respect, we will address two
layers in task decomposition: a functional (planning) layer and an
operational layer.
Abstract: The viscosity of natural gas is an important parameter in energy industries such as natural gas storage and transportation. In this study, the viscosity of different natural gas compositions is modeled using an artificial neural network (ANN) based on the back-propagation method. A reliable database including more than 3841 experimental viscosity data points is used for training and testing the ANN. The designed neural network can predict natural gas viscosity from the pseudo-reduced pressure and pseudo-reduced temperature with an AARD of 0.221%. The accuracy of the designed ANN has been compared with other published empirical models. The comparison indicates that the proposed method provides accurate results.
Abstract: The research presented in this paper is an on-going
project applying neural network and fuzzy models to
evaluate the sociological factors which affect the educational
performance of students in Sri Lanka. One of its major goals is to
prepare the grounds for devising a counseling tool which helps these
students perform better at their examinations, especially at
the G.C.E. O/L (General Certificate of Education, Ordinary Level)
examination. Closely related sociological factors are collected as raw
data, the noise in these data is filtered through the fuzzy
interface, and the supervised neural network is utilized to
recognize the performance patterns against the chosen social factors.
Abstract: In this paper, the discrete choice models Logit and Probit
are examined in order to predict economic recession and expansion
periods in the USA. Additionally, we propose an adaptive neuro-fuzzy
inference system with triangular membership functions. We examine
the in-sample period 1947-2005 and test the models in the out-of-sample
period 2006-2009. The forecasting results indicate that the
Adaptive Neuro-Fuzzy Inference System (ANFIS) model significantly
outperforms the Logit and Probit models in the out-of-sample period.
This indicates that the neuro-fuzzy model provides a better and more
reliable signal on whether or not a financial crisis will take place.
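For reference, the Logit benchmark can be sketched in a few lines of numpy. The data below are synthetic (a single invented indicator, not the 1947-2009 US series), and the model is fitted by plain gradient ascent on the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "leading indicator" and binary recession labels (illustrative)
x = rng.normal(0, 1, 500)
p_true = 1 / (1 + np.exp(-(-1.0 + 2.0 * x)))   # true intercept -1, slope 2
y = rng.binomial(1, p_true)

X = np.column_stack([np.ones_like(x), x])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))               # predicted probabilities
    w += 0.01 * X.T @ (y - p) / len(y)         # averaged log-likelihood gradient

print(w)  # roughly recovers the generating coefficients (-1, 2)
```

A Probit fit differs only in replacing the logistic link with the normal CDF; the ANFIS comparison in the abstract uses the fitted probabilities of such models as the baseline signal.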
Abstract: A system for market identification (SMI) is presented.
The resulting representations are multivariable dynamic demand
models. The market specifics are analyzed. Appropriate models and
identification techniques are chosen. Multivariate static and dynamic
models are used to represent the market behavior. The steps of the
first stage of SMI, named data preprocessing, are outlined. Next,
the second stage, model estimation, is considered in more
detail. Stepwise linear regression (SWR) is used to determine the
significant cross-effects and the orders of the model polynomials. The
estimates of the model parameters are obtained by a numerically stable
estimator. Real market data is used to analyze SMI performance.
The main conclusion is related to the applicability of multivariate
dynamic models for representation of market systems.
Abstract: Discrete choice models are the most widely used methodology for studying travelers' mode choice and demand. However, calibrating a discrete choice model requires extensive questionnaire surveys. In this study, an aggregate model is proposed. Historical passenger-volume data for high speed rail and domestic civil aviation are employed to calibrate and validate the model. Different models are compared so as to propose the best one. The results show that systems of equations forecast better than a single equation does, and that models with the external variable, oil price, are better than models based on a closed-system assumption.
Abstract: The paper gives the pilot results of a project
oriented on the use of data mining techniques and the knowledge
discovered from production systems through them. This knowledge has
been used in the management of these systems. Simulation models of
manufacturing systems have been developed to obtain the necessary
data about production. The authors have developed a way of
storing the data obtained from the simulation models in a data
warehouse. A data mining model has been created by using specific
methods and selected techniques for defined problems of production
system management. The new knowledge has been applied to the
production management system. The gained knowledge has been tested
on simulation models of the production system. An important benefit
of the project has been the proposal of a new methodology. This
methodology is focused on data mining from the databases that store
operational data about the production process.
Abstract: A mammography image is composed of low-contrast areas where breast tissues and breast abnormalities such as microcalcifications can hardly be differentiated by the medical practitioner. This paper presents the application of active contour models (Snakes) for the segmentation of microcalcifications in mammography images. The microcalcification areas segmented by the Balloon Snake, Gradient Vector Flow (GVF) Snake, and Distance Snake are compared against the true value of the microcalcification area. The true area value is the average microcalcification area in the original mammography image traced by the expert radiologists. From fifty images tested, the results obtained show that the accuracies of the Balloon Snake, GVF Snake, and Distance Snake in segmenting the boundaries of microcalcifications are 96.01%, 95.74%, and 95.70%, respectively. This implies that the Balloon Snake is a better segmentation method for locating the exact boundary of a microcalcification region.
Abstract: In this paper, the effect of width and height of the
model on the earthquake response in the finite element method is
discussed. For this purpose, an earth dam, as a soil structure under
earthquake loading, has been considered. Various dam-foundation models are
analyzed by Plaxis, a finite element package for solving geotechnical
problems. The results indicate considerable differences in the seismic
responses.
Abstract: Sociological models (e.g., social network analysis, small-group dynamics, and gang models) have historically been used to predict the behavior of terrorist groups. However, they may not be the most appropriate method for understanding the behavior of terrorist organizations because these models were not initially intended to incorporate the violent behavior of their subjects. Rather, models that incorporate life-and-death competition between subjects, i.e., models utilized by scientists to examine the behavior of wildlife populations, may provide a more accurate analysis. This paper suggests the use of biological models to attain a more robust method for understanding the behavior of terrorist organizations than traditional methods provide. This study also describes how a biological population model incorporating predator-prey behavior factors can predict terrorist organizational recruitment behavior, for the purpose of understanding the factors that govern the growth and decline of terrorist organizations. The Lotka-Volterra model, which is based on a predator-prey relationship, is applied to a highly suggestive case study, that of the Irish Republican Army. This case study illuminates how a biological model can be utilized to understand the actions of a terrorist organization.
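For reference, the classic Lotka-Volterra system, dx/dt = alpha*x - beta*x*y and dy/dt = delta*x*y - gamma*y, can be integrated numerically in a few lines. The sketch below uses a forward Euler scheme with parameter values that are purely illustrative, not calibrated to the IRA case study:

```python
# Classic Lotka-Volterra predator-prey system, forward Euler integration.
# Parameters are illustrative, not calibrated to any real organization.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5
x, y = 10.0, 5.0           # prey and predator population levels
dt, steps = 0.001, 100000  # 100 time units

history = []
for _ in range(steps):
    dx = alpha * x - beta * x * y
    dy = delta * x * y - gamma * y
    x += dt * dx
    y += dt * dy
    history.append((x, y))

# The populations oscillate but remain positive and bounded
print(min(h[0] for h in history), max(h[0] for h in history))
```

The characteristic cycles of growth and decline in the two coupled populations are what make the model a candidate analogue for recruitment and attrition dynamics in the abstract's argument.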
Abstract: The use of renewable energy sources is becoming
more crucial, and the wider application of renewable
energy devices at domestic, commercial, and industrial levels is
reflected not only in stronger awareness but also in significant
installed capacities. Moreover, biomass, principally in the form of wood,
has been converted into energy for human use for a long time.
Gasification is a process of conversion of solid carbonaceous fuel
into combustible gas by partial combustion. Gasifier models
have various operating conditions because the parameters kept in
each model differ. This study used experimental
data with three input variables, namely biomass consumption,
temperature at the combustion zone, and ash discharge rate, and with gas flow
rate as the only output variable. In this paper, response surface
methods were applied to identify the gasifier system
equation best suited to the experimental data. The results showed that the
linear model gave the best fit.
Abstract: This paper proposes a method for improving the classification
efficiency of a classification model. The model is used
in a risk search system and extracts specific labels from articles
posted at bulletin board sites. The system can analyze the important
discussions composed of the articles. The improvement method
introduces ensemble learning methods that use multiple classification
models. It also introduces expressions related to the specific labels
into the generation of word vectors. The paper applies the improvement
method to articles collected from three bulletin board sites selected
by users and verifies its effectiveness.
Abstract: A gene, the principal unit of inheritance, is an ordered
sequence of nucleotides. The genes of eukaryotic organisms include
alternating segments of exons and introns. A region of
deoxyribonucleic acid (DNA) within a gene containing instructions
for coding a protein is called an exon. On the other hand, the non-coding
regions called introns are another part of DNA that regulate gene
expression and are removed from the messenger ribonucleic acid (RNA)
in a splicing process. This paper proposes to determine splice
junctions, i.e., exon-intron boundaries, by analyzing DNA
sequences. A splice junction can be either exon-intron (EI) or intron-exon
(IE). Because of the popularity and suitability of the
artificial neural network (ANN) in genetic fields, various ANN
models are applied in this research. Multi-layer Perceptron (MLP),
Radial Basis Function (RBF), and Generalized Regression Neural
Networks (GRNN) are used to analyze and detect the splice junctions
of gene sequences. 10-fold cross validation is used to demonstrate
the accuracy of the networks. The real performances of these networks
are found by applying Receiver Operating Characteristic (ROC)
analysis.
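The area under the ROC curve used to score such networks reduces to the Mann-Whitney rank statistic: the probability that a randomly chosen positive is scored above a randomly chosen negative. The sketch below computes the AUC for a handful of invented scores and labels (not the paper's network outputs):

```python
import numpy as np

# Hypothetical classifier scores and true EI/IE labels (illustrative only)
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
labels = np.array([1,   1,   0,   1,   0,    0,   1,   0])

# AUC via the rank (Mann-Whitney) formulation: count positive-negative
# pairs in which the positive outranks the negative, ties counted as 0.5
pos = scores[labels == 1]
neg = scores[labels == 0]
auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])
print(auc)  # 0.75 for this toy data
```

Sweeping a decision threshold over the scores traces the full ROC curve; the AUC summarizes it in a single number, which is what makes it suitable for comparing the MLP, RBF, and GRNN models.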
Abstract: The purpose of this paper is to present two different
approaches to financial distress pre-warning models appropriate for
risk supervisors, investors, and policy makers. We examine a sample
of the financial institutions and electronics companies of the Taiwan
Security Exchange (TSE) market from 2002 through 2008. We
present a binary logistic regression with panel data analysis. With
the pooled binary logistic regression we build a model including
more variables in the regression than with random effects, while the
in-sample and out-of-sample forecasting performance is higher in the
random effects estimation than in the pooled regression. On the other
hand, we estimate an Adaptive Neuro-Fuzzy Inference System
(ANFIS) with Gaussian and Generalized Bell (Gbell) membership functions,
and we find that ANFIS significantly outperforms the Logit regressions in both
in-sample and out-of-sample periods, indicating that ANFIS is a
more appropriate tool for financial risk managers and for
economic policy makers in central banks and national statistical
services.