Abstract: Modeling complex dynamic systems for which mathematical
models are difficult to establish requires new methodologies that
exploit existing expert knowledge, human experience, and historical
data. Fuzzy cognitive maps are simple yet powerful tools for
simulating and analyzing such dynamic systems. However, human
experts are subjective and can handle only relatively simple fuzzy
cognitive maps; there is therefore a need to develop new approaches
for the automated generation of fuzzy cognitive maps from historical
data. In this study, a new learning algorithm, called Big Bang-Big
Crunch, is proposed for the first time in the literature for the
automated generation of fuzzy cognitive maps from data. Two
real-world examples, namely a process control system and a radiation
therapy process, and one synthetic model are used to demonstrate the
effectiveness and usefulness of the proposed methodology.
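The abstract gives only the outline of the algorithm; as a rough illustration of how a Big Bang-Big Crunch search over FCM weight matrices could look, here is a minimal sketch. The sigmoid transfer function, population size, and mean-squared-error fitness are assumptions for illustration, not the authors' specification.

```python
import numpy as np

def fcm_step(a, w):
    # One synchronous FCM update with a sigmoid transfer function (assumed choice)
    return 1.0 / (1.0 + np.exp(-(a + a @ w)))

def simulate(w, a0, steps):
    # Roll the map forward from initial activation a0, collecting the trajectory
    a, traj = a0.copy(), [a0.copy()]
    for _ in range(steps):
        a = fcm_step(a, w)
        traj.append(a.copy())
    return np.array(traj)

def bbbc_learn(target_traj, n, pop=30, iters=40, seed=0):
    # Big Bang-Big Crunch search for an n x n FCM weight matrix that
    # reproduces the given concept-activation trajectory
    rng = np.random.default_rng(seed)
    a0, steps = target_traj[0], len(target_traj) - 1
    cands = rng.uniform(-1, 1, size=(pop, n, n))  # initial "Big Bang"
    best_w, best_err = None, np.inf
    for it in range(1, iters + 1):
        fit = np.array([np.mean((simulate(w, a0, steps) - target_traj) ** 2)
                        for w in cands])
        i = int(fit.argmin())
        if fit[i] < best_err:
            best_err, best_w = float(fit[i]), cands[i].copy()
        # Big Crunch: center of mass, weighting low-error candidates more heavily
        inv = 1.0 / (fit + 1e-12)
        center = np.tensordot(inv, cands, axes=1) / inv.sum()
        # Big Bang: scatter new candidates around the center with shrinking radius
        cands = np.clip(center + rng.standard_normal((pop, n, n)) / it, -1, 1)
    return best_w, best_err
```

Given a trajectory generated by some hidden weight matrix, the search typically recovers a matrix that reproduces it closely, since the center-of-mass contraction concentrates the population around low-error regions.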
Abstract: The goal of speech parameterization is to extract from the audio signal the relevant information about what is being spoken. Mel-Frequency Cepstral Coefficients (MFCC) and Relative Spectral Mel-Frequency Cepstral Coefficients (RASTA-MFCC) are the two main techniques used in speech recognition systems. This paper presents some modifications to the original MFCC method. The effectiveness of the proposed changes to MFCC, called Modified Function Cepstral Coefficients (MODFCC), was tested and compared against the original MFCC and RASTA-MFCC features. Prosodic features such as jitter and shimmer are added to the baseline spectral features. The above-mentioned techniques were tested with impulsive signals under various noisy conditions within the AURORA databases.
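As background for readers unfamiliar with the baseline, a minimal MFCC computation for a single pre-windowed frame might look as follows. The filter count, coefficient count, and mel-scale constants follow common convention; this is the standard MFCC pipeline, not the MODFCC modification described above.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale up to the Nyquist frequency
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(frame, sr, n_filters=26, n_ceps=13):
    # MFCCs of one windowed frame: power spectrum -> mel filterbank -> log -> DCT-II
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame)) ** 2
    energies = mel_filterbank(n_filters, n_fft, sr) @ spec
    log_e = np.log(energies + 1e-10)
    n = np.arange(n_filters)
    # DCT-II of the log filterbank energies, keeping the first n_ceps coefficients
    return np.array([np.sum(log_e * np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters)))
                     for k in range(n_ceps)])
```

Modified front-ends such as MODFCC and RASTA-MFCC alter stages of this pipeline (the filterbank shape or a band-pass filtering of the log energies) while keeping the overall structure.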
Abstract: A wireless sensor network is a multi-hop, self-configuring
wireless network consisting of sensor nodes. The deployment of
wireless sensor networks in many application areas, e.g., aggregation
services, requires self-organization of the network nodes into clusters.
An efficient way to enhance the lifetime of the system is to partition
the network into distinct clusters with a high-energy node as cluster
head. Various node clustering techniques have appeared in the
literature; they roughly fall into two families: those based on the
construction of a dominating set and those based solely on energy
considerations. Energy-optimized cluster formation for a set of
randomly scattered wireless sensors is presented. Sensors within a
cluster are expected to communicate with the cluster head only. The
energy constraints and limited computing resources of the sensor nodes
present the major challenges in gathering the data. In this paper we
propose a framework to study how partially correlated data affect the
performance of clustering algorithms. The total energy consumption
and network lifetime can be analyzed by combining random geometry
techniques and rate-distortion theory. We also present the relation
between compression distortion and data correlation.
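The abstract does not specify the clustering algorithm itself; as a minimal illustration of energy-based cluster-head selection for randomly scattered sensors (a common scheme of the kind surveyed, not necessarily the authors' framework):

```python
import numpy as np

def form_clusters(pos, energy, n_heads):
    # Pick the n_heads highest-residual-energy nodes as cluster heads;
    # every other node joins its nearest head (single hop to the head).
    heads = np.argsort(energy)[-n_heads:]
    d = np.linalg.norm(pos[:, None, :] - pos[heads][None, :, :], axis=2)
    assignment = heads[np.argmin(d, axis=1)]  # nearest head for each node
    assignment[heads] = heads                 # heads belong to themselves
    return heads, assignment
```

Rotating the head role as residual energies drain (as LEACH-style protocols do) is the usual next step, since a fixed head depletes first.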
Abstract: The aim of the present study was to analyze and
distinguish the playing patterns of winning and losing field hockey
teams in the Delhi 2012 tournament. The playing pattern focuses on D
penetration (right, center, left) and on linking D penetration to the
end shot made from it. The data were recorded and analyzed using
Sportscode Elite computer software. Twelve matches from the
tournament were analyzed. Two groups of performance indicators are
used for the analysis, namely D penetration right, center, and left.
The types of shot considered are hit, push, flick, drag, drag flick,
deflect sweep, deflect push, scoop, sweep, and reverse hit. In
distinguishing the pattern of play between winning and losing teams,
only 2 performance indicators showed highly significant differences,
from the right (Z=-2.87, p=.004,
p
Abstract: Text similarity measurement is a fundamental issue in
many textual applications such as document clustering, classification,
summarization, and question answering. However, prevailing approaches
based on the Vector Space Model (VSM) suffer, to varying
degrees, from the limitation of the Bag of Words (BOW) representation,
which ignores the semantic relationships among words. Enriching
document representation with background knowledge from Wikipedia
has proven to be an effective way to address this problem, but most
existing methods still cannot avoid BOW-like flaws in a new vector
space. In this
paper, we propose a novel text similarity measurement which goes
beyond VSM and can find semantic affinity between documents.
Specifically, it is a unified graph model that exploits Wikipedia as
background knowledge and synthesizes both document representation
and similarity computation. The experimental results on two different
datasets show that our approach significantly improves VSM-based
methods in both text clustering and classification.
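To make the BOW limitation concrete: a plain VSM cosine similarity assigns zero similarity to documents that share no surface words, even when they are semantically close. This toy baseline is what graph-based, knowledge-enriched measures aim to improve on; it is not the paper's model.

```python
from collections import Counter
import math

def cosine_bow(doc_a, doc_b):
    # Cosine similarity of raw term-frequency (bag-of-words) vectors
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

For example, `cosine_bow("the physician treats the sick", "a doctor cures patients")` is exactly 0.0 despite the obvious semantic affinity, which is precisely the failure mode background knowledge is meant to repair.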
Abstract: Customer-supplier collaboration enables firms to
achieve greater success than acting independently. Nevertheless, not
many firms have fully utilized the potential of collaboration. This
paper presents organizational and human related success factors for
collaboration in manufacturing supply chains in casting industry. Our
research approach was a case study including multiple cases. Data
was gathered by interviews and group discussions in two different
research projects. In the first research project we studied seven firms
and in the second, five. It was found that the success factors are
interrelated; in other words, organizational and human factors enable
success together, but none of them does so alone. Among the identified
success factors are a culture of honoring agreements and the speed of
informing the partner about changes affecting the product or the
delivery chain.
Abstract: Breast cancer is one of the most frequently occurring cancers in women throughout the world, including the U.K. The grading of this cancer plays a vital role in the prognosis of the disease. In this paper we present an overview of the use of a fuzzy inference system, an advanced computational method, as a tool for the automation of breast cancer grading. A new spectral data set obtained from Fourier Transform Infrared (FTIR) spectroscopy of cancer patients has been used for this study. The future work outlines the potential areas of fuzzy systems that can be used for the automation of breast cancer grading.
Abstract: One of the biggest problems of SMEs is their tendency toward financial distress due to an insufficient financial background. In this study, an Early Warning System (EWS) model based on data mining for financial risk detection is presented. The CHAID algorithm has been used to develop the EWS. With its automated nature, the developed EWS can serve as a tailor-made financial advisor in the decision-making process of firms whose financial background is inadequate. Besides, an application of the model was implemented covering 7,853 SMEs, based on Turkish Central Bank (TCB) 2007 data. Using the EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps have been determined for financial risk mitigation.
Abstract: Cryo-electron microscopy (CEM) in combination with
single particle analysis (SPA) is a widely used technique for
elucidating structural details of macromolecular assemblies at
close-to-atomic resolutions. However, development of automated software
for SPA processing is still vital since thousands to millions of
individual particle images need to be processed. Here, we present our
workflow for automated particle picking. Our approach integrates
peak-shape analysis with the classical correlation method and an
iterative approach to separating macromolecules from background by
classification. This particle-selection workflow furthermore provides
a robust means of performing SPA with little user interaction. The
performance of the presented tools is assessed by processing
simulated and experimental data.
Abstract: The past decade has seen enormous growth in the amount of software produced. However, given the ever-increasing complexity of the software being developed and the concomitant rise in typical project size, managers are becoming increasingly aware of the importance of the issues that influence the productivity of the project teams involved. By analyzing the latest release of the ISBSG data repository, we report on the factors found to significantly influence productivity, among which average team size and language type are the two most essential. Building on this, we present an original model for evaluating potential productivity during the project planning stage.
Abstract: IPsec has become a standard information security
technology throughout the Internet society. It provides a well-defined
architecture that takes into account confidentiality, authentication,
integrity, secure key exchange, and protection against replay attacks.
For connectionless security services on a per-packet basis, the IETF
IPsec Working Group has standardized two extension headers (AH and
ESP) along with key exchange and authentication protocols. It is also
working on a lightweight key exchange protocol and MIBs for security
management. IPsec technology has been implemented on various
platforms in IPv4 and IPv6, gradually replacing older
application-specific security mechanisms. IPv4 and IPv6 are not
directly compatible, so programs and systems designed for one
standard cannot communicate with those designed for the other. We
propose the design and implementation of a controlled Internet
security system, an IPsec-based Internet information security system
for IPv4/IPv6 networks, and we also present performance measurement
data. With IPv6 features such as improved scalability and routing,
security, ease of configuration, and higher performance, the
controlled Internet security system provides a consistent security
policy and integrated security management for IPsec-based Internet
security.
Abstract: Clustering categorical data is more complicated than
numerical clustering because of its special properties. Scalability
and memory constraints are the challenging problems in clustering
large data sets. This paper presents an incremental algorithm to
cluster categorical data. The frequencies of attribute values
contribute much to clustering similar categorical objects. In this
paper we propose new similarity measures based on the frequencies of
attribute values and their cardinalities. The proposed measures and
the algorithm are evaluated on data sets from the UCI data repository.
The results show that the proposed method generates better clusters
than the existing one.
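The abstract does not define the proposed measures precisely; a simple frequency-based similarity of the kind it describes, used inside an incremental (single-pass) clustering loop, could be sketched as follows. The threshold and exact normalization are assumptions for illustration.

```python
def similarity(obj, cluster):
    # Similarity of a categorical object to a cluster: for each attribute,
    # the relative frequency of the object's value among cluster members,
    # averaged over attributes.
    if not cluster:
        return 0.0
    score = 0.0
    for i, v in enumerate(obj):
        freq = sum(1 for member in cluster if member[i] == v)
        score += freq / len(cluster)
    return score / len(obj)

def incremental_cluster(objects, threshold=0.5):
    # Assign each object to the most similar cluster, or open a new one:
    # a single pass, so large data sets never need to be held as a
    # pairwise-distance matrix in memory.
    clusters = []
    for obj in objects:
        sims = [similarity(obj, c) for c in clusters]
        if sims and max(sims) >= threshold:
            clusters[sims.index(max(sims))].append(obj)
        else:
            clusters.append([obj])
    return clusters
```

The single-pass structure is what gives incremental schemes their scalability; the cost is order sensitivity, which published incremental algorithms mitigate in various ways.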
Abstract: Recently, permeable breakwaters have been suggested to overcome the disadvantages of fully protective breakwaters. These protective structures have minor impacts on the coastal environment and neighboring beaches, while providing more economical protection from waves and currents. For regular waves, a numerical model (FLOW-3D, VOF) is used to investigate the hydraulic performance of a permeable breakwater. The model of the permeable breakwater consists of a pair of identical vertical slotted walls with impermeable upper and lower parts, where the draft is a decimal multiple of the total depth. The middle part is permeable with a porosity of 50%. The second barrier is located at distances of 0.5 and 1.5 times the water depth from the first one. The numerical model is validated by comparisons with previous laboratory data and semi-analytical results for the same model. Good agreement between the numerical results and both the laboratory data and the semi-analytical results has been shown, and the results indicate the applicability of the numerical model to reproduce most of the important features of the interaction. Through the numerical investigation, the friction factor of the model is discussed in detail.
Abstract: An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intra-class similarity and minimizing the inter-class similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that groups similar events together. A hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the form: <decision> If <condition> Generality <generality-info> Specificity <specificity-info>. HPR systems are capable of handling the taxonomic structures inherent in knowledge about the real world. In this paper, a set of related HPRs is called a cluster and is represented by an HPR-tree. This paper discusses an algorithm based on a cumulative learning scenario for the dynamic structuring of clusters. The proposed scheme incrementally incorporates new knowledge into the set of clusters from previous episodes and also maintains a summary of the clusters as a synopsis to be used in future episodes. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested incremental structuring of clusters would be useful in mining data streams.
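As a toy rendering of the HPR idea (not the authors' cluster-structuring algorithm), a rule can refine its decision by delegating to more specific child rules down an HPR-tree, falling back to its own, more general decision when no child fires:

```python
class HPR:
    # A hierarchical production rule: a decision guarded by a condition,
    # with links to more specific child rules.
    def __init__(self, decision, condition):
        self.decision = decision
        self.condition = condition  # predicate over a dictionary of facts
        self.specific = []          # child (more specific) rules

    def add_specific(self, rule):
        self.specific.append(rule)
        return rule

    def classify(self, facts):
        # Descend the HPR-tree, refining the decision while a more
        # specific rule fires; return None if even this rule's own
        # condition fails.
        if not self.condition(facts):
            return None
        for child in self.specific:
            refined = child.classify(facts)
            if refined is not None:
                return refined
        return self.decision
```

The tree shape mirrors the taxonomy: general decisions near the root, specific ones at the leaves, which is what makes a set of related HPRs naturally representable as an HPR-tree.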
Abstract: Statistical analysis of electrophysiological recordings
obtained under, e.g., tactile stimulation frequently suggests the
participation in the network dynamics of experimentally unobserved
"hidden" neurons. Such interneurons, making synapses onto
experimentally recorded neurons, may strongly alter their dynamical
responses to the stimuli. We propose a mathematical method that
formalizes this possibility and provides an algorithm for inferring
the presence and dynamics of hidden neurons based on fitting the
experimental data to spike trains generated by the network model. The
model makes use of integrate-and-fire neurons "chemically" coupled
through exponentially decaying synaptic currents. We test the method
on simulated data and also provide an example of its application to
the experimental recording from the Dorsal Column Nuclei neurons
of the rat under tactile stimulation of a hind limb.
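The network model is described only in outline; its building block, an integrate-and-fire neuron driven by an exponentially decaying synaptic current, can be sketched as follows. The time constants, threshold, and synaptic weight here are illustrative, not the paper's fitted values.

```python
import numpy as np

def simulate_lif(t_max=200.0, dt=0.1, tau_m=20.0, tau_s=5.0,
                 v_th=1.0, v_reset=0.0, i_ext=0.0, input_spikes=(), w=0.5):
    # Leaky integrate-and-fire neuron (Euler integration) driven by an
    # exponentially decaying synaptic current; returns output spike times (ms).
    v, s = 0.0, 0.0
    spike_steps = set(np.round(np.array(input_spikes) / dt).astype(int))
    out = []
    for step in range(int(t_max / dt)):
        if step in spike_steps:
            s += w                   # a presynaptic spike increments the current
        s -= dt * s / tau_s          # exponential decay of the synaptic current
        v += dt * (-v / tau_m + s + i_ext)   # leaky membrane integration
        if v >= v_th:
            out.append(step * dt)
            v = v_reset              # reset after the spike
    return out
```

Fitting the hidden-neuron hypothesis then amounts to adjusting the weights and inputs of such model neurons until the simulated spike trains match the recorded ones.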
Abstract: In this paper, a wavelet-based ANFIS for detecting
inter-turn faults in a generator is proposed. The detector responds
uniquely to winding inter-turn faults with remarkably high sensitivity.
Discrimination among different percentages of the winding affected by
an inter-turn fault is provided via an ANFIS with an eight-dimensional
input vector. This input vector is obtained from features extracted
from the DWT of the inter-turn fault current leaving the generator
phase winding. Training data for the ANFIS are generated via a
MATLAB simulation of a generator with an inter-turn fault. The
proposed ANFIS-based algorithm gives more satisfactory performance
than an ANN with selected statistical data of the decomposed levels
of the fault current.
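The abstract does not list the exact DWT features; as an illustration, four decomposition levels of a Haar DWT with two statistics per level would yield an eight-dimensional input vector of the kind mentioned. The wavelet choice (Haar) and the statistics (energy, standard deviation) are assumptions for this sketch.

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar discrete wavelet transform
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_features(signal, levels=4):
    # Energy and standard deviation of the detail coefficients at each
    # level, e.g. as a 2*levels-dimensional input vector for a classifier
    feats, a = [], signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats += [float(np.sum(d ** 2)), float(np.std(d))]
    return feats
```

Because the Haar transform is orthogonal, the energies of the approximation and detail bands sum to the signal energy, so these features cleanly partition the fault current's energy across frequency bands.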
Abstract: As research performance in academia is treated as one of the indices of national competency, many countries devote considerable attention and resources to increasing their research performance. Understanding research trends is the basic step toward improving research performance. The goal of this research is to design an analysis system for evaluating research trends by analyzing data from different countries. In this paper, information systems research in Taiwan and other countries, including Asian countries and prominent countries represented by the Group of Eight (G8), is used as an example. Our research found that the trends vary across countries. We suggest that Taiwan's scholars pay more attention to interdisciplinary applications and try to increase their collaboration with other countries, in order to increase Taiwan's competency in the area of information science.
Abstract: In recent years, response surface methodology (RSM) has
attracted the attention of many quality engineers in different
industries. Most of the published literature on robust design
methodology is concerned with the optimization of a single
response or quality characteristic that is often most critical to
consumers. For most products, however, quality is multidimensional,
so it is common to observe multiple responses in an experimental
situation. This paper familiarizes interested readers with this
methodology by surveying the most cited technical papers.
It is believed that the proposed procedure in this study can resolve
a complex parameter design problem with more than two responses.
It can be applied to those areas where there are large data sets and a
number of responses are to be optimized simultaneously. In addition,
the proposed procedure is relatively simple and can be implemented
easily by using ready-made standard statistical packages.
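The abstract does not spell out the procedure; one standard device for optimizing several responses simultaneously, available in common statistical packages, is the Derringer-Suich desirability function, sketched here for target-is-best responses. The paper's actual procedure may differ.

```python
import numpy as np

def desirability_target(y, low, target, high, s=1.0, t=1.0):
    # Derringer-Suich desirability for a target-is-best response:
    # 1 at the target, falling to 0 at the low/high specification limits
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def overall_desirability(ys, specs):
    # Geometric mean of the individual desirabilities: a single response
    # at zero desirability vetoes the whole design point
    ds = [desirability_target(y, *spec) for y, spec in zip(ys, specs)]
    return float(np.prod(ds) ** (1.0 / len(ds)))
```

Maximizing the overall desirability over the design space collapses the multi-response problem to a single objective, which is why the approach is easy to implement with ready-made statistical packages.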
Abstract: Digital Video Broadcasting - Terrestrial (DVB-T)
allows combining broadcasting, telephone, and data services in one
network, and it has facilitated mobile TV broadcasting. Mobile TV
broadcasting is dominated by a fragmentation of standards across
continents: in Asia, T-DMB and ISDB-T are used, Europe uses mainly
DVB-H, and in the USA it is MediaFLO. Royalty issues for the
developers of these incompatible technologies, the investments
already made, and differing local conditions will make it difficult
to agree on a unified standard in the near future. Despite this
shortcoming, mobile TV has shown very good market potential. A
number of challenges still exist for regulators, investors, and
technology developers, but the future looks bright. There is a need
for mobile telephone operators to cooperate with content providers
and those operating terrestrial digital broadcasting infrastructure,
for mutual benefit.
Abstract: Systematically applying different engineering methods
makes difficult financial problems approachable. Using a
combination of theory and techniques such as the wavelet transform,
time-series data mining, Markov-chain-based discrete stochastic
optimization, and evolutionary algorithms, this work formulated a
strategy to characterize and forecast non-linear time series. It
attempted to extract typical features from the volatility data sets of
the S&P 100 and S&P 500 indices, which include abrupt drops, jumps,
and other non-linearities. As a result, forecasting accuracy has
reached an average of over 75%, surpassing any other publicly
available results on the forecasting of any financial index.