Abstract: Intelligent planning in the Graphplan framework is currently a focus of artificial intelligence, and Creating or Destroying Objects Planning (CDOP) remains one of the unsolved and difficult problems in this field. In this paper, we study this planning problem and put forward the idea of transforming objects into propositions, on the basis of which we present an algorithm, Creating or Destroying Objects in the Graphplan framework (CDOGP). Compared with Graphplan, the new algorithm can solve not only all the problems that Graphplan solves, but also a subset of CDOP. We introduce the idea of object-as-proposition for the first time, and we focus the discussion on the representation of operators that create or destroy objects and on an algorithm within the Graphplan framework. In addition, we analyze the complexity of this algorithm.
Abstract: Since optimization of business processes is a crucial requirement for navigating, surviving, and even thriving in today's volatile business environment, this paper presents a framework for selecting a best-fit optimization package for solving complex business problems. The complexity level of the problem and/or the use of incorrect optimization software can lead to biased solutions of the optimization problem. Accordingly, the proposed framework identifies a number of relevant factors (e.g. decision variables, objective functions, and modeling approach) to be considered during the evaluation and selection process. The application domain, problem specifications, and available accredited optimization approaches are also to be regarded. The output of the framework is a recommendation of one or two optimization packages believed to provide the best results for the underlying problem. In addition, a set of guidelines and recommendations on how managers can conduct an effective optimization exercise is discussed.
Abstract: In molecular biology, microarray technology is widely and successfully used to measure gene activity efficiently. When working with less-studied organisms, methods are available to design custom-made microarray probes. One design criterion is to select probes with minimal melting temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented which minimizes the overall melting temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended to include the additional constraint of covering all pathways with a limited number of genes while minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach, which reduces the complexity to make it computationally feasible. As an example, the new method is applied to the selection of microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
Abstract: Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, the testing set, and the learning rate are key elements: the collections of input/output patterns used to train the network and to assess its performance, and the rate at which weight adjustments are made. This paper describes a proposed back-propagation neural network classifier that performs cross-validation on the original neural network in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated on five data sets: contact-lenses, cpu, weather.symbolic, weather, and labor-neg-data. It is shown that, compared to the existing neural network, the training time is reduced by more than a factor of 10 when the data set is larger than cpu or the network has many hidden units, while the accuracy ('percent correct') was the same for all data sets except contact-lenses, which is the only one with missing attributes. For contact-lenses, the accuracy with the proposed neural network was on average about 0.3% lower than with the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
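A minimal sketch of the kind of cross-validated back-propagation classifier described above, assuming scikit-learn's MLPClassifier stands in for the paper's own network and using a synthetic data set rather than the Weka sets named in the abstract:

```python
# Minimal sketch: k-fold cross-validation of a back-propagation classifier.
# Assumption: scikit-learn's MLPClassifier stands in for the paper's own
# neural network; the data set here is synthetic, not the sets named above.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,),   # one hidden layer
                    learning_rate_init=0.01,    # learning rate
                    max_iter=500, random_state=0)

# 5-fold cross-validation: train on 4 folds, test on the held-out fold.
scores = cross_val_score(clf, X, y, cv=5)
print("percent correct per fold:", scores)
print("mean accuracy:", scores.mean())
```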
Abstract: Support Vector Machine (SVM) is a statistical
learning tool built on the concept of
structural risk minimization (SRM). In this paper, SVM is
applied to signal detection in communication systems in the
presence of channel noise in various environments in the form
of Rayleigh fading, additive white Gaussian background noise
(AWGN), and interference noise generalized as additive color
Gaussian noise (ACGN). The structure and performance of
SVM in terms of the bit error rate (BER) metric are derived and
simulated for these advanced stochastic noise models and the
computational complexity of the implementation, in terms of
average computational time per bit, is also presented. The
performance of SVM is then compared to a conventional
optimal model-based detector for binary signaling driven by binary
phase shift keying (BPSK) modulation. We show that the
SVM performance is superior to that of conventional matched
filter-, innovation filter-, and Wiener filter-driven detectors,
even in the presence of random Doppler carrier deviation,
especially for low SNR (signal-to-noise ratio) ranges. For
large SNR, the performance of the SVM was similar to that of
the classical detectors. However, the convergence between
SVM and maximum likelihood detection occurred at a higher
SNR as the noise environment became more hostile.
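As an illustration of the detection setup described above (not the authors' exact simulation), a minimal sketch trains an SVM on noisy BPSK samples in AWGN and compares its bit error rate with a simple matched-filter (sign) decision; the SNR value and sample counts are arbitrary assumptions:

```python
# Minimal sketch: SVM-based detection of BPSK symbols in AWGN, compared with
# a sign (matched-filter) decision. SNR and sample sizes are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
snr_db = 2.0
sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))    # noise std for unit-energy bits

def make_batch(n):
    bits = rng.integers(0, 2, n)
    symbols = 2.0 * bits - 1.0                       # BPSK mapping 0/1 -> -1/+1
    received = symbols + sigma * rng.normal(size=n)  # AWGN channel
    return received.reshape(-1, 1), bits

X_train, y_train = make_batch(2000)
X_test, y_test = make_batch(20000)

svm = SVC(kernel="rbf").fit(X_train, y_train)        # SVM detector
ber_svm = np.mean(svm.predict(X_test) != y_test)
ber_sign = np.mean((X_test.ravel() > 0).astype(int) != y_test)  # matched filter

print(f"BER (SVM):            {ber_svm:.4f}")
print(f"BER (sign detection): {ber_sign:.4f}")
```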
Abstract: Direction of Arrival (DOA) estimation refers to defining a mathematical function, called a pseudospectrum, that gives an indication of the angle at which a signal impinges on the antenna array. This estimation is an efficient method of improving the quality of service in a communication system by focusing the reception and transmission only in the estimated direction, thereby increasing fidelity, with a provision to suppress interferers. This improvement depends largely on the performance of the algorithm employed in the estimation. Many DOA algorithms exist, among which are MUSIC, Root-MUSIC, and ESPRIT. In this paper, the performance of these three algorithms is analyzed in terms of complexity, accuracy (assessed and characterized against the Cramér-Rao lower bound, CRLB), and memory requirements in various environments and for various array sizes. It is found that all three algorithms offer high resolution and that their performance depends on the operating environment and the array size.
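To make the notion of a pseudospectrum concrete, a minimal MUSIC sketch for a uniform linear array is given below; the array size, element spacing, source angles, and noise level are illustrative assumptions, not values from the paper:

```python
# Minimal sketch: MUSIC pseudospectrum for a uniform linear array (ULA).
# Array size, half-wavelength spacing, source angles and noise are assumptions.
import numpy as np

rng = np.random.default_rng(1)
M, d, snapshots = 8, 0.5, 200              # elements, spacing (wavelengths), snapshots
angles_true = np.deg2rad([-20.0, 30.0])    # two impinging signals

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(angles_true)                                  # M x K steering matrix
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise                                          # array snapshots

R = X @ X.conj().T / snapshots                             # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-2]                                       # noise subspace (K = 2 sources)

scan = np.deg2rad(np.linspace(-90, 90, 721))
a = steering(scan)
p_music = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)  # pseudospectrum

# naive peak picking: keep the two largest local maxima of the pseudospectrum
is_peak = (p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:])
idx = np.where(is_peak)[0] + 1
idx = idx[np.argsort(p_music[idx])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[idx])))
```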
Abstract: In supply chain management the customer is the most significant component, and mass customization is closely related to customers because it is the capability of an industry or organization to deliver highly customized products and services to its customers with flexibility and integration, providing such a variety of products that nearly everyone can find what they want. Today, companies and markets all over the world face a twofold situation: on one side customers demand that their orders be completed as quickly as possible, while on the other side they require highly customized products and services. By applying mass customization, some companies face unwanted cost and complexity, and they are now realizing that they should thoroughly examine what kind of customization is best suited to their companies. In this paper the authors review some approaches and principles affecting supply chain management that can be adopted and used by companies to meet customer orders quickly, at reduced cost, with a minimum amount of inventory, and with maximum efficiency.
Abstract: Solar energy plays a major role among renewable energy resources, and the solar cell, as the basic element of a solar power system, has attracted a great deal of research. To study a solar energy system, a validated model is required. Diode-based PV models are widely used by researchers and are classified by the number of diodes they contain. Single- and two-diode models are well studied, and single-diode models may have two, three, or four elements. In this study, these solar cell models are examined and their simulation results are compared with each other. All PV models are re-implemented in the Matlab/Simulink environment and examined under defined test conditions and parameters. This paper provides a comparative study of these models and compares the simulation results with the manufacturer's data sheet to investigate model validity and accuracy. The results show that the four-element single-diode model is accurate and of moderate complexity, in contrast to the two-diode model, which offers higher accuracy at the cost of higher complexity.
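For reference, the four-element single-diode model discussed above (photocurrent source, diode, series resistance, and shunt resistance) is commonly written as the implicit current equation below; the symbols follow the usual convention and are not taken from the paper's data sheets:

```latex
% Four-element single-diode PV model: photocurrent source, diode,
% series resistance R_s and shunt resistance R_sh (standard form).
\[
  I \;=\; I_{ph} \;-\; I_{0}\!\left[\exp\!\left(\frac{V + I R_{s}}{n V_{t}}\right) - 1\right]
        \;-\; \frac{V + I R_{s}}{R_{sh}},
  \qquad V_{t} = \frac{kT}{q}
\]
```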
Abstract: Model Predictive Control (MPC) is an established control technique in a wide range of process industries. The reason for this success is its ability to handle multivariable systems and systems having input, output, or state constraints. Nevertheless, compared to the PID controller, the implementation of MPC in miniaturized devices like Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very limited because of its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. In this work, we take advantage of recent advances in this area for the deployment of one of the most studied and applied control techniques in industrial engineering. In this paper, we propose efficient firmware for the implementation of constrained MPC on the STM32 microcontroller using an interior point method. A performance study shows good execution speed and a low computational burden. These results encourage the development of predictive control algorithms to be programmed for standard industrial processes. A PID anti-windup controller was also implemented on the STM32 in order to make a performance comparison with the MPC. The main features of the proposed constrained MPC framework are illustrated through two examples.
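For context, the constrained MPC problem solved at each sampling instant is typically posed as the quadratic program below, which an interior point method then solves on the microcontroller; this is the standard textbook form, not necessarily the paper's exact formulation:

```latex
% Standard constrained MPC quadratic program solved at every sampling instant.
\[
\begin{aligned}
  \min_{u_{0},\dots,u_{N-1}} \;\;
    & \sum_{k=0}^{N-1} \left( x_{k}^{\top} Q x_{k} + u_{k}^{\top} R u_{k} \right)
      + x_{N}^{\top} P x_{N} \\
  \text{s.t.}\;\;
    & x_{k+1} = A x_{k} + B u_{k}, \qquad x_{0} = x(t), \\
    & u_{\min} \le u_{k} \le u_{\max}, \qquad
      x_{\min} \le x_{k} \le x_{\max}.
\end{aligned}
\]
```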
Abstract: Money laundering has been described by many as the lifeblood of crime and is a major threat to the economic and social well-being of societies. It has been recognized that the banking system has long been the central element of money laundering. This is in part due to the complexity and confidentiality of the banking system itself. It is generally accepted that effective anti-money laundering (AML) measures adopted by banks will make it tougher for criminals to get their "dirty money" into the financial system. In fact, for law enforcement agencies, banks are considered to be an important source of valuable information for the detection of money laundering. However, from the banks' perspective, the main reason for their existence is to make as much profit as possible. Hence their cultural and commercial interests are totally distinct from those of the law enforcement authorities. Undoubtedly, AML laws create a major dilemma for banks as they produce a significant shift in the way banks interact with their customers. Furthermore, the implementation of the laws not only creates significant compliance problems for banks, but also has the potential to adversely affect the operations of banks. As such, it is legitimate to ask whether these laws are effective in preventing money launderers from using banks, or whether they simply put an unreasonable burden on banks and their customers. This paper attempts to address these issues and analyze them against the background of the Malaysian AML laws. It must be said that effective coordination between the AML regulator and the banking industry is vital to minimize the problems faced by banks and thereby to ensure effective implementation of the laws in combating money laundering.
Abstract: We present a novel scheme to evaluate sinusoidal functions with low complexity and high precision using cubic spline interpolation. To this end, two different approaches are proposed to find the interpolating polynomial of sin(x) within the range [-π, π]. The first one deals with only a single data point, while the other uses two, to keep the realization cost as low as possible. An approximation error optimization technique for cubic spline interpolation is introduced next and is shown to increase the interpolator accuracy without increasing the complexity of the associated hardware. The architectures for the proposed approaches are also developed; they exhibit flexibility of implementation with a low power requirement.
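A minimal software sketch of the idea, assuming SciPy's CubicSpline in place of the paper's hardware-oriented interpolator, shows how a cubic spline fitted at a handful of knots over [-π, π] approximates sin(x) and how the approximation error can be measured:

```python
# Minimal sketch: cubic spline approximation of sin(x) on [-pi, pi].
# Assumption: SciPy's CubicSpline stands in for the paper's hardware interpolator;
# the number of knots is an illustrative choice.
import numpy as np
from scipy.interpolate import CubicSpline

knots = np.linspace(-np.pi, np.pi, 9)        # a handful of knots over the range
spline = CubicSpline(knots, np.sin(knots))   # piecewise cubic interpolant

x = np.linspace(-np.pi, np.pi, 10001)
err = np.abs(spline(x) - np.sin(x))
print(f"max absolute error with 9 knots: {err.max():.2e}")
```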
Abstract: Sports Sciences has historically been supported by the positivist idea of science, especially the mechanistic/reductionist one, and has become a field that views experimentation and measurement as the major research domains. The disposition to simplify nature and the world into parts has fragmented and reduced the athlete's body to the idea of a machine. In this paper we intend to rethink this perception in the light of Complexity Theory. We put forward the idea of the athlete as a reflexive and active being (corporeity-body). Therefore, the construction of a training programme that considers the cultural, biological, and psychological elements of the experience of human corporal movement in a circumspect and responsible way could bring better chances of accomplishment. In the end, we hope to help coaches understand the intrinsic complexity of the body they are training and how better to deal with it, and, in the context of a deep globalization among the different types of knowledge, to respect and accept the peculiarities of the knowledge that comprises this area.
Abstract: This paper is mainly concerned with the application of
a novel technique of data interpretation for classifying measurements
of plasma columns in Tokamak reactors for nuclear fusion
applications. The proposed method exploits several concepts derived
from soft computing theory. In particular, Artificial Neural Networks
and Multi-Class Support Vector Machines have been exploited to
classify the magnetic variables useful for determining the shape and
position of the plasma with reduced computational complexity. The proposed
technique is used to analyze simulated databases of plasma equilibria
based on ITER geometry configuration. As well as demonstrating the
successful recovery of scalar equilibrium parameters, we show that
the technique can yield practical advantages compared with earlier
methods.
Abstract: The wavelet transform has been used extensively in machine fault diagnosis and prognosis owing to its strength in dealing with non-stationary signals. Existing wavelet-transform-based schemes for fault diagnosis employ wavelet decomposition of the entire vibration frequency band, which not only involves a huge computational overhead in extracting the features but also increases the dimensionality of the feature vector. This increase in dimensionality has the tendency to 'over-fit' the training data and can mislead the fault diagnostic model. In this paper a novel technique, the envelope wavelet packet transform (EWPT), is proposed, in which features are extracted from the wavelet packet transform of the filtered envelope signal rather than of the overall vibration signal. This not only reduces the computational overhead, through a reduced number of wavelet decomposition levels and features, but also improves fault detection accuracy. Analytical expressions are provided for the optimal frequency resolution and decomposition level selection in EWPT. Experimental results with both actual and simulated machine fault data demonstrate a significant gain in fault detection ability by EWPT at reduced complexity compared to existing techniques.
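To make the feature extraction step concrete, a minimal sketch is given below, assuming SciPy's Hilbert transform for the envelope and PyWavelets for the wavelet packet decomposition; the toy signal, wavelet family, and decomposition level are illustrative choices, not the paper's settings, and the band-pass filtering stage is omitted:

```python
# Minimal sketch of envelope-based wavelet packet features: envelope of the
# vibration signal (via Hilbert transform), then the energy of each wavelet
# packet node. Signal, wavelet and level are illustrative assumptions.
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 12000
t = np.arange(0, 1.0, 1.0 / fs)
# toy "fault" signal: periodic impacts exciting a resonance, plus noise
impacts = (np.sin(2 * np.pi * 30 * t) > 0.99).astype(float)
signal = impacts * np.sin(2 * np.pi * 3000 * t) + 0.1 * np.random.randn(t.size)

envelope = np.abs(hilbert(signal))            # demodulated envelope

wp = pywt.WaveletPacket(data=envelope, wavelet="db4", mode="symmetric", maxlevel=3)
nodes = wp.get_level(3, order="freq")         # 8 frequency-ordered packets
features = np.array([np.sum(node.data ** 2) for node in nodes])  # packet energies
print("feature vector (packet energies):", np.round(features, 2))
```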
Abstract: What influences microsystems (MEMS) and nanosystems (NEMS) innovation teams apart from technology complexity? Based on in-depth interviews with innovators, this research explores the key influences on innovation teams in the early phases of MEMS/NEMS. Projects are rare and may last from 5 to 10 years or more from idea to concept. As fundamental technology development in MEMS/NEMS is highly complex and interdisciplinary, involving expertise from different basic and engineering disciplines, R&D is more a 'testing of ideas' with many uncertainties than a clearly structured process. The purpose of this study is to explore the innovation teams' environment and give specific insights for future management practices. The findings are grouped into three major areas: people, know-how and experience, and market. The results highlight the importance, and the differences, of innovation teams' composition, transdisciplinary knowledge, and project evaluation and management compared to their counterparts in new product development teams.
Abstract: Protein 3D structure prediction has always been an
important research area in bioinformatics. In particular, the
prediction of secondary structure has been a well-studied research
topic. Despite the recent breakthrough of combining multiple
sequence alignment information and artificial intelligence algorithms
to predict protein secondary structure, the Q3 accuracy of various
computational prediction algorithms has rarely exceeded 75%. In a
previous paper [1], this research team presented a rule-based method
called RT-RICO (Relaxed Threshold Rule Induction from Coverings)
to predict protein secondary structure. The average Q3 accuracy on
the sample datasets using RT-RICO was 80.3%, an improvement
over comparable computational methods. Although this demonstrated
that RT-RICO might be a promising approach for predicting
secondary structure, the algorithm's computational complexity and
program running time limited its use. Herein a parallelized
implementation of a slightly modified RT-RICO approach is
presented. This new version of the algorithm facilitated the testing of
a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO
achieved a Q3 score of 74.6%, which is higher than the
consensus prediction accuracy of 72.9% that was achieved for the
same test dataset by a combination of four secondary structure
prediction methods [2].
Abstract: An effort estimation model is needed for software-intensive
projects that consist of hardware, embedded software or
some combination of the two, as well as high level software
solutions. This paper first focuses on functional decomposition
techniques to measure functional complexity of a computer system
and investigates its impact on system development effort. Later, it
examines effects of technical difficulty and design team capability
factors in order to construct the best effort estimation model. Using
a traditional regression analysis technique, the study develops a
system development effort estimation model which takes functional
complexity, technical difficulty and design team capability factors as
input parameters. Finally, the assumptions of the model are tested.
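A minimal sketch of the kind of regression model described is given below, using scikit-learn with randomly generated placeholder data purely to show the model structure (functional complexity, technical difficulty, and team capability as inputs; effort as output); none of these numbers come from the study:

```python
# Minimal sketch: linear effort estimation model with functional complexity,
# technical difficulty and design team capability as inputs. The data below is
# randomly generated for illustration only, not taken from the study.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 40
X = np.column_stack([
    rng.uniform(50, 500, n),    # functional complexity (e.g. function points)
    rng.uniform(1, 5, n),       # technical difficulty rating
    rng.uniform(1, 5, n),       # design team capability rating
])
# synthetic "effort" in person-hours, with noise
effort = 3.0 * X[:, 0] + 120 * X[:, 1] - 80 * X[:, 2] + rng.normal(0, 50, n)

model = LinearRegression().fit(X, effort)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted effort for a new project:",
      model.predict(np.array([[300, 3.5, 4.0]])))
```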
Abstract: This paper proposes a declarative language for knowledge representation, Ibn Rochd, and its exploitation environment, DeGSE. The DeGSE system was designed and developed to facilitate writing applications in Ibn Rochd. The system was tested on several knowledge bases of increasing complexity, culminating in a system for recognizing a plant or a tree, and in advisors for purchasing a car, for pedagogical and academic guidance, and for bank savings and credit. Finally, the limits of the language and research perspectives are stated.
Abstract: This paper investigates the performance of a speech
recognizer in an interactive voice response system for various coded
speech signals, coded using a vector quantization technique, namely the
Multi Switched Split Vector Quantization technique. The process of
recognizing the coded output can be used in a voice banking application.
The recognition technique used for the recognition of the coded speech
signals is the Hidden Markov Model technique. The spectral distortion
performance, computational complexity, and memory requirements of
Multi Switched Split Vector Quantization Technique and the
performance of the speech recognizer at various bit rates have been
computed. From the results it is found that the speech recognizer
performs best at 24 bits/frame, and that the recognition rate varies
from 100% to 93.33% across the various bit rates.
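As background for the coding scheme named above, a minimal sketch of the split vector quantization idea is given below: a parameter vector is split into sub-vectors, each quantized against its own codebook. The random codebooks and dimensions are illustrative assumptions; the paper's multi-switched scheme and HMM recognizer are not reproduced here.

```python
# Minimal sketch of split vector quantization: a parameter vector is split into
# sub-vectors and each part is quantized against its own small codebook.
# Codebooks here are random placeholders, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
dim, split = 10, 2                       # e.g. a 10-dimensional vector split into two parts
codebooks = [rng.random((16, dim // split)) for _ in range(split)]  # 4 bits per part

def quantize(vector):
    parts = np.split(vector, split)
    indices, recon = [], []
    for part, cb in zip(parts, codebooks):
        i = np.argmin(np.sum((cb - part) ** 2, axis=1))  # nearest codeword
        indices.append(i)
        recon.append(cb[i])
    return indices, np.concatenate(recon)

x = rng.random(dim)
idx, x_hat = quantize(x)
print("codeword indices:", idx, " distortion:", np.sum((x - x_hat) ** 2))
```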
Abstract: In this paper the development of a heat exchanger as a pilot plant for educational purposes is discussed, and the use of a neural network for controlling the process is presented. The aim of the study is to highlight the need for a specific Pseudo Random Binary Sequence (PRBS) to excite a process under control. As the neural network is a data-driven technique, the method of data generation plays an important role; in light of this, a careful experimentation procedure for data generation was a crucial task. Heat exchange is a complex process with both a capacity and a time lag as process elements. The proposed system is a typical pipe-in-pipe heat exchanger. The complexity of the system demands careful selection, proper installation, and commissioning. The temperature, flow, and pressure sensors play a vital role in the control performance, and the final control element used is a pneumatically operated control valve. While carrying out the experimentation on the heat exchanger, a well-drafted procedure was followed, giving the utmost attention to the safety of the system. The results obtained are encouraging and reveal that if the process details are completely known as far as process parameters are concerned, and the utilities are well stabilized, then feedback systems are suitable, whereas the neural network control paradigm is useful for processes with nonlinearity and less knowledge about the process. The implementation of NN control reinforces the concepts of process control and of the NN control paradigm. The results also underline the importance of an excitation signal tailored to the specific process. Data acquisition, processing, and presentation in a suitable format are the most important aspects when validating the results.
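Since the abstract stresses the role of the PRBS excitation, a minimal sketch of generating such a sequence with a linear feedback shift register is given below; the register length, feedback taps, and sample-hold interval are illustrative choices, not the values used on the pilot plant:

```python
# Minimal sketch: PRBS excitation signal from a linear feedback shift register.
# Register length, feedback taps and sample hold are illustrative assumptions.
import numpy as np

def prbs(n_bits, taps=(7, 6), seed=0b1010101, length=7):
    """Maximal-length PRBS from a Fibonacci LFSR with XOR feedback on the taps."""
    state = seed & ((1 << length) - 1) or 1   # any nonzero starting state
    out = []
    for _ in range(n_bits):
        out.append(state & 1)                 # output the least significant bit
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1      # XOR of the tapped bits
        state = (state >> 1) | (fb << (length - 1))
    return np.array(out)

bits = prbs(127)                              # one full period of a 7-bit PRBS
excitation = np.repeat(2 * bits - 1, 10)      # map to +/-1 and hold each bit 10 samples
print("sequence length:", excitation.size, " first samples:", excitation[:12])
```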