Abstract: The recent trend has been to use a hybrid approach rather than a single intelligent technique to solve problems. In this paper, we describe and discuss a framework for developing enterprise solutions backed by intelligent techniques. The framework does not merely use intelligent techniques themselves; it is a complete environment that includes various interfaces and components for developing intelligent solutions. The framework is completely Web-based and uses XML extensively. It can serve as a shared platform accessed by multiple developers, users, and decision makers.
Abstract: Accurate demand forecasting is one of the key issues in inventory management of spare parts. The problem of modeling future consumption becomes especially difficult for lumpy patterns, which are characterized by intervals in which there is no demand and periods with actual demand occurrences that vary widely in demand level. Many forecasting methods perform poorly when demand for an item is lumpy. In this study, based on the characteristics of the lumpy demand patterns of spare parts, a hybrid forecasting approach is developed that uses a multi-layered perceptron neural network and a traditional recursive method to forecast future demand. In the described approach, the multi-layered perceptron is adapted to forecast the occurrence of non-zero demands, and a conventional recursive method is then used to estimate the quantity of non-zero demands. To evaluate the performance of the proposed approach, its forecasts were compared with those obtained using the Syntetos & Boylan approximation and the multi-layered perceptron, generalized regression, and Elman recurrent neural networks recently employed in this area. The models were applied to forecast future demand for spare parts of the Arak Petrochemical Company in Iran, using 30 real data sets. The results indicate that the forecasts obtained with the proposed model are superior to those obtained with the other methods.
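As an illustration of the conventional recursive method for non-zero demand quantities, the Syntetos & Boylan approximation used as a baseline above updates a smoothed demand size and a smoothed inter-demand interval only in periods where demand occurs; the smoothing constant and demand series below are illustrative:

```python
def sba_forecast(demand, alpha=0.1):
    """Syntetos & Boylan approximation for lumpy demand.

    Croston-style recursion: smooth the non-zero demand size z and the
    inter-demand interval p with constant alpha, then apply the SBA
    bias correction (1 - alpha/2) to the ratio z/p.
    """
    z = p = None   # smoothed size and interval (unset until first demand)
    q = 1          # periods elapsed since the last non-zero demand
    for d in demand:
        if d > 0:
            if z is None:          # initialize on the first demand
                z, p = d, q
            else:                  # exponential-smoothing updates
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    if z is None:                  # no demand ever observed
        return 0.0
    return (1 - alpha / 2) * z / p
```

For a perfectly regular lumpy series such as demand 5 every third period, the estimate settles at 0.95 × 5/3 per period.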
Abstract: Random Access Memory (RAM) is an important device in a computer system. It can provide a snapshot of how the computer has been used. With its growing importance, computer memory has become a widely discussed issue in digital forensics. A number of tools have been developed to retrieve information from memory. However, most of these tools are limited in their ability to retrieve important information from computer memory. Hence, this paper discusses the limitations and setbacks of two main techniques, process signature search and process enumeration. A new hybrid approach is then presented to minimize the setbacks of both individual techniques. This new approach combines both techniques in order to retrieve information from the process block and other objects in computer memory. In addition, the basic theory of address translation on x86 platforms is demonstrated in this paper.
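The address translation theory mentioned above can be sketched for classic 32-bit x86 paging (non-PAE), where a virtual address splits into a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit page offset. The dictionary-based tables below are a stand-in for walking the real structures in a memory image:

```python
def split_virtual_address(va):
    """Split a 32-bit x86 virtual address (non-PAE, 4 KiB pages)."""
    pde_index = (va >> 22) & 0x3FF   # top 10 bits: page-directory entry
    pte_index = (va >> 12) & 0x3FF   # next 10 bits: page-table entry
    offset = va & 0xFFF              # low 12 bits: byte offset in page
    return pde_index, pte_index, offset

def translate(va, page_directory, page_tables):
    """Resolve a virtual address to a physical one.

    page_directory maps pde_index -> page-table id;
    page_tables maps (table id, pte_index) -> physical frame number.
    Both are simplified stand-ins for parsed in-memory structures.
    """
    pde, pte, off = split_virtual_address(va)
    table = page_directory[pde]
    frame = page_tables[(table, pte)]
    return (frame << 12) | off
```

Real translation must also honor present bits, large (4 MiB) pages, and PAE layouts, which this sketch omits.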
Abstract: Electroencephalogram (EEG) recordings are often contaminated with ocular and muscle artifacts. In this paper, canonical correlation analysis (CCA) is used as a blind source separation (BSS) technique (BSS-CCA) to decompose artifact-contaminated EEG into component signals. We combine the BSS-CCA technique with a wavelet filtering approach to minimize both ocular and muscle artifacts simultaneously, and refer to the proposed method as wavelet-enhanced BSS-CCA. In this approach, after careful visual inspection, the muscle artifact components are discarded and the ocular artifact components are subjected to wavelet filtering to retain high-frequency cerebral information; the clean EEG is then reconstructed. The performance of the proposed wavelet-enhanced BSS-CCA method is tested on real EEG recordings contaminated with ocular and muscle artifacts, with power spectral density used as a quantitative measure. Our results suggest that the proposed hybrid approach minimizes ocular and muscle artifacts effectively while minimally affecting the underlying cerebral activity in EEG recordings.
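The wavelet filtering step, retaining high-frequency cerebral content while suppressing the low-frequency ocular artifact, can be illustrated with a single-level Haar transform in which the approximation (low-frequency) band is zeroed before reconstruction. Real implementations use multi-level decompositions and coefficient thresholding, so this is only a sketch:

```python
def haar_highpass(signal):
    """One-level Haar analysis/synthesis keeping only the detail band.

    Each sample pair is split into an approximation (mean) and a
    detail (half-difference); zeroing the approximation and inverting
    the transform leaves only the high-frequency content.
    """
    out = []
    for i in range(0, len(signal) - 1, 2):
        d = (signal[i] - signal[i + 1]) / 2   # detail coefficient
        # inverse Haar with the approximation coefficient set to zero
        out.extend([d, -d])
    return out
```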
Abstract: Assessing an autistic child in elementary school is a difficult task that must be fully thought out, and teachers should be aware of the many challenges they face in raising such a child, especially the behavioral problems of autistic children. Hence, there is a need to develop contemporary artificial intelligence (AI) techniques to help diagnose autism early.
In this research, we propose an expert system architecture that combines Cognitive Maps (CM) with Case-Based Reasoning (CBR) in order to reduce the time and cost of the traditional diagnosis process for the early detection of autistic children. The teacher enters the child's information, which is analyzed by the CM module. The reasoning processor then translates the output into a case whose current problem is solved by the CBR module. We implement a prototype of the model as a proof of concept using Java and MySQL.
This provides a new hybrid approach that achieves new synergies and improves problem-solving capabilities in AI. We predict that it will reduce time, costs, and the number of human errors, and will make expertise available to more people who want to serve autistic children and their families.
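The CBR module's retrieval step can be sketched as nearest-neighbour matching over case feature vectors; the feature names and case structure below are hypothetical, chosen only to make the mechanism concrete:

```python
def retrieve_case(query, case_base):
    """CBR retrieval: return the stored case nearest to the query.

    query is a dict of numeric features; each case in case_base is a
    dict with a "features" vector and a stored solution. This is a
    simple 1-nearest-neighbour retrieval over squared distance.
    """
    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)
    return min(case_base, key=lambda case: distance(query, case["features"]))
```

A full CBR cycle would follow retrieval with reuse, revision, and retention of the adapted case.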
Abstract: This paper proposes new hybrid approaches for face recognition. The Gabor wavelet representation of face images is an effective approach for both facial action recognition and face identification. Performing dimensionality reduction and linear discriminant analysis on the down-sampled Gabor wavelet faces can increase the discriminative ability. The nearest feature space is extended to various similarity measures. In our experiments, the proposed Gabor wavelet faces combined with the extended neural-network feature space classifier show very good performance, achieving a maximum correct recognition rate of 93% on the ORL data set without any preprocessing step.
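A Gabor wavelet representation starts from a bank of Gabor kernels at several scales and orientations; a minimal sketch of the real part of one 2-D Gabor kernel (parameter choices illustrative) is:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    cosine carrier oriented at angle theta. Returned as a size x size
    list of rows; convolving face images with a bank of such kernels
    yields the Gabor wavelet faces."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates by theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```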
Abstract: This paper presents an approach that hybridizes two or more artificial intelligence (AI) techniques to fuzzify the work-stress level ranking and categorize the rating accordingly. The use of two or more techniques (a hybrid approach) has been considered in this case, as combining different techniques may neutralize each other's weaknesses and generate a superior hybrid solution. Recent research has shown a need for more valid and reliable tools for assessing work stress; thus, artificial intelligence techniques have been applied in this instance to provide a solution to a psychological application. An overview of the novel and autonomous interactive model for analysing work stress developed using multi-agent systems is also presented in this paper, along with the establishment of the intelligent multi-agent decision analyser (IMADA), which uses a hybridized technique of neural networks and fuzzy logic within the multi-agent-based framework.
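The fuzzification of a work-stress rating can be sketched with triangular membership functions; the category breakpoints below are illustrative assumptions, not IMADA's actual ones:

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet a and c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def stress_category(score):
    """Fuzzify a raw work-stress score (0-10) into linguistic
    categories; breakpoints are illustrative, not IMADA's."""
    return {
        "low": triangular(score, -1, 0, 5),
        "medium": triangular(score, 2, 5, 8),
        "high": triangular(score, 5, 10, 11),
    }
```

A downstream rule base would then combine these memberships with neural-network outputs to rank the stress level.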
Abstract: A new hybrid coding method for compressing animated polygonal meshes is presented. This paper assumes a simplistic representation of the geometric data: a temporal sequence of polygonal meshes, one for each discrete frame of the animated sequence. The method utilizes delta coding and an octree-based method. In this hybrid method, both the octree approach and the delta coding approach are applied to each frame in the animation sequence in parallel, and the approach that generates the smaller encoded file is chosen to encode the current frame. Given the same quality requirement, the hybrid coding method can achieve a much higher compression ratio than either the octree-only or the delta-only method. The hybrid approach can represent 3D animated sequences with higher compression factors while maintaining reasonable quality. It is easy to implement and has a low-cost encoding process and a fast decoding process, which make it a better choice for real-time applications.
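The per-frame selection rule described above can be sketched as follows, with the two coders passed in as callables (toy stand-ins for the real delta and octree coders); a one-byte tag tells the decoder which method was used:

```python
def encode_frame(frame, prev_frame, delta_encode, octree_encode):
    """Encode one animation frame with whichever coder yields the
    smaller payload, prefixing a method tag for the decoder.

    delta_encode(frame, prev_frame) and octree_encode(frame) are
    assumed to return bytes; both run on every frame, and the smaller
    result wins (ties go to delta coding)."""
    delta = delta_encode(frame, prev_frame)
    octree = octree_encode(frame)
    if len(delta) <= len(octree):
        return b"D" + delta
    return b"O" + octree
```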
Abstract: Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the availability of only a small sample size. Therefore, it is important to determine a set of relevant genes that classify the data well. Traditionally, gene selection methods often select the top-ranked genes according to their discriminatory power. These genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a genetic algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes providing maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those found in the literature in terms of both classification accuracy and the number of genes selected.
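A fitness function of the kind described, favouring classification accuracy first and a smaller gene subset second, might look like the following sketch; the weighting is an assumption, not the paper's exact formula:

```python
def fitness(chromosome, accuracy):
    """GA fitness for gene selection (illustrative weighting).

    chromosome is a 0/1 list marking selected genes; accuracy is the
    cross-validated multiclass-SVM accuracy for that subset. Accuracy
    dominates; a small bonus breaks ties toward fewer genes.
    """
    n_selected = sum(chromosome)
    n_total = len(chromosome)
    if n_selected == 0:
        return 0.0                      # empty subsets are invalid
    size_bonus = (1 - n_selected / n_total) / (10 * n_total)
    return accuracy + size_bonus
```

The bonus term is scaled so it can never outweigh a genuine accuracy difference between subsets.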
Abstract: User-based collaborative filtering (CF), one of the most prevalent and efficient recommendation techniques, provides personalized recommendations to users based on the opinions of other users. Although the CF technique has been successfully applied in various applications, it suffers from serious sparsity problems. The cloud-model approach addresses the sparsity problem by constructing the user's global preference, represented by a cloud eigenvector. The user-based CF approach works well with dense datasets, while the cloud-model CF approach performs better when the dataset is sparse. In this paper, we present a hybrid approach that integrates the predictions from both the user-based CF and cloud-model CF approaches. The experimental results show that the proposed hybrid approach can ameliorate the sparsity problem and provide improved prediction quality.
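One simple way to integrate the two predictors, weighting toward the cloud model as the rating matrix gets sparser, is sketched below; the density-based linear weighting is an illustrative assumption, not the paper's exact scheme:

```python
def hybrid_predict(user_cf_pred, cloud_cf_pred, density, threshold=0.05):
    """Blend user-based CF and cloud-model CF rating predictions.

    density is the fraction of filled cells in the rating matrix;
    below the (assumed) threshold the weight shifts linearly toward
    the cloud-model prediction, which handles sparsity better.
    """
    w = min(density / threshold, 1.0)   # dense data -> trust user-based CF
    return w * user_cf_pred + (1 - w) * cloud_cf_pred
```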
Abstract: Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper, we study the novelty of discovered rules as a subjective measure of interestingness. We propose a hybrid approach that uses objective and subjective measures to quantify the novelty of discovered rules in terms of their deviations from known rules. We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with some public datasets; the experimental results are quite promising.
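A deviation measure between a discovered rule and a known rule might, for illustration, combine antecedent-set distance with a consequent-mismatch penalty; this is an assumed measure, not the paper's exact quantification:

```python
def rule_deviation(rule_a, rule_b):
    """Deviation between two rules, each given as (antecedent item
    set, consequent). The Jaccard distance between antecedents covers
    structural deviation; differing consequents add a unit penalty.
    """
    ants_a, cons_a = rule_a
    ants_b, cons_b = rule_b
    union = ants_a | ants_b
    jaccard = (1 - len(ants_a & ants_b) / len(union)) if union else 0.0
    return jaccard + (0.0 if cons_a == cons_b else 1.0)
```

Discovered rules could then be ranked as novel when their minimum deviation from all known rules exceeds the user-specified threshold.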
Abstract: Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the colors in an image with minimum distortion. Clustering is a widely used technique for color quantization, in which all colors in an image are grouped into small clusters. In this paper, we propose a new hybrid approach for color quantization using the firefly algorithm (FA) and the K-means algorithm. The firefly algorithm is a swarm-based algorithm that can be used for solving optimization problems. The proposed method can overcome the drawbacks of both algorithms, such as the local-optimum convergence problem of K-means and the early convergence of the firefly algorithm. Experiments on three commonly used images and the comparison results show that the proposed algorithm surpasses both the baseline K-means clustering and the original firefly algorithm.
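In a hybrid of this kind, the firefly algorithm typically supplies good initial centroids that K-means then refines locally. A minimal Lloyd refinement step on 1-D (grayscale) values, with the FA result simply passed in as the seed, could look like:

```python
def kmeans_refine(points, centroids, iters=10):
    """Refine palette centroids with Lloyd's algorithm.

    points and centroids are 1-D intensity values here for brevity;
    in color quantization they would be RGB triples. The seed
    centroids are assumed to come from the firefly search.
    """
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # assign each point to its nearest centroid
            i = min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # move each centroid to its cluster mean (keep empty ones fixed)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```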
Abstract: One important objective in precision agriculture is to minimize the volume of herbicides applied to fields through the use of site-specific weed management systems. In order to reach this goal, two major factors need to be considered: 1) the similar spectral signature, shape, and texture of weeds and crops; and 2) the irregular distribution of weeds within the crop field. This paper outlines an automatic computer vision system for the detection and differential spraying of Avena sterilis, a noxious weed growing in cereal crops. The proposed system involves two processes: image segmentation and decision making. Image segmentation combines basic image processing techniques to extract cells from the image as low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and the weeds. From these attributes, a hybrid decision-making approach determines whether or not a cell must be sprayed. The hybrid approach uses the Support Vector Machines and Fuzzy k-Means methods, combined through fuzzy aggregation theory; this constitutes the main finding of this paper. The method's performance is compared against other available strategies.
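The fuzzy aggregation of the two classifiers' outputs for a cell can be sketched with a compensatory gamma operator over the SVM score (mapped to [0, 1]) and the fuzzy k-means weed membership; the operator, gamma value, and spray threshold are illustrative assumptions:

```python
def spray_decision(svm_score, fkm_membership, gamma=0.5):
    """Decide whether a cell is sprayed, combining an SVM weed score
    and a fuzzy k-means weed membership (both in [0, 1]) with the
    compensatory gamma operator: product^(1-gamma) * bounded-sum^gamma.
    """
    product = svm_score * fkm_membership                # t-norm part
    bounded_sum = min(1.0, svm_score + fkm_membership)  # t-conorm part
    combined = (product ** (1 - gamma)) * (bounded_sum ** gamma)
    return combined >= 0.5
```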
Abstract: This paper presents a hybrid approach for solving the n-queen problem by a combination of PSO and SA. PSO is a population-based heuristic method that sometimes becomes trapped in a local maximum; SA can be used to address this problem. Although SA suffers from many iterations and long convergence times on some problems, with good adjustment of initial parameters such as the temperature and the length of the temperature stages, SA guarantees convergence. In this article, we use discrete PSO (due to the nature of the n-queen problem) to reach a good local maximum, and then use SA to escape from it. The experimental results show that our hybrid method converges to the result faster than the SA method, especially for high-dimensional n-queen problems.
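Two building blocks of such a PSO+SA hybrid are the conflict count for a permutation-encoded board (the objective both phases optimize) and the Metropolis acceptance rule used in the SA escape phase; a minimal sketch:

```python
import math
import random

def conflicts(board):
    """Attacking queen pairs for a permutation encoding: one queen
    per column, row = board[col], so only diagonal clashes remain."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(board[i] - board[j]) == abs(i - j))

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis rule: always accept improvements (delta <= 0);
    accept worse moves with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

In the hybrid, the discrete PSO supplies a low-conflict board and SA then perturbs it (e.g. by swapping two rows), accepting moves via `sa_accept` under a cooling schedule.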
Abstract: Economic dispatch (ED) is considered one of the key functions in electric power system operation. This paper presents a new hybrid approach based on a genetic algorithm (GA) for economic dispatch problems. The GA is a commonly used optimization algorithm predicated on the principles of natural evolution. Using a chaotic queue with the GA generates several neighborhoods of near-optimal solutions to maintain solution variation, preventing the search process from becoming premature. For chaotic queue generation, using the tent equation instead of the logistic equation improves the iterative speed. The results of the proposed approach were compared, in terms of fuel cost, with existing differential evolution and other methods in the literature.
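The tent equation used for chaotic queue generation is x_{k+1} = mu * x_k when x_k < 0.5 and mu * (1 - x_k) otherwise; a minimal generator (mu = 2 is the standard fully chaotic setting) is:

```python
def tent_map_sequence(x0, n, mu=2.0):
    """Generate n iterates of the tent map starting from x0 in (0, 1).

    The resulting chaotic queue is what the GA described above would
    use to spawn neighborhoods of near-optimal dispatch solutions.
    """
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1 - x)
        xs.append(x)
    return xs
```

Its piecewise-linear form avoids the multiplications of the logistic map x_{k+1} = mu * x_k * (1 - x_k), which is the iterative-speed advantage the abstract cites.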
Abstract: Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast enough to produce timely analysis results. These requirements impose constraints on the design of the algorithms, which must balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step, and perform data reduction horizontally and vertically through our EMR sampling method. The proposed method is analyzed through a series of experiments; we apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
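The one-pass constraint and the Monte Carlo approximation step can be illustrated with reservoir sampling followed by a sample-mean estimate; the paper's EMR sampling is more elaborate than this sketch:

```python
import random

def monte_carlo_mean(stream, sample_size, seed=0):
    """One-pass Monte Carlo estimate of a stream's mean.

    Reservoir sampling keeps a uniform random sample without ever
    revisiting an element; the sample mean then approximates the
    stream mean. The fixed seed makes the sketch reproducible.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, x in enumerate(stream):
        if i < sample_size:
            reservoir.append(x)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # replace with decaying probability
            if j < sample_size:
                reservoir[j] = x
    return sum(reservoir) / len(reservoir)
```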
Abstract: Individually, network reconfiguration and capacitor control perform well in minimizing power loss and improving the voltage profile of a distribution system. However, network reconfiguration alone cannot effectively reduce power loss and enhance voltage profiles under heavy reactive power loads, nor can capacitor placement under heavy active power loads. In this paper, a hybrid approach that combines network reconfiguration and capacitor placement using the Harmony Search Algorithm (HSA) is proposed to minimize power loss and improve the voltage profile. The proposed approach is tested on the standard IEEE 33- and 16-bus systems. Computational results show that the proposed hybrid approach can minimize losses more efficiently than network reconfiguration or capacitor control alone. The results of the proposed method are also compared with those obtained by simulated annealing (SA); the proposed method outperforms SA in terms of solution quality.
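The HSA's core improvisation step, generating a new candidate switch/capacitor configuration from harmony memory, can be sketched generically; the parameter values and the integer variable encoding are illustrative assumptions:

```python
import random

def improvise(harmony_memory, hmcr=0.9, par=0.3, rng=None):
    """One HSA improvisation: for each decision variable (e.g. an open
    switch or capacitor position), pick a value from harmony memory
    with probability hmcr, otherwise draw at random; memory values are
    pitch-adjusted to a neighbor with probability par. The candidate
    would then be evaluated by a power-flow loss calculation."""
    rng = rng or random.Random(0)
    n_vars = len(harmony_memory[0])
    new = []
    for i in range(n_vars):
        if rng.random() < hmcr:
            v = rng.choice(harmony_memory)[i]   # memory consideration
            if rng.random() < par:
                v += rng.choice([-1, 1])        # pitch adjustment
        else:
            v = rng.randint(0, 32)              # random bus/switch index
        new.append(v)
    return new
```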
Abstract: Apart from geometry, functionality is one of the most significant hallmarks of a product. The functionality of a product can be considered the fundamental justification for the product's existence. Therefore, a functional analysis that includes a complete and reliable descriptor has high potential to improve the product development process in various fields, especially in knowledge-based design. One of the important applications of functional analysis and indexing is in retrieval and the design-reuse concept: more than 75% of design activity for new product development involves reusing earlier and existing design know-how. Thus, the analysis and categorization of product functions, concluded by functional indexing, directly influences design optimization. This paper elucidates and evaluates the major classes of functional analysis by discussing their major methods, and concludes by presenting a novel hybrid approach for functional analysis.
Abstract: It is estimated that abnormal conditions cost US process industries around $20 billion in annual losses. The hydrotreatment (HDT) of diesel fuel in petroleum refineries is a conversion process that yields highly profitable economic returns. However, this process is difficult to control because it is operated continuously, at high hydrogen pressures, and is subject to disturbances in feed properties and catalyst performance. Automatic fault detection and diagnosis therefore plays an important role in this context. In this work, a hybrid approach based on neural networks together with a post-processing classification algorithm is used to detect faults in a simulated HDT unit. Nine classes (eight faults and normal operation) were correctly classified using the proposed approach within a maximum time of 5 minutes, based on on-line process measurements.
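The post-processing classification stage can be illustrated by majority voting over a sliding window of raw per-sample network outputs; the dominance rule here is an assumption, not the paper's exact algorithm:

```python
from collections import Counter

def postprocess(window):
    """Smooth raw per-sample fault labels from the neural network by
    majority vote over a sliding window, declaring a fault class only
    when it dominates more than half the window; otherwise report
    normal operation to suppress transient misclassifications."""
    label, count = Counter(window).most_common(1)[0]
    return label if count > len(window) // 2 else "normal"
```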
Abstract: Power cables are widely used for power supply in distribution networks and transmission lines. Due to limitations in the production, delivery, and installation of power cables, they are produced and delivered in several separate lengths. A cable run consists of two cable terminations and an arbitrary number of cable joints, depending on the cable route length. Electrical stress control is needed to prevent dielectric breakdown at the end of the insulation shield, in both the air and the cable insulation. The reliability of a cable joint depends on its materials, design, installation, and operating environment. This paper describes the design and performance results for newly modeled cable joints; design concepts based on numerical calculations must be correct. An Equivalent Electrodes Method / Boundary Elements Method hybrid approach that allows electromagnetic field calculations in multilayer dielectric media, including inhomogeneous regions, is presented.