Abstract: Artificial Intelligence (AI) methods are increasingly being used for problem solving. This paper concerns the use of AI-type learning machines for the power quality problem, a problem of general interest to power systems, which must provide quality power to all appliances. Electrical power of good quality is essential for the proper operation of electronic equipment such as computers and PLCs. Malfunction of such equipment may lead to loss of production or disruption of critical services, resulting in huge financial and other losses. It is therefore necessary that critical loads be supplied with electricity of acceptable quality. Recognizing the presence of a disturbance and classifying it into a particular type is the first step in combating the problem. In this work, two classes of AI methods for power quality data mining are studied: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). We show that SVMs are superior to ANNs in two critical respects: SVMs train and run an order of magnitude faster, and SVMs give higher classification accuracy.
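As a rough illustration of the SVM side of such a comparison, the sketch below trains a linear SVM with the Pegasos sub-gradient method on synthetic two-class data. The data and features are invented stand-ins, not the paper's power-quality disturbance features, and the paper's actual SVM formulation may differ (e.g. a kernel machine).

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Train a linear SVM with the Pegasos sub-gradient method.
    Labels y must be in {-1, +1}; a bias is absorbed by appending 1 to X."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb bias term
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (Xb[i] @ w) < 1:             # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                                   # only shrink
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Synthetic two-class "disturbance feature" vectors (illustrative only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.hstack([-np.ones(100), np.ones(100)])
w = pegasos_svm(X, y)
acc = np.mean(predict(w, X) == y)
```

Training is a single pass of cheap sub-gradient updates per epoch, which is one concrete reason SVM-style training can be fast compared to backpropagating through an ANN.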
Abstract: Several approaches, such as linear programming, network
modeling, greedy heuristics and decision support systems, are
well-known approaches to solving the irregular airline operations
problem. This paper presents an alternative approach based on a
Multi-Objective Micro Genetic Algorithm. The aim of this research
is to introduce the Multi-Objective Micro Genetic Algorithm as a
tool for solving the irregular airline operations problem of
combining and rerouting flights. The experimental results indicate
that the model can obtain optimal solutions within a few seconds.
Abstract: This paper describes a new approach to classification
using genetic programming. The proposed technique consists of
genetically coevolving a population of non-linear transformations
of the input data to be classified, mapping the data to a new
space of reduced dimension in order to obtain maximum inter-class
discrimination. The classification of new samples is then
performed on the transformed data, and so becomes much easier.
Contrary to existing GP-classification techniques, the proposed
one uses a dynamic partition of the transformed data into separate
intervals; the efficacy of a given interval partition is handled
by the fitness criterion, which rewards maximum class
discrimination. Experiments were first performed using Fisher's
Iris dataset, and then the KDD-99 Cup dataset was used to study
the intrusion detection and classification problem. The obtained
results demonstrate that the proposed genetic approach outperforms
the existing GP-classification methods [1], [2] and [3], and gives
very acceptable results compared to other existing techniques
proposed in [4], [5], [6], [7] and [8].
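The core idea — project the data non-linearly to a lower dimension and classify by intervals on the projected axis — can be illustrated with a fixed, hand-written transformation standing in for an evolved GP individual. Everything below (the transform, the data, the single-threshold interval rule) is an invented toy, not the paper's method.

```python
import numpy as np

def project(X):
    """A fixed non-linear transformation to one dimension: a stand-in
    for the transformations the paper evolves by genetic programming."""
    return X[:, 0] ** 2 + X[:, 1]

def fit_intervals(z, y):
    """Partition the projected axis into class intervals: here simply
    the midpoint between the two class means (two-class toy case)."""
    m0, m1 = z[y == 0].mean(), z[y == 1].mean()
    thr = (m0 + m1) / 2
    lower_class = 0 if m0 < m1 else 1
    return thr, lower_class

def classify(z, thr, lower_class):
    return np.where(z < thr, lower_class, 1 - lower_class)

# Two well-separated synthetic classes (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.hstack([np.zeros(50, dtype=int), np.ones(50, dtype=int)])
z = project(X)
thr, lc = fit_intervals(z, y)
acc = np.mean(classify(z, thr, lc) == y)
```

In the paper's setting, the fitness of each evolved transformation would be measured by how well such an interval partition of its output discriminates the classes.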
Abstract: Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. Up till now, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. But in all of these methods, lines adequacy rate has not been considered at the end of planning horizon, i.e., expanded network misses adequacy after some times and needs to be expanded again. In this paper, expansion planning has been implemented by merging lines loading parameter in the STNEP and inserting investment cost into the fitness function constraints using genetic algorithm. Expanded network will possess a maximum adequacy to provide load demand and also the transmission lines overloaded later. Finally, adequacy index could be defined and used to compare some designs that have different investment costs and adequacy rates. In this paper, the proposed idea has been tested on the Garvers network. The results show that the network will possess maximum efficiency economically.
Abstract: Optimal routing in communication networks is a major
issue to be solved. This paper addresses the application of Tabu
Search (TS) to the optimal routing problem, where the aim is to
minimize the computational time and improve the quality of the
solution in the communication network. The goal is to minimize the
average delays in the communication. The effectiveness of the Tabu
Search method is shown by simulation results for the shortest path
problem. Through this approach, the computational cost can be
reduced.
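A minimal sketch of tabu search applied to a shortest-path (minimum-delay) routing instance. The graph, the move set (replace, insert, or delete one node of the path), and the path-based tabu memory are illustrative choices, not taken from the paper.

```python
# Undirected weighted graph: link delays on a small communication network.
GRAPH = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "D": 5},
    "C": {"A": 2, "D": 8, "E": 3},
    "D": {"B": 5, "C": 8, "E": 2, "F": 6},
    "E": {"C": 3, "D": 2, "F": 9},
    "F": {"D": 6, "E": 9},
}

def cost(path):
    return sum(GRAPH[u][v] for u, v in zip(path, path[1:]))

def neighbors(path):
    """Paths reachable by replacing, inserting, or deleting one node."""
    out = []
    for k in range(1, len(path) - 1):          # replace path[k] with w
        for w in GRAPH:
            if w not in path and w in GRAPH[path[k-1]] and path[k+1] in GRAPH[w]:
                out.append(path[:k] + (w,) + path[k+1:])
    for k in range(len(path) - 1):             # insert w between k and k+1
        for w in GRAPH:
            if w not in path and w in GRAPH[path[k]] and path[k+1] in GRAPH[w]:
                out.append(path[:k+1] + (w,) + path[k+1:])
    for k in range(1, len(path) - 1):          # delete path[k]
        if path[k+1] in GRAPH[path[k-1]]:
            out.append(path[:k] + path[k+1:])
    return out

def tabu_search(start_path, iters=30, tenure=5):
    current = best = tuple(start_path)
    tabu = [current]                 # recency memory (full paths, for brevity)
    for _ in range(iters):
        cands = [p for p in neighbors(current)
                 if p not in tabu or cost(p) < cost(best)]   # aspiration rule
        if not cands:
            break
        current = min(cands, key=cost)   # best admissible move, even if worse
        tabu = (tabu + [current])[-tenure:]
        if cost(current) < cost(best):
            best = current
    return best, cost(best)

best, delay = tabu_search(("A", "B", "D", "F"))
```

Because the best admissible neighbor is taken even when it worsens the cost, the search can climb out of the poor initial route A-B-D-F and reach the minimum-delay path.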
Abstract: The present work describes a computational study of the
aerodynamic characteristics of the GLC305 airfoil, clean and with
a 16.7-min ice shape (rime 212) and a 22.5-min ice shape (glaze
944). The performance of the SA, k-ε, standard k-ω, and k-ω SST
turbulence models is assessed against experimental flow fields at
Mach numbers of 0.12, 0.21 and 0.28, over a range of Reynolds
numbers of 3x10^6, 6x10^6 and 10.5x10^6, on the clean and iced
GLC305 aircraft airfoil. Numerical predictions of the lift, drag
and pitching moment coefficients at different Mach numbers and
angles of attack were made. The sensitivity of the solutions to
the choice of turbulence model, variation of Mach number, initial
conditions, grid resolution and grid spacing near the wall made
the study a delicate one. A computational technique based on the
Navier-Stokes equations is used. The results are very close to the
experimental results. It is seen that the SA and SST models are
more efficient than the k-ε and standard k-ω models for the
problem under study.
Abstract: The paper addresses a problem of optimal staffing in an
open shop environment. The problem is to determine the optimal
number of operators serving a given number of machines to fulfill
a number of independent operations while minimizing staff idle
time. Using a Gantt chart presentation, the problem is modeled as
a two-dimensional cutting stock problem. A mixed-integer
programming model is used to obtain the minimal job processing
time (makespan) for a fixed number of machine operators. An
algorithm for optimal open-shop staffing is developed, based on
iteratively solving the formulated optimization task. The
execution of the developed algorithm provides the optimal number
of machine operators, in the sense of minimum staff idle time, and
the optimal makespan for that number of operators. The proposed
algorithm is tested numerically on a real-life staffing problem.
The test results show its practical applicability to similar
open-shop staffing problems.
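The iterative idea — compute the minimal makespan for each candidate operator count, then pick the count that meets the schedule with minimum idle time — can be sketched with a greedy list-scheduling surrogate in place of the paper's mixed-integer programme. The operation durations and deadline below are made up.

```python
import heapq

def makespan(durations, m):
    """Approximate makespan for m operators via LPT list scheduling:
    each operation goes to the currently least-loaded operator.
    (A greedy surrogate for the paper's mixed-integer programme.)"""
    loads = [0] * m                      # min-heap of operator loads
    for d in sorted(durations, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + d)
    return max(loads)

def staff_for_deadline(durations, deadline, max_ops=10):
    """Smallest operator count whose makespan meets the deadline,
    with the resulting makespan and total staff idle time."""
    total = sum(durations)
    for m in range(1, max_ops + 1):
        ms = makespan(durations, m)
        if ms <= deadline:
            return m, ms, m * ms - total  # idle = paid time - work content
    return None

m, ms, idle = staff_for_deadline([4, 3, 3, 2, 2, 2], deadline=8)
```

Here two operators finish all operations by the deadline with zero idle time; an exact MIP would replace the greedy `makespan` call at each iteration.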
Abstract: In pressure vessels containing hydrogen, the role of
hydrogen is important because of the hydrogen cracking problem. It
is difficult to predict what happens in the metallurgical field,
despite the many studies that have been carried out. The main role
in controlling mass diffusion, as its driving force, is played by
stress. In this study, finite element analysis is implemented to
estimate the material's behavior associated with hydrogen
embrittlement. For this purpose, a model of a pressure vessel with
definite boundary and initial conditions is introduced. In fact,
the finite element method is employed to solve the mass diffusion
sequentially coupled with the stress field near a crack front in a
pressure vessel. The simulation models the intergranular fracture
of AISI 4135 steel due to hydrogen. The distributions of hydrogen
and stress are obtained, and they indicate that their maximum
values occur near the crack front. This phenomenon occurs
precisely in the region between the elastic and plastic fields.
There, hydrogen is highly mobile and can diffuse through the
crystal lattice, so this zone has the potential to trap a high
volume of hydrogen. Consequently, crack growth and fast fracture
will occur.
Abstract: Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate control of their position has long been recognized as a complex control problem. The paper presents a methodology for obtaining controllers that achieve high position accuracy and preserve the closed-loop characteristics over a broad operating range. Experimentation with a number of conventional (or "classical") three-term controllers shows that, as repeated operations accumulate, the characteristics of the pneumatic actuator change, requiring frequent re-tuning of the controller parameters (PID gains). Furthermore, three-term controllers are found to perform poorly in recovering the closed-loop system after the application of load or other external disturbances. The key reason for these problems lies in the non-linear exchange of energy inside the cylinder relating, in particular, to the complex friction forces that develop on the piston-wall interface. In order to overcome this problem while still remaining within the boundaries of classical control methods, we designed an auto-selective classical controller so that the system performance would benefit from all three control gains (Kp, Ki, Kd) according to the system requirements and the characteristics of each type of controller. This challenging experimentation aimed at consistent performance in the face of modelling imprecision and disturbances. In the work presented, a selective PID controller is presented for an experimental rig comprising an air cylinder driven by a variable-opening pneumatic valve and equipped with position and pressure sensors. The paper reports on tests carried out to investigate the capability of this specific controller to achieve consistent control performance under repeated operations and other changes in operating conditions.
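A minimal sketch of the "selective" idea — switching between two PID gain sets according to the current error — exercised on a toy first-order plant. The gains, switching threshold and plant model are invented for illustration; they are not the paper's tuned values or a model of pneumatic-cylinder dynamics.

```python
def selective_pid(error, state, dt, gains_far, gains_near, switch=0.5):
    """One step of a 'selective' PID: aggressive gains (gains_far) when the
    error is large, gentler gains (gains_near) near the setpoint.
    state carries (integral, previous error)."""
    kp, ki, kd = gains_far if abs(error) > switch else gains_near
    integ, prev = state
    integ += error * dt
    deriv = (error - prev) / dt
    u = kp * error + ki * integ + kd * deriv
    return u, (integ, error)

# Track a unit setpoint with a first-order plant: x' = (u - x) / tau.
x, tau, dt = 0.0, 0.5, 0.01
state = (0.0, 0.0)
for _ in range(2000):                      # 20 s of simulated time
    e = 1.0 - x
    u, state = selective_pid(e, state, dt,
                             gains_far=(8.0, 2.0, 0.1),    # illustrative
                             gains_near=(3.0, 1.0, 0.05))  # illustrative
    x += dt * (u - x) / tau
```

The integral term drives the steady-state error to zero under either gain set, while the gain switch gives a fast approach far from the setpoint and a gentle, low-overshoot response near it.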
Abstract: Sensors measure several properties of physical
quantities. Whether they are devices that convert a sensed signal
into an electrical signal, chemical sensors, or biosensors, all
these sensors can be considered as an interface between the
physical world and electrical equipment. The problem is the
analysis of the multitude of saved settings as input variables,
which do not all have the same level of influence on the outputs.
It is therefore necessary to identify the most sensitive
parameters: those that can guide users in gathering information on
the ground, in the process of model calibration, and in the
sensitivity analysis of the effect of each change made. The
mathematical models used for processing become very complex.
In this paper, a fuzzy rule-based system is proposed as a
solution to this problem. The system collects the available signal
information from the sensors. Moreover, the system allows the
study of the influence of the various factors that take part in
the decision system. Since its inception, fuzzy set theory has
been regarded as a formalism suitable for dealing with the
imprecision intrinsic to many problems. At the same time, fuzzy
sets allow the use of symbolic models. In this study, an example
application was developed for resolving the variety of
physiological parameters that define the human health state. The
application system was built as an aid to medical diagnosis. The
inputs are signals expressing the cardiovascular system
parameters, blood pressure, and respiratory system parameters;
once built, the system is able to predict the state of the patient
for any input values.
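A tiny Mamdani-style sketch of such a fuzzy rule-based aid: two vital-sign inputs are fuzzified with triangular memberships and a handful of rules produce a risk score. The membership ranges and rules are invented for illustration only and have no clinical validity.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(systolic_bp, heart_rate):
    """Mamdani-style sketch: two vital signs -> a risk score in [0, 1].
    Ranges and rules are illustrative assumptions, not clinical values."""
    bp_high = tri(systolic_bp, 120, 160, 200)
    bp_normal = tri(systolic_bp, 90, 115, 140)
    hr_high = tri(heart_rate, 90, 120, 150)
    hr_normal = tri(heart_rate, 50, 70, 95)
    # Rules: (firing strength, consequent risk level)
    rules = [(min(bp_high, hr_high), 0.9),      # both elevated -> high risk
             (min(bp_normal, hr_normal), 0.1),  # both normal -> low risk
             (max(bp_high, hr_high), 0.6)]      # either elevated -> medium
    num = sum(w * r for w, r in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5            # weighted-average defuzzification

low = diagnose(110, 68)    # normal vitals
high = diagnose(170, 130)  # elevated vitals
```

Because each rule's firing strength is explicit, the same structure also exposes how sensitive the output is to each input, which is the sensitivity-analysis use the abstract describes.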
Abstract: Transmission network expansion planning (TNEP) is an important component of power system planning whose task is to minimize the network construction and operational cost while satisfying the increasing demand under the imposed technical and economic conditions. To date, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. But in none of these methods has the lines' adequacy rate been studied after the planning horizon, i.e. when the expanded network loses its adequacy and needs to be expanded again. In this paper, in order to take the condition of the transmission lines after expansion into account from the line-loading viewpoint, the adequacy of the transmission network is considered in solving the STNEP problem. To obtain the optimal network arrangement, a decimal codification genetic algorithm (DCGA) is used to minimize the network construction and operational cost. The effectiveness of the proposed idea is tested on Garver's six-bus network. Evaluation of the results reveals that the annual worth of network adequacy has a considerable effect on the network arrangement. In addition, the network obtained with the DCGA has a lower investment cost and a higher adequacy rate. Thus, the network satisfies the requirements of delivering electric power more safely and reliably to load centers.
Abstract: Information retrieval has the objective of studying
models and realizing systems that allow a user to find the
relevant documents suited to his information need. Information
search remains a difficult problem because of the difficulty of
representing and processing natural-language phenomena such as
polysemy. Intentional structures promise to be a new paradigm for
extending existing document structures and enhancing the different
phases of document processing, such as creation, editing, search
and retrieval. Recognizing the intentions of the authors of texts
can reduce the magnitude of this problem. In this article, we
present an intention recognition system based on a semi-automatic
method for extracting intentional information from a corpus of
text. This system is also able to update the ontology of
intentions to enrich the knowledge base containing all possible
intentions of a domain. This approach relies on the construction
of a semi-formal ontology, which is considered the
conceptualization of the intentional information contained in a
text. An experiment on scientific publications in the field of
computer science was carried out to validate this approach.
Abstract: Accurately predicting non-peak traffic is crucial to
daily traffic forecasting models. In this paper, least squares
support vector machines (LS-SVMs) are investigated to solve this
practical problem. This is the first time the approach has been
applied, and its forecasting performance analyzed, in this domain.
For comparison purposes, two parametric and two non-parametric
techniques are selected because of their effectiveness proven in
past research. Having good generalization ability and guaranteeing
global minima, LS-SVMs perform better than the other techniques.
The considerable improvement in stability and robustness reveals
that the approach is practically promising.
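The LS-SVM training problem reduces to a single linear system (Suykens' formulation), which is what gives the guaranteed global solution mentioned above. A minimal regression sketch on a toy one-dimensional curve (not the paper's traffic data; kernel width and regularization are illustrative):

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression: the dual is the linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # ridge term from the squared-error loss
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)       # one solve -> global optimum
    return sol[1:], sol[0]              # (alpha, b)

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    return rbf(Xte, Xtr, sigma) @ alpha + b

# Toy smooth "traffic flow" curve (illustrative, not the paper's data).
X = np.linspace(0, 6, 40)[:, None]
y = np.sin(X[:, 0])
alpha, b = lssvm_fit(X, y)
err = np.max(np.abs(lssvm_predict(X, alpha, b, X) - y))
```

Replacing the standard SVM's inequality constraints with equalities turns quadratic programming into one linear solve, which is the source of both the speed and the global-minimum guarantee.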
Abstract: Ant colony algorithms have been applied to difficult
combinatorial optimization problems such as the travelling
salesman problem and the quadratic assignment problem. In this
paper, grid-based and random-based ant colony algorithms are
proposed for automatic 3D hose routing, and their pros and cons
are discussed. The algorithm uses a tessellated format for the
obstacles and the generated hoses in order to detect collisions.
Representing obstacles and hoses in the tessellated format greatly
helps the algorithm handle free-form objects and speeds up
computation. The performance of the algorithm has been tested on a
number of 3D models.
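A minimal ant colony sketch for routing on a small weighted graph. The 3D tessellated collision handling described in the paper is omitted, and the graph and parameter values are invented for illustration.

```python
import random

GRAPH = {  # undirected edge weights (e.g. route lengths)
    "S": {"A": 2, "B": 5},
    "A": {"S": 2, "B": 1, "T": 6},
    "B": {"S": 5, "A": 1, "T": 1},
    "T": {"A": 6, "B": 1},
}

def cost(path):
    return sum(GRAPH[u][v] for u, v in zip(path, path[1:]))

def build_path(pher, alpha=1.0, beta=2.0):
    """One ant walks S -> T, picking edges with probability proportional
    to pheromone^alpha * (1/weight)^beta, never revisiting a node."""
    path, node = ["S"], "S"
    while node != "T":
        opts = [(v, d) for v, d in GRAPH[node].items() if v not in path]
        if not opts:
            return None                       # dead end; discard this ant
        weights = [pher[(node, v)] ** alpha * (1.0 / d) ** beta for v, d in opts]
        node = random.choices([v for v, _ in opts], weights)[0]
        path.append(node)
    return path

def aco(iters=50, ants=10, rho=0.5, seed=0):
    random.seed(seed)
    pher = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}
    best = None
    for _ in range(iters):
        paths = [p for p in (build_path(pher) for _ in range(ants)) if p]
        for key in pher:
            pher[key] *= 1 - rho              # evaporation
        for p in paths:
            for u, v in zip(p, p[1:]):        # deposit, both directions
                pher[(u, v)] += 1.0 / cost(p)
                pher[(v, u)] += 1.0 / cost(p)
            if best is None or cost(p) < cost(best):
                best = p
    return best, cost(best)

best, length = aco()
```

In the hose-routing setting, the candidate moves would be steps between grid cells or sampled points, with the tessellated models vetoing any move that collides with an obstacle.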
Abstract: Many high-risk pathogens that cause disease in
humans are transmitted through various food items. Food-borne
disease constitutes a major public health problem. Assessment of
the quality and safety of foods is important to human health.
Rapid and easy detection of pathogenic organisms will facilitate
precautionary measures to maintain healthy food. The polymerase
chain reaction (PCR) is a handy tool for rapid detection of low
numbers of bacteria. We have designed gene-specific primers for
the most common food-borne pathogens, such as Staphylococci,
Salmonella and E. coli. Bacteria were isolated from food samples
from various food outlets and identified using gene-specific PCRs.
We identified Staphylococci, Salmonella and E. coli O157 using
gene-specific primers by a rapid and direct PCR technique in
various food samples. This study helps us obtain a complete
picture of the various pathogens that threaten to cause and spread
food-borne diseases, and it will also enable the establishment of
a routine procedure and methodology for rapid identification of
food-borne bacteria using the rapid technique of direct PCR. This
study will also enable us to judge the efficiency of the present
food safety steps taken by food manufacturers and exporters.
Abstract: An extensive number of engineering drawings are referred to during the planning process, and changes to them produce a good engineering design that meets the demand for a new model. The advantage of reusing engineering designs is that it allows continuous product development, further improving the quality of product development and thus reducing development costs. However, retrieving existing engineering drawings is time-consuming, a complex process, and prone to errors. An engineering drawing file searching system is proposed to solve this problem. It is essential for engineers and designers to have some medium that enables them to search for drawings in the most effective way. This paper lays out the proposed research project in the area of information extraction from engineering drawings.
Abstract: This paper describes the optimization of a complex
dairy farm simulation model using two quite different methods of
optimization, the Genetic algorithm (GA) and the Lipschitz
Branch-and-Bound (LBB) algorithm. These techniques have been
used to improve an agricultural system model developed by Dexcel
Limited, New Zealand, which describes a detailed representation of
pastoral dairying scenarios and contains an 8-dimensional parameter
space. The model incorporates the sub-models of pasture growth and
animal metabolism, which are themselves complex in many cases.
Each evaluation of the objective function, a composite 'Farm
Performance Index (FPI)', requires simulation of at least a one-year
period of farm operation with a daily time-step, and is therefore
computationally expensive. The problem of visualization of the
objective function (response surface) in high-dimensional spaces is
also considered in the context of the farm optimization problem.
Adaptations of the Sammon mapping and parallel coordinates
visualization are described, which help visualize some important
properties of the model's output topography. From this study, it
is found that the GA requires fewer function evaluations in
optimization than the LBB algorithm.
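A minimal real-coded GA of the kind compared here, run on a cheap 8-dimensional stand-in for the expensive Farm Performance Index, with the function-evaluation count tracked explicitly since that is the cost metric of interest. All numbers are illustrative.

```python
import random

def sphere(x):
    """Cheap stand-in for the expensive Farm Performance Index."""
    return sum(v * v for v in x)

def ga_minimize(f, dim=8, pop_size=20, gens=40, sigma=0.3, seed=0):
    """Elitist real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation. Returns best point, best value, evaluation count."""
    random.seed(seed)
    evals = 0
    def fit(x):
        nonlocal evals
        evals += 1
        return f(x)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    scored = sorted((fit(x), x) for x in pop)
    for _ in range(gens):
        parents = [x for _, x in scored[:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [ai if random.random() < 0.5 else bi   # uniform crossover
                     for ai, bi in zip(a, b)]
            child = [v + random.gauss(0, sigma) for v in child]  # mutation
            children.append(child)
        # Elitism: parents survive unevaluated; only children cost evaluations.
        scored = sorted(scored[:pop_size // 2]
                        + [(fit(x), x) for x in children])
    return scored[0][1], scored[0][0], evals

best_x, best_f, evals = ga_minimize(sphere)
```

Because elitism re-uses parent scores, each generation costs only `pop_size // 2` new evaluations — exactly the kind of bookkeeping that matters when one evaluation is a full one-year farm simulation.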
Abstract: In a world worried about water resources with the
shadow of drought and famine looming all around, the quality of
water is as important as its quantity. The source of all concerns is the
constant reduction of per capita quality water for different uses.
With an average annual precipitation of 250 mm compared to the
800 mm world average, Iran is considered a water-scarce country,
and the disparity in the rainfall distribution, the limitations of
renewable resources and the population concentration in the margins
of desert and water scarce areas have intensified the problem.
The shortage of per capita renewable freshwater and its poor
quality in large areas of the country, which have saline, brackish or
hard water resources, and the profusion of natural and artificial
pollutants have caused the deterioration of water quality.
Among the methods for treating and using these waters, one can
point to membrane technologies, which have come into focus in
recent years due to their great advantages. The nanofiltration
process is quite efficient in eliminating multivalent ions; and
due to the possibility of production at different capacities, its
applicability as a treatment process at points of use, and its
lower energy requirement in comparison to reverse osmosis
processes, it can revolutionize the water and wastewater sector in
the years to come. This article studies the different capacities
of the water resources in the Persian Gulf and Oman Sea watershed
basins, and assesses the possibility of using the nanofiltration
process to treat brackish and non-conventional waters in these
basins.
Abstract: Clustering is a very well-known technique in data mining. One of the most widely used clustering techniques is the k-means algorithm. Solutions obtained by this technique depend on the initialization of the cluster centers, and the final solution converges to local minima. In order to overcome the shortcomings of the k-means algorithm, this paper proposes a hybrid evolutionary algorithm based on the combination of the PSO, SA and k-means algorithms, called PSO-SA-K, which can find better cluster partitions. Its performance is evaluated on several benchmark data sets. The simulation results show that the proposed algorithm outperforms previous approaches, such as PSO, SA and k-means, for the partitional clustering problem.
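A sketch of the simulated-annealing ingredient of such a hybrid: k-means updates interleaved with SA-style random jumps of the centers, which lets the search escape poor initializations. The PSO swarm component is omitted for brevity, and all parameters are illustrative.

```python
import numpy as np

def assign_cost(X, centers):
    """Within-cluster sum of squared distances and point labels."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum(), d.argmin(axis=1)

def kmeans_step(X, centers):
    """One Lloyd update: move each center to the mean of its points."""
    _, labels = assign_cost(X, centers)
    return np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                     else centers[k] for k in range(len(centers))])

def sa_kmeans(X, k, iters=200, temp=1.0, cool=0.98, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random init
    cost, _ = assign_cost(X, centers)
    best_c, best_cost = centers, cost
    for _ in range(iters):
        # Candidate: random jump of the centers, then one k-means step.
        cand = kmeans_step(X, centers + rng.normal(0.0, temp, centers.shape))
        c_cost, _ = assign_cost(X, cand)
        # SA acceptance: always take improvements, sometimes take worse moves.
        if c_cost < cost or rng.random() < np.exp((cost - c_cost) / max(temp, 1e-9)):
            centers, cost = cand, c_cost
        if c_cost < best_cost:
            best_c, best_cost = cand, c_cost
        temp *= cool                                    # cooling schedule
    return best_c, best_cost

# Two well-separated synthetic clusters (illustrative only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(4, 0.3, (50, 2))])
centers, best_cost = sa_kmeans(X, 2)
```

Plain k-means from an unlucky initialization can converge to a poor local minimum; the annealed jumps give the search a chance to relocate a badly placed center before the temperature cools.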
Abstract: A DEA model can generally evaluate performance using
multiple inputs and outputs for the same period. However, it is
sometimes hard to avoid the production lead time phenomenon, as in
long-term projects or marketing activities. A couple of models
have been suggested to capture this time lag issue in the context
of DEA. This paper develops a dual-MPO model to deal with the time
lag effect in evaluating efficiency. A numerical example is also
given to show that the proposed model can be used to obtain the
efficiency and reference sets of inefficient DMUs, and to obtain
projected target values of the input attributes for inefficient
DMUs to become efficient.
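For context, the standard single-period DEA building block that such models extend — the input-oriented CCR efficiency of a DMU — can be computed with one linear programme. The data below are made up, and this is the classical envelopment form, not the paper's dual-MPO model itself.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: inputs x DMUs, Y: outputs x DMUs. Solves
      min theta  s.t.  sum_j lam_j x_ij <= theta * x_io  (all inputs)
                       sum_j lam_j y_rj >= y_ro          (all outputs)
                       lam >= 0,
    over variables z = [theta, lam_1, ..., lam_n]."""
    n = X.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A, b = [], []
    for i in range(X.shape[0]):                  # input constraints
        A.append(np.concatenate([[-X[i, o]], X[i]]))
        b.append(0.0)
    for r in range(Y.shape[0]):                  # output constraints
        A.append(np.concatenate([[0.0], -Y[r]]))
        b.append(-Y[r, o])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Two inputs, one output, three DMUs (made-up numbers).
X = np.array([[2.0, 4.0, 5.0],    # input 1 per DMU
              [3.0, 1.0, 2.0]])   # input 2 per DMU
Y = np.array([[1.0, 1.0, 1.0]])   # single output per DMU
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```

An efficiency of 1 marks a DMU on the frontier; for an inefficient DMU the optimal lambda weights identify its reference set and theta times its inputs gives the projected targets — the quantities the paper's time-lag-aware model generalizes across periods.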