Abstract: The need to solve complicated multidimensional
scientific problems, together with the need to optimize several
objective functions at once, is a principal motivation behind
artificial intelligence and heuristic methods.
In this paper, we introduce a new method for multiobjective
optimization based on learning automata. In the proposed method,
the search space is divided into separate hypercubes, and each cube is
considered an action. After aggregating all the objective functions
with separate weights, the cumulative function is taken as the
fitness function. By applying all the cubes to the cumulative
function, we calculate the reinforcement amount of each action, and
the algorithm proceeds toward the best solutions. In this
method, a lateral memory is used to gather the significant points of
each iteration of the algorithm. Finally, by considering the
domination factor, the Pareto front is estimated. Results of several
experiments show the effectiveness of this method in comparison
with a genetic-algorithm-based method.
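The abstract does not specify the algorithm beyond its ingredients; as a rough illustration of those ingredients only, the sketch below treats each hypercube of the search space as an automaton action, uses a weighted sum of two hypothetical objectives as the cumulative fitness, stores visited points in a lateral memory, and extracts the non-dominated subset as the estimated Pareto front. The test problem, weights, and reward rule are all assumptions, not the paper's method:

```python
import random

# Hypothetical two-objective test problem (not from the paper).
def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 2) ** 2 + (x[1] - 2) ** 2

LOW, HIGH, DIVS = -2.0, 4.0, 6            # 6 divisions per axis -> 36 cubes
cells = [(i, j) for i in range(DIVS) for j in range(DIVS)]
prob = [1.0 / len(cells)] * len(cells)    # automaton action probabilities

def sample(cell):
    """Draw a uniform random point inside one hypercube (one 'action')."""
    w = (HIGH - LOW) / DIVS
    return [LOW + (c + random.random()) * w for c in cell]

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(0)
w1, w2 = 0.5, 0.5                         # assumed objective weights
archive = []                              # lateral memory of visited points
best = float("inf")
for _ in range(2000):
    k = random.choices(range(len(cells)), weights=prob)[0]
    x = sample(cells[k])
    objs = (f1(x), f2(x))
    archive.append(objs)
    fit = w1 * objs[0] + w2 * objs[1]     # cumulative (weighted-sum) fitness
    if fit < best:                        # reward-on-improvement: a simple
        best = fit                        # linear reward-inaction update;
        a = 0.05                          # the paper's scheme may differ
        prob = [p + a * (1 - p) if i == k else (1 - a) * p
                for i, p in enumerate(prob)]

# Estimate the Pareto front as the non-dominated points in the memory.
front = [p for p in archive if not any(dominates(q, p) for q in archive)]
```

The reward update keeps the probability vector normalized, so cubes that repeatedly improve the cumulative fitness are sampled more often.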
Abstract: Evolutionary robotics is concerned with the design of
intelligent systems with life-like properties by means of simulated
evolution. Approaches in evolutionary robotics can be categorized
according to the control structures that represent the behavior and the
parameters of the controller that undergo adaptation. The basic idea
is to automatically synthesize behaviors that enable the robot to
perform useful tasks in complex environments. The evolutionary
algorithm searches through the space of parameterized controllers
that map sensory perceptions to control actions, thus realizing a
specific robotic behavior. Further, the evolutionary algorithm
maintains and improves a population of candidate behaviors by
means of selection, recombination and mutation. A fitness function
evaluates the performance of the resulting behavior according to the
robot's task or mission. In this paper, the focus is on the use of
genetic algorithms to solve a multi-objective optimization problem
representing robot behaviors; in particular, the A-Compander Law is
employed in selecting the weight of each objective during the
optimization process. Results using an adaptive fitness function show
that this approach can efficiently react to complex tasks under
variable environments.
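Assuming the A-Compander Law refers to the A-law companding curve from telephony, a minimal sketch of how such a compander could shape objective weights follows; the abstract does not give the paper's exact scheme, so `objective_weights` and its normalized-error inputs are hypothetical:

```python
import math

def a_law(x, A=87.6):
    """Standard A-law compander on x in [0, 1] (telephony compression curve)."""
    if x < 1.0 / A:
        return A * x / (1.0 + math.log(A))
    return (1.0 + math.log(A * x)) / (1.0 + math.log(A))

def objective_weights(errors):
    """Hypothetical usage: map each objective's normalized error in [0, 1]
    to a weight via the compander, so objectives with small errors still
    receive a noticeable share of the total weight."""
    w = [a_law(e) for e in errors]
    s = sum(w)
    return [x / s for x in w] if s > 0 else [1.0 / len(w)] * len(w)
```

Because the compander boosts small inputs (e.g. `a_law(0.1)` is well above 0.1), no objective's weight collapses to zero during the optimization.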
Abstract: Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. To date, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem, but none of them considers the adequacy rate of the lines at the end of the planning horizon; i.e., the expanded network loses adequacy after some time and needs to be expanded again. In this paper, expansion planning is implemented by merging a line-loading parameter into the STNEP and inserting the investment cost into the fitness-function constraints of a genetic algorithm. The expanded network will possess maximum adequacy to supply the load demand, and its transmission lines will become overloaded later. Finally, an adequacy index is defined and used to compare designs that have different investment costs and adequacy rates. The proposed idea has been tested on Garver's network. The results show that the network will possess maximum economic efficiency.
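The abstract gives no algorithmic detail, so the following is only a generic illustration of folding an investment-cost constraint into a GA fitness function through penalty terms, on made-up candidate-line data (a toy, not the Garver system or the paper's planning model):

```python
import random

# Toy stand-in for STNEP: each candidate line has an assumed added capacity
# and investment cost; a plan must carry a fixed demand within a budget.
CAP = [30, 25, 40, 20, 35, 15]               # MW per candidate line (assumed)
COST = [3.0, 2.5, 4.5, 2.0, 4.0, 1.5]        # investment cost per line (assumed)
DEMAND, BUDGET = 80, 12.0

def fitness(plan):                           # lower is better
    cap = sum(c for c, b in zip(CAP, plan) if b)
    cost = sum(c for c, b in zip(COST, plan) if b)
    penalty = 10 * max(0, DEMAND - cap)      # unserved load demand
    penalty += 10 * max(0.0, cost - BUDGET)  # investment-cost constraint
    return cost + penalty

random.seed(1)
pop = [[random.randint(0, 1) for _ in CAP] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness)
    elite = pop[:10]                         # elitist survivor selection
    children = []
    for _ in range(10):
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, len(CAP))
        child = a[:cut] + b[cut:]            # one-point crossover
        child[random.randrange(len(child))] ^= 1  # bit-flip mutation
        children.append(child)
    pop = elite + children
best = min(pop, key=fitness)
```

With the penalties dominating the raw cost, the GA is steered first to plans that meet demand within budget and only then to the cheapest among them.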
Abstract: This paper proposes an efficient modified
particle swarm optimization (MPSO) method to identify a slider-crank
mechanism driven by a field-oriented PM synchronous motor.
For system identification, we adopt the MPSO method to find the
parameters of the slider-crank mechanism. The new algorithm
adds a "distance" term to the traditional PSO fitness function to
avoid convergence to a local optimum. Comparisons of
numerical simulations and experimental results show that the
MPSO identification method for the slider-crank mechanism is
feasible.
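The exact form of the paper's "distance" term is not given in the abstract; one plausible reading is a penalty that grows as a particle crowds the current global best, discouraging premature collapse onto it. The sketch below applies that idea to a made-up two-parameter identification problem (not the slider-crank dynamics); `mpso_fitness`, `lam`, and the model are assumptions:

```python
import math
import random

# Hypothetical identification target: fit theta = (a, b) of a toy model to
# sampled data; this is NOT the paper's slider-crank mechanism model.
def model(theta, t):
    return theta[0] * math.sin(t) + theta[1] * t

TRUE = (1.3, 0.4)
DATA = [(t, model(TRUE, t)) for t in (0.0, 0.5, 1.0, 1.5, 2.0)]

def sse(theta):                              # plain identification error
    return sum((y - model(theta, t)) ** 2 for t, y in DATA)

def mpso_fitness(theta, gbest, lam=0.01):
    # Assumed "distance" term: penalize sitting on top of the global best,
    # so personal bests keep exploring away from it.
    return sse(theta) + lam / (math.dist(theta, gbest) + 1e-6)

random.seed(2)
N, W, C1, C2 = 20, 0.7, 1.5, 1.5
pos = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=sse)[:]
for _ in range(200):
    for i in range(N):
        for d in range(2):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if mpso_fitness(pos[i], gbest) < mpso_fitness(pbest[i], gbest):
            pbest[i] = pos[i][:]
        if sse(pos[i]) < sse(gbest):         # gbest still tracks the raw error
            gbest = pos[i][:]
```

Only the personal-best update sees the distance penalty; the global best is ranked on the raw error so the reported solution is not biased by the penalty.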
Abstract: Evolutionary Algorithms are population-based,
stochastic search techniques widely used as efficient global
optimizers. However, many real-life optimization problems
require finding optimal solutions to complex, high-dimensional,
multimodal problems whose fitness-function evaluations are
computationally very expensive. The use of evolutionary algorithms in such
problem domains is thus practically prohibitive. An attractive
alternative is to build meta-models, i.e., approximations of the
actual fitness functions to be evaluated. These meta-models are orders
of magnitude cheaper to evaluate than the actual function.
Many regression and interpolation tools are available to
build such meta-models. This paper briefly discusses the
architectures and use of such meta-modeling tools in an evolutionary
optimization context. We further present two evolutionary-algorithm
frameworks that use meta-models for fitness-function
evaluation. The first framework, the Dynamic Approximate
Fitness-based Hybrid EA (DAFHEA) model [14], reduces
computation time through controlled use of meta-models (in this case
an approximate model generated by support vector machine
regression) to partially replace actual function evaluations with
approximate ones. However, the underlying
assumption in DAFHEA is that the training samples for the meta-model
are generated from a single uniform model, which does not account
for uncertain scenarios involving noisy fitness functions.
The second model, DAFHEA-II, an enhanced version of the original
DAFHEA framework, incorporates a multiple-model based learning
approach for the support vector machine approximator to handle
noisy functions [15]. Empirical results obtained by evaluating the
frameworks on several benchmark functions demonstrate their
efficiency.
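As a generic illustration of surrogate-assisted evaluation in the spirit of DAFHEA (not its actual algorithm), the sketch below pre-screens offspring on a cheap meta-model and spends expensive evaluations only on the most promising few. The meta-model here is inverse-distance-weighted interpolation, a standard-library stand-in for the SVM regression the framework uses; the test function and all parameters are assumptions:

```python
import math
import random

calls = 0
def expensive(x):                    # stand-in "expensive" fitness: Sphere
    global calls
    calls += 1
    return sum(v * v for v in x)

samples = []                         # (point, true fitness) training archive

def surrogate(x):
    """Cheap meta-model: inverse-distance-weighted interpolation over the
    archive (a stdlib stand-in for DAFHEA's SVM regression)."""
    num = den = 0.0
    for s, y in samples:
        d = math.dist(x, s)
        if d < 1e-9:
            return y
        w = 1.0 / (d * d)
        num += w * y
        den += w
    return num / den

random.seed(3)
pop = [[random.uniform(-3, 3) for _ in range(2)] for _ in range(12)]
for p in pop:                        # seed the meta-model with true evaluations
    samples.append((p, expensive(p)))
for _ in range(30):
    children = [[(ai + bi) / 2 + random.gauss(0, 0.3)
                 for ai, bi in zip(*random.sample(pop, 2))]
                for _ in range(12)]
    children.sort(key=surrogate)     # pre-screen cheaply on the meta-model
    for c in children[:3]:           # only the most promising children get
        samples.append((c, expensive(c)))  # the expensive true evaluation
    pop = [s for s, _ in sorted(samples, key=lambda sy: sy[1])[:12]]
best_x, best_y = min(samples, key=lambda sy: sy[1])
```

Only 3 of 12 children per generation reach the true function, so the total expensive-call budget is fixed at 12 + 30 × 3 = 102 regardless of population activity.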