Abstract: The parallel Job Shop Scheduling Problem (JSSP) is a multi-objective, multi-constraint NP-optimization problem. Traditional Artificial Intelligence techniques have been widely used; however, they can become trapped in local minima without reaching the optimal solution. Thus, we propose a hybrid Artificial Intelligence (AI) model that adds Discrete Breeding Swarm (DBS) to traditional AI to avoid this trapping. The model is applied to cost minimization in the Car Sequencing and Operator Allocation (CSOA) problem. Practical experiments show that our model outperforms other techniques in cost minimization.
Abstract: In this paper, a basic scheme for a fractional-dimensional optimization problem is presented. A method is developed based on a relation between the roots of a function and its tangent lines in fractional dimensions, starting from an arbitrary initial point. It is shown that for each polynomial function of order N, at least N tangent lines must exist in fractional dimensions 0 < α < N+1 that pass exactly through all the roots of the function. A geometrical analysis of tangent lines in fractional dimensions is also presented to clarify the proposed method more intuitively. Results show that with an appropriate selection of fractional dimensions, the roots can be found directly. The method thus offers a different direction for approaching optimization problems through the use of fractional dimensions.
Abstract: Simulation modeling can be used to solve real-world problems and provides an understanding of complex systems. To develop a simplified model of a process simulation, a suitable experimental design is required that can capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) was proposed to be combined with existing Central Composite Design (CCD) samples to improve the performance of CCD in generating a second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples helps capture surface curvature characteristics. A suitable number of LHS sample points should be chosen in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
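The augmentation scheme described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes a face-centered two-factor CCD and a hypothetical quadratic test function standing in for the CO2-liquefaction simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic test surface standing in for the CO2-liquefaction
# simulator; the known coefficients let us check the fitted model.
def simulate(x):
    return (1.0 + 2.0 * x[:, 0] - 1.5 * x[:, 1]
            + 0.8 * x[:, 0] * x[:, 1] + 3.0 * x[:, 0] ** 2)

# Face-centered CCD points for two factors in [-1, 1]^2.
ccd = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)

def latin_hypercube(n, d, rng):
    # One sample per equal-width bin in each dimension, bins shuffled.
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + rng.random((n, d))) / n

# Augment the CCD with LHS points scaled to the same design space.
lhs = latin_hypercube(8, 2, rng) * 2.0 - 1.0
X = np.vstack([ccd, lhs])
y = simulate(X)

# Fit the full second-order model by least squares.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Since the test surface is noise-free and the augmented design has full column rank for the quadratic model, the least-squares fit recovers the true coefficients exactly.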
Abstract: Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records.
The prediction models used in this paper can be classified into
model-driven approaches and data-driven approaches, according to
the different modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with its environment as a dynamical system. But calibrating the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem.
An original contribution of this paper is to propose a statistical
methodology, Multi-Scenarios Parameters Estimation (MSPE), for the
parametrization of potentially complex mechanistic models from a
new type of datasets (climatic data, final yield in many situations).
It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process, but it imposes some strict requirements on the dataset.
A second contribution of the paper is the comparison of these
model-driven methods with classical data-driven methods. For this
purpose, we consider two classes of regression methods, methods
derived from linear regression (Ridge and Lasso Regression, Principal
Components Regression or Partial Least Squares Regression) and
machine learning methods (Random Forest, k-Nearest Neighbor,
Artificial Neural Network and SVM regression).
The dataset consists of 720 records of corn yield at county scale
provided by the United States Department of Agriculture (USDA) and
the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, Random
Forest is the most robust and generally achieves the best prediction
error (MAEP 4.27%). It also outperforms our model-driven approach
(MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is
to combine these two types of approaches.
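The evaluation protocol above can be illustrated with a short sketch (not the paper's code): 5-fold cross-validation computing RMSEP and MAEP on synthetic data standing in for the USDA county records, with a simple ridge regressor as the data-driven model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the USDA county-scale dataset: 720 records with a
# few climatic predictors and a noisy linear yield response (hypothetical).
n, p = 720, 6
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge estimate: (X'X + lam*I)^(-1) X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# 5-fold cross-validation with the paper's two accuracy metrics.
folds = np.array_split(rng.permutation(n), 5)
rmsep, maep = [], []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    b = ridge_fit(X[train_idx], y[train_idx])
    err = y[test_idx] - X[test_idx] @ b
    rmsep.append(np.sqrt(np.mean(err ** 2)))   # root mean square error
    maep.append(np.mean(np.abs(err)))          # mean absolute error

mean_rmsep = float(np.mean(rmsep))
mean_maep = float(np.mean(maep))
```

Note that MAEP never exceeds RMSEP on any fold, which is why the paper can report both without redundancy: their gap indicates how heavy-tailed the prediction errors are.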
Abstract: With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of the video for fast browsing and retrieval, is necessary.
Key frame extraction is one of the mechanisms to generate video
summary. In general, the extracted key frames should both represent
the entire video content and contain minimum redundancy. However,
most of the existing approaches heuristically select key frames; hence,
the selected key frames may not be the most different frames and/or
not cover the entire content of a video. In this paper, we propose
a method of video summarization which provides the reasonable
objective functions for selecting key frames. In particular, we apply
a statistical dependency measure called quadratic mutual information
as our objective functions for maximizing the coverage of the
entire video content as well as minimizing the redundancy among
selected key frames. The proposed key frame extraction algorithm
finds key frames as an optimization problem. Through experiments,
we demonstrate the success of the proposed video summarization approach, which produces video summaries with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
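The selection principle, coverage of the whole video minus redundancy among chosen frames, can be sketched as a greedy procedure. This is an illustrative stand-in only: a Gaussian-kernel similarity replaces the quadratic mutual information measure used in the paper, and the frame features are random.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frame descriptors standing in for real video frames.
frames = rng.normal(size=(60, 16))

# Pairwise similarity matrix; a Gaussian kernel is used here as a simple
# stand-in for the paper's quadratic mutual information dependency measure.
d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=2)
S = np.exp(-(d ** 2) / (2.0 * d.mean() ** 2))

def select_key_frames(S, k, lam=0.5):
    """Greedy trade-off: each picked frame should cover the whole video
    (high average similarity to all frames) while being non-redundant
    (low similarity to frames already selected)."""
    selected = []
    for _ in range(k):
        best, best_score = -1, -np.inf
        for i in range(len(S)):
            if i in selected:
                continue
            coverage = S[i].mean()
            redundancy = max((S[i, j] for j in selected), default=0.0)
            score = coverage - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

keys = select_key_frames(S, k=5)
```

The weight `lam` (a hypothetical parameter, not from the paper) controls the coverage/redundancy trade-off that the paper formulates as two objective functions.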
Abstract: This paper develops a method for treating the critical fatigue stress as a constraint in the Bi-directional Evolutionary Structural Optimization (BESO) method. Our aim is to reach an optimal design in which high-cycle fatigue failure does not occur for a specified lifetime. The critical fatigue stress is calculated based on the modified Goodman criterion and used as a stress constraint in our topology optimization problem. Since fatigue generally does not occur under compressive stresses, we use a p-norm stress measure that takes the highest tensile principal stress at each point to calculate the sensitivity numbers. The BESO method has been extended to minimize the volume of an object subject to the critical fatigue stress constraint. The optimization results are compared with those of the compliance minimization problem, which clearly shows the merits of our newly developed approach.
Abstract: In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint sparsity is the first property, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers present in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
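In variable-splitting/augmented-Lagrangian schemes of this kind, the nuclear-norm term is typically handled by a singular value thresholding step in each iteration. The following is a minimal sketch of that building block alone, not the paper's full l2,p/nuclear-norm algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * nuclear norm, which shrinks each singular value of M
    toward zero by tau and drops those below tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# On a matrix with known singular values 3, 1 and 0.2, thresholding at
# tau = 0.5 shrinks them to 2.5, 0.5 and 0, producing a rank-2 result.
M = np.diag([3.0, 1.0, 0.2])
Z = svt(M, 0.5)
```

Applied to a noisy abundance matrix, this step suppresses the small singular values contributed by noise while retaining the few directions spanned by the active endmembers, which is exactly what the low-rank assumption calls for.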
Abstract: A non-stationary stochastic optimization methodology
is applied to an OWC (oscillating water column) to find the design
that maximizes the wave energy extraction. Different temporal cycles
are considered to represent the long-term variability of the wave
climate at the site in the optimization problem. The results of the
non-stationary stochastic optimization problem are compared against
those obtained by a stationary stochastic optimization problem. The
comparative analysis reveals that the proposed non-stationary
optimization provides designs with a better fit to reality. However, the stationarity assumption can be adequate when looking at the averaged system response.
Abstract: This paper focuses on a parametric analysis of reinforced concrete structures equipped with supplemental damping braces. Practitioners still lack sufficient data for the routine design of damper-added structures and often reduce the real model to a pure damper-braced structure, even though this assumption is neither realistic nor conservative. In the present study, the damping brace is modelled as a linear supporting brace connected in series with a viscous/hysteretic damper. The deformation capacity of existing structures is usually not adequate to withstand the design earthquake. In spite of this, additional dampers can be introduced, strongly limiting structural damage to acceptable values or, in some cases, reducing the frame response to elastic behavior. This work aims to provide useful considerations for the retrofit of existing buildings by means of supplemental damping braces. The study explicitly takes into consideration the variability of (a) the relative frame-to-supporting-brace stiffness, (b) the dampers' coefficient (viscous coefficient or yielding force) and (c) non-linear frame behavior. Non-linear time history analyses have been run to account for both the dampers' behavior and non-linear plastic hinges modelled by a Pivot hysteretic model. Parametric analyses based on previous studies of SDOF and MDOF linear frames provide reference values for nearly optimal damping system design. With respect to the bare frame configuration, the seismic response of the damper-added frame is strongly improved, limiting deformations to acceptable values far below the ultimate capacity. The results of the analysis also demonstrate the beneficial effect of stiffer supporting braces, thus highlighting the inadequacy of simplified pure damper models. At the same time, the effect of variable damping coefficient and yielding force has to be treated as an optimization problem.
Abstract: The crossover probability and mutation probability are two important factors in genetic algorithms. The adaptive genetic algorithm can improve the convergence performance of the genetic algorithm: the crossover and mutation probabilities are adapted as the fitness values change. We apply the adaptive genetic algorithm to a function optimization problem. Numerical experiments show that the adaptive genetic algorithm improves the convergence speed and avoids local convergence.
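The abstract does not give the exact adaptation rule; the sketch below assumes the commonly used scheme in which the probabilities scale linearly with how far an individual's fitness lies above the population average (all parameter values are illustrative).

```python
def adaptive_probabilities(f, f_max, f_avg, pc_max=0.9, pm_max=0.1):
    """Adapt crossover (pc) and mutation (pm) probabilities to an
    individual's fitness f (maximization): individuals above the
    average fitness are disturbed less, preserving good solutions,
    while below-average individuals keep the full rates and are
    explored more aggressively."""
    if f >= f_avg and f_max > f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return pc_max * scale, pm_max * scale
    return pc_max, pm_max

# The best individual (f == f_max) is never disrupted ...
pc_best, pm_best = adaptive_probabilities(10.0, f_max=10.0, f_avg=6.0)
# ... while a below-average individual keeps the maximum rates.
pc_weak, pm_weak = adaptive_probabilities(4.0, f_max=10.0, f_avg=6.0)
```

This is the mechanism by which such schemes avoid local convergence: premature convergence pushes f_avg toward f_max, which automatically raises the effective mutation rate of mediocre individuals.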
Abstract: Optimizing the parameters in the controller plays a
vital role in the control theory and its applications. Optimizing the
PID parameters is finding out the best value from the feasible
solutions. Finding the optimal value is an optimization problem.
Inverted Pendulum is a very good platform for control engineers to
verify and apply different logics in the field of control theory. It is
necessary to find an optimization technique for the controller to tune
the values automatically in order to minimize the error within the
given bounds. In this paper, the algorithmic concepts of Harmony
search (HS) and Genetic Algorithm (GA) have been analyzed for the
given range of values. The experimental results show that HS performs better than GA.
Abstract: In recent decades, probabilistic constrained optimal
control problems have attracted much attention in many research
fields. Although probabilistic constraints are generally intractable
in an optimization problem, several tractable methods have been
proposed to handle probabilistic constraints. In most methods,
probabilistic constraints are reduced to deterministic constraints
that are tractable in an optimization problem. However, there is a
gap between the transformed deterministic constraints in the cases of known and unknown probability distributions. This paper examines
the conservativeness of probabilistic constrained optimization methods for an unknown probability distribution. The objective of this paper is to provide a quantitative assessment of the conservatism of tractable constraints in probabilistic constrained optimization with an unknown probability distribution.
Abstract: Distributed Generation (DG) can help in reducing the
cost of electricity to the customer, relieve network congestion and
provide environmentally friendly energy close to load centers. Its
capacity is also scalable and it provides voltage support at distribution
level. Hence, DG placement and penetration level is an important
problem for both the utility and DG owner. DG allocation and capacity
determination is a nonlinear optimization problem. The objective
function of this problem is the minimization of the total loss of the
distribution system. Also, high levels of DG penetration pose a new
challenge for traditional electric power systems. This paper presents a
new methodology for the optimal placement of DG and penetration
level of DG in distribution system based on General Algebraic
Modeling System (GAMS) and Genetic Algorithm (GA).
Abstract: Two new algorithms for the nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on penalized regression splines. The spline is represented as a piecewise-linear function, and an orthogonal regression is estimated for each linear portion; this algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results show the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, using goodness-of-fit indexes for smoothing parameter selection gives similar results, though with an oversmoothing effect.
Abstract: Microarray gene expression data play a vital role in understanding biological processes, gene regulation and disease mechanisms. A bicluster in gene expression data is a subset of genes exhibiting consistent patterns under a subset of conditions. Finding a bicluster is an optimization problem. In recent years, swarm intelligence techniques have become popular because many real-world problems are increasingly large, complex and dynamic. Given the size and complexity of these problems, it is necessary to find an optimization technique whose efficiency is measured by finding a near-optimal solution within a reasonable amount of time. In this paper, the algorithmic concepts of the Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL) and Cuckoo Search (CS) algorithms are analyzed on four benchmark gene expression datasets. The experimental results show that CS outperforms PSO and SFL on three datasets, while SFL gives better performance on one dataset. This work also determines the biological relevance of the biclusters with Gene Ontology in terms of function, process and component.
Abstract: The Optimal Power Flow (OPF) problem in electrical power systems is a static, non-linear, multi-objective or single-objective optimization problem. This paper presents an algorithm for solving the reactive power dispatch (RPD) problem with a voltage stability objective in a power system. The proposed approach employs the cat swarm optimization (CSO) algorithm to find optimal settings of the RPD control variables. Generator terminal voltages, reactive power generation of the capacitor banks and tap-changing transformer settings are taken as the optimization variables. The CSO algorithm is tested on the standard IEEE 30-bus system, and the results are compared with other methods to demonstrate the effectiveness of the new algorithm. The results indicate that the proposed method performs best for solving the optimal reactive power dispatch problem.
Abstract: The portfolio optimization problem has received a great deal of attention from both researchers and practitioners over the last six decades. This paper provides an overview of the current state of research in portfolio optimization supported by mathematical programming techniques. In addition, this paper surveys the solution algorithms for portfolio optimization models, classifying them by nature into heuristic and exact methods. To serve these purposes, 40 related articles appearing in international journals from 2003 to 2013 were gathered and analyzed. Based on the literature review, it is observed that stochastic programming and goal programming are the most frequently employed mathematical programming techniques for tackling the portfolio optimization problem. It is hoped that the paper can meet the needs of researchers and practitioners as an easy reference on portfolio optimization.
Abstract: A design problem of non-uniform circular antenna arrays for maximum reduction of both the side lobe level (SLL) and the first null beam width (FNBW) is addressed. The problem is modeled as a simple optimization problem. The Firefly algorithm (FFA) is used to determine an optimal set of current excitation weights and antenna inter-element separations that provide a radiation pattern with maximum SLL reduction and a marked improvement in FNBW as well. A circular array antenna laid on the x-y plane is assumed. FFA is applied to circular arrays of 8, 10 and 12 elements. Various simulation results are presented, and the side lobe and FNBW performances are analyzed. Experimental results show considerable reductions of both the SLL and FNBW with respect to the uniform case and to the standard algorithms GA, PSO and SA applied to the same problem.
Abstract: Economic Dispatch is one of the most important power system management tools. It is used to allocate power generation among the generating units to meet the load demand. The Economic Dispatch problem is a large-scale nonlinear constrained optimization problem. In general, heuristic optimization techniques are used to solve the non-convex Economic Dispatch problem. In this paper, ideas from Reinforcement Learning are proposed to solve the non-convex Economic Dispatch problem. Q-Learning is a reinforcement learning technique in which each generating unit learns the optimal schedule of generated power that minimizes the generation cost function. Eligibility traces are used to speed up the Q-Learning process. Q-Learning with eligibility traces is used to solve Economic Dispatch problems with valve-point loading effects, multiple fuel options and power transmission losses.
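As a hedged illustration of Q-Learning with eligibility traces (Watkins's Q(lambda)) on this kind of problem, the toy below dispatches a single unit with a valve-point-style cost; all problem data are hypothetical and far simpler than the multi-unit systems of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-unit dispatch: pick a discrete output level to meet demand.
levels = np.linspace(10.0, 100.0, 10)   # candidate outputs (MW)
demand = 60.0

def cost(p):
    # Quadratic fuel cost plus a rectified-sine valve-point ripple.
    return 0.01 * p ** 2 + 2.0 * p + 5.0 * abs(np.sin(0.1 * p))

def reward(p):
    # Negative cost with a heavy penalty for unmet demand.
    return -cost(p) - 10.0 * abs(p - demand)

# Watkins's Q(lambda): eligibility traces spread each TD error over the
# recent greedy actions, speeding up plain one-step Q-learning.
Q = np.zeros(len(levels))
e = np.zeros(len(levels))
alpha, gamma, lam, eps = 0.1, 0.9, 0.8, 0.1
for _ in range(3000):
    greedy = rng.random() >= eps
    a = int(np.argmax(Q)) if greedy else int(rng.integers(len(levels)))
    delta = reward(levels[a]) + gamma * Q.max() - Q[a]   # TD error
    e[a] += 1.0                       # accumulate trace for chosen action
    Q += alpha * delta * e            # credit recent actions via traces
    # Watkins's variant: traces decay after greedy steps, reset otherwise.
    e = e * (gamma * lam) if greedy else np.zeros_like(e)

best_level = float(levels[int(np.argmax(Q))])
```

The learned greedy action should settle near the level that balances fuel cost against the demand penalty, which for this toy reward is the level matching the demand.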
Abstract: In this paper we propose a simple adaptive algorithm
iteratively solving the unit-norm constrained optimization problem.
Instead of conventional parameter norm based normalization,
the proposed algorithm incorporates scalar normalization which is
computationally much simpler. An analysis of the stationary points is presented to show that the proposed algorithm indeed solves the
constrained optimization problem. The simulation results illustrate
that the proposed algorithm performs as well as conventional ones
while being computationally simpler.
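The abstract does not state the algorithm's exact update. As an illustrative sketch of the general idea of replacing an explicit vector-norm division with a scalar correction, the following Oja-type iteration solves a unit-norm constrained quadratic problem (finding the dominant eigenvector of R); it is an assumption-laden stand-in, not the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative unit-norm constrained problem: maximize w'Rw subject to
# ||w|| = 1, i.e. find the dominant eigenvector of R.
A = rng.normal(size=(5, 5))
R = A @ A.T   # symmetric positive semi-definite matrix

w = rng.normal(size=5)
w /= np.linalg.norm(w)
mu = 0.01
for _ in range(5000):
    g = R @ w                        # ascent direction of w'Rw
    # Scalar correction: subtracting the scalar (w'g) times w keeps
    # ||w|| near 1 using only scalar arithmetic, instead of dividing
    # the whole vector by its explicitly computed norm at every step.
    w = w + mu * (g - (w @ g) * w)

# Reference solution from an exact eigendecomposition.
v = np.linalg.eigh(R)[1][:, -1]
alignment = float(abs(w @ v))        # 1.0 means perfect agreement
```

The stationary points of this iteration are the eigenvectors of R, mirroring the abstract's point that a stationary-point analysis is what certifies that a simplified normalization still solves the constrained problem.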