Abstract: The objective of this study was to optimize the extraction conditions for phenolic compounds, total flavonoids, and antioxidant activity from the Deglet-Nour date variety. The extraction of active components from natural sources depends on different factors, and knowledge of the effects of the extraction parameters is useful both for optimizing the process and for predicting the extraction yield. The effects of the extraction variables, namely the type of solvent (methanol, ethanol and acetone) and the extraction time (1 h, 6 h, 12 h and 24 h), on the phenolics extraction yield were evaluated. Both extraction time and solvent type were shown to have a statistically significant influence on the extraction of phenolic compounds from the Deglet-Nour variety. The optimal conditions were methanol as the solvent and an extraction time of 6 hours, yielding 80.19 ± 6.37 mg GAE/100 g FW for TPC, 2.34 ± 0.27 mg QE/100 g FW for TFC, and 90.20 ± 1.29% antioxidant activity. According to the results obtained in this study, the Deglet-Nour variety can be considered a natural source of phenolic compounds with good antioxidant capacity.
Abstract: CO2 capture and storage/sequestration (CCS) is a key technology for addressing the global warming issue. This paper proposes an integrated model for the whole chain of CCS, from a power plant to a reservoir. The integrated model is further utilized to determine optimal operating conditions and study responses to various changes in input variables.
Abstract: This paper focuses on the operational and strategic planning decisions related to the quayside of container terminals. We introduce an integrated operational research (OR) and system dynamics (SD) approach to solve the Berth Allocation Problem (BAP) and the Quay Crane Assignment Problem (QCAP). A BAP-QCAP optimization modeling approach which considers practical aspects not studied before in the integration of BAP and QCAP is discussed. A conceptual SD model is developed to determine the long-term effect of optimization on system behavior factors such as resource utilization, port attractiveness, the number of incoming vessels, and port profits. The framework can be used for improving the operational efficiency of container terminals and for providing a strategic view after applying optimization.
Abstract: A new metaheuristic approach called the "randomized gravitational emulation search algorithm (RGES)" for solving large-size set covering problems has been designed. The algorithm is founded on introducing randomization into two of the four primary physical parameters, velocity and gravity. A new heuristic operator is introduced in the domain of RGES to maintain feasibility, specifically for the set covering problem, so as to yield good solutions. The performance of this algorithm has been evaluated on a large set of benchmark problems from the OR-Library. Computational results showed that the randomized gravitational emulation search heuristic is capable of producing high-quality solutions, and its performance compared with other existing heuristic algorithms is excellent in terms of solution quality.
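The abstract does not specify the feasibility-maintaining operator; a common approach for set covering is a greedy repair that adds the column with the best cost per newly covered element and then prunes redundant columns. The sketch below is one such repair under that assumption (all names are illustrative, not the paper's):

```python
def repair_cover(solution, costs, covers):
    """Greedily restore feasibility for a set covering solution.

    solution: set of selected column indices (modified in place)
    costs[j]: cost of column j
    covers[j]: set of elements covered by column j
    """
    universe = set().union(*covers)
    covered = set().union(*(covers[j] for j in solution)) if solution else set()
    while covered != universe:
        # pick the column with the best cost per newly covered element
        j = min(
            (j for j in range(len(covers)) if covers[j] - covered),
            key=lambda j: costs[j] / len(covers[j] - covered),
        )
        solution.add(j)
        covered |= covers[j]
    # drop redundant columns, costliest first
    for j in sorted(solution, key=lambda j: -costs[j]):
        if covered == set().union(*(covers[k] for k in solution if k != j)):
            solution.discard(j)
    return solution
```

A repair of this kind is typically invoked on each candidate produced by the metaheuristic before its fitness is evaluated.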
Abstract: A case study of the generation scheduling optimization
of the multi-hydroplants on the Yuan River Basin in China is reported
in this paper. Concerning the uncertainty of the inflows, the
long/mid-term generation scheduling (LMTGS) problem is solved by
a stochastic model in which the inflows are considered as stochastic
variables. For the short-term generation scheduling (STGS) problem, a
constraint violation priority is defined in case not all constraints are
satisfied. Given the stage-wise separability condition and the low dimensionality, the hydroplant-based operational region schedules (HBORS) problem is solved by dynamic programming (DP). The
coordination of LMTGS and STGS is presented as well. The
feasibility and the effectiveness of the models and solution methods
are verified by the numerical results.
Abstract: Nowadays, more engineering systems are using some
kind of Artificial Intelligence (AI) for the development of their
processes. Some well-known AI techniques include artificial neural
nets, fuzzy inference systems, and neuro-fuzzy inference systems
among others. Furthermore, many decision-making applications base
their intelligent processes on Fuzzy Logic, due to the Fuzzy Inference Systems' (FIS) capability to deal with problems that are
based on user knowledge and experience. Also, since users vary widely in their characteristics and generally provide uncertain data, this information can be used and properly processed by a FIS. To handle uncertainty and inexact system input
values, FIS normally use Membership Functions (MF) that represent
a degree of user satisfaction on certain conditions and/or constraints.
In order to define the parameters of the MFs, the knowledge from
experts in the field is very important. This knowledge defines the MF
shape to process the user inputs and through fuzzy reasoning and
inference mechanisms, the FIS can provide an "appropriate" output.
However, an important issue immediately arises: How can it be
assured that the obtained output is the optimum solution? How can it
be guaranteed that each MF has an optimum shape? A viable solution
to these questions is through the MFs parameter optimization. In this
paper a novel parameter optimization process is presented. The process for FIS parameter optimization consists of five simple steps that can be easily realized off-line. Here the proposed process of FIS parameter optimization is demonstrated by its
implementation on an Intelligent Interface section dealing with the
on-line customization / personalization of internet portals applied to
E-commerce.
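As background for the MF parameters such a process tunes, a triangular membership function is fully described by three numbers: its two feet and its peak. A minimal sketch (standard fuzzy-logic material, not the paper's specific MFs):

```python
def triangular_mf(x, a, b, c):
    """Degree of membership of x in a triangular MF with feet a, c and peak b.

    Optimizing the FIS amounts to searching over (a, b, c) for each MF.
    """
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge
```

For example, `triangular_mf(2.5, 0, 5, 10)` returns 0.5: the input sits halfway up the rising edge.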
Abstract: In the present study the efficiency of the Big Bang-Big Crunch (BB-BC) algorithm is investigated in discrete structural
design optimization. It is shown that a standard version of the BB-BC
algorithm is sometimes unable to produce reasonable solutions to
problems from discrete structural design optimization. Two
reformulations of the algorithm, which are referred to as modified
BB-BC (MBB-BC) and exponential BB-BC (EBB-BC), are
introduced to enhance the capability of the standard algorithm in
locating good solutions for steel truss and frame type structures,
respectively. The performance of the proposed algorithms is evaluated and compared against the standard version as well as several other algorithms over a set of practical design examples. In these
examples, steel structures are sized for minimum weight subject to
stress, stability and displacement limitations according to the
provisions of AISC-ASD.
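The standard BB-BC iteration alternates a "Big Crunch" (a fitness-weighted centre of mass of the population) with a "Big Bang" (new candidates scattered around that centre with a radius that shrinks over iterations). A minimal continuous sketch, assuming a positive objective to be minimized; the paper's discrete structural variant differs:

```python
import random

def bb_bc(f, bounds, n_pop=50, n_iter=200, alpha=1.0, seed=0):
    """Minimal continuous Big Bang-Big Crunch sketch (assumes f > 0)."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    dim = len(bounds)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_pop)]
    best, best_f = None, float("inf")
    for k in range(1, n_iter + 1):
        fit = [f(x) for x in pop]
        # Big Crunch: fitness-weighted centre of mass (smaller f => larger weight)
        w = [1.0 / (1e-12 + fi) for fi in fit]
        centre = [sum(wi * x[d] for wi, x in zip(w, pop)) / sum(w)
                  for d in range(dim)]
        i = min(range(n_pop), key=fit.__getitem__)
        if fit[i] < best_f:
            best, best_f = pop[i][:], fit[i]
        # Big Bang: scatter new candidates around the centre, radius ~ 1/k
        pop = [[min(hi[d], max(lo[d],
                centre[d] + rng.gauss(0, 1) * alpha * (hi[d] - lo[d]) / k))
                for d in range(dim)]
               for _ in range(n_pop)]
    return best, best_f
```

The modified and exponential variants in the abstract alter how this search radius decays, which is what "MBB-BC" and "EBB-BC" refer to.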
Abstract: The DNA microarray technology concurrently monitors the expression levels of thousands of genes during significant biological processes and across related samples. A better understanding of functional genomics is obtained by extracting the patterns hidden in gene expression data. This is handled by clustering, which reveals natural structures and identifies interesting patterns in the underlying data. In the proposed work, gene expression data are clustered using an Advanced Nelder-Mead (ANM) algorithm. The Nelder-Mead (NM) method is a derivative-free optimization method in which the vertices of a triangle are taken as the candidate solutions, and several operations are performed on this triangle to obtain a better result. In the proposed work, the reflection and expansion operations are eliminated and a new operation called spread-out is introduced. The spread-out operation increases the global search area and thus provides a better optimization result: it generates three points, and the best among them replaces the worst vertex. The experimental results are analyzed on optimization benchmark test functions and benchmark gene expression datasets. The results show that ANM outperforms NM on both benchmarks.
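The spread-out operation is the paper's contribution and is not fully specified above. For context, one step of the standard Nelder-Mead triangle update that ANM modifies (reflection and contraction only, expansion omitted) can be sketched as:

```python
def nelder_mead_step(simplex, f):
    """One standard Nelder-Mead step on a 2-D triangle.

    simplex: list of three 2-D points (modified in place and returned).
    """
    simplex.sort(key=f)                        # order vertices best..worst
    best, mid, worst = simplex
    centroid = [(b + m) / 2 for b, m in zip(best, mid)]
    reflected = [2 * c - w for c, w in zip(centroid, worst)]
    if f(reflected) < f(mid):                  # reflection improves: accept it
        simplex[2] = reflected
    else:                                      # otherwise contract toward centroid
        simplex[2] = [(c + w) / 2 for c, w in zip(centroid, worst)]
    return simplex
```

ANM, per the abstract, replaces the reflection/expansion moves here with the spread-out operation, keeping the best of three generated points in place of the worst vertex.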
Abstract: Nanofibers produced by electrospinning attract industrial and scientific attention due to their special characteristics such as long length, small diameter and high surface area. Applications of electrospun structures in nanotechnology include tissue scaffolds, fibers for drug delivery, composite reinforcement, chemical sensing, enzyme immobilization, membrane-based filtration, protective clothing, catalysis, solar cells, electronic devices and others. Many polymer and ceramic precursor nanofibers have been successfully electrospun with diameters ranging from 1 nm to several microns. The process is complex, and fiber diameter is influenced by various material, design and operating parameters. The objective of this work is to apply a genetic algorithm to the electrospinning parameters that have the most significant effect on nanofiber diameter, in order to determine the optimum parameter values before the experimental set-up. Effective factors including initial polymer concentration, initial jet radius, electrical potential, relaxation time, initial elongation, viscosity and nozzle-to-collector distance are considered to determine the finest diameter, which is selected by the user.
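A minimal real-coded genetic algorithm of the kind described can be sketched as follows, here on a placeholder objective; the actual fiber-diameter model, selection scheme and operators are not specified in the abstract:

```python
import random

def ga_minimize(f, bounds, pop_size=30, n_gen=60, mut_rate=0.2, seed=0):
    """Real-coded GA sketch: tournament selection, arithmetic crossover,
    gaussian mutation, single-elite survival."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    dim = len(bounds)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(pop_size)]
    for _ in range(n_gen):
        fit = [f(x) for x in pop]
        def tournament():
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if fit[i] < fit[j] else pop[j]
        children = []
        while len(children) < pop_size:
            a, b = tournament(), tournament()
            alpha = rng.random()
            # arithmetic crossover: random convex blend of the two parents
            child = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
            for d in range(dim):
                if rng.random() < mut_rate:
                    child[d] += rng.gauss(0, 0.1 * (hi[d] - lo[d]))
                    child[d] = min(hi[d], max(lo[d], child[d]))
            children.append(child)
        children[0] = pop[min(range(pop_size), key=fit.__getitem__)][:]  # elitism
        pop = children
    return min(pop, key=f)
```

In the paper's setting the chromosome would hold the listed process parameters (concentration, jet radius, potential, and so on) and `f` would return the predicted fiber diameter.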
Abstract: Unmanned Aerial Vehicles (UAVs) have gained tremendous importance, in both military and civilian domains, during the first decade of this century. In a UAV, the onboard computer (autopilot) autonomously controls the flight and navigation of the aircraft. Based on the aircraft's role and flight envelope, controllers ranging from basic to complex and sophisticated are used to stabilize the aircraft's flight parameters; these controllers constitute the UAV's autopilot system. Autopilot systems most commonly provide lateral and longitudinal control through Proportional-Integral-Derivative (PID) controllers or phase-lead or lag compensators. Various techniques are commonly used to tune the gains of these controllers, including in-flight step-by-step tuning and software-in-the-loop or hardware-in-the-loop tuning methods. Subsequently, numerous in-flight tests are required to actually fine-tune these gains. However, an optimization-based tuning of these PID controllers or compensators, as presented in this paper, can greatly reduce the need for in-flight tuning and substantially reduce the risks and cost involved in flight testing.
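For reference, the discrete PID law whose gains such optimization-based tuning searches over can be sketched as follows; the gain values and any surrounding plant model are placeholders, not the paper's:

```python
class PID:
    """Minimal discrete PID controller; (kp, ki, kd) are the tuned gains."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # accumulate error
        derivative = (error - self.prev_error) / self.dt    # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

An optimizer would wrap a simulated step response of the aircraft model in a scalar cost (for instance integrated absolute error) and search over (kp, ki, kd), replacing much of the in-flight trial and error.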
Abstract: In today's world, the efficient utilization of wood resources is increasingly on the minds of forest owners. Ensuring an efficient harvest of these resources is a very complex challenge. This is one of the areas the project "Virtual Forest II" addresses. Its core is a database with data about forests containing
approximately 260 million trees located in North Rhine-Westphalia
(NRW). Based on this data, tree growth simulations and wood
mobilization simulations can be conducted. This paper focuses on the
latter. It describes a discrete-event simulation with an attached 3-D real-time visualization that simulates timber harvest using trees
from the database with different crop resources. This simulation can
be displayed in 3-D to show the progress of the wood crop. All the
data gathered during the simulation is presented as a detailed
summary afterwards. This summary includes cost-benefit
calculations and can be compared to those of previous runs to
optimize the financial outcome of the timber harvest by exchanging
crop resources or modifying their parameters.
Abstract: Using ab initio theoretical calculations, we present an analysis of the fragmentation process. The analysis is performed in two steps. The first step is the calculation of fragmentation energies by ab initio methods; the second is the application of these energies to a kinetic description of the process. The energies of the fragments are presented in this paper. The kinetics of the fragmentation process can be described by numerical models, and the method for this kinetic analysis is described here. The resulting composition of the fragmentation products will be calculated in future work, and the model results can then be compared to the fragment concentrations obtained from the mass spectrum.
Abstract: A new reverse-phase high-performance liquid chromatography (RP-HPLC) method with a fluorescence detector (FLD) was developed and optimized for norfloxacin determination in human plasma. The mobile phase specifications, the extraction method, and the excitation and emission wavelengths were varied for optimization. The HPLC system contained a reverse-phase C18 (5 μm, 4.6 mm × 150 mm) column with the FLD operated at 330 nm excitation and 440 nm emission. The optimized mobile phase consisted of 14% acetonitrile in buffer solution; the aqueous phase, prepared by mixing 2 g of citric acid, 2 g of sodium acetate and 1 mL of triethylamine in 1 L of Milli-Q water, was run at a flow rate of 1.2 mL/min. The standard curve was linear over the range tested (0.156–20 μg/mL) with a coefficient of determination of 0.9978. Aceclofenac sodium was used as the internal standard. A detection limit of 0.078 μg/mL was achieved. The run time was set at 10 minutes, and the retention time of norfloxacin was 0.99 min, which shows the speed of this method of analysis. The present assay showed good accuracy, precision and sensitivity for norfloxacin determination in human plasma with a new internal standard, and it can be applied to the pharmacokinetic evaluation of norfloxacin tablets after oral administration in humans.
Abstract: This paper presents a particle swarm optimization algorithm with particle reduction for global optimization problems. Particle swarm optimization is a multi-point search algorithm, inspired by the collective motion of bird flocks or fish schools, that finds a best solution using multiple particles. It is flexible enough to adapt to a wide range of optimization problems. However, when an objective function has many local minima, a particle may fall into one of them. To avoid such local minima, a large number of particles are initially prepared and their positions are updated by particle swarm optimization. The particles are then sequentially reduced, based on their evaluation values, until a predetermined number remain, and the optimization continues until the termination condition is met. To show the effectiveness of the proposed algorithm, we examine the minima found on test functions and compare the results with existing algorithms. Furthermore, the influence of the initial number of particles on the best value found by our algorithm is discussed.
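A minimal sketch of PSO with particle reduction follows, assuming the worst particle (by personal-best value) is dropped on a fixed schedule until the predetermined count is reached; the paper's actual reduction rule may differ:

```python
import random

def pso_with_reduction(f, bounds, n_start=40, n_min=10, n_iter=100, seed=0):
    """PSO sketch that periodically drops the worst particle (schedule assumed)."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    dim = len(bounds)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_start)]
    vel = [[0.0] * dim for _ in range(n_start)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_start), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for t in range(n_iter):
        for i in range(len(pos)):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
        # reduction: every 10 iterations drop the worst particle until n_min remain
        if t % 10 == 9 and len(pos) > n_min:
            j = max(range(len(pos)), key=pbest_f.__getitem__)
            for arr in (pos, vel, pbest, pbest_f):
                arr.pop(j)
    return gbest, gbest_f
```

Starting wide and shedding weak particles concentrates the later, cheaper iterations on the promising regions, which is the idea the abstract describes.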
Abstract: The motion planning procedure described in this paper has been developed in order to eliminate or reduce the residual vibrations of electromechanical positioning systems, without augmenting the motion time (usually imposed by production requirements), nor introducing overtime for vibration damping. The proposed technique is based on a suitable choice of the motion law assigned to the servomotor that drives the mechanism. The reference profile is defined by a Bezier curve, whose shape can be easily changed by modifying some numerical parameters. By means of an optimization technique these parameters can be modified without altering the continuity conditions imposed on the displacement and on its time derivatives at the initial and final time instants.
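The Bezier reference profile can be evaluated with de Casteljau's algorithm; the interior control points are the numerical parameters the optimization adjusts, while the control points near the endpoints pin down the boundary conditions on displacement and its derivatives. A 1-D sketch (displacement versus normalized time):

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at t in [0, 1] by de Casteljau's algorithm.

    control_points: scalar control values (e.g. displacement samples).
    """
    pts = list(control_points)
    while len(pts) > 1:
        # repeatedly lerp adjacent points until one remains
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]
```

For a degree-n Bezier curve the derivative at t = 0 is n·(P1 − P0), so repeating the first control point gives zero initial velocity; this is how shape changes can be made without violating the continuity conditions at the initial and final instants.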
Abstract: Microstrip lines, widely used for good reason, are
broadband in frequency and provide circuits that are compact and
light in weight. They are generally economical to produce since they
are readily adaptable to hybrid and monolithic integrated circuit (IC)
fabrication technologies at RF and microwave frequencies. Although the existing EM simulation models used for the synthesis and analysis of microstrip lines are reasonably accurate, they are
computationally intensive and time consuming. Neural networks have recently gained attention as fast and flexible vehicles for microwave modeling, simulation and optimization. After learning and abstracting from microwave data, through a process called training, neural network models are used during microwave design to provide instant answers to the task learned. This paper presents simple and
accurate ANN models for the synthesis and analysis of microstrip
lines to more accurately compute the characteristic parameters and
the physical dimensions respectively for the required design
specifications.
Abstract: The quality of short-term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to trial and error. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimum large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a random optimization method based on swarm intelligence with a powerful ability for global optimization; employing PSO for the design and training of ANNs allows the ANN architecture and parameters to be easily optimized. The proposed method is applied to STLF for the local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed method optimized by PSO can increase the learning speed of the network and improve the forecasting precision compared with the conventional back-propagation (BP) method. Moreover, it is not only simple to compute but also practical and effective: it provides greater accuracy in many cases and consistently gives lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
Abstract: The aim of the current work is to present a comparison among three popular optimization methods for the inverse elastostatics problem (IESP) of flaw detection within a solid. In more detail, the performance of a simulated annealing, a Hooke & Jeeves, and a sequential quadratic programming algorithm was studied in the test case of one circular flaw in a plate, solved by both the boundary element method (BEM) and the finite element method (FEM). The proposed optimization methods use a cost function based on the displacements of the static response. The methods were ranked according to the number of iterations required to converge and their ability to locate the global optimum. Hence, a clear impression of the performance of the aforementioned algorithms in flaw identification problems was obtained. Furthermore, the coupling of BEM or FEM with these optimization methods was investigated in order to track differences in their performance.
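Of the three methods compared, Hooke & Jeeves pattern search is the simplest to sketch: exploratory coordinate moves, then a pattern move in the improving direction, with the step shrunk when no move helps. A minimal version, with the paper's displacement-based cost function replaced by a placeholder:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal Hooke & Jeeves pattern search for unconstrained minimization."""
    def explore(base, s):
        # try +/- s along each coordinate, keeping any improvement
        x = base[:]
        for d in range(len(x)):
            for delta in (s, -s):
                trial = x[:]
                trial[d] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = x0[:]
    for _ in range(max_iter):
        if step < tol:
            break
        new = explore(x, step)
        if f(new) < f(x):
            # pattern move: jump ahead in the improving direction, re-explore
            pattern = [2 * n - o for n, o in zip(new, x)]
            cand = explore(pattern, step)
            x = cand if f(cand) < f(new) else new
        else:
            step *= shrink
    return x
```

In the flaw-detection setting, `x` would encode the flaw's position and radius and `f` would measure the mismatch between computed (BEM/FEM) and observed static displacements.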
Abstract: The back-propagation algorithm calculates the weight
changes of an artificial neural network, and a two-term algorithm
with a dynamically optimal learning rate and a momentum factor
is commonly used. Recently the addition of an extra term, called a
proportional factor (PF), to the two-term BP algorithm was proposed.
The third term increases the speed of the BP algorithm. However,
the PF term also reduces the convergence of the BP algorithm, and
optimization approaches for evaluating the learning parameters are
required to facilitate the application of the three-term BP algorithm.
This paper considers the optimization of the new back-propagation
algorithm by using derivative information. A family of approaches
exploiting the derivatives with respect to the learning rate, momentum
factor and proportional factor is presented. These autonomously
compute the derivatives in the weight space, by using information
gathered from the forward and backward procedures. The three-term
BP algorithm and the optimization approaches are evaluated using
the benchmark XOR problem.
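The three-term update adds the proportional-factor term to the familiar gradient and momentum terms. The exact form of the PF term varies in the literature; the sketch below assumes it is proportional to the output error, which is an assumption, not this paper's definition:

```python
def three_term_update(w, grad, prev_dw, error, lr=0.1, momentum=0.9, pf=0.01):
    """Three-term BP weight update: gradient + momentum + proportional term.

    The PF term here is taken proportional to the output error (an assumed
    form; published formulations differ). Returns (new_w, dw) so dw can be
    fed back as prev_dw on the next step.
    """
    dw = [-lr * g + momentum * pdw + pf * error
          for g, pdw in zip(grad, prev_dw)]
    return [wi + d for wi, d in zip(w, dw)], dw
```

The derivative-based approaches the abstract describes would then adapt `lr`, `momentum` and `pf` online rather than keeping them fixed as in this sketch.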
Abstract: In the present work, the performance of particle swarm optimization and the genetic algorithm is compared on a typical geometry design problem. The design maximizes the heat transfer rate from a given fin volume, and the analysis presumes a linear temperature distribution along the fin. The fin profile is generated using B-spline curves and is controlled by changing the control point coordinates. An inverse method is applied to find the fin geometry that yields the linear temperature distribution along the fin corresponding to the optimum design. Population size, iteration count, and time to convergence are used to measure efficiency. The results show that particle swarm optimization is the more efficient method for this geometry optimization.