Abstract: This paper focuses on the robust design and optimization
of industrial production wastes. Past literature was reviewed to support a
case study of Clamason Industries Limited (CIL), a leading ladder-tops
manufacturer. A painstaking study of the firm's practices on the shop
floor revealed that over-production, waiting time, excess inventory,
and defects are the major wastes impeding its progress and
profitability. Design-Expert 8 software was used to apply Taguchi
robust design and response surface methodology in order to model,
analyse and optimise the cost of wastes in CIL. Waiting time and over-production
rank first and second in contributing to the cost of wastes
in CIL. For minimal waste cost, the control factors of over-production,
waiting time, defects and excess inventory must be set at
0.30, 390.70, 4 and 55.70 respectively for CIL. The optimal value of
the cost of wastes for the months studied was 22.3679. Finally, it was
recommended that, to enhance profitability and customer satisfaction,
the company adopt Shigeo Shingo's Single Minute Exchange of Dies
(SMED), which would immediately tackle the waste of waiting by
drastically reducing setup time.
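The Taguchi analysis the abstract applies treats waste cost as a smaller-the-better response. As a minimal illustration of the smaller-the-better signal-to-noise ratio such an analysis relies on, the sketch below compares two hypothetical factor settings; all figures are invented, and the paper itself used Design-Expert software rather than hand-rolled code.

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi smaller-the-better signal-to-noise ratio:
    SN = -10 * log10(mean(y_i^2)). A larger SN means a lower,
    more consistent waste cost."""
    mean_sq = sum(y * y for y in responses) / len(responses)
    return -10 * math.log10(mean_sq)

# Hypothetical waste-cost replicates for two factor settings.
setting_a = [24.1, 25.3, 23.8]
setting_b = [22.5, 22.9, 22.4]

sn_a = sn_smaller_is_better(setting_a)
sn_b = sn_smaller_is_better(setting_b)
# The setting with the larger SN ratio is preferred; here that is
# setting_b, whose replicates are both lower and tighter.
assert sn_b > sn_a
```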
Abstract: This paper investigates the optimization problem of
multi-product aggregate production planning (APP) with fuzzy data.
From a comprehensive viewpoint of conserving the fuzziness of input
information, this paper proposes a method that can completely
describe the membership function of the performance measure. The
idea is based on the well-known Zadeh's extension principle, which
plays an important role in fuzzy set theory. In the proposed solution
procedure, a pair of mathematical programs parameterized by
possibility level α is formulated to calculate the bounds of the
optimal performance measure at α. Then the membership function of
the optimal performance measure is constructed by enumerating
different values of α. Solutions obtained from the proposed method
contain more information and offer a better chance of achieving a
feasible disaggregate plan. This is helpful to the decision-maker in
practical applications.
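The pair of parametric programs at possibility level α can be illustrated with a toy example: for a linear objective with one triangular fuzzy coefficient, the bounds of the optimal value at level α come from solving the crisp problem at the endpoints of the coefficient's α-cut. Everything below (the polytope, the fuzzy number, the α levels) is an invented illustration, not the paper's APP model.

```python
def solve_lp(c1, c2):
    """Maximize c1*x1 + c2*x2 over the polytope
    x1 <= 6, x2 <= 6, x1 + x2 <= 10, x >= 0,
    by enumerating its vertices (exact for this tiny example)."""
    vertices = [(0, 0), (6, 0), (0, 6), (6, 4), (4, 6)]
    return max(c1 * x1 + c2 * x2 for x1, x2 in vertices)

def alpha_cut_bounds(alpha, c2=3.0):
    """Pair of programs at possibility level alpha: the unit profit c1
    is triangular (2, 3, 4), so its alpha-cut is [2 + alpha, 4 - alpha].
    Monotonicity in c1 lets us evaluate only the interval endpoints."""
    lower = solve_lp(2 + alpha, c2)
    upper = solve_lp(4 - alpha, c2)
    return lower, upper

# Enumerating alpha levels traces the membership function of the
# optimal objective value.
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut_bounds(alpha)
    print(f"alpha={alpha:.1f}: optimal objective in [{lo:.1f}, {hi:.1f}]")
```

At α = 1 the interval collapses to a single crisp value, as the extension principle requires.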
Abstract: The goal of this paper is to develop a model to
integrate “pricing” and “advertisement” for short life cycle products,
such as branded fashion clothing products. To achieve this goal, we
apply the concept of “Dynamic Pricing”. There are two classes of
advertisements, for the brand (regardless of product) and for a
particular product. Advertising the brand affects the demand and
price of all the products. Thus, the model considers all these products
in relation to each other. We develop two different methods to
integrate both types of advertisement and pricing. The first model is
developed within the framework of dynamic programming. However,
due to the complexity of the model, this method is not applicable
to large-size problems. Therefore, we develop another method,
called the hierarchical approach, which is capable of handling real-world
problems. Finally, we show the accuracy of this method, both
theoretically and by simulation.
Abstract: A three-dimensional thermal model of Nd:YAG laser
full-penetration welding in transparent mode was analysed for DP600 alloy
steel of 1.25 mm thickness with a gap of 0.1 mm. Three models studied
the influence of temperature-dependent thermal properties, of
temperature-independent properties, and of the peak value of specific heat
at the phase-transformation temperature, AC1, on the transient
temperature. Another seven models studied the influence of
discretization, i.e. meshing, on the temperature distribution in the welded plate.
It is shown that, for the effects of thermal properties, errors of less than
4% in the maximum temperature in the FZ and HAZ were identified. The
minimum discretization is at least one third of an increment per
radius for temporal discretization, while the spatial discretization
requires two elements per radius and four elements through the thickness
of the assembled plate; these represent the minimum modeling
requirements for laser welding in order to keep
errors below 5% compared with a fine mesh.
Abstract: This paper proposes a novel game theoretical
technique to address the problem of data object replication in
large-scale distributed computing systems. The proposed technique draws
inspiration from computational economic theory and employs the
extended Vickrey auction. Specifically, players in a non-cooperative
environment compete for server-side scarce memory space to
replicate data objects so as to minimize the total network object
transfer cost, while maintaining object concurrency. Optimization of
such a cost in turn leads to load balancing, fault-tolerance and
reduced user access time. The method is experimentally evaluated
against four well-known techniques from the literature: branch and
bound, greedy, bin-packing and genetic algorithms. The experimental
results reveal that the proposed approach outperforms the four
techniques in both the execution time and solution quality.
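The extended Vickrey auction the abstract mentions builds on the classic sealed-bid second-price mechanism, in which the highest bidder wins but pays the second-highest bid, making truthful bidding a dominant strategy. A minimal sketch of that base mechanism with invented server names and bids follows; the paper's extension and its concurrency handling are not reproduced here.

```python
def vickrey_winner(bids):
    """Sealed-bid second-price (Vickrey) auction: the highest bidder
    wins the scarce memory slot but is charged only the second-highest
    bid, which removes the incentive to shade bids."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical bids: each server reports the network object-transfer
# cost it would save by replicating the data object locally.
bids = {"server_a": 120.0, "server_b": 95.0, "server_c": 140.0}
winner, price = vickrey_winner(bids)
# server_c wins the slot and pays the second-highest valuation, 120.0.
assert (winner, price) == ("server_c", 120.0)
```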
Abstract: This paper presents the design methods and neutronic characterization of the reactivity control system in the large power unit of the Generation IV Gas-cooled Fast Reactor – GFR2400. The reactor core is based on a carbide pin fuel type with refractory metallic liners applied to enhance the fission-product retention of the SiC cladding. The heterogeneous design optimization of the control rod is presented, and the resulting rod worths and their interferences in the core are evaluated. In addition, the idea of reflector removal as an additional reactivity-management option is investigated and briefly described.
Abstract: As the textile industry is the second largest industry
in Egypt, and as small and medium-sized enterprises (SMEs) make up
a great portion of it, it is essential to apply the
concept of Cleaner Production for the purpose of reducing pollution.
In order to achieve this goal, a case study concerned with eco-friendly
stone-washing of jeans garments was investigated. A raw
material-substitution option was adopted whereby the toxic
potassium permanganate and sodium sulfide were replaced by the
environmentally compatible hydrogen peroxide and glucose
respectively where the concentrations of both replaced chemicals
together with the operating time were optimized. In addition, a
process-rationalization option involving four additional processes
was investigated. By means of criteria such as product quality,
effluent analysis, mass and heat balance; and cost analysis with the
aid of a statistical model, a process optimization treatment revealed
that the superior process optima were 50%, 0.15% and 50min for
H2O2 concentration, glucose concentration and time, respectively.
With these values the superior process ought to reduce the annual
cost by about EGP 105 relative to the currently used conventional
method.
Abstract: In this paper we present an autoregressive model with
neural network modeling, trained with the standard error backpropagation
algorithm, in order to predict the gross domestic
product (GDP) growth rate of four countries. Specifically, we propose
a kind of weighted regression, which can be used for econometric
purposes, where the initial inputs are multiplied by the neural
network's final optimum weights from the input-hidden layer after the
training process. The forecasts are compared with those of the
ordinary autoregressive model, and we conclude that the proposed
regression's forecasts significantly outperform those of the
autoregressive model in the out-of-sample period. The idea behind
this approach is to propose a parametric regression with weighted
variables in order to test for the statistical significance and the
magnitude of the estimated autoregressive coefficients and
simultaneously to estimate the forecasts.
Abstract: Recently, distributed generation technologies have received much attention for the potential energy savings and reliability assurances that might be achieved as a result of their widespread adoption. Fueling the attention have been the possibilities of international agreements to reduce greenhouse gas emissions, electricity sector restructuring, high power-reliability requirements for certain activities, and concern about easing transmission and distribution capacity bottlenecks and congestion. It is therefore necessary to investigate the impact of these kinds of generators on distribution feeder reconfiguration. This paper presents an approach for distribution reconfiguration considering Distributed Generators (DGs). The objective function is the summation of electrical power losses. A Tabu search optimization is used to solve the optimal operation problem. The approach is tested on a real distribution feeder.
Abstract: In this paper we consider a nonlinear feedback control
called augmented automatic choosing control (AACC) using the
gradient optimization automatic choosing functions for nonlinear
systems. Constant terms which arise from sectionwise linearization
of a given nonlinear system are treated as coefficients of a stable
zero dynamics. Parameters included in the control are suboptimally
selected by expanding a stable region in the sense of Lyapunov
with the aid of a genetic algorithm. This approach is applied to
a field excitation control problem of a power system to demonstrate
the effectiveness of the AACC. Simulation results show that the new
controller improves performance remarkably well.
Abstract: This study proposes a multi-response surface
optimization problem (MRSOP) for determining the proper choices
of a process parameter design (PPD) decision problem in a noisy
environment of a grease-position process in the electronics industry.
The proposed model attempts to maximize dual process responses
based on the mean of parts between failures on the left and right processes. The
conventional modified simplex method and its hybridization of the
stochastic operator from the hunting search algorithm are applied to
determine the proper levels of controllable design parameters
affecting the quality performances. A numerical example
demonstrates the feasibility of applying the proposed model to the
PPD problem via two iterative methods. Its advantages are also
discussed. Numerical results demonstrate that the hybridization is
superior to the conventional method. In this study, the
mean of parts between failures on the left and right lines improves by
approximately 39.51%. All experimental data presented in this
research have been normalized to disguise actual performance
measures as raw data are considered to be confidential.
Abstract: In this paper we present a new approach to deal with
image segmentation. The fact that a single segmentation result does not
generally allow a higher-level process to take into account all the
elements included in the image has motivated the consideration of
image segmentation as a multiobjective optimization problem. The
proposed algorithm adopts a split/merge strategy that uses the result
of the k-means algorithm as input for a quantum evolutionary
algorithm to establish a set of non-dominated solutions. The
evaluation is made simultaneously according to two distinct features:
intra-region homogeneity and inter-region heterogeneity.
Experiments with the new approach on natural images demonstrate
its efficiency and usefulness.
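Treating segmentation as a multiobjective problem means maintaining a set of non-dominated solutions under the two criteria. A minimal sketch of Pareto filtering with invented candidate scores follows; the paper's split/merge strategy and quantum evolutionary algorithm are not reproduced.

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every objective
    (here: both maximized) and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def non_dominated(solutions):
    """Keep the candidates no other candidate dominates. Note that
    dominates(s, s) is always False, so self-comparison is harmless."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions)]

# Hypothetical candidate segmentations scored by
# (intra-region homogeneity, inter-region heterogeneity).
candidates = [(0.9, 0.2), (0.6, 0.7), (0.5, 0.6), (0.3, 0.9)]
front = non_dominated(candidates)
# (0.5, 0.6) is dominated by (0.6, 0.7); the rest form the front.
assert front == [(0.9, 0.2), (0.6, 0.7), (0.3, 0.9)]
```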
Abstract: Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMIs). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control his EEG through imaginary mental tasks enables him to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on the Elman recurrent neural network and the functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; results are also compared with the conventional back-propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance it is observed that the BP algorithm has a higher average classification rate of 93.5%, while the PSO algorithm has better training time and maximum classification. The proposed methods promise to provide a useful alternative general procedure for motor imagery classification.
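The abstract names particle swarm optimization as a training algorithm. As an illustration of the PSO mechanics only (not the paper's classifier training), here is a minimal global-best PSO minimizing a sphere function as a stand-in objective; all parameter values are illustrative assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Plain global-best PSO. In the paper's setting f would be the
    network's classification error over its weight vector; here we
    minimize a simple sphere function instead."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso_minimize(sphere, dim=3)
# The swarm converges close to the origin, the sphere's minimum.
assert best_val < 1e-2
```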
Abstract: An edge-based local search algorithm, called ELS, is proposed for the maximum clique problem (MCP), a well-known combinatorial optimization problem. ELS is a two-phase local search method that effectively finds near-optimal solutions for the MCP. A parameter 'support' of vertices defined in the ELS greatly reduces the number of random selections among vertices, as well as the number of iterations and the running time. Computational results on BHOSLIB and DIMACS benchmark graphs indicate that ELS is capable of achieving state-of-the-art performance for the maximum clique problem with reasonable average running times.
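The edge-based two-phase search and the 'support' measure belong to the paper itself and are not reproduced here. As a rough stand-in showing what local search for the maximum clique problem looks like, the following is a minimal multi-restart greedy sketch on an invented toy graph.

```python
import random

def greedy_clique(adj, order):
    """Grow a clique by scanning vertices in the given order and
    adding each vertex adjacent to every current member."""
    clique = []
    for v in order:
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def local_search_clique(adj, restarts=50, seed=0):
    """Multi-restart greedy search: each restart scans the vertices
    in a random order and keeps the largest clique found. This is a
    simplified stand-in for ELS, not the algorithm from the paper."""
    rng = random.Random(seed)
    vertices = list(adj)
    best = []
    for _ in range(restarts):
        rng.shuffle(vertices)
        cand = greedy_clique(adj, vertices)
        if len(cand) > len(best):
            best = cand
    return best

# Toy graph: triangle {0,1,2}, pendant vertex 3, and 4-clique {4,5,6,7}.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2},
    4: {5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6},
}
best = local_search_clique(adj)
assert set(best) == {4, 5, 6, 7}
```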
Abstract: A new deployment of two multiple criteria decision
making (MCDM) techniques, Simple Additive Weighting
(SAW) and the Technique for Order Preference by Similarity to
Ideal Solution (TOPSIS), for portfolio allocation is demonstrated in
this paper. Rather than exclusive reference to mean and variance as in
the traditional mean-variance method, the criteria used in this
demonstration are the first four moments of the portfolio distribution.
Each asset is evaluated based on its marginal impacts to portfolio
higher moments that are characterized by trapezoidal fuzzy numbers.
Then centroid-based defuzzification is applied to convert fuzzy
numbers to the crisp numbers by which SAW and TOPSIS can be
deployed. Experimental results suggest that these
MCDM approaches are similarly efficient at selecting dominant assets for an optimal
portfolio under higher moments. The proposed approaches allow
investors to flexibly adjust their risk preferences regarding higher
moments via different schemes adapted to various (from
conservative to risky) kinds of investors. The other significant
advantage is that, compared to the mean-variance analysis, the
portfolio weights obtained by SAW and TOPSIS are consistently
well-diversified.
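The pipeline described above (trapezoidal fuzzy impacts, centroid defuzzification, then ranking) can be sketched as follows. This is a minimal illustration with invented numbers and only two assets and two moment criteria, using the standard centroid formula for a trapezoidal fuzzy number (a, b, c, d) and a plain TOPSIS ranking; it is not the paper's data or full method.

```python
import math

def trapezoid_centroid(a, b, c, d):
    """Centroid defuzzification of a trapezoidal fuzzy number (a,b,c,d)."""
    if d + c == a + b:  # degenerate (crisp) number
        return (a + d) / 2
    return ((d * d + c * c + d * c) - (a * a + b * b + a * b)) / (3 * (d + c - a - b))

def topsis(matrix, weights, benefit):
    """Score alternatives by relative closeness to the ideal solution.
    matrix[i][j]: crisp score of alternative i on criterion j;
    benefit[j]: True if criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical marginal impacts of two assets on (mean, variance),
# given as trapezoidal fuzzy numbers and defuzzified before ranking.
fuzzy = [[(0.05, 0.06, 0.08, 0.09), (0.01, 0.02, 0.02, 0.03)],
         [(0.02, 0.03, 0.03, 0.04), (0.03, 0.04, 0.06, 0.07)]]
crisp = [[trapezoid_centroid(*cell) for cell in row] for row in fuzzy]
scores = topsis(crisp, weights=[0.5, 0.5], benefit=[True, False])
# Asset 0 has the higher mean impact and lower variance impact,
# so it ranks first.
assert scores[0] > scores[1]
```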
Abstract: The main aim of Supply Chain Management (SCM) is
to produce, distribute and deliver goods and equipment to the
right location, at the right time and in the right amount to satisfy
customers with minimum waste of time and cost. Implementing techniques that
reduce project time and cost, and improve productivity and
performance is very important. Emerging technologies such as the
Radio Frequency Identification (RFID) are now making it possible to
automate supply chains in real time, making them more
efficient than the simple supply chains of the past at tracing and
monitoring goods and products and capturing data on movements of
goods and other events. This paper considers the concepts, components
and characteristics of RFID technology, concentrating on warehouse
and inventory management. Additionally, the use of RFID for
improving information management in the supply chain is
discussed. Finally, the installation of this technology and its
results for warehouse and inventory management and
business development are presented.
Abstract: Both the minimum energy consumption and
smoothness, which is quantified as a function of jerk, are generally
needed in many dynamic systems such as the automobile and the
pick-and-place robot manipulator that handles fragile equipment.
Nevertheless, many researchers concentrate solely on either
minimum energy consumption or the minimum-jerk
trajectory. This paper proposes a simple yet effective approach
that combines the minimum-energy and indirect minimum-jerk
objectives in designing the time-dependent system, yielding an
alternative optimal solution. Extremal solutions for the cost functions
of the minimum energy, the minimum jerk, and their combination
are found using dynamic optimization methods together
with numerical approximation. This allows us to simulate
and compare, visually and statistically, the time histories of state inputs
employed by the combined minimum energy-and-jerk designs. The
numerical solutions of the minimum direct-jerk problem and the
combined problem are exactly the same; the solutions of the
minimum-energy problem alone yield a similar solution, especially
in terms of tendency.
Abstract: This paper describes an automatic algorithm to restore
the shape of three-dimensional (3D) left ventricle (LV) models created
from magnetic resonance imaging (MRI) data using a geometry-driven
optimization approach. Our basic premise is to restore the LV shape
such that the LV epicardial surface is smooth after the restoration. A
geometrical measure known as the Minimum Principle Curvature (κ2)
is used to assess the smoothness of the LV. This measure is used to
construct the objective function of a two-step optimization process.
The objective of the optimization is to achieve a smooth epicardial
shape by iterative in-plane translation of the MRI slices.
Quantitatively, this yields a minimum sum of the magnitudes
of κ2 where κ2 is negative. A limited-memory quasi-Newton algorithm,
L-BFGS-B, is used to solve the optimization problem. We tested our
algorithm on an in vitro theoretical LV model and 10 in vivo
patient-specific models which contain significant motion artifacts. The
results show that our method is able to automatically restore the shape
of LV models back to smoothness without altering the general shape of
the model. The magnitudes of in-plane translations are also consistent
with existing registration techniques and experimental findings.
Abstract: The unanticipated brittle fracture of connections of the
steel moment-resisting frame (SMRF) occurred in the 1994 Northridge
earthquake. Since then, research on the vulnerability of
connections in existing SMRFs and on the rehabilitation of those
buildings has been conducted. This paper suggests a performance-based
optimal seismic retrofit technique using connection upgrade. For
optimal design, a multi-objective genetic algorithm (NSGA-II) is used.
One of the two objective functions is to minimize the initial cost, and
the other is to minimize the lifetime seismic damage
cost. The optimization proposed in this paper is performed while
satisfying a specified performance objective based on FEMA 356. The
nonlinear static analysis is performed for structural seismic
performance evaluation. A numerical example of SAC benchmark
SMRF is provided using the performance-based optimal seismic
retrofit technique proposed in this paper.
Abstract: Fast depth estimation from binocular vision is often
desired for autonomous vehicles, but most algorithms cannot easily
be put into practice because of their high computational cost. We present an
image-processing technique that can rapidly estimate a depth image from
binocular vision images. By finding the lines that represent the
best-matched areas in the disparity space image, the depth can be
estimated. When detecting these lines, an edge-emphasizing filter is
used. The final depth estimate is produced after a smoothing
filter. Our method is a compromise between local methods and global
optimization.