Abstract: The check-in area is one of the busiest sections of an airport terminal at certain periods, and passengers are subjected to queues and delays during the check-in process. These delays and queues are due to constraints on the capacity of the service facilities. In this project, the airport terminal is decomposed into several check-in areas. The airport check-in scheduling problem requires both a deterministic (integer programming) and a stochastic (simulation) approach. Integer programming formulations are provided to minimize the total number of counters in each check-in area under the realistic constraints that counters for the same flight must be adjacent and that the desired number of open counters in each area remains fixed during check-in operations. Using simulation, the airport system can be modeled to study the effects of various parameters such as the number of passengers on a flight and the counter opening and closing times.
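As an illustration of the adjacency requirement described above, the following minimal Python sketch greedily assigns each flight a contiguous block of counters that does not clash with flights whose check-in periods overlap; the flight data, the greedy rule and the function name are invented for illustration and are not the paper's integer programming formulation.

```python
def assign_counters(flights):
    """flights: list of (name, counters_needed, open_time, close_time).

    Greedy sketch: process flights by opening time and place each flight's
    contiguous counter block at the lowest position that does not overlap
    the block of any flight open at the same time.
    """
    placed = []      # (block_start, block_end, open_time, close_time)
    assignment = {}  # flight name -> (first_counter, last_counter)
    for name, need, open_t, close_t in sorted(flights, key=lambda f: f[2]):
        # Blocks of flights whose check-in period overlaps this one.
        active = sorted((s, e) for s, e, o, c in placed
                        if o < close_t and open_t < c)
        start = 0
        for s, e in active:
            if start + need <= s:   # the gap before this block is big enough
                break
            start = max(start, e)   # otherwise skip past it
        placed.append((start, start + need, open_t, close_t))
        assignment[name] = (start, start + need - 1)
    total = max(e for _, e, _, _ in placed)  # counters needed in the area
    return assignment, total
```

Flights with disjoint check-in periods can reuse the same counters, which is what keeps the area total below the sum of per-flight demands.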
Abstract: A Markov decision process (MDP) based methodology is implemented in order to establish the optimal schedule that minimizes the cost. The formulation of the MDP problem is presented using information about the current state of the pipe, the improvement cost, the failure cost and a pipe deterioration model. The objective function and the detailed dynamic programming (DP) algorithm are modified because of the difficulty of implementing conventional DP approaches. The optimal schedule derived from the suggested model is compared to several policies via Monte Carlo simulation. The validity of the solution and the improvement in computational time are demonstrated.
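The MDP formulation described above can be illustrated with a minimal value-iteration sketch; the deterioration states, transition probability and all costs below are invented stand-ins for the paper's pipe deterioration model, and the paper's modified DP algorithm is not reproduced.

```python
def value_iteration(n_states=4, p_worsen=0.3, replace_cost=50.0,
                    failure_cost=200.0, gamma=0.9, tol=1e-8):
    """Minimize expected discounted cost over pipe deterioration states.

    State s is a discrete deterioration level (0 = as new); actions are
    0 = do nothing, 1 = replace (returns the pipe to state 0).
    """
    V = [0.0] * n_states
    policy = [0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            if s == n_states - 1:
                # In the worst state, doing nothing incurs the failure cost.
                keep = failure_cost + gamma * V[0]
            else:
                keep = gamma * ((1 - p_worsen) * V[s] + p_worsen * V[s + 1])
            rep = replace_cost + gamma * V[0]
            best = min(keep, rep)
            policy[s] = 0 if keep <= rep else 1
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V, policy
```

With these invented numbers the policy keeps a new pipe and replaces a failing one, which is the qualitative shape an optimal maintenance schedule takes.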
Abstract: The present work analyses different parameters of end milling to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices that determine the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimizing the machining parameters is a challenging job. For mating components, surface roughness becomes even more critical; these quality characteristics are highly correlated and are expected to be influenced, directly or indirectly, by the process parameters and their interaction effects on the process environment. In this work, the effects of the selected process parameters on surface roughness, and the subsequent setting of the parameters at their levels, have been accomplished by Taguchi's parameter design approach. The experiments have been performed according to the combinations of levels of the different process parameters suggested by an L9 orthogonal array. End milling of AISI D2 steel with a carbide tool has been investigated experimentally by varying the feed, speed and depth of cut, and the surface roughness has been measured using a surface roughness tester. Analyses of variance have been performed on the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process.
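The signal-to-noise analysis mentioned above relies on Taguchi's smaller-the-better ratio, since lower roughness is better; it can be computed as in this sketch, where the replicate roughness values in the usage are invented examples.

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better S/N ratio in dB:
    S/N = -10 * log10(mean of squared responses).
    Higher S/N indicates a better (smoother) parameter setting."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))
```

Each of the nine L9 runs would get one S/N value from its roughness replicates, and ANOVA on those values apportions the contribution of feed, speed and depth of cut.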
Abstract: Wind energy offers significant advantages, such as no fuel costs and no emissions from generation. However, wind energy sources are variable and non-dispatchable. The utility grid can accommodate the variability of wind in small proportions along with the daily load. At high penetration levels, however, the variability can severely impact the utility reserve requirements and the associated cost. In this paper, the impact of wind energy is evaluated in detail in formulating the total utility cost. The objective is to minimize the overall cost of generation while ensuring proper management of the load. The overall cost includes the curtailment cost, the reserve cost and the reliability cost, as well as any other penalty imposed by the regulatory authority. Different levels of wind penetration are explored and the cost impacts are evaluated. As the penetration level increases significantly, reliability becomes a critical question. Here we increase the penetration from wind while keeping the reliability factor within the acceptable limit specified by NERC. This paper uses an economic dispatch (ED) model to incorporate wind generation into the power grid. Power system costs are analyzed at various wind penetration levels using linear programming. The goal of this study is to show how increases in wind generation will affect power system economics.
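As a hedged illustration of dispatch with wind, the sketch below uses a simple merit-order rule: wind is taken first at zero marginal cost up to an assumed penetration cap, then conventional units are dispatched in order of increasing cost. The unit data, cap and function name are invented and do not reproduce the paper's linear programming model.

```python
def dispatch(load, wind_avail, units, wind_cap_frac=0.3):
    """Merit-order dispatch sketch.

    units: list of (capacity_MW, cost_per_MWh) conventional generators.
    Returns (wind_used, per-unit schedule in cost order, total cost).
    """
    # Wind serves the load first, limited by availability and a penetration cap.
    wind_used = min(wind_avail, wind_cap_frac * load, load)
    remaining = load - wind_used
    total_cost = 0.0
    schedule = []
    for cap, cost in sorted(units, key=lambda u: u[1]):  # cheapest first
        g = min(cap, remaining)
        schedule.append(g)
        total_cost += g * cost
        remaining -= g
    if remaining > 1e-9:
        raise ValueError("insufficient generation capacity")
    return wind_used, schedule, total_cost
```

Raising `wind_cap_frac` here mimics increasing penetration: energy cost falls, but in a full model the reserve and reliability cost terms the abstract mentions would grow.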
Abstract: In this paper, we consider the vehicle routing problem with a mixed fleet of conventional and heterogeneous electric vehicles and time-dependent charging costs, denoted VRP-HFCC, in which a set of geographically scattered customers must be served by a mixed fleet of vehicles composed of a heterogeneous fleet of Electric Vehicles (EVs), having different battery capacities and operating costs, and Conventional Vehicles (CVs). We include the possibility of charging EVs at the available charging stations along the routes in order to serve all customers. Each charging station offers charging service with a known charger technology and time-dependent charging costs. Charging stations are also subject to operating time window constraints. EVs are not necessarily compatible with all available charging technologies, and partial charging is allowed. Intermittent charging at the depot is also allowed, provided that constraints related to the electricity grid are satisfied.
The objective is first to minimize the number of employed vehicles and then to minimize the total travel and charging costs.
In this study, we present a Mixed Integer Programming model and develop a Charging Routing Heuristic and a Local Search Heuristic based on the Inject-Eject routine with different insertion methods. All heuristics are tested on real data instances.
Abstract: The ESPRIT-TLS method appears to be a good choice for high-resolution fault detection in induction machines. It is highly effective in identifying frequencies and amplitudes. However, it has a high computational complexity, which hinders its implementation in real-time fault diagnosis. To avoid this problem, a Fast-ESPRIT algorithm that combines IIR band-pass filtering, decimation and the original ESPRIT-TLS method is employed to extract frequencies and their magnitudes accurately from the wind turbine stator current at a lower computational cost. The proposed algorithm has been applied to meet the wind turbine machine's need for online, fast and proactive condition monitoring. This type of remote and periodic maintenance provides an acceptable machine lifetime, minimizes downtime and maximizes productivity. The developed technique has been evaluated by computer simulations under many fault scenarios. The results demonstrate the performance of Fast-ESPRIT, offering rapid, high-resolution harmonic recognition with minimal computation time and memory cost.
Abstract: This paper presents a methodology using the Gravitational Search Algorithm for the optimal placement of Phasor Measurement Units (PMUs) in order to achieve complete observability of the power system. The objective of the proposed algorithm is to minimize the total number of PMUs at the power system buses, which in turn minimizes the installation cost of the PMUs. In this algorithm, the searcher agents are a collection of masses that interact with each other according to Newton's laws of gravity and motion. This new Gravitational Search Algorithm based method has been applied to the IEEE 14-bus, IEEE 30-bus and IEEE 118-bus test systems. Case studies show that the proposed method finds the optimal number of PMUs with better observability.
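The observability rule commonly used in such placement formulations, namely that a bus is observable if it hosts a PMU or is adjacent to one, can be sketched as follows; the exhaustive search and the 5-bus star topology in the usage are invented illustrations, not the Gravitational Search Algorithm itself.

```python
from itertools import combinations

def observable(placement, adjacency):
    """A bus is observed if it carries a PMU or neighbours one."""
    seen = set()
    for bus in placement:
        seen.add(bus)
        seen.update(adjacency[bus])
    return len(seen) == len(adjacency)

def min_pmu_placement(adjacency):
    """Brute-force smallest placement achieving full observability
    (tractable only for tiny systems; GSA replaces this search at scale)."""
    n = len(adjacency)
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            if observable(combo, adjacency):
                return list(combo)
```

For a star network, a single PMU at the hub observes every bus, which is why the objective reduces to counting PMUs.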
Abstract: In orthopedic surgery there are various situations in which the surgeon needs to cut or drill bone. In this type of procedure, the generated friction leads to a localized increase in temperature, which may lead to bone necrosis. Recognizing the importance of studying this phenomenon, an experimental evaluation of the temperatures developed during bone drilling has been carried out. Additionally, the influence of performing the procedure with and without additional lubrication during bone drilling has been assessed. The obtained results are presented and discussed and suggest an advantage in using additional lubrication as a way to minimize the occurrence of bone tissue necrosis during bone drilling procedures.
Abstract: Among the CFRP (Carbon Fiber Reinforced Plastic) forming methods, the prepreg process, short for 'pre-impregnation', is widely used for aerospace composites that require high-quality properties; it employs a fiber-reinforced woven fabric in which an epoxy hardening resin is impregnated. However, this process requires continuous research and development for its commercialization, because delamination characteristically develops between the layers when a great weight is loaded from outside. To address this shortcoming, three lamination methods among the prepreg lamination methods of CFRP were designed to minimize the delamination between the layers due to external impacts. Further, the newly designed methods and the existing lamination method were analyzed through a mechanical characteristic test, the Interlaminar Shear Strength test. The test results confirmed that the three newly proposed lamination methods, i.e. the Roll, Half and Zigzag laminations, presented higher strengths than the conventional Ply lamination. The interlaminar shear strength of the Roll method, with its relatively dense fiber distribution, was approximately 1.75% higher than that of the existing Ply lamination method, and that of the Half method was approximately 0.78% higher.
Abstract: The assembly line balancing problem aims to divide the tasks among the stations of an assembly line and to optimize certain objectives. In assembly lines the workload differs from station to station owing to different task times, and the difference in workloads between stations can cause blockage or starvation at some stations. Buffers are used to store semi-finished parts between the stations and can help to smooth assembly production. Both line balancing and buffer sizing affect the throughput of an assembly line. The two problems have been studied separately in the literature; because of their joint contribution to the throughput rate of assembly lines, it is desirable to study balancing and buffer sizing together, and they are therefore considered concurrently in the current research. This research aims simultaneously to maximize throughput, minimize the total size of the buffers in the assembly line and minimize workload variation across the line. A multi-objective optimization model is designed that can yield better Pareto solutions from the Pareto front, and a simple example problem is solved for assembly line balancing and buffer sizing simultaneously. The current research is significant for assembly line balancing research, and introducing optimization approaches that can solve the present multi-objective problem is a promising direction for future work.
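Two of the objectives named above, workload variation and total buffer size, can be computed as in this minimal sketch; the station times, buffer sizes and the bottleneck cycle-time proxy for throughput are invented illustrations, not the paper's multi-objective model.

```python
def workload_variation(station_times):
    """Variance of station workloads; zero for a perfectly balanced line."""
    mean = sum(station_times) / len(station_times)
    return sum((t - mean) ** 2 for t in station_times) / len(station_times)

def line_metrics(station_times, buffers):
    """Collect the three quantities a balancing/buffer-sizing search trades off.
    For a paced line without variability, throughput is governed by the
    bottleneck station, so max(station_times) serves as a cycle-time proxy."""
    return {
        "cycle_time": max(station_times),
        "workload_variance": workload_variation(station_times),
        "total_buffer": sum(buffers),
    }
```

A multi-objective search would evaluate these metrics for each candidate task assignment and buffer allocation and keep the non-dominated (Pareto) solutions.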
Abstract: In this paper, we propose a method for three-dimensional (3-D) model indexing based on defining a new descriptor using spherical harmonics. The purpose of the method is to minimize the processing time on the database of object models and the time needed to search for objects similar to a query object.
First, we define the new descriptor using a new division of the 3-D object within a sphere. Then we define a new distance that is used in the search for similar objects in the database.
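As a rough illustration of shape indexing by spherical decomposition, the sketch below bins a model's points into concentric shells and compares models with an L1 distance; this simplification and its function names are invented and are not the spherical-harmonics descriptor or the distance proposed above.

```python
import math

def shell_descriptor(points, n_shells=4):
    """Histogram of point radii over concentric spherical shells,
    normalized so models of different sizes are comparable."""
    radii = [math.sqrt(x * x + y * y + z * z) for x, y, z in points]
    rmax = max(radii) or 1.0  # guard against a degenerate all-origin model
    hist = [0.0] * n_shells
    for r in radii:
        hist[min(int(n_shells * r / rmax), n_shells - 1)] += 1.0
    return [h / len(points) for h in hist]

def l1_distance(d1, d2):
    """Compare two descriptors; smaller means more similar."""
    return sum(abs(a - b) for a, b in zip(d1, d2))
```

Indexing precomputes each database model's descriptor once, so a query reduces to cheap vector comparisons rather than per-model geometry processing.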
Abstract: A simulation-based VLSI (Very Large Scale Integration) implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression. The performance of lossless image compression is analyzed, the image is reduced without loss of quality, and the design is implemented as a VLSI-based FELICS architecture. The FELICS algorithm uses a simplified adjusted binary code for image compression; the compressed image is converted into pixels and then implemented in the VLSI domain. These choices achieve high processing speed and minimize area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. Color-difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular, four-stage pipelined data flow. With two-level parallelism, consecutive pixels are classified into even and odd samples, and an individual hardware engine is dedicated to each. The method can be further enhanced by multilevel parallelism.
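The adjusted binary code mentioned above is closely related to truncated binary coding, sketched below; FELICS additionally remaps values so that the most probable (central) values receive the shorter codewords, a remapping omitted from this simplified illustration.

```python
def truncated_binary(v, n):
    """Truncated binary code for a value v in [0, n).

    With k = floor(log2 n) and u = 2**(k+1) - n, the first u values are
    coded in k bits and the remaining values in k + 1 bits, yielding a
    prefix-free code that saves bits whenever n is not a power of two.
    """
    k = n.bit_length() - 1          # floor(log2 n) for n >= 1
    u = (1 << (k + 1)) - n          # number of short (k-bit) codewords
    if v < u:
        return format(v, "b").zfill(k) if k else ""
    return format(v + u, "b").zfill(k + 1)
```

For n = 5, values 0 to 2 cost two bits and values 3 and 4 cost three, which is the saving in arithmetic-free coding that makes the scheme attractive for a hardware pipeline.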
Abstract: Due to water shortage, the application of saline water for irrigation is an urgent issue in agriculture. In this study, the effect of calcium and potassium applied as additives in a saline root medium to reduce the adverse effects of salinity was investigated on tomato growth in a hydroponic system with an unequal distribution of salts in the root medium. The medium was divided into two equal parts containing full Johnson nutrient solution and 40 mM NaCl solution, the latter alone or in combination with KCl (6 mM), CaCl2 (4 mM), K+Ca (3+2 mM) or half-strength Johnson nutrient solution. The root splits were exchanged every 7 days. The results showed that the addition of calcium, calcium-potassium or nutrient elements at half the concentration of the Johnson formula to the saline half of the culture medium minimized the reduction in plant growth caused by NaCl, whereas the addition of potassium alone was not effective. The greatest concentration of sodium was observed in the shoots of the treatments with the smallest growth. According to the results of this study, in the case of a dynamic and non-uniform distribution of salts in the root medium, the addition of additives to the saline solution makes it possible to use saline water with no significant growth reduction.
Abstract: Safety is one of the most important considerations
when buying a new car. While active safety aims at avoiding
accidents, passive safety systems such as airbags and seat belts
protect the occupant in case of an accident. In addition to legal
regulations, organizations like Euro NCAP provide consumers with
an independent assessment of the safety performance of cars and
drive the development of safety systems in the automobile industry.
Those ratings are mainly based on injury assessment reference values
derived from physical parameters measured in dummies during a car
crash test.
The components and sub-systems of a safety system are designed
to achieve the required restraint performance. Sled tests and other
types of tests are then carried out by car makers and their suppliers
to confirm the protection level of the safety system. A Knowledge
Discovery in Databases (KDD) process is proposed in order to
minimize the number of tests. The KDD process is based on the
data emerging from sled tests according to Euro NCAP specifications.
About 30 parameters of the passive safety system from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and is validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, with the aim of reducing the number of tests.
Abstract: Nowadays, autonomous mobile robots have found applications in diverse fields. An autonomous robot system must be able to behave in an intelligent manner to deal with complex and changing environments. This work evaluates the performance of path planning and navigation of an autonomous mobile robot using Gravitational Search Algorithm (GSA), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) based intelligent controllers in an unstructured environment. The approach finds not only a valid collision-free path but also an optimal one. The main aim of the work is to minimize the length of the path and the duration of travel from a starting point to a target while moving in an unknown environment with obstacles, without collision. Finally, a comparison is made between the three controllers; it is found that the path length and travel time achieved by the robot using GSA are better than those of the SA and PSO based controllers for the same task.
Abstract: Introducing proxy interpretation for sending and receiving requests increases the capability of the server, and our approach UDIV (User-Data Identity Security), which handles data and user authentication without extending the size of the data, performs better than a hybrid IDS (Intrusion Detection System). At the same time, requests have to pass through fewer of the security stages we have framed, which minimizes the response time of a request. Even when an anomaly is detected, the proxy extracts its identity before rejecting it, to prevent it from entering the system. In the case of false anomalies, the request is reshaped and transformed into a legitimate request for further response. Finally, we hold normal and abnormal requests in two different queues with their own priorities.
Abstract: This paper proposes a new technique to design a fixed-structure robust loop-shaping controller for a pneumatic servo system. A new method based on a particle swarm optimization (PSO) algorithm for tuning the weighting function parameters to design an H∞ controller is presented. The PSO algorithm is used to minimize the infinity norm of the transfer function of the nominal closed-loop system to obtain the optimal parameters of the weighting functions. The optimal stability margin is used as the objective in PSO for selecting the optimal weighting parameters; it is shown that the proposed method can simplify the design procedure of H∞ control and obtain an optimal robust controller for the pneumatic servo system. In addition, the order of the proposed controller is much lower than that of the conventional robust loop-shaping controller, making it easy to implement in practical work. A two-degree-of-freedom (2DOF) control design procedure is also proposed to improve tracking performance in the face of noise and disturbance. Simulation results demonstrate the advantages of the proposed controller in terms of its simple structure and its robustness against plant perturbations and disturbances.
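The PSO tuning loop described above can be sketched as follows; to keep the sketch self-contained, the H∞-norm objective is replaced by an invented quadratic stand-in, and all algorithm parameters are illustrative defaults rather than the paper's settings.

```python
import random

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize objective over R^dim with a basic global-best PSO.
    In the weighting-function application, each particle position would hold
    candidate weighting parameters and the objective would be the H-infinity
    norm of the resulting closed-loop transfer function."""
    random.seed(0)  # reproducible sketch
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus cognitive and social attraction.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth objective the swarm contracts toward the best-known point, which is why PSO suits weighting-parameter tuning where each evaluation is an expensive norm computation.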
Abstract: Predicting earthquakes is an important issue in the study of geography. Accurate prediction of earthquakes can help people take effective measures to minimize the loss of life and economic damage, such as large casualties, destruction of buildings and disruption of traffic, which occur within a few seconds. The United States Geological Survey (USGS) science organization provides reliable scientific information about earthquakes throughout history, and the preliminary database from the National Earthquake Information Center (NEIC) provides some useful factors for predicting an earthquake in a seismic area such as the Aleutian Arc in the U.S. state of Alaska. The main advantage of this prediction method is that it does not require any assumptions; it makes predictions according to the future evolution of the object's time series. The article compares the simulation results from trained BP and RBF neural networks with the actual outputs from the system calculations. Accordingly, this article focuses on the analysis of data relating to real earthquakes. Evaluation results show better accuracy and higher speed when using the radial basis function (RBF) neural network.
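For illustration, a radial basis function network of the kind compared above computes its output as a weighted sum of Gaussian kernels; the centers, widths and weights below are invented placeholders (in practice they are fitted to the earthquake time-series data).

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Forward pass of a one-input RBF network:
    output = bias + sum_j w_j * exp(-(x - c_j)^2 / (2 * s_j^2))."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        out += w * math.exp(-((x - c) ** 2) / (2 * s * s))
    return out
```

Because only the output weights enter linearly once the centers and widths are chosen, RBF training can be faster than backpropagation through a sigmoidal (BP) network, consistent with the speed advantage reported above.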
Abstract: Because determining the pollution status of fresh resources in Egyptian territorial waters is very important for public health, this study was carried out to reveal the levels of heavy metals in shellfish and their environment and their relation to the highly developed industrial activities in those areas. A total of 100 shellfish samples from the Rosetta, Edku, El-Maadiya, Abo-Kir and El-Max coasts [10 crustaceans (shrimp) and 10 mollusks (oysters)] were randomly collected from each coast. Additionally, 10 samples of both water and sediment were collected from each coast. Each collected sample was analyzed for cadmium, chromium, copper, lead and zinc residues using a Perkin Elmer atomic absorption spectrophotometer (AAS). The results showed that the levels of heavy metals were highest in the water and sediment from Abo-Kir. The heavy metal levels decreased successively for the Rosetta, Edku, El-Maadiya, and El-Max coasts, and the concentrations of heavy metals in shellfish, except copper and zinc, exhibited the same pattern. Of the heavy metal concentrations in shellfish tissue, zinc was the highest, and the concentrations decreased successively for copper, lead, chromium and cadmium for all coasts, except the Abo-Kir coast, where the chromium level was highest and the other metals decreased successively for zinc, copper, lead and cadmium. In Rosetta, chromium was higher only in the mollusks, while the level of this metal was lower in the crustaceans; this trend was also observed at the Edku, El-Maadiya and El-Max coasts. Herein, we discuss the importance of such contamination for public health and the sources of shellfish contamination with heavy metals. We suggest measures to minimize and prevent these pollutants in the aquatic environment and, furthermore, ways to protect humans from excessive intake.
Abstract: In this paper, we propose a new packing strategy to find a free resource for run-time mapping of application tasks to NoC-based heterogeneous MPSoCs. The proposed strategy minimizes the task mapping time in addition to placing communicating tasks close to each other. To evaluate our approach, a comparative study is carried out on a platform containing PEs that each support a single task. Experiments show that our strategy provides better results when compared to the latest dynamic mapping strategies reported in the literature.