Abstract: In this paper, a framework for the simplification and
standardization of metaheuristic-related parameter tuning is
presented, applying a four-phase methodology that utilizes Design of
Experiments and Artificial Neural Networks. Metaheuristics are
multipurpose problem solvers applied to computational optimization
problems for which no efficient problem-specific algorithm
exists. Their successful application to concrete problems requires
finding a good initial parameter setting, which is a tedious and
time-consuming task. Recent research reveals the lack of a systematic
approach to this so-called parameter-tuning process. In the
majority of publications, researchers provide weak motivation for
their respective choices, if any. Because initial parameter settings
have a significant impact on solution quality, this course of
action can lead to suboptimal experimental results, and thereby
an unsound basis for drawing conclusions.
Abstract: This paper presents a distributed mutual exclusion
algorithm for mobile ad hoc networks. The network is clustered
hierarchically. The proposed algorithm treats the clustered
network as a logical tree and develops a token-passing scheme
to achieve mutual exclusion. The performance analysis and
simulation results show that its message requirement is optimal,
and thus the algorithm is energy efficient.
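The tree-based token passing described above can be illustrated with a minimal, Raymond-style sketch; the node structure, pointer names, and message counting are assumptions for illustration, not the paper's exact protocol:

```python
class Node:
    """A cluster head in the logical tree; `holder` points toward the
    node currently believed to hold the token (self when it holds it)."""
    def __init__(self, name):
        self.name = name
        self.holder = self

def acquire(node):
    """Route a request along holder pointers to the token owner, then pass
    the token back, re-pointing each node on the path toward the requester.
    Returns the number of tree edges traversed (one request message each)."""
    path = [node]
    while path[-1].holder is not path[-1]:
        path.append(path[-1].holder)
    for upstream, downstream in zip(path[1:], path[:-1]):
        upstream.holder = downstream  # now points toward the new owner
    node.holder = node                # requester may enter the critical section
    return len(path) - 1
```

Because requests travel only along tree edges, the message count per critical-section entry is bounded by the tree height, which is the intuition behind the energy-efficiency claim.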
Abstract: We propose a technique to identify road traffic
congestion levels from the velocity of mobile sensors with high accuracy
and consistency with motorists' judgments. The data collection utilized
a GPS device, a webcam, and an opinion survey. Human perceptions
were used to rate the traffic congestion into three levels: light,
heavy, and jam. The ratings and velocities were then fed into a
decision tree learning model (J48). We successfully extracted vehicle
movement patterns to feed into the learning model using a sliding-window
technique. The parameters capturing the vehicle movement
patterns and the window size were heuristically optimized. The
model achieved accuracy as high as 99.68%. By implementing the
model on existing traffic report systems, the reports will cover
comprehensive areas. The proposed method can be applied to any
part of the world.
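The sliding-window feature extraction can be sketched as below; the chosen features, the speed thresholds (km/h), and the simple rule standing in for the trained J48 tree are illustrative assumptions, not the paper's learned model:

```python
def window_features(speeds, size, step=1):
    """Slide a window over a GPS speed trace and extract simple movement
    features (mean, min, max speed) for each window position."""
    feats = []
    for start in range(0, len(speeds) - size + 1, step):
        w = speeds[start:start + size]
        feats.append((sum(w) / size, min(w), max(w)))
    return feats

def congestion_label(mean_speed):
    """Illustrative three-level rating; the thresholds are assumptions,
    not the splits a J48 tree would learn from the survey data."""
    if mean_speed < 10:
        return "jam"
    if mean_speed < 30:
        return "heavy"
    return "light"
```

In the paper's pipeline, the window features would be fed to the J48 learner together with the human ratings rather than classified by fixed thresholds.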
Abstract: The use of amine mixtures employing
methyldiethanolamine (MDEA), monoethanolamine (MEA), and diethanolamine (DEA) has been investigated for a variety of cases
using the process simulation program HYSYS. The results show that, at high pressures, amine mixtures have little or no advantage in the cases studied. As the pressure is lowered, it becomes more difficult for MDEA to meet residual gas requirements, and mixtures can usually improve plant performance. Since the CO2 reaction rate
with primary and secondary amines is much faster than with
MDEA, the addition of small amounts of primary or secondary amines to an MDEA-based solution should greatly improve the overall reaction rate of CO2 with the amine solution. The addition of MEA caused the CO2 to be absorbed more strongly in the upper portion of the column than for MDEA alone. On the other hand,
when the MEA concentration is raised to 11 wt%, CO2 is almost
completely absorbed in the lower portion of the column. The addition of MEA would be most advantageous.
Thus, in areas where MDEA cannot meet the residual gas
requirements, the use of amine mixtures can usually improve plant
performance.
Abstract: The structure of the retinal vessels is a prominent feature
that reveals information on the state of diseases, which are reflected in
the form of measurable abnormalities in thickness and colour.
The vascular structure of the retina, for the implementation of a clinical
diabetic retinopathy decision-making system, is presented in this paper.
The retinal vascular structure consists of thin blood vessels, so
measurement accuracy is highly dependent upon the vessel segmentation. The
blood vessel thickness is automatically detected using preprocessing
techniques and a vessel segmentation algorithm. First, the captured
image is binarized to reveal the blood vessel structure clearly; then it is
skeletonised to obtain the overall structure of all the terminal and
branching nodes of the blood vessels. By identifying the terminal
nodes and the branching points automatically, the thicknesses of the main
and branching blood vessels are estimated. Results are presented and
compared with those provided by clinical classification on 50 vessels
collected from Bejan Singh Eye Hospital.
Abstract: Data Envelopment Analysis (DEA) is a methodology
that computes efficiency values for decision making units (DMUs) in a
given period by comparing outputs with inputs. In many cases,
there is a time lag between the consumption of inputs and the
production of outputs. For a long-term research project, it is hard to
avoid this production lead-time phenomenon. The time-lag effect
should be considered in evaluating the performance of organizations.
This paper suggests a model to calculate efficiency values for the
performance evaluation problem with time lag. In the experimental
part, the proposed methods are compared with the CCR model and an
existing time-lag model using the data set of the 21st Century Frontier
R&D Program, a long-term national R&D program of Korea.
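The output-over-input comparison at the heart of DEA can be sketched for the simplest case; this single-input, single-output ratio form is an illustrative simplification, since the general CCR model is a linear program over multiple weighted inputs and outputs:

```python
def dea_efficiency(inputs, outputs):
    """CCR-style efficiency for the single-input, single-output case:
    each DMU's output/input ratio, normalized by the best observed ratio
    so that efficient DMUs score 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

A time-lag variant along the lines of the paper would compare each period's outputs against the inputs consumed some periods earlier, rather than against same-period inputs.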
Abstract: This paper examined the influence of matching
students' learning preferences with the adopted teaching methodology
on their academic performance in an accounting course in
two types of learning environment at one university in Lebanon:
classes with PowerPoint (PPT) vs. conventional classes. Learning
preferences were either for PPT or for the conventional methodology. A
statistically significant increase in academic achievement was found in
the conventionally instructed group as compared to the group taught
with PPT. This low effectiveness of PPT might be attributed to the
learning preferences of Lebanese students. In the PPT group, better
academic performance was found among students with a
learning/teaching match as compared with students with a
learning/teaching mismatch. Since the majority of students display a
preference for the conventional methodology, the results might
suggest that Lebanese students' performance is not optimized by PPT
in the accounting classroom, not because of PPT itself, but because
it does not match the Lebanese students' learning preferences in such
a quantitative course.
Abstract: Various problems may cause distortion of the rotor,
and hence vibration, which is the most severe form of damage to turbine
rotors. Over the years, different techniques have been developed for
the straightening of bent rotors. The straightening method can be
selected according to initial information from preliminary inspections
and tests, such as nondestructive tests, chemical analysis and run-out tests,
as well as knowledge of the shaft material. This article covers the
various causes of excessive bends, and then some applicable common
straightening methods are reviewed. Finally, hot spotting is selected for
a particular bent rotor. A 325 MW steam turbine rotor is modeled, and
finite element analyses are performed to investigate this straightening
process. Experimental data show that performing the exact
hot-spot straightening process reduced the bending of the rotor
significantly.
Abstract: A structural study is presented of an aqueous electrolyte for
which experimental results are available: a solution of the LiCl·6H2O type
in the glassy state (120 K), contrasted with pure water at room temperature
by means of partial distribution functions (PDFs) obtained from the neutron
scattering technique. Based on these partial functions, the Reverse
Monte Carlo (RMC) method computes radial and angular correlation
functions, which allow a number of structural features of
the system to be explored. The obtained curves include some artifacts. To remedy
this, we propose introducing a screened potential as an additional
constraint. The results show a good match between
experimental and computed functions and a significant improvement
in the PDF curves under the potential constraint, suggesting an efficient fit
of the pair distribution function curves.
Abstract: Quality control charts are very effective in detecting
out-of-control signals. However, when a control chart signals an out-of-control
condition of the process mean, searching for a special cause
in the vicinity of the signal time does not always lead to prompt
identification of the source(s) of the out-of-control condition, because the
change point in the process parameter(s) is usually different from the
signal time. It is very important for the manufacturer to determine at what
point, and in which parameters, the past change caused the signal. Early
identification of the process change would expedite the search for the special
causes and enhance quality at lower cost. In this paper, the quality
variables under investigation are assumed to follow a multivariate
normal distribution with known means and variance-covariance
matrix, the process means after a one-step change are assumed to remain at the new
level until the special cause is identified and removed, and it is
further assumed that only one variable can change at a time.
This research applies an artificial neural network (ANN) to identify the
time the change occurred and the parameter that caused the change
or shift. The performance of the approach was assessed through a
computer simulation experiment. The results show that the neural
network performs effectively and equally well across the whole range of shift
magnitudes considered.
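For a single monitored variable, the estimation task the network learns can be illustrated with the classical maximum-likelihood change-point estimator for a step change in the mean; this is an illustrative stand-in for the task, not the paper's ANN, and it assumes unit variance:

```python
def estimate_change_point(xs, mu0):
    """Maximum-likelihood estimate of the step-change time for a mean shift
    in a unit-variance process with known in-control mean mu0. Returns the
    index of the last in-control sample."""
    best_t, best_score = 0, float("-inf")
    for t in range(len(xs) - 1):
        tail = xs[t + 1:]                     # samples assumed post-change
        mu1 = sum(tail) / len(tail)           # MLE of the shifted mean
        # log-likelihood gain of "change after t" over "no change"
        score = sum((x - mu0) ** 2 - (x - mu1) ** 2 for x in tail)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

The ANN approach generalizes this idea to the multivariate case, learning both the change time and which variable shifted from simulated chart data.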
Abstract: This paper is concerned with delay-distribution-dependent
stability criteria for bidirectional associative memory
(BAM) neural networks with time-varying delays. Based on the
Lyapunov-Krasovskii functional and a stochastic analysis approach,
a delay-probability-distribution-dependent sufficient condition is derived
under which the considered BAM neural networks are globally
asymptotically stable in the mean square. The criteria are formulated in
terms of a set of linear matrix inequalities (LMIs), which can be
checked efficiently by use of some standard numerical packages. Finally,
a numerical example and its simulation are given to demonstrate
the usefulness and effectiveness of the proposed results.
Abstract: Clean air in a subway station is important to passengers. Platform Screen Doors (PSDs) can improve indoor air quality in the subway station; however, the air quality in the subway tunnel is degraded. The subway tunnel has a high CO2 concentration and a high indoor particulate matter (PM) level. The Indoor Air Quality (IAQ) level in the subway environment degrades as the frequency of train operation and the number of trains increase. The ventilation systems of the subway tunnel need improvement to provide better air quality. Numerical analyses can be effective tools to analyze the performance of subway twin-track tunnel ventilation systems. An existing subway twin-track tunnel in the metropolitan Seoul subway system is chosen for the numerical simulations. The ANSYS CFX software is used for unsteady computations of the airflow inside the twin-track tunnel when the train moves. The airflow inside the tunnel is simulated when one train runs and when two trains run at the same time in the tunnel. The piston effect inside the tunnel is analyzed when all shafts function as natural ventilation shafts. The air supplied through the shafts is mixed with the polluted air in the tunnel, and the polluted air is exhausted by the mechanical ventilation shafts. The supplied and discharged air volumes are balanced when only one train runs in the twin-track tunnel. The amount of polluted air in the tunnel is high when two trains run simultaneously in opposite directions and all shafts function as natural shafts, i.e., when there is no electrical power supply to the shafts. The polluted air remaining inside the tunnel enters the station platform when the doors are opened.
Abstract: In this paper, parallelism in the solution of Ordinary
Differential Equations (ODEs) to increase the computational speed is
studied. The focus is the development of a parallel algorithm for the two-point
Block Backward Differentiation Formulas (PBBDF) that can
take advantage of the parallel architecture in computer technology.
Parallelism is obtained by using the Message Passing Interface (MPI).
Numerical results are given to validate the efficiency of the parallel PBBDF
implementation as compared to the sequential implementation.
Abstract: In this paper, based on the estimation of the Cauchy matrix of linear impulsive differential equations, by using the Banach fixed point theorem and the Gronwall-Bellman inequality, some sufficient conditions are obtained for the existence and exponential stability of an almost periodic solution for Cohen-Grossberg shunting inhibitory cellular neural networks (SICNNs) with continuously distributed delays and impulses. An example is given to illustrate the main results.
Abstract: Airport capacity has traditionally been perceived as
the number of aircraft operations during a
specified time corresponding to a tolerable level of average delay, and
it depends mostly on the airside characteristics, on the fleet mix
variability and on the ATM. The adoption of Directive
2002/30/EC in the EU countries, however, drives the stakeholders to conceive
airport capacity in a different way. Airport capacity in this
sense is fundamentally driven by environmental criteria, and since
acoustical externalities represent the most important factors, they are
the ones that could pose a serious threat to the growth of airports and
to the aviation market itself in the short to medium term. The importance of
regional airports in the deregulated market grew fast during the
last decade, since they represent spokes for network carriers and a
preferential destination for low-fare carriers. Regional
airports have witnessed not only a fast and unexpected growth in traffic but
also a fast growth in complaints about the nuisance from people
living near those airports. In this paper, the results of a study
conducted in cooperation with the Bologna G. Marconi airport are
presented in order to investigate airport acoustical capacity as a de facto
constraint on airport growth.
Abstract: Overcurrent (OC) relays are the major protection
devices in a distribution system. The operating times of the OC relays
must be coordinated properly to avoid the mal-operation of the
backup relays. OC relay time coordination in ring-fed
distribution networks is a highly constrained optimization problem
which can be stated as a linear programming problem (LPP). The
purpose is to find an optimum relay setting that minimizes the
operating times of the relays while keeping the relays properly
coordinated to avoid mal-operation.
This paper presents a two-phase simplex method for the optimum time
coordination of OC relays. The method is based on the simplex
algorithm, which is used to find the optimum solution of an LPP. The
method introduces artificial variables to obtain an initial basic feasible
solution (IBFS). The artificial variables are removed by the iterative
process of the first phase, which minimizes an auxiliary objective
function. The second phase minimizes the original objective function
and gives the optimum time coordination of the OC relays.
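The two-phase procedure can be illustrated on a toy coordination problem: two relays with a 0.3 s coordination margin and 0.1 s minimum operating times. The numbers and the tiny tableau solver below are illustrative, not the paper's full formulation:

```python
def pivot(T, basis, row, col):
    """Make column `col` basic in `row` by Gaussian elimination."""
    p = T[row][col]
    T[row] = [v / p for v in T[row]]
    for r in range(len(T)):
        if r != row and abs(T[r][col]) > 1e-12:
            f = T[r][col]
            T[r] = [a - f * b for a, b in zip(T[r], T[row])]
    basis[row] = col

def simplex_min(T, basis, c):
    """Minimize c.x over the tableau T (last column is the RHS)."""
    m, n = len(T), len(T[0]) - 1
    while True:
        reduced = [c[j] - sum(c[basis[i]] * T[i][j] for i in range(m))
                   for j in range(n)]
        col = min(range(n), key=lambda j: reduced[j])
        if reduced[col] >= -1e-9:
            return                       # optimal
        row = min((i for i in range(m) if T[i][col] > 1e-9),
                  key=lambda i: T[i][-1] / T[i][col])
        pivot(T, basis, row, col)

# Columns: t1, t2, s1..s3 (surplus), a1..a3 (artificial) | RHS
T = [[-1, 1, -1, 0, 0, 1, 0, 0, 0.3],   # t2 - t1 >= 0.3 (coordination margin)
     [ 1, 0, 0, -1, 0, 0, 1, 0, 0.1],   # t1 >= 0.1 (minimum operating time)
     [ 0, 1, 0, 0, -1, 0, 0, 1, 0.1]]   # t2 >= 0.1
basis = [5, 6, 7]                        # start from the artificial basis

simplex_min(T, basis, [0, 0, 0, 0, 0, 1, 1, 1])         # phase 1: drive artificials to 0
simplex_min(T, basis, [1, 1, 0, 0, 0, 1e9, 1e9, 1e9])   # phase 2: minimize t1 + t2

x = [0.0] * 8
for i, col in enumerate(basis):
    x[col] = T[i][-1]
t1, t2 = x[0], x[1]
```

Phase 1 removes the artificial variables by minimizing their sum; phase 2 then minimizes the total relay operating time, giving t1 = 0.1 s and t2 = 0.4 s for this toy instance.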
Abstract: The reduction in vehicle exhaust emissions achieved
in the last two decades is offset by the growth in traffic, as well as by
changes in the composition of emitted pollutants. The present
investigation illustrates the emissions of in-use gasoline and diesel
passenger cars using the official European driving cycle and the
ARTEMIS real-world driving cycle. It was observed that some of the
vehicles do not comply with the corresponding regulations.
Significant differences in emissions were observed between driving
cycles. Not all pollutants showed a tendency to decrease from Euro 3
to Euro 5.
Abstract: The major building block of most elliptic curve cryptosystems
is the computation of multi-scalar multiplications. This paper
proposes a novel algorithm for simultaneous multi-scalar multiplication
that employs addition chains. The previously known
methods utilize the double-and-add algorithm with binary representations.
To accomplish our purpose, an efficient empirical
method for finding addition chains for multiple exponents is
proposed.
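For context, the binary double-and-add baseline that the proposal improves upon can be sketched as below; plain integers stand in for elliptic-curve points, so "doubling" is multiplication by two and "addition" is ordinary addition:

```python
def shamir_multi_scalar(k1, k2, P, Q):
    """Simultaneous multi-scalar multiplication k1*P + k2*Q by joint
    double-and-add (Shamir's trick): one doubling per bit position while
    scanning both scalars' binary representations together."""
    R = 0
    for i in range(max(k1.bit_length(), k2.bit_length()) - 1, -1, -1):
        R = R * 2                 # point doubling
        if (k1 >> i) & 1:
            R = R + P             # conditional point addition
        if (k2 >> i) & 1:
            R = R + Q
    return R
```

An addition-chain method replaces this fixed bit-by-bit schedule with a precomputed chain of additions and doublings tailored to the pair of scalars, which can reduce the total operation count.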
Abstract: Antimicrobial resistance is becoming a major factor in
virtually all hospital-acquired infections, which may soon become
untreatable, and is a serious public health problem. These concerns have led to a major
research effort to discover alternative strategies for the treatment of
bacterial infection. Nanobiotechnology is an upcoming and fast-developing
field with potential applications for human welfare. An
important area of nanotechnology is the development of reliable and
environmentally friendly processes for the synthesis of nanoscale particles
through biological systems. The present study reports on the
use of the fungal strain Aspergillus species for the extracellular synthesis
of bionanoparticles from 1 mM silver nitrate (AgNO3) solution. The
report focuses on the synthesis of metallic bionanoparticles
of silver through the reduction of aqueous Ag+ ions with the
culture supernatants of microorganisms. The bio-reduction of the
Ag+ ions in the solution was monitored in the aqueous
component, and the spectrum of the solution was measured with a
UV-visible spectrophotometer. The bionanoscale particles were
further characterized by Atomic Force Microscopy (AFM), Fourier
Transform Infrared Spectroscopy (FTIR) and thin layer
chromatography. The synthesized bionanoscale particles showed a
maximum absorption at 385 nm in the visible region. Atomic Force
Microscopy investigation of the silver bionanoparticles showed that
they ranged in size from 250 nm to 680 nm; the work also analyzed the
antimicrobial efficacy of the silver bionanoparticles against various
multi-drug-resistant clinical isolates. The present study emphasizes
the applicability of synthesizing metallic
nanostructures and the need to understand the biochemical and molecular
mechanisms of nanoparticle formation by the cell filtrate in order to
achieve better control over the size and polydispersity of the
nanoparticles. This would help to develop nanomedicine against
various multi-drug-resistant human pathogens.
Abstract: Grid networks provide the ability to perform higher-throughput computing by taking advantage of many networked computers' resources to solve large-scale computation problems. As the popularity of Grid networks has increased, there is a need to efficiently distribute the load among the resources accessible on the network. In this paper, we present a stochastic network system that gives a distributed load-balancing scheme by generating almost regular networks. This network system is self-organized and depends only on local information for load distribution and resource discovery. The in-degree of each node reflects its free resources, and the job assignment and resource discovery processes required for load balancing are accomplished by using fitted random sampling. Simulation results show that the generated network system provides an effective, scalable, and reliable load-balancing scheme for the distributed resources accessible on Grid networks.
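The resource-proportional assignment idea can be sketched as follows; sampling a node with probability proportional to its advertised free resources is a simple stand-in for the paper's fitted random sampling over node in-degrees, and the node names and weights are illustrative:

```python
import random

def assign_job(nodes, free, rng=None):
    """Choose a node for an incoming job with probability proportional to
    its free resources (weighted random sampling by inverse-CDF walk)."""
    rng = rng or random.Random()
    total = sum(free[n] for n in nodes)
    x = rng.uniform(0, total)
    for n in nodes:
        x -= free[n]
        if x <= 0:
            return n
    return nodes[-1]
```

Over many assignments this sends jobs toward lightly loaded nodes using only locally advertised weights, which is the load-balancing effect the abstract describes.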