Abstract: A coupled numerical system of wave and hydrodynamic models is used to investigate WAve and Storm Surge (WASS) interaction in regional and coastal zones. The system consists of the WAve Model Cycle 4 (WAMC4) and the Princeton Ocean Model (POM), which solve the energy balance equation and the primitive equations, respectively. The results of both models show that incorporating surface waves in the regional zone affects the storm surge in the coastal zone. Specifically, the results indicate that without wave coupling the system generally underestimates not only the peak surge but also the coastal water-level drop, which can likewise have a substantial impact on the coastal environment. Accounting for the wave-induced surface stress acting on the storm surge can therefore significantly improve storm surge prediction. Finally, calibrating the wave module for the minimum error in significant wave height (Hs) does not necessarily yield the optimum wave module for WASS prediction in the coupled system.
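In coupled systems of this kind, the wave effect typically enters the surge momentum balance through the sea-surface stress; a common parameterization (notation assumed here, not taken from the abstract) is

```latex
\tau_s = \rho_a C_D \lvert \mathbf{W}_{10} \rvert \, \mathbf{W}_{10},
\qquad
z_0 = \alpha_{ch}\,\frac{u_*^2}{g},
```

where W10 is the 10-m wind, the drag coefficient C_D is determined by the roughness length z0, and a sea-state-dependent Charnock parameter α_ch supplied by the wave model raises the stress over young, steep seas.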
Abstract: In this paper, a generalized synchronization scheme for chaotic systems, called function synchronization, is studied. Based on the Lyapunov method and the active control method, we design a synchronization controller such that the error dynamics between the master and slave chaotic systems are asymptotically stable. To verify the theory, computer and circuit simulations are conducted for a specific chaotic system.
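The active-control idea can be sketched on a standard pair of Lorenz systems: the controller cancels the nonlinear cross terms so that the error dynamics become linear and asymptotically stable. This is a minimal sketch assuming the Lorenz system as the master/slave pair; the paper's specific chaotic system and its function-synchronization generalization are not reproduced here.

```python
# Active-control synchronization of two Lorenz systems (illustrative sketch).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(x, y, z):
    return SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z

def simulate(steps=20000, dt=0.001):
    x1, y1, z1 = 1.0, 1.0, 1.0      # master
    x2, y2, z2 = 5.0, -5.0, 9.0     # slave, different initial state
    for _ in range(steps):
        dx1, dy1, dz1 = lorenz(x1, y1, z1)
        e1, e2, e3 = x2 - x1, y2 - y1, z2 - z1
        # Active control: cancel the nonlinear cross terms, leaving the
        # linear, asymptotically stable error dynamics
        #   e1' = SIGMA*(e2 - e1),  e2' = -e2,  e3' = -BETA*e3.
        u1 = 0.0
        u2 = (x2 * z2 - x1 * z1) - RHO * e1
        u3 = -(x2 * y2 - x1 * y1)
        dx2 = SIGMA * (y2 - x2) + u1
        dy2 = x2 * (RHO - z2) - y2 + u2
        dz2 = x2 * y2 - BETA * z2 + u3
        x1, y1, z1 = x1 + dt * dx1, y1 + dt * dy1, z1 + dt * dz1
        x2, y2, z2 = x2 + dt * dx2, y2 + dt * dy2, z2 + dt * dz2
    return abs(x2 - x1) + abs(y2 - y1) + abs(z2 - z1)
```

Running `simulate()` shows the slave state converging onto the master despite the different initial conditions.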
Abstract: Coated tool inserts can be considered as the backbone
of machining processes due to their wear and heat resistance.
However, defects of coating can degrade the integrity of these inserts
and the number of these defects should be minimized or eliminated if
possible. Recently, advances in coating processes and analytical
tools have opened a new era for optimizing coated tools.
First, an overview is given regarding coating technology for cutting
tool inserts. Testing techniques for coating layers properties, as well
as the various coating defects and their assessment are also surveyed.
Second, an experimental approach is introduced to examine the
possible coating defects and flaws of worn multicoated carbide
inserts using two important techniques, namely scanning electron
microscopy and atomic force microscopy. Finally, a simple
procedure is recommended for investigating manufacturing
defects and flaws of worn inserts.
Abstract: In this paper, the decomposition-aggregation method
is used to derive connective stability criteria for general linear
composite systems via aggregation. The large-scale system is
decomposed into a number of subsystems. By associating directed
graphs with the dynamic systems in an essential way, we define the
relation between system structure and stability in the sense of
Lyapunov. The stability criteria are then expressed in terms of the
stability and system matrices of the subsystems, as well as the
interconnection terms among subsystems, using the concepts of
vector differential inequalities and vector Lyapunov functions.
We then show that the stability of each subsystem together with the
stability of the aggregate model implies connective stability of the
overall system. An example is reported, showing the effectiveness
of the proposed technique.
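A standard form of this aggregation (notation assumed; the paper's exact criteria are not reproduced) bounds each subsystem Lyapunov function v_i by a vector differential inequality,

```latex
\dot v_i \le -\alpha_i v_i + \sum_{j \ne i} e_{ij}\,\beta_{ij}\, v_j,
\qquad
\dot v \le W v, \quad
w_{ij} = -\alpha_i \delta_{ij} + e_{ij}\,\beta_{ij},
```

where e_{ij} ∈ {0,1} encodes the interconnection structure. Connective stability of the overall system then follows when the aggregate matrix W, evaluated in the worst case with all e_{ij} = 1, is such that −W is an M-matrix, so stability holds for every admissible interconnection pattern.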
Abstract: Complexity, as a theoretical background has made it
easier to understand and explain the features and dynamic behavior
of various complex systems. As the common theoretical background
has confirmed, borrowing the terminology for design from the
natural sciences has helped to control and understand urban
complexity. Phenomena like self-organization, evolution and
adaptation are appropriate to describe the formerly inaccessible
characteristics of the complex environment in unpredictable bottom-up
systems. Increased computing capacity has been a key element in
capturing the chaotic nature of these systems.
A paradigm shift in urban planning and architectural design has
forced us to give up the illusion of total control in urban
environment, and consequently to seek for novel methods for
steering the development. New methods using dynamic modeling
have offered a real option for more thorough understanding of
complexity and urban processes. At best new approaches may renew
the design processes so that we get a better grip on the complex
world via more flexible processes, support urban environmental
diversity and respond to our needs beyond basic welfare by liberating
ourselves from the standardized minimalism.
A complex system and its features are as such beyond human
ethics. Self-organization or evolution is neither good nor bad: their
mechanisms are by nature devoid of reason. They are as common in
urban dynamics as in natural processes. They are features
of a complex system, and they cannot be prevented. Yet their
dynamics can be studied and supported.
The paradigm of complexity and new design approaches has been
criticized for a lack of humanity and morality, but the ethical
implications of scientific or computational design processes have not
been much discussed. It is important to distinguish the (unexciting)
ethics of the theory and tools from the ethics of computer aided
processes based on ethical decisions. Urban planning and architecture
cannot be based on the survival of the fittest; however, the natural
dynamics of the system cannot be impeded on grounds of being
"non-human".
In this paper the ethical challenges of using the dynamic models
are contemplated in light of a few examples of new architecture and
dynamic urban models and literature. It is suggested that ethical
challenges in computational design processes could be reframed
under the concepts of responsibility and transparency.
Abstract: The applicability of tuning the controller gains of a Stewart manipulator using a genetic algorithm as an efficient search technique is investigated. Kinematic and dynamic models are introduced in detail for simulation purposes, and a PD task-space control scheme is used. To demonstrate the feasibility of the technique, a numerical model of the Stewart manipulator was built, and a genetic algorithm was then employed to search for the optimal controller gains. The controller was tested on a generic circular mission. The simulation results show that the technique converges quickly and delivers superior performance across different payloads.
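The gain-tuning loop can be sketched as follows. This is a minimal sketch assuming a single-axis double-integrator stand-in for the manipulator dynamics and a sinusoidal reference as one coordinate of the circular mission; the paper's full Stewart kinematics and dynamics are not reproduced.

```python
import math
import random

def tracking_cost(kp, kd, dt=0.01, steps=1000):
    """Integrated squared tracking error of a PD-controlled unit-mass plant."""
    x, v, cost = 0.0, 0.0, 0.0
    for i in range(steps):
        ref = math.sin(i * dt)       # reference trajectory
        err = ref - x
        u = kp * err - kd * v        # PD task-space control law
        v += u * dt                  # unit-mass plant: x'' = u
        x += v * dt
        cost += err * err * dt
    return cost

def genetic_search(pop_size=20, generations=25, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 200), rng.uniform(0, 50)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda g: tracking_cost(*g))
        elite = ranked[:pop_size // 2]                    # selection
        pop = list(elite)
        while len(pop) < pop_size:
            a, b = rng.sample(elite, 2)
            kp = 0.5 * (a[0] + b[0]) + rng.gauss(0, 5)    # crossover + mutation
            kd = 0.5 * (a[1] + b[1]) + rng.gauss(0, 1)
            pop.append((max(kp, 0.0), max(kd, 0.0)))
    return min(pop, key=lambda g: tracking_cost(*g))

best_kp, best_kd = genetic_search()
```

The evolved gains give a markedly lower tracking cost than an arbitrary low-gain pair, which is the essence of the GA-as-search-technique argument.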
Abstract: The use of the oncologic index ISTER allows for more effective planning of radiotherapy facilities in hospitals. Any change in a radiotherapy treatment due to unexpected stops may be accommodated by recalculating the doses for the new treatment duration while keeping the optimal prognosis. The results obtained in a simulation model on millions of patients allow the definition of optimal success-probability algorithms.
Abstract: The paper is devoted to the stochastic analysis of a finite-dimensional
difference equation whose increments depend on an ergodic Markov
chain and are proportional to a small parameter ε. A
point-form solution of this difference equation may be represented
as the vertices of a time-dependent continuous broken line on the
segment [0,1], with ε-dependent scaling of the intervals between vertices.
Letting ε tend to zero, one may apply stochastic averaging and diffusion
approximation procedures and construct a continuous approximation of
the initial stochastic iterations as an ordinary or stochastic Itô differential
equation. The paper proves that for sufficiently small ε these
equations may be successfully applied not only to approximate a finite
number of iterations but also for asymptotic analysis of the iterations
as their number tends to infinity.
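Schematically, the scheme described above can be written (notation assumed, not taken from the abstract) as

```latex
x_{n+1} = x_n + \varepsilon\, f(x_n, y_n), \qquad t_n = \varepsilon n \in [0,1],
\qquad\Longrightarrow\qquad
\frac{d\bar{x}}{dt} = \bar{f}(\bar{x}), \quad
\bar{f}(x) = \int f(x, y)\,\mu(dy),
```

where μ is the invariant measure of the ergodic chain (y_n); the diffusion approximation then refines the averaged ODE with a correction of order √ε in the form of an Itô stochastic differential equation.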
Abstract: This paper employs a new approach to regulate the
blood glucose level of type I diabetic patients under intensive
insulin treatment. The closed-loop control scheme incorporates
expert knowledge about the treatment by using reinforcement learning
theory to maintain the normoglycemic average of 80 mg/dl and a
normal free plasma insulin concentration from a severe
initial state. The insulin delivery rate is obtained off-line by a
Q-learning algorithm, without requiring an explicit model of the
environment dynamics. Implementing the insulin delivery
rate therefore requires simple function evaluation and minimal
online computation. Controller performance is assessed in terms of
its ability to reject the effect of meal disturbances and to cope with
patient-to-patient variability in the glucose-insulin dynamics.
Computer simulations are used to evaluate the effectiveness of the
proposed technique and to show its superiority over other existing
algorithms in controlling hyperglycemia.
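The off-line, model-free flavor of this controller can be sketched with tabular Q-learning on a toy glucose model. The dynamics, state discretization, action set and reward below are illustrative assumptions, not the paper's patient model.

```python
import random

ACTIONS = [0.0, 1.0, 2.0, 3.0]   # candidate insulin delivery rates
TARGET = 80.0                    # normoglycemic average (mg/dl)

def step(glucose, insulin):
    # Toy dynamics: glucose drifts up (meals, hepatic output) and is pulled
    # down in proportion to the delivered insulin, plus small noise.
    glucose += 5.0 - 2.0 * insulin + random.uniform(-2.0, 2.0)
    return min(max(glucose, 40.0), 300.0)

def bucket(g):
    return int(g // 20)          # discretize glucose into states

def train(episodes=500, horizon=100, alpha=0.2, gamma=0.95, eps=0.1, seed=1):
    random.seed(seed)
    Q = {}
    for _ in range(episodes):
        g = random.uniform(60.0, 200.0)
        for _ in range(horizon):
            s = bucket(g)
            qs = Q.setdefault(s, [0.0] * len(ACTIONS))
            if random.random() < eps:                      # explore
                a = random.randrange(len(ACTIONS))
            else:                                          # exploit
                a = max(range(len(ACTIONS)), key=qs.__getitem__)
            g2 = step(g, ACTIONS[a])
            reward = -abs(g2 - TARGET)                     # stay near target
            next_qs = Q.setdefault(bucket(g2), [0.0] * len(ACTIONS))
            qs[a] += alpha * (reward + gamma * max(next_qs) - qs[a])
            g = g2
    return Q

Q = train()

def greedy(g):
    """Insulin rate chosen greedily from the learned table (used online)."""
    return ACTIONS[max(range(len(ACTIONS)), key=Q[bucket(g)].__getitem__)]
```

After off-line training, the online controller is just a table lookup: the learned policy delivers more insulin in hyperglycemia than below target, mirroring the "simple function evaluation" property claimed in the abstract.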
Abstract: In recent years there has been renewed interest in the
relation between Green IT and cloud computing. The growing use of
computers on cloud platforms has caused marked energy consumption,
putting negative pressure on the electricity costs of cloud data centers. This
paper proposes an effective mechanism to reduce energy use in
cloud computing environments. We present initial work on the
integration of resource and power management aimed at reducing
power consumption. Our mechanism relies on recalling virtualization
services dynamically according to users' virtualization requests and
temporarily shutting down physical machines after their tasks finish in order
to conserve energy. Given the estimated energy consumption, the
proposed effort has the potential to positively impact power
consumption. The experimental results confirm that energy
can indeed be saved by powering off idle physical machines in
cloud platforms.
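The recall-and-shutdown idea can be sketched as a placement policy that prefers already-powered hosts and powers a host off as soon as its last VM finishes. Class and method names, capacities and the placement rule are illustrative assumptions, not the paper's mechanism.

```python
class Host:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.vms = set()
        self.powered_on = False

class Cluster:
    def __init__(self, n_hosts=3, capacity=4):
        self.hosts = [Host(capacity) for _ in range(n_hosts)]

    def request_vm(self, vm_id):
        # Prefer already-powered hosts to keep the powered-on set small.
        for host in sorted(self.hosts, key=lambda h: not h.powered_on):
            if len(host.vms) < host.capacity:
                host.powered_on = True
                host.vms.add(vm_id)
                return host
        return None                          # cluster full

    def release_vm(self, vm_id):
        for host in self.hosts:
            if vm_id in host.vms:
                host.vms.remove(vm_id)
                if not host.vms:
                    host.powered_on = False  # idle host: power off
                return

    def powered_count(self):
        return sum(h.powered_on for h in self.hosts)
```

Consolidating requests onto few hosts and powering off the idle ones is exactly the lever the abstract credits for the measured energy savings.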
Abstract: Wheat gluten hydrolyzates (WGHs) and anchovy fine
powder hydrolyzates (AFPHs) were produced at 300 MPa using
combinations of Flavourzyme 500MG (F), Alcalase 2.4L (A),
Marugoto E (M) and Protamex (P), and then were compared to those
produced at ambient pressure concerning the contents of soluble solid
(SS), soluble nitrogen and electrophoretic profiles. The contents of SS
in the WGHs and AFPHs increased up to 87.2% according to the
increase in enzyme number both at high and ambient pressure. Based
on SS content, the optimum enzyme combinations for one-, two-,
three- and four-enzyme hydrolysis were determined as F, FA, FAM
and FAMP, respectively. Similar trends were found for the contents of
total soluble nitrogen (TSN) and TCA-soluble nitrogen (TCASN). The
contents of SS, TSN and TCASN in the hydrolyzates, together with the
electrophoretic mobility maps, indicate that the high-pressure
treatment of this study accelerated protein hydrolysis compared to
the ambient-pressure treatment.
Abstract: Mobile agents have motivated the creation of a new
methodology for parallel computing. We introduce a methodology
for the creation of parallel applications on a network. The proposed
mobile-agent parallel processing framework uses multiple Java
mobile agents, each of which can travel to a specified
machine in the network to perform its tasks. We also introduce the
concept of a master agent, a Java object capable of
implementing a particular task of the target application. The master
agent dynamically assigns tasks to the mobile agents. We have developed
and tested a prototype application: Mobile Agent Based Parallel
Computing. Boosted by the inherited benefits of using Java and
Mobile Agents, our proposed methodology breaks the barriers
between the environments, and could potentially exploit in a parallel
manner all the available computational resources on the network.
This paper elaborates performance issues of a mobile agent for
parallel computing.
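The master/mobile-agent task-assignment pattern can be mirrored in a few lines. This is only a single-machine analogy: worker threads stand in for the Java mobile agents that would travel to remote machines, and the shared queue plays the master agent's role of dynamic task assignment.

```python
import queue
import threading

def master(tasks, n_agents=4):
    """Dynamically assign tasks from a shared queue to n_agents workers."""
    todo, results = queue.Queue(), queue.Queue()
    for t in tasks:
        todo.put(t)

    def agent():
        while True:
            try:
                t = todo.get_nowait()    # pull the next unassigned task
            except queue.Empty:
                return
            results.put(t * t)           # the "task": square a number

    workers = [threading.Thread(target=agent) for _ in range(n_agents)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sorted(results.get() for _ in range(results.qsize()))
```

Because assignment is pull-based, faster agents naturally take more tasks, which is the load-balancing benefit the framework aims for across heterogeneous machines.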
Abstract: Modeling of a heterogeneous industrial fixed-bed
reactor for the selective dehydrogenation of heavy paraffins over a
Pt-Sn-Al2O3 catalyst is the subject of the present study. By applying
mass balance, momentum balance for appropriate element of reactor
and using pressure drop, rate and deactivation equations, a detailed
model of the reactor has been obtained. Mass balance equations have
been written for five different components. In order to estimate
reactor production by the passage of time, the reactor model which is
a set of partial differential equations, ordinary differential equations
and algebraic equations has been solved numerically.
Paraffins, olefins, dienes, aromatics and hydrogen mole percent as
a function of time and reactor radius have been found by numerical
solution of the model. The model results have been compared with
industrial reactor data at different operation times, and the
comparison confirms the validity of the proposed model.
Abstract: Arm detection is one of the fundamental problems in
human motion analysis applications. The arms are considered the
most challenging body parts to detect, since their pose and speed
vary across image sequences. Moreover, the arms are usually occluded
by other body parts such as the head and torso. In this paper,
histogram-based skin colour segmentation is proposed to detect the
arms in image sequences. Six different colour spaces namely RGB,
rgb, HSI, TSL, SCT and CIELAB are evaluated to determine the best
colour space for this segmentation procedure. The evaluation is
divided into three categories, which are single colour component,
colour without luminance and colour with luminance. The
performance is measured using True Positive (TP) and True Negative
(TN) on 250 images with manual ground truth. The best colour is
selected based on the highest TN value followed by the highest TP
value.
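The core of histogram-based skin segmentation is a lookup table of skin-color frequencies. The sketch below uses normalized rg (a luminance-free space, akin to the "rgb" entry above); the bin count, threshold and synthetic pixels are illustrative assumptions, not the paper's settings.

```python
def rg(pixel):
    """Map an (R, G, B) pixel to luminance-normalized (r, g)."""
    r, g, b = pixel
    s = (r + g + b) or 1
    return r / s, g / s

def build_histogram(skin_pixels, bins=32):
    """Normalized 2-D histogram of skin training pixels in rg space."""
    hist = {}
    for p in skin_pixels:
        r, g = rg(p)
        key = (int(r * (bins - 1)), int(g * (bins - 1)))
        hist[key] = hist.get(key, 0) + 1
    total = len(skin_pixels)
    return {k: v / total for k, v in hist.items()}, bins

def is_skin(pixel, hist, bins, threshold=0.01):
    """Classify a pixel as skin if its rg bin is frequent enough."""
    r, g = rg(pixel)
    key = (int(r * (bins - 1)), int(g * (bins - 1)))
    return hist.get(key, 0.0) >= threshold
```

Sweeping this classifier over the 250 ground-truth images and counting TP/TN per color space is then what the evaluation in the abstract amounts to.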
Abstract: The dynamics of User Datagram Protocol (UDP) traffic
over Ethernet between two computers are analyzed using nonlinear
dynamics, which shows that there are two clear regimes in the data
flow: free flow and saturated. The two most important variables
affecting this are the packet size and packet flow rate. However,
this transition is due to a transcritical bifurcation rather than a phase
transition, as in models of vehicle traffic or theorized large-scale
computer network congestion. It is hoped this model will help lay
the groundwork for further research on the dynamics of networks,
especially computer networks.
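For reference, the transcritical bifurcation has the normal form

```latex
\frac{dx}{dt} = r x - x^2,
```

whose equilibria x* = 0 and x* = r exchange stability as r crosses zero. Reading x as a throughput-like variable and r as a control parameter combining packet size and flow rate (a mapping assumed here, not derived in the abstract), this gives a smooth exchange between the free-flow and saturated regimes rather than a sharp phase transition.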
Abstract: High-speed networks provide real-time variable bit rate
service with diversified traffic flow characteristics and quality
requirements. Variable bit rate traffic has stringent delay and
packet loss requirements, and the burstiness of the correlated traffic
makes dynamic buffer management highly desirable for satisfying the
Quality of Service (QoS) requirements. This paper presents an
algorithm for optimizing an adaptive buffer allocation scheme based
on the loss of consecutive packets in the data stream and on the buffer
occupancy level. The buffer is designed to allow the input traffic to be
partitioned into different priority classes, and it controls the
thresholds dynamically based on the input traffic behavior. The
algorithm admits an input packet into the buffer if the occupancy
level is less than the threshold value for that packet's priority. The
thresholds are varied dynamically at runtime based on the packet loss
behavior. The simulation is run for two priority classes of input
traffic, real-time and non-real-time. The simulation results show that
Adaptive Partial Buffer Sharing (ADPBS) outperforms Static Partial
Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under
the same traffic conditions.
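The admission rule described above can be sketched as follows. The class name, threshold values and the burst-based adaptation step are illustrative assumptions, not the paper's exact ADPBS scheme.

```python
from collections import deque

class PartialBufferSharing:
    def __init__(self, capacity=10, thresholds=(10, 6)):
        self.capacity = capacity
        self.thresholds = list(thresholds)   # per class; class 0 = real-time
        self.queue = deque()
        self.consecutive_loss = [0] * len(thresholds)

    def offer(self, priority):
        """Admit a packet only if occupancy is below its class threshold."""
        if len(self.queue) < self.thresholds[priority]:
            self.queue.append(priority)
            self.consecutive_loss[priority] = 0
            return True
        self.consecutive_loss[priority] += 1
        self._adapt(priority)
        return False

    def _adapt(self, priority, burst=3):
        # After a burst of consecutive losses in one class, raise its
        # threshold (up to capacity) and lower the other class's threshold.
        if self.consecutive_loss[priority] >= burst:
            other = 1 - priority
            if self.thresholds[priority] < self.capacity:
                self.thresholds[priority] += 1
            if self.thresholds[other] > 1:
                self.thresholds[other] -= 1
            self.consecutive_loss[priority] = 0

    def serve(self):
        return self.queue.popleft() if self.queue else None
```

With static thresholds this reduces to SPBS; the runtime threshold adaptation driven by consecutive-loss counts is what makes the scheme adaptive.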
Abstract: The increased number of automobiles in recent years
has resulted in great demand for fossil fuel, which has led to the
development of automobiles running on alternative fuels, including
gaseous fuels, biofuels and vegetable oils. Energy from
biomass, and more specifically biodiesel, is one of the opportunities
that could cover future shortages of fossil fuel. Biomass in the
form of cashew nut shell represents a new and abundant energy
source in India. The biofuel derived from cashew nut shell oil and
its blends with diesel are promising alternative fuels for diesel
engines. In this work, pyrolysed Cashew Nut Shell Liquid
(CNSL)-Diesel Blends (CDB) were used to run a Direct Injection
(DI) diesel engine. The experiments were conducted with various
blends of CNSL and diesel, namely B20, B40, B60, B80 and B100,
and the results were compared with neat diesel operation. The brake
thermal efficiency decreased for all blends of CNSL and diesel
except the lowest blend, B20, whose brake thermal efficiency is
close to that of diesel fuel. The emission levels of all CNSL-diesel
blends were higher than those of neat diesel. The higher viscosity
and lower volatility of CNSL lead to poor mixture formation and
hence lower brake thermal efficiency and higher emission levels.
The higher emission levels can be reduced by adding suitable
additives and oxygenates to the CNSL-diesel blends.
Abstract: A model predictive controller based on recursive learning is proposed. In this SISO adaptive controller, a model is automatically updated using simple recursive equations. The identified models are then stored in memory to be re-used in the future. The decision to update the model is based on a new control performance index. The new controller allows simple linear model predictive controllers to be used in the control of nonlinear, time-varying processes.
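The recursive model-update part can be sketched with standard recursive least squares on a first-order ARX model; this is an assumed, generic instance of "simple recursive equations", not the paper's exact update law or performance index.

```python
import math

def rls_identify(us, ys, lam=0.99):
    """Recursively estimate (a, b) in y[k] = a*y[k-1] + b*u[k-1]."""
    theta = [0.0, 0.0]
    P = [[1000.0, 0.0], [0.0, 1000.0]]     # covariance, initialized large
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k - 1]]       # regressor
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain
        err = ys[k] - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # P <- (P - K * phi' * P) / lam   (forgetting factor lam)
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

# Identify a known plant y[k] = 0.8*y[k-1] + 0.5*u[k-1] from sinusoidal input.
us = [math.sin(0.7 * k) for k in range(100)]
ys = [0.0]
for k in range(1, 100):
    ys.append(0.8 * ys[-1] + 0.5 * us[k - 1])
a_hat, b_hat = rls_identify(us, ys)
```

Each identified (a, b) pair could then be cached and re-used whenever the performance index signals that the stored model again fits the current operating regime.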
Abstract: A saturated liquid is heated to boiling in a parallelepipedic boiler. Heat is supplied to the liquid through the horizontal bottom of the boiler, the other walls being adiabatic. During boiling, the liquid evaporates through its free surface, deforming it. This surface subdivides the boiler into two regions: the lower region is occupied by the boiling liquid (broth) and the upper region by the vapor that surmounts it. A two-fluid model is used to describe the dynamics of the broth, its vapor and their interface. In this model, the broth is treated as a monophasic fluid (homogeneous model) and forms, with its vapor, a diphasic pseudo-fluid (two-fluid model). Furthermore, the interface is treated as a mixture zone characterized by a superficial void fraction, denoted α*. The aim of this article is to describe the dynamics of the interface between the boiled fluid and its vapor within the boiler. The resolution of the problem allowed us to show the evolution of the broth and of the liquid level.
Abstract: In this paper, we propose an algorithm to compute
initial cluster centers for K-means clustering. The data in a cell are
partitioned using a cutting plane that divides the cell into two smaller
cells. The plane is perpendicular to the data axis with the highest
variance and is designed to reduce the sum of squared errors of the
two cells as much as possible, while at the same time keeping the
two cells as far apart as possible. Cells are partitioned one at a time
until the number of cells equals the predefined number of clusters, K.
The centers of the K cells then become the initial cluster centers for
K-means. The experimental results suggest that the proposed
algorithm is effective, converging to better clustering results than
those of the random initialization method. The research also indicated
that the proposed algorithm greatly improves the likelihood that
every cluster contains some data.
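The cutting-plane initialization can be sketched as below. This is a simplified reading: each split is made at the cell mean along the highest-variance axis, and the cell with the largest sum of squared errors is split first; the paper's exact plane-placement criterion is not reproduced.

```python
def mean(cell):
    d = len(cell[0])
    return [sum(p[i] for p in cell) / len(cell) for i in range(d)]

def sse(cell):
    """Sum of squared errors of a cell about its mean."""
    m = mean(cell)
    return sum(sum((p[i] - m[i]) ** 2 for i in range(len(m))) for p in cell)

def initial_centers(points, k):
    cells = [list(points)]
    while len(cells) < k:
        cell = max(cells, key=sse)       # split the most spread-out cell
        m = mean(cell)
        # cutting plane is perpendicular to the highest-variance axis
        axis = max(range(len(m)),
                   key=lambda i: sum((p[i] - m[i]) ** 2 for p in cell))
        left = [p for p in cell if p[axis] <= m[axis]]
        right = [p for p in cell if p[axis] > m[axis]]
        if not left or not right:        # degenerate cell: stop splitting
            break
        cells.remove(cell)
        cells += [left, right]
    return [mean(c) for c in cells]      # cell means seed K-means
```

On well-separated data the resulting seeds land one per cluster, which is exactly the "every cluster contains some data" property the experiments report.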