Abstract: Combining single simulations into an aggregated master simulation in an intuitive way is not trivial. Numerous problems cause specific difficulties during the modeling and execution of such a simulation. In this paper we identify these problems and aim to solve them by mapping the task to the field of multi-agent systems. The solution is a new meta-model named AGENTMAP, which mitigates most of the problems while supporting intuitive modeling. This meta-model is introduced and explained on the basis of an example from the e-commerce domain.
Abstract: Existing work in temporal logic on representing the execution of infinitely many transactions uses linear-time temporal logic (LTL) and models only two-step transactions. In this paper, we use the comparatively efficient branching-time computation tree logic CTL and extend the transaction model to a class of multi-step transactions by introducing distinguished propositional variables to represent the read and write steps of n multi-step transactions accessing m data items infinitely many times. We prove that the well-known correspondence between acyclicity of conflict graphs and serializability for finite schedules extends to infinite schedules. Furthermore, in the case of transactions accessing the same set of data items in (possibly) different orders, serializability corresponds to the absence of cycles of length two. This result is used to give an efficient encoding of the serializability condition into CTL.
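For readers less familiar with the conflict-graph criterion, here is a minimal Python sketch of the classical finite-schedule test only (not the paper's CTL encoding): a schedule is conflict-serializable iff the graph built from its conflicting operation pairs is acyclic.

```python
from itertools import combinations

# Operations are (txn, op, item) triples with op in {'r', 'w'}; two
# operations conflict when they come from different transactions,
# touch the same item, and at least one of them is a write.
def conflict_graph(schedule):
    edges = set()
    for (t1, op1, x1), (t2, op2, x2) in combinations(schedule, 2):
        if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
            edges.add((t1, t2))        # t1's operation precedes t2's
    return edges

def is_serializable(schedule):
    edges = conflict_graph(schedule)
    nodes = {t for e in edges for t in e}
    while nodes:                        # repeatedly strip sink transactions
        sinks = {n for n in nodes if all(u != n for u, _ in edges)}
        if not sinks:
            return False                # what remains must contain a cycle
        nodes -= sinks
        edges = {(u, v) for u, v in edges if v not in sinks}
    return True

# r1(x) w2(x) w1(x): edges (1,2) and (2,1) form the length-two cycle
# discussed in the abstract, so the schedule is not serializable.
print(is_serializable([(1, 'r', 'x'), (2, 'w', 'x'), (1, 'w', 'x')]))  # False
```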
Abstract: This paper proposes, implements and evaluates an original discretization method for continuous random variables, in order to estimate the reliability of systems for which stress and strength are defined as complex functions and whose reliability cannot be derived through analytic techniques. The method is compared with two other discretization approaches from the literature in a comparative study involving four engineering applications. The results show that the proposal is very efficient in terms of the closeness of the estimates to the true (simulated) reliability. In the study we analyzed both a normal and a non-normal distribution for the random variables; the method is, in principle, suitable for any parametric family.
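As background, a minimal sketch of how any such discretization turns the stress-strength reliability R = P(Strength > Stress) into a finite double sum; the equal-probability scheme and the normal distributions below are illustrative assumptions, not the paper's proposed method.

```python
import numpy as np
from scipy import stats

def discretize(dist, n):
    """n equal-probability points: the quantiles of the cell midpoints."""
    probs = (np.arange(n) + 0.5) / n
    return dist.ppf(probs), np.full(n, 1.0 / n)

stress = stats.norm(100.0, 10.0)       # assumed stress distribution
strength = stats.norm(130.0, 15.0)     # assumed strength distribution
xs, ps = discretize(stress, 50)
ys, qs = discretize(strength, 50)

# Reliability as a double sum over the two discretized supports
R = sum(p * q for x, p in zip(xs, ps) for y, q in zip(ys, qs) if y > x)
print(R)   # close to the exact normal-normal value, about 0.952
```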
Abstract: To date, theoretical studies concerning the Carbon Fiber Reinforced Polymer (CFRP) strengthening of RC beams with openings have been rather limited. In addition, the various numerical analyses presented so far have effectively simulated the behaviour of solid beams strengthened with FRP materials. In this paper, a two-dimensional nonlinear finite element analysis is presented and validated against laboratory test results for six RC beams. All beams had the same rectangular cross-section geometry and were loaded under four-point bending. The crack patterns of the finite element models show good agreement with those of the experimental beams. The load-midspan deflection curves of the finite element models were stiffer than those of the experimental beams; a possible reason is the perfect bond assumed between the concrete and the steel reinforcement.
Abstract: In this paper we study a food chain model with three trophic levels and a Michaelis-Menten type ratio-dependent functional response. A distinctive feature of this model is the sensitive dependence of its dynamical behavior on the initial populations and on the real-world parameters. The stability of the equilibrium points is also investigated.
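One standard formulation of such a three-trophic-level chain with Michaelis-Menten (ratio-dependent) functional responses, given here for orientation (the paper's exact parameterization may differ), is

```latex
\begin{aligned}
\dot{x} &= x\,(a - b x) - \frac{c\, x y}{m_1 x + y},\\
\dot{y} &= -d_1\, y + \frac{e_1\, x y}{m_1 x + y} - \frac{f\, y z}{m_2 y + z},\\
\dot{z} &= -d_2\, z + \frac{e_2\, y z}{m_2 y + z},
\end{aligned}
```

where x, y and z are the prey, predator and top-predator densities. Because each predation term depends on the prey-to-predator ratio, the vector field is not smooth at the origin, which is one known source of the sensitive dependence on initial populations noted above.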
Abstract: A numerical analysis of wave and hydrodynamic models is used to investigate the influence of WAves and Storm Surge (WASS) in the regional and coastal zones. The analyzed numerical system consists of the WAve Model Cycle 4 (WAMC4) and the Princeton Ocean Model (POM), which are used to solve the energy balance and primitive equations, respectively. The results of both models showed that incorporating surface waves in the regional zone affected the storm surge in the coastal zone. Specifically, the results indicated that, without wave coupling, the system generally under-predicts not only the peak surge but also the coastal water level drop, both of which can have a substantial impact on the coastal environment. Accounting for the effect of the wave-induced surface stress on the storm surge can therefore significantly improve storm surge prediction. Finally, calibrating the wave module to the minimum error of the significant wave height (Hs) does not necessarily yield the optimum wave module within the coupled WASS system for WASS prediction.
Abstract: Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across various engineering fields, notably statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM, in terms of the bit error rate (BER) metric, are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical maximum likelihood detection of binary signals using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially at very low signal-to-noise ratios (SNR). At large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial for IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
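A minimal sketch of the comparison in spirit (AWGN-only OOK with an assumed per-sample amplitude scaling; the paper's Ricean and scattering channels are not modeled here):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bits, sps, snr_db = 4000, 8, 0.0
bits = rng.integers(0, 2, n_bits)
amp = np.sqrt(2 * 10 ** (snr_db / 10.0))      # rough, illustrative scaling
X = bits[:, None] * amp + rng.normal(size=(n_bits, sps))  # samples per bit

train, test = slice(0, 1000), slice(1000, None)   # pilot bits for training
svm = SVC(kernel='rbf').fit(X[train], bits[train])
ber_svm = np.mean(svm.predict(X[test]) != bits[test])

# Classical OOK rule: matched filter (sum over the bit) plus midpoint threshold
mf = X[test].sum(axis=1)
ber_mf = np.mean((mf > sps * amp / 2) != bits[test])
print(f"BER  SVM: {ber_svm:.4f}   matched filter: {ber_mf:.4f}")
```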
Abstract: This paper presents a formant-tracking linear prediction (FTLP) model for speech processing in noise. The main focus of this work is the detection of formant trajectories based on Hidden Markov Models (HMM), for improved formant estimation in noise. The approach proposed in this paper provides a systematic framework for modelling and using a time sequence of spectral peaks that satisfies continuity constraints on the parameters; the peaks themselves are modelled by the LP parameters. The estimation of the formant-tracking LP model is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on formants; (2) an estimation stage where an initial estimate of the LP model of speech for each frame is obtained; (3) formant classification using probability models of formants and Viterbi decoders. The evaluation results for the estimation of the formant-tracking LP model, tested against a Gaussian white noise background, demonstrate that the proposed combination of the initial noise reduction stage with formant tracking and variable-order LPC analysis results in a significant reduction in errors and distortions. The performance was evaluated with noisy natural vowels extracted from French and English vocabulary speech signals at an SNR of 10 dB. In each case, the estimated formants are compared to reference formants.
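As background for stage (2), a minimal sketch (standard autocorrelation LPC, not the paper's full HMM/Viterbi tracker) of how per-frame formant candidates are obtained from the roots of the LP polynomial; stage (3) would then select continuity-consistent tracks among these candidates:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formant_candidates(frame, fs, order=12):
    """Autocorrelation-method LPC for one frame; formant candidates are
    the pole angles of the prediction polynomial, converted to Hz."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # Yule-Walker
    roots = np.roots(np.concatenate(([1.0], -a)))               # A(z) = 0
    roots = roots[np.imag(roots) > 0]        # keep one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return np.sort(freqs[freqs > 50.0])      # discard near-DC poles
```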
Abstract: The city of Bangalore is facing an acute problem of atmospheric pollution due to the heavy increase in traffic and developmental activities in recent years. The present study is an attempt to assess the trend of the ambient air quality status at three stations, viz. AMCO Batteries Factory (Mysore Road), Graphite India Factory (KHB Industrial Area, Whitefield) and Ananda Rao Circle (Gandhinagar), with respect to some of the major criteria pollutants, namely suspended particulate matter (SPM), oxides of nitrogen (NOx), and oxides of sulphur (SO2). The sites are representative of the various kinds of growth prevailing in Bangalore, viz. commercial, residential and industrial, all of which contribute to air pollution. The concentration of sulphur dioxide (SO2) at all locations showed a falling trend, attributable to the use of refined petrol and diesel in recent years. The concentration of oxides of nitrogen (NOx) showed an increasing trend but remained within the permissible limits. The concentration of suspended particulate matter (SPM) showed a mixed trend. The correlation between modelled and observed values is found to vary from 0.4 to 0.7 for SO2, 0.45 to 0.65 for NOx and 0.4 to 0.6 for SPM. About 80% of the data is observed to fall within an error band of ±50%. Forecast tests for the best-fit models showed the same trend as the actual values in most cases. However, the deviation observed in a few cases could be attributed to changes in the quality of petroleum products, increases in the volume of traffic, the introduction of LPG as a fuel in many types of automobiles, the poor condition of roads, prevailing meteorological conditions, etc.
Abstract: Complexity, as a theoretical background, has made it easier to understand and explain the features and dynamic behavior of various complex systems. As this common theoretical background has confirmed, borrowing design terminology from the natural sciences has helped to control and understand urban complexity. Phenomena like self-organization, evolution and adaptation are appropriate for describing the formerly inaccessible characteristics of complex environments as unpredictable bottom-up systems. Increased computing capacity has been a key element in capturing the chaotic nature of these systems.
A paradigm shift in urban planning and architectural design has forced us to give up the illusion of total control over the urban environment, and consequently to seek novel methods for steering its development. New methods using dynamic modeling have offered a real option for a more thorough understanding of complexity and urban processes. At best, the new approaches may renew design processes so that we get a better grip on the complex world via more flexible processes, support urban environmental diversity, and respond to our needs beyond basic welfare by liberating ourselves from standardized minimalism.
A complex system and its features are as such beyond human ethics. Self-organization and evolution are neither good nor bad; their mechanisms are by nature devoid of reason. They are common in urban dynamics, in natural and man-made processes alike. They are features of a complex system, and they cannot be prevented; yet their dynamics can be studied and supported.
The paradigm of complexity and the new design approaches have been criticized for a lack of humanity and morality, but the ethical implications of scientific or computational design processes have not been much discussed. It is important to distinguish the (unexciting) ethics of the theory and tools from the ethics of computer-aided processes based on ethical decisions. Urban planning and architecture cannot be based on the survival of the fittest; however, the natural dynamics of the system cannot be impeded on the grounds of being "non-human".
In this paper, the ethical challenges of using dynamic models are contemplated in the light of a few examples of new architecture, dynamic urban models, and the literature. It is suggested that the ethical challenges in computational design processes could be reframed under the concepts of responsibility and transparency.
Abstract: The applicability of tuning the controller gains of a Stewart manipulator using a genetic algorithm as an efficient search technique is investigated. Kinematic and dynamic models are introduced in detail for simulation purposes. A PD task-space control scheme is used. To demonstrate the feasibility of the technique, a numerical model of a Stewart manipulator was built. A genetic algorithm was then employed to search for the optimal controller gains. The controller was tested on a generic circular mission. The simulation results show that the technique converges rapidly and yields superior performance for different payloads.
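A minimal sketch of the GA-over-PD-gains idea (a hypothetical one-axis double-integrator plant stands in for the Stewart dynamics, and the selection/crossover/mutation scheme is illustrative, not the paper's):

```python
import numpy as np

# Fitness = integrated absolute tracking error on a circular reference,
# simulated for a unit-mass double integrator under PD control.
def track_error(kp, kd, dt=0.002, T=3.0):
    t = np.arange(0.0, T, dt)
    ref = np.sin(2 * np.pi * 0.5 * t)      # one axis of a circular path
    x = v = err = 0.0
    for r in ref:
        e = r - x
        u = kp * e - kd * v                # PD law in task space
        v += u * dt                        # unit mass: acceleration = u
        x += v * dt
        err += abs(e) * dt
    return err

rng = np.random.default_rng(0)
lo, hi = np.array([1.0, 0.1]), np.array([500.0, 50.0])   # gain bounds
pop = rng.uniform(lo, hi, size=(30, 2))                  # (kp, kd) pairs
for gen in range(40):
    fit = np.array([track_error(kp, kd) for kp, kd in pop])
    elite = pop[np.argsort(fit)[:10]]                    # selection
    kids = elite[rng.integers(0, 10, (20, 2)), [0, 1]]   # uniform crossover
    kids = np.clip(kids + rng.normal(0, [5.0, 0.5], kids.shape), lo, hi)
    pop = np.vstack([elite, kids])                       # mutation applied above
print("best (kp, kd):", pop[0])           # fittest member of the last selection
```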
Abstract: The multi-agent system approach has proven to be an effective and appropriate abstraction level for constructing whole models of a diversity of biological problems, integrating aspects found in both "micro" and "macro" approaches to modeling this type of phenomenon. Taking these considerations into account, this paper presents the important computational characteristics to be gathered into a novel bioinformatics framework built upon a multi-agent architecture. The version of the tool presented herein allows studying and exploring complex problems belonging principally to structural biology, such as protein folding. The bioinformatics framework is used as a virtual laboratory to explore a minimalist model of protein folding as a test case. To demonstrate the laboratory concept of the platform as well as its flexibility and adaptability, we studied the folding of two particular sequences, one 45-mer and one 64-mer, both described by an HP model (only hydrophobic and polar residues) on a coarse-grained 2D square lattice. As discussed in the paper, these two sequences were chosen as stress cases for the platform, in order to determine which tools had to be created or improved to meet the needs of computing and analyzing a given hard sequence. The underlying philosophy is that the continuous study of sequences itself provides important elements to be added to the platform, improving its efficiency over time, as demonstrated herein.
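For orientation, the HP model's energy function is simple enough to state in a few lines; this sketch (the standard HP energy on a 2D square lattice, not the platform's code) counts topological H-H contacts:

```python
# Every pair of non-consecutive H residues occupying adjacent lattice
# sites contributes -1 to the energy of a fold.
def hp_energy(seq, coords):
    """seq: string over {'H','P'}; coords: list of (x, y) lattice sites."""
    occupied = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != 'H':
            continue
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = occupied.get(nb)
            if j is not None and j > i + 1 and seq[j] == 'H':
                energy -= 1     # topological H-H contact, counted once
    return energy

# Example: a 4-mer folded into a square has one H-H contact
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -1
```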
Abstract: Mathematical, graphical and intuitive models are often
constructed in the development process of computational systems.
The Unified Modeling Language (UML) is one of the most popular
modeling languages used by practicing software engineers. This
paper critically examines UML models and suggests an augmented
use case view with the addition of new constructs for modeling
software. It also shows how a use case diagram can be enhanced. The
improved modeling constructs are presented with examples for
clarifying important design and implementation issues.
Abstract: This study aims to demonstrate the quantification of peptides based on isotope-dilution surface-enhanced Raman scattering (IDSERS). SERS spectra of phenylalanine (Phe), leucine (Leu) and two peptide sequences, TGQIFK (T13) and YSFLQNPQTSLCFSESIPTPSNR (T6), both part of the 22-kDa human growth hormone (hGH), were obtained on Ag-nanoparticle-covered substrates. On the basis of the dominant Phe and Leu vibrational modes, precise partial least squares (PLS) prediction models were built, enabling the determination of unknown T13 and T6 concentrations. Detection of hGH at its physiological concentration was also achieved, in order to investigate the possibility of protein quantification.
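As a pointer to the chemometric workflow involved, a minimal PLS calibration sketch (synthetic stand-in data, not the SERS spectra of this study):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Rows = calibration spectra, y = known concentrations of the standards.
rng = np.random.default_rng(1)
X_cal = rng.normal(size=(20, 500))      # 20 spectra, 500 wavenumber channels
y_cal = rng.uniform(0.1, 10.0, 20)      # concentrations (arbitrary units)

pls = PLSRegression(n_components=5)     # latent variables chosen by validation
pls.fit(X_cal, y_cal)
y_hat = pls.predict(X_cal[:3])          # predict unknowns from their spectra
```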
Abstract: The need to update the inputs of numerical models, because of the geometrical and resistive variations in rivers subject to sediment transport phenomena, requires detailed control and monitoring activities. The human and financial resources that these activities demand push research towards the development of expeditious methodologies, able to evaluate discharges through the measurement of more easily acquired quantities. Recent studies have highlighted the dependence of the entropy parameter on the kinematic and geometric flow conditions, showing a meaningful variability according to the section shape, size and slope. Such dependences, even if not yet well defined, could reduce the difficulties of the field activities as well as the data processing time. On the basis of this evidence, the relationships between the entropy parameter and the geometric and resistive quantities, obtained through a large and detailed laboratory campaign on steady free-surface flows under homogeneous macro- and intermediate-roughness conditions, are analyzed and discussed.
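For context, the entropy parameter referred to here is, in the classical entropy-based velocity framework (Chiu's model), the parameter M that links the cross-sectional mean velocity to the maximum velocity:

```latex
\frac{\bar{u}}{u_{\max}} = \frac{e^{M}}{e^{M} - 1} - \frac{1}{M}
```

An estimate of M for a reach therefore allows the discharge to be inferred from a single maximum-velocity measurement, which is what makes entropy-based methods expeditious in the field.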
Abstract: The aim of this paper is to present a three-step methodology for forecasting supply chain demand. In the first step, various data mining techniques are applied in order to prepare the data for the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented, after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by sensitivity analysis. The best forecast among the classical forecasting methods (Moving Average, Exponential Smoothing, and Exponential Smoothing with Trend) is obtained from the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network forecasts more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using the raw data, and the effectiveness of the clustering analysis is also measured.
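A minimal sketch of two of the ingredients named above, under no assumptions beyond their textbook definitions: the MAPE index and one-step-ahead simple exponential smoothing.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, used here for model comparison."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def exp_smoothing(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts, aligned
    so that the t-th forecast targets series[t]."""
    f = [series[0]]
    for y in series[:-1]:
        f.append(alpha * y + (1 - alpha) * f[-1])
    return np.array(f)

demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
f = exp_smoothing(demand)
print(f"MAPE: {mape(demand[1:], f[1:]):.2f}%")
```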
Abstract: Due to the increasing and varying risks that economic units face, derivative instruments have gained substantial importance, and trading volumes of derivatives have reached very significant levels. In parallel with these high trading volumes, researchers have developed many different models, some parametric, some nonparametric. In this study, the aim is to analyse the success of artificial neural networks in pricing options, using S&P 100 index options data. Previous studies generally cover data on European-type call options; this study includes not only European call options but also American call and put options and European put options. Three data sets are used to build three different ANN models. One includes only data directly observable in the market, i.e. strike price, spot price, interest rate, maturity, and type of contract. The others include an extra input that is not observable but is a parameter, i.e. volatility. With these detailed data, the performance of the ANN along the put/call, American/European and moneyness dimensions is analyzed, and it is examined whether adding volatility as an input improves the prediction performance of the neural network. The most striking results revealed by the study are that the ANN performs better when pricing call options than put options, and that using the volatility parameter as an input does not improve the performance.
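A minimal sketch of the input layout described above (synthetic stand-in data and an assumed network size, not the study's S&P 100 data set):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(80, 120, n),      # strike price
    rng.uniform(80, 120, n),      # spot price
    rng.uniform(0.01, 0.08, n),   # interest rate
    rng.uniform(0.05, 1.0, n),    # maturity (years)
    rng.integers(0, 2, n),        # contract type: 0 = call, 1 = put
    rng.uniform(0.1, 0.5, n),     # volatility (the extra parameter input)
])
y = np.maximum(X[:, 1] - X[:, 0], 0) + rng.normal(0, 0.5, n)  # stand-in prices

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X[:1500], y[:1500])
print("held-out R^2:", net.score(X[1500:], y[1500:]))
```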
Abstract: Synthesis gas manufacturing by steam reforming of hydrocarbons is an important industrial process. The highly endothermic nature of the process makes it one of the most cost- and heat-intensive processes. In the present work, the composite effect of different inert gases on synthesis gas yield, feed gas conversion and the temperature distribution along the reactor length has been studied using a heterogeneous model. A mathematical model was developed as a first stage and validated against existing process models. With the addition of inert gases, a higher yield of synthesis gas is observed, while the reactor outlet temperature drops to as low as 810 K. It was found that xenon gives the highest yield and conversion, while helium gives the lowest temperature. With xenon as the inert gas, a 20 percent reduction in outlet temperature was observed compared to the traditional case.
Abstract: This paper presents two simplified models to determine nodal voltages in power distribution networks. These models allow estimating the impact of installing reactive power compensation equipment such as fixed or switched capacitor banks. The procedure used to develop the models is similar to that used to develop the linear power flow models of transmission lines, which have been widely used in optimization problems of operation planning and system expansion. The steady-state non-linear load flow equations are approximated by linear equations relating voltage amplitudes and currents. The linear approximations rely on the high ratio of line resistance to line reactance (R/X) that is typical of power distribution networks. The performance and accuracy of the models are evaluated through comparisons with the exact results obtained from the solution of the load flow on two test networks: a hypothetical network with 23 nodes and a real network with 217 nodes.
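A minimal sketch of the kind of linearized relation such models build on (an illustrative textbook approximation, not the paper's exact formulation): for a radial feeder, the drop across a branch is approximated by dV ≈ (R·P + X·Q)/V_nom, and nodal voltages follow by accumulating drops along the feeder.

```python
import numpy as np

def nodal_voltages(v_nom, r, x, p, q):
    """r, x: branch resistance/reactance (ohm); p, q: active/reactive
    power flowing through each branch (W, var), feeder head to tail."""
    drops = (r * p + x * q) / v_nom       # linearized per-branch voltage drop
    return v_nom - np.cumsum(drops)       # voltage at each downstream node

# Two-branch example feeder at 400 V nominal
v = nodal_voltages(400.0,
                   np.array([0.08, 0.08]), np.array([0.04, 0.04]),
                   np.array([20e3, 10e3]), np.array([10e3, 5e3]))
print(v)   # installing capacitors lowers q, visibly raising these voltages
```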
Abstract: The dynamics of User Datagram Protocol (UDP) traffic over Ethernet between two computers are analyzed using nonlinear dynamics, which shows that there are two clear regimes in the data flow: free flow and saturated. The two most important variables affecting this are the packet size and the packet flow rate. However, the transition between regimes is due to a transcritical bifurcation rather than the phase transition found in models of, for example, vehicle traffic or theorized large-scale computer network congestion. It is hoped this model will help lay the groundwork for further research on the dynamics of networks, especially computer networks.
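For readers less familiar with the term, the canonical normal form of a transcritical bifurcation (a standard fact of bifurcation theory, not a result of this paper) is

```latex
\dot{x} = r x - x^{2},
```

whose fixed points x = 0 and x = r exchange stability as the parameter r crosses zero. In the setting above, a control parameter built from packet size and flow rate would play the role of r, so the free-flow and saturated states exchange stability continuously rather than through a phase transition.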