Abstract: A key aspect of the design of any software system is
its architecture. An architecture description provides a formal model
of the architecture in terms of components and connectors and how
they are composed together. COSA (Component-Object based
Software Structures) is based on object-oriented modeling and
component-based modeling. The model improves reusability by
increasing the extensibility, evolvability, and compositionality of
software systems. This paper presents the COSA modelling tool,
which gives architects the ability to verify the structural coherence
of a given system and to validate its semantics under the COSA approach.
Abstract: The paper focuses on the area of context modeling with respect to the specification of context-aware systems supporting ubiquitous applications. The proposed approach, followed within the SIMPLICITY IST project, uses a high-level system ontology to derive context models for system components which consequently are mapped to the system's physical entities. For the definition of user and device-related context models in particular, the paper suggests a standard-based process consisting of an analysis phase using the Common Information Model (CIM) methodology followed by an implementation phase that defines 3GPP based components. The benefits of this approach are further depicted by preliminary examples of XML grammars defining profiles and components, component instances, coupled with descriptions of respective ubiquitous applications.
Abstract: When binary decision diagrams are formed from
uniformly distributed Monte Carlo data for a large number of
variables, the complexity of the decision diagrams exhibits a
predictable relationship to the number of variables and minterms. In
the present work, a neural network model has been used to analyze the
pattern of shortest path length for larger numbers of Monte Carlo data
points. The neural model shows a strong descriptive power for the
ISCAS benchmark data with an RMS error of 0.102 for the shortest
path length complexity. Therefore, the model can be considered a
method of predicting path-length complexities; this is expected to
reduce the time complexity of very large-scale integrated circuits
and of related computer-aided design tools that use binary decision
diagrams.
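The predictive relationship described above can be illustrated with a toy regression. The sketch below uses synthetic data and an ordinary least-squares fit as a stand-in for the paper's neural network; the functional form (linear in the variable count and the log of the minterm count) and all coefficients are assumptions for illustration, not the relationship reported for the ISCAS benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (variables, minterms) -> shortest-path-length data; the linear
# dependence on n_vars and log2(minterms) is purely an illustrative assumption.
n_vars = rng.integers(8, 64, 200)
minterms = rng.integers(16, 1 << 20, 200)
y = 2.0 + 0.5 * n_vars + 1.5 * np.log2(minterms) + rng.normal(0, 0.1, 200)

# Least-squares fit standing in for the trained neural model.
X = np.column_stack([np.ones(200), n_vars, np.log2(minterms)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rms = np.sqrt(np.mean((X @ coef - y) ** 2))
print(f"RMS error of fitted model: {rms:.3f}")
```

With a predictable relationship of this kind, any regressor (linear or neural) recovers the coefficients to within the noise level, which is what makes an RMS-error figure a meaningful summary of model quality.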
Abstract: This paper discusses the causal explanation capability
of QRIOM, a tool aimed at supporting learning of organic chemistry
reactions. The development of the tool is based on the hybrid use of
Qualitative Reasoning (QR) technique and Qualitative Process
Theory (QPT) ontology. Our simulation combines symbolic,
qualitative description of relations with quantity analysis to generate
causal graphs. The pedagogy embedded in the simulator is to both
simulate and explain organic reactions. Qualitative reasoning through
a causal chain will be presented to explain the overall changes made
to the substrate, from the initial substrate to the production of the
final outputs. Several uses of the QPT modeling constructs in supporting
behavioral and causal explanation during run-time will also be
demonstrated. Explaining organic reactions through causal graph
trace can help improve the reasoning ability of learners in that their
conceptual understanding of the subject is nurtured.
Abstract: This paper presents an interactive modeling system of
uniform polyhedra using isomorphic graphs. In particular, the
Kepler-Poinsot solids are formed by modifications of the dodecahedron
and the icosahedron.
Abstract: Various models have been derived by studying large numbers of completed software projects from various organizations and applications to explore how project size maps into project effort. However, the prediction accuracy of these models still needs improvement. Since a neuro-fuzzy system is able to approximate non-linear functions with high precision, it is used here as a soft computing approach to generate a model by formulating the relationship based on its training. In this paper, the neuro-fuzzy technique is used for software estimation modeling of NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models mentioned in the literature.
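The comparison baselines named in this abstract have simple closed forms. The sketch below uses the coefficient values commonly quoted in the effort-estimation literature (effort in person-months, size in KLOC); it does not reproduce the neuro-fuzzy model itself.

```python
# Classical software-effort models; coefficients are the values commonly
# cited in the literature (effort in person-months, size in KLOC).

def halstead(kloc):
    return 0.7 * kloc ** 1.50

def walston_felix(kloc):
    return 5.2 * kloc ** 0.91

def bailey_basili(kloc):
    return 5.5 + 0.73 * kloc ** 1.16

def doty(kloc):
    # Doty model variant usually quoted for projects larger than 9 KLOC.
    return 5.288 * kloc ** 1.047

for model in (halstead, walston_felix, bailey_basili, doty):
    print(f"{model.__name__:>14}: {model(10.0):6.1f} person-months for 10 KLOC")
```

Evaluating all four at the same project size makes the spread between the models visible, which is exactly the gap a data-driven estimator is trained to close.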
Abstract: To achieve accurate and precise results of finite
element analysis (FEA) of bones, it is important to represent the
load/boundary conditions as faithfully as possible to those in the human
body, including the bone properties, the type and force of the muscles,
the contact forces of the joints, and the locations of the muscle attachments.
In this study, the Von-Mises stress and the total deformation were
compared between Case 1, which represents the actual anatomical form
of the muscle attachment on the femur, and Case 2, which uses a
simplified representation of the attachment location, under the same
applied muscle force. An inverse
dynamical musculoskeletal model was simulated using data from an
actual walking experiment to complement the accuracy of the
muscular force, the input value of FEA. The FEA method using the
results of the muscular force that were calculated through the
simulation showed that the maximum Von-Mises stress and the
maximum total deformation in Case 2 were underestimated by 8.42%
and 6.29%, respectively, compared to Case 1. The torsion energy and
bending moment at each location of the femur arise from the stress
components. Because the femur has the geometrical/morphological
feature of a long bone, the stress distribution is wider in Case 1,
and a greater Von-Mises stress and total deformation are therefore
expected from the sum of the stress components. Accurate results
can be achieved only when the muscular force, the attachment
location, and the attachment form in the FEA of bones are the same
as those in the actual anatomical condition under the various moving
conditions of the human body.
Abstract: Humans perceive color in categories, which may be
identified using color names such as red, blue, etc. The categorization
is unique to each human being. However, despite these individual
differences, the categorization is shared among members of a society.
This allows communication among them, especially when using
color names. A sociable robot, in order to coexist with humans and
become part of human society, must also have this shared color
categorization, which can be achieved through learning. Many
works have been done to enable a computer, as the brain of a robot,
to learn color categorization. Most of them rely on modeling human
color perception and involve considerable mathematical complexity.
In contrast, in this work the computer learns color categorization
through interaction with humans. This work aims at developing the
innate ability of the computer to learn human-like color
categorization. It focuses on the representation of color categorization
and how it is built and developed without much mathematical complexity.
Abstract: This work presents a study of a new, simple compact model
of the dual-drain Magnetic Field Effect Transistor (MAGFET),
including geometrical effects and biasing dependency. An explanation
of the sensitivity is investigated, with carrier deflection identified as
the dominant operating principle. Finally, the model is verified against
simulation results, showing that an acceptable error of 2% is achieved.
Abstract: Ferroresonance is an electrical phenomenon of nonlinear
character which frequently occurs in power systems due to
transmission-line faults, single- or multi-phase switching on the
lines, and the use of saturable transformers. In this study, the
ferroresonance phenomenon is investigated using a model of the
380 kV West Anatolian Electric Power Network in Turkey. The
ferroresonance event is observed as a result of removing the loads at
the end of the lines. In this sense, two different cases are considered.
First, the switching is applied at the 2nd second, and the ferroresonance
effects are observed between the 2nd and 4th seconds in the voltage
variations of phase R. The ferroresonant and non-ferroresonant
parts of the overall data are then compared with each other using
Fourier transform techniques to show the ferroresonance effects.
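The Fourier-transform comparison of the two data segments can be sketched as follows. The signals here are synthetic stand-ins, not the actual 380 kV network data: the ferroresonant segment is approximated by adding odd-harmonic distortion to a clean 50 Hz wave, and the sampling rate is an assumption.

```python
import numpy as np

fs = 5000.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic phase voltage: a clean 50 Hz wave for the normal segment, and the
# same wave with harmonic distortion standing in for the ferroresonant segment.
normal = np.sin(2 * np.pi * 50 * t)
ferro = normal + 0.6 * np.sin(2 * np.pi * 150 * t) + 0.4 * np.sin(2 * np.pi * 250 * t)

def spectrum(x, fs):
    """One-sided magnitude spectrum and its frequency axis."""
    mag = np.abs(np.fft.rfft(x)) / len(x)
    return np.fft.rfftfreq(len(x), 1 / fs), mag

f, mag_n = spectrum(normal, fs)
_, mag_f = spectrum(ferro, fs)

# Harmonic content above the fundamental separates the two segments.
band = f > 100
print("harmonic energy, normal :", mag_n[band].sum())
print("harmonic energy, ferro  :", mag_f[band].sum())
```

Comparing spectral energy above the fundamental is the simplest discriminator; in practice the window boundaries would be chosen to match the switching instants described in the abstract.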
Abstract: Spatial and mobile computing continue to evolve. This paper
describes a smart modeling platform called “GeoSEMA”. The
approach aims to model multidimensional GeoSpatial Evolutionary
and Mobile Agents. Beyond 3D and location-based issues, there
are other dimensions that may characterize spatial agents, e.g.
discrete-continuous time and agent behaviors. GeoSEMA is conceived
as a dedicated design pattern motivating temporal geographic-based
applications; it is a firm foundation for multipurpose and
multidimensional spatial-based applications. It deals with
multipurpose smart objects (buildings, shapes, missiles, etc.) by
simulating geospatial agents.
Formally, GeoSEMA refers to geospatial, spatio-evolutive and
mobile space constituents; a conceptual geospatial space model
is given in this paper. In addition to modeling and categorizing
geospatial agents, the model incorporates the concept of inter-agent
event-based protocols. Finally, a rapid software-architecture
prototype of the GeoSEMA platform is also given; it will be
implemented and validated in the next phase of our work.
Abstract: Many methods exist for either measuring or estimating
evaporation from free water surfaces. Evaporation pans provide one
of the simplest, least expensive, and most widely used methods of
estimating evaporative losses. In this study, the rate of evaporation
from a water surface was calculated by modeling, with
application to dams in wet, arid and semi-arid areas in Algeria.
We calculate the evaporation rate from the pan using the energy
budget equation, which offers the advantage of ease of use, but
our results do not agree completely with the measurements taken by
the National Agency at dams located in areas
of different climates. We therefore develop a mathematical model to
simulate evaporation. This simulation uses an energy budget at the
level of a measuring pan together with a Computational Fluid Dynamics
code (Fluent). The evaporation rates calculated by the two
methods are then compared with the in-situ measurements.
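In its simplest form, the energy-budget estimate of evaporation reduces to a one-line computation once the flux terms are known. The sketch below assumes daily-average fluxes in W/m² and standard physical constants; it is a minimal illustration of the method, not the authors' full model.

```python
LATENT_HEAT = 2.45e6      # latent heat of vaporization, J/kg (approx., ~20 C)
RHO_WATER = 1000.0        # density of water, kg/m^3

def evaporation_mm_per_day(net_radiation, ground_flux, sensible_flux):
    """Energy-budget evaporation: the energy left after ground and sensible
    heat losses goes into vaporization. Fluxes are daily averages in W/m^2."""
    latent_flux = net_radiation - ground_flux - sensible_flux     # W/m^2
    rate_m_per_s = latent_flux / (LATENT_HEAT * RHO_WATER)        # m/s
    return rate_m_per_s * 86400 * 1000                            # mm/day

# Example: 150 W/m^2 net radiation, 10 stored in the water, 40 lost to the air.
print(f"{evaporation_mm_per_day(150.0, 10.0, 40.0):.2f} mm/day")
```

The difficulty the abstract points to lies not in this arithmetic but in obtaining reliable flux terms across different climates, which is where the CFD simulation complements the pan budget.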
Abstract: This article presents the evolution and technological changes implemented in the full-scale simulators developed by the Simulation Department of the Instituto de Investigaciones Eléctricas (Mexican Electric Research Institute) and located at different training centers around Mexican territory. It reviews the latest updates, basically from the input/output point of view, of the current simulators at facilities of the electrical sector and of related industries such as the Comisión Federal de Electricidad (CFE, the Mexican utility company). Tendencies of these developments and their impact within the operators' scope are also presented.
Abstract: In this paper, we first introduce the stable distribution, the stable process and their characteristics. The α-stable distribution family has received great interest in the last decade due to its success in modeling data which are too impulsive to be accommodated by the Gaussian distribution. In the second part, we present major applications of the alpha-stable distribution in telecommunications and computer science, such as network delays and signal processing, and in financial markets. At the end, we focus on using the stable distribution to estimate measures of risk in stock markets, and we present simulated data obtained with statistical software.
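Simulating α-stable data of the kind discussed above is straightforward with the standard Chambers-Mallows-Stuck construction. The sketch below covers only the symmetric case (β = 0, unit scale); the choice of α = 1.5 and the use of an empirical quantile as a crude risk measure are illustrative assumptions.

```python
import numpy as np

def symmetric_stable(alpha, size, rng=None):
    """Draw symmetric alpha-stable samples (beta = 0, unit scale) via the
    Chambers-Mallows-Stuck construction; valid for 0 < alpha <= 2, alpha != 1
    (alpha = 2 reduces exactly to a Gaussian with variance 2)."""
    rng = rng or np.random.default_rng(0)
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

# Heavy-tailed draws (alpha = 1.5), far more impulsive than Gaussian data.
x = symmetric_stable(1.5, 100_000)
# Empirical 1% quantile as a crude Value-at-Risk style tail measure.
print("1% quantile, alpha=1.5:", np.quantile(x, 0.01))
```

Lowering α fattens the tails, which is why the stable family accommodates impulsive network-delay and financial-return data that a Gaussian fit truncates.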
Abstract: Various formal and informal brand alliances are being formed in professional service firms. A professional service corporate brand depends heavily on the brands of the professional employees who comprise it, and professional employee brands are in turn dependent on the corporate brand. Prior work provides limited scientific evidence of brand alliance effects in the professional service area, i.e., how professional service corporate-employee brand allies are affected by an alliance, what the brand attitude effects are after alliance formation, and how these effects vary with different strengths of an ally. Scientific literature analysis and theoretical modeling are the main methods of the current study. As a result, a theoretical model is constructed for estimating spillover effects of professional service corporate-employee brand alliances and for comparison among different professional service firm expertise practice models, from the “brains” model to the “procedure” model. The resulting theoretical model lays the basis for future experimental studies.
Abstract: Computer modeling has played a unique role in
understanding electrocardiography. Modeling and simulating cardiac
action potential propagation is suitable for studying normal and
pathological cardiac activation. This paper presents a 2-D Cellular
Automata model for simulating action potential propagation in
cardiac tissue. We demonstrate a novel algorithm that uses a minimal
neighborhood, based on the summation of the excitability attributes of
excited neighboring cells. We eliminate flat edges in the resulting
patterns by introducing probability into the model. We also preserve
the real shape of the action potential by using linear curve fitting to
a well-known electrophysiological model.
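The propagation mechanism can be sketched with a minimal excitable-media cellular automaton in the Greenberg-Hastings style. This is not the authors' probabilistic algorithm: the grid size, refractory length, and deterministic threshold rule are all illustrative assumptions, though the threshold on summed excited neighbors echoes the summation idea in the abstract.

```python
import numpy as np

REST, EXCITED = 0, 1          # refractory states are 2 .. REFRACTORY + 1
REFRACTORY = 3                # number of refractory steps (assumed)

def step(grid, threshold=1):
    """One synchronous update of an excitable-media cellular automaton."""
    excited = (grid == EXCITED).astype(int)
    # Sum excited cells over the 8-cell Moore neighborhood.
    n = sum(np.roll(np.roll(excited, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    new = grid.copy()
    new[(grid == REST) & (n >= threshold)] = EXCITED        # excitation
    new[grid == EXCITED] = 2                                # enter refractory
    refr = grid >= 2
    new[refr] = np.where(grid[refr] >= REFRACTORY + 1, REST, grid[refr] + 1)
    return new

grid = np.zeros((41, 41), dtype=int)
grid[20, 20] = EXCITED                  # stimulate the center cell
for _ in range(10):
    grid = step(grid)
print("excited cells after 10 steps:", (grid == EXCITED).sum())
```

With a deterministic rule the wavefront is a square ring, i.e. the "flat edges" the abstract mentions; making the excitation rule probabilistic rounds the front toward the physiologically expected circular wave.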
Abstract: The aim of this contribution is to present a new
approach in modeling the electrical activity of the human heart. A
recurrent artificial neural network is used to capture a subset of the
dynamics of the electrical behavior of the human heart.
The proposed model can also be used, when integrated, as a
diagnostic tool of the human heart system.
What makes this approach unique is the fact that every model is
being developed from physiological measurements of an individual.
This kind of approach is very difficult to apply successfully in many
modeling problems, because of the complexity and entropy of the
free variables describing the complex system. Differences between
the modeled variables and the variables of an individual, measured at
specific moments, can be used for diagnostic purposes. Another focus
of this paper is the sensor fusion used to optimize the utilization of
biomedical sensors. Sensor fusion has been
known for its advantages in applications such as control and
diagnostics of mechanical and chemical processes.
Abstract: In this paper, we are principally interested in the dynamic modelling of a quadrotor, taking into account the high-order nonholonomic constraints, in order to develop a new control scheme, as well as the various physical phenomena which can influence the dynamics of a flying structure. These considerations permit us to introduce a new state-space representation. After using the backstepping approach for the synthesis of tracking errors and Lyapunov functions, a sliding mode controller is developed in order to ensure Lyapunov stability, the handling of all system nonlinearities, and the tracking of desired trajectories. Finally, simulation results are provided in order to illustrate the performance of the proposed controller.
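The sliding-mode idea underlying such a controller can be shown on a much simpler plant than the quadrotor's full nonholonomic model. The sketch below treats a single double-integrator axis with a bounded matched disturbance; the gains, disturbance, and reference are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Minimal sliding-mode tracking on a double integrator x'' = u + d(t),
# a stand-in for one translational axis of a flying structure (assumption).
lam, k, dt = 2.0, 5.0, 1e-3        # surface slope, switching gain, step
x, v = 1.0, 0.0                    # initial position and velocity
x_ref = 0.0                        # constant reference to track

for i in range(10_000):
    t = i * dt
    d = 0.5 * np.sin(3 * t)            # bounded matched disturbance
    e, edot = x - x_ref, v
    s = edot + lam * e                 # sliding surface s = e' + lam * e
    u = -lam * edot - k * np.sign(s)   # equivalent term + switching term
    v += (u + d) * dt                  # Euler integration of the plant
    x += v * dt

print(f"tracking error after 10 s: {abs(x - x_ref):.4f}")
```

On the surface s = 0 the error decays as e^(-λt) regardless of the disturbance, provided the switching gain k dominates the disturbance bound; this robustness is what motivates sliding-mode designs for nonlinear flight dynamics.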
Abstract: A reduced order modeling approach for natural
gas transient flow in pipelines is presented. The Euler
equations are considered as the governing equations and
solved numerically using the implicit Steger-Warming flux
vector splitting method. Next, the linearized form of the
equations is derived and the corresponding eigensystem is
obtained. Then, a few dominant flow eigenmodes are used to
construct an efficient reduced-order model. A well-known test
case is presented to demonstrate the accuracy and the
computational efficiency of the proposed method. The results
obtained are in good agreement with those of the direct
numerical method and field data. Moreover, it is shown that
the present reduced-order model is more efficient than the
conventional numerical techniques for transient flow analysis
of natural gas in pipelines.
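The modal-truncation step at the core of such a reduced-order model can be sketched generically. The code below is not the Steger-Warming Euler solver of the abstract: it builds an assumed stable linear system with a few slow eigenmodes, projects it onto those modes, and compares the reduced trajectory to the full one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                        # full order and retained modes (assumed)

# A stable linear system x' = A x with a few slow (dominant) modes and many
# fast-decaying ones, standing in for the linearized flow equations.
slow = -np.array([0.1, 0.2, 0.3])
fast = -rng.uniform(5.0, 10.0, n - k)
V = np.linalg.qr(rng.standard_normal((n, n)))[0]      # orthonormal mode basis
A = V @ np.diag(np.concatenate([slow, fast])) @ V.T   # symmetric by design

# Reduced model: keep the k eigenmodes with the slowest decay.
lam, W = np.linalg.eigh(A)
idx = np.argsort(-lam)[:k]          # largest (least negative) eigenvalues
Phi = W[:, idx]                     # n x k modal basis

x0 = rng.standard_normal(n)
T = 2.0
# Exact propagation via the diagonalized matrix exponential, then lift the
# reduced state back to full dimension.
x_full = W @ (np.exp(lam * T) * (W.T @ x0))
x_red = Phi @ (np.exp(lam[idx] * T) * (Phi.T @ x0))
err = np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full)
print(f"relative error of {k}-mode model at t = {T}: {err:.3e}")
```

Because the discarded modes decay quickly, a handful of dominant eigenmodes reproduces the transient to high accuracy at a fraction of the cost, which is the efficiency argument made for the pipeline model.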
Abstract: Various methods of geofield parameter restoration (algebraic polynomials; filters; rational fractions; interpolation splines; geostatistical methods such as kriging; nearest-point search methods such as inverse distance, minimum curvature, and local polynomial interpolation; neural networks) have been analyzed, and some possible errors arising during geofield surface modeling are presented.
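Of the surveyed methods, inverse distance weighting is the simplest to state. The sketch below is a minimal implementation with an assumed power parameter p = 2 and illustrative sample data, not the authors' analysis setup.

```python
import numpy as np

def idw(points, values, query, p=2.0, eps=1e-12):
    """Inverse distance weighting: each known point contributes with weight
    1 / distance**p; a query that coincides with a data point returns its
    value exactly (IDW is an exact interpolator at the samples)."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                      # exact hit on a sample point
        return float(values[d.argmin()])
    w = 1.0 / d ** p
    return float(np.sum(w * values) / np.sum(w))

# Geofield values sampled at scattered locations (illustrative data).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 20.0, 30.0, 40.0])

print(idw(pts, vals, np.array([0.0, 0.0])))   # exact hit at a sample
print(idw(pts, vals, np.array([0.5, 0.5])))   # equidistant -> plain mean
```

One of the characteristic errors such a survey flags is visible even here: away from the samples, IDW estimates are pulled toward the data mean and produce "bull's-eye" artifacts around isolated points.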