Abstract: Chemical reaction and diffusion are important phenomena in quantitative neurobiology and biophysics. Knowledge of the dynamics of calcium (Ca2+) is very important in cellular physiology because Ca2+ binds to many proteins and regulates their activity and interactions. Calcium waves propagate inside cells due to a regenerative mechanism known as calcium-induced calcium release, and buffer-mediated calcium diffusion in the cytosol plays a crucial role in this process. A mathematical model has been developed for calcium waves by assuming that the buffers are in equilibrium with calcium, i.e., the rapid buffering approximation, for a one-dimensional unsteady-state case. The model incorporates important physical and physiological parameters such as the dissociation rate, the diffusion rate, the total buffer concentration, and the influx. The finite difference method has been employed to predict the [Ca2+] and buffer concentration time courses regardless of the calcium influx. Comparative studies of the effect of rapid buffered diffusion and of the kinetic parameters of the model on the concentration time course have been performed.
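Under the rapid buffering approximation, buffered Ca2+ transport reduces to a diffusion equation with an effective, concentration-dependent diffusion coefficient. The following is an illustrative sketch of an explicit finite-difference step for that reduced equation, not the paper's exact scheme; all parameter values are hypothetical.

```python
# Illustrative sketch: explicit finite-difference update for Ca2+ under the
# rapid buffering approximation (RBA), where equilibrium buffering yields an
# effective diffusion coefficient. All parameter values are hypothetical.

D_ca, D_b = 250.0, 75.0   # free Ca2+ and buffer diffusion coefficients (um^2/s)
K = 10.0                  # buffer dissociation constant (uM)
b_total = 100.0           # total buffer concentration (uM)
dx, dt = 0.1, 1e-6        # grid spacing (um) and time step (s)

def effective_D(c):
    """Effective diffusion coefficient under the RBA."""
    theta = b_total * K / (K + c) ** 2   # d[bound Ca]/d[Ca2+] at equilibrium
    return (D_ca + D_b * theta) / (1.0 + theta)

def step(c):
    """One explicit Euler step of dc/dt = D_eff(c) * d2c/dx2 (zero-flux ends)."""
    new = c[:]
    for i in range(1, len(c) - 1):
        lap = (c[i - 1] + c[i + 1] - 2 * c[i]) / dx ** 2
        new[i] = c[i] + dt * effective_D(c[i]) * lap
    new[0], new[-1] = new[1], new[-2]    # reflecting boundaries
    return new

# Example: relax an initial Ca2+ spike at the domain centre.
c = [0.1] * 51
c[25] = 10.0
for _ in range(1000):
    c = step(c)
```

The time step satisfies the explicit-scheme stability bound dt < dx^2 / (2 D_eff), so the spike spreads and decays smoothly.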
Abstract: The amount of urban artificial heat, which affects the urban temperature rise, was investigated in this study in order to clarify the relationships between urbanization and urban meteorology. A model for estimating the amount of urban artificial heat was established and tested theoretically; the resulting calculations revealed that urban artificial heat increased the urban temperature by approximately 0.23 °C in 2007 compared with 1996. Statistical methods (correlation and regression analysis) were then applied to clarify the relationships between urbanization and urban weather. From an urban growth management point of view, this research suggests that new design techniques and urban growth management measures are necessary at the city design phase in order to reduce the urban temperature rise and the urban torrential rain that urbanization can cause, both of which can produce urban disasters in terms of urban meteorology.
Abstract: This contribution analyzes identity styles in adolescents (N = 463) aged 16 to 19 (mean age 17.7 years). We used Berzonsky's Identity Style Inventory, which distinguishes three basic measured identity styles, informational, normative, and diffuse-avoidant, as well as commitment. The informational identity style, which influences personal adaptability, coping strategies, and quality of life, and the normative identity style, in which an individual adopts the models of authorities when defining the self, were found to have the highest representation in the studied group of adolescents, with higher scores among girls than boys. The normative identity style correlates positively with the informational identity style. The diffuse-avoidant identity style, in which the individual postpones defining his or her personality, was found to be positively associated with maladaptive decisional strategies, neuroticism, and depressive reactions; in our research sample it had the lowest score and correlated negatively with commitment, that is, with coping strategies and trust in oneself and in the surrounding world. The age of the adolescents did not significantly differentiate the representation of identity styles. We found a model in which the informational and normative identity styles had a positive relationship and the informational and diffuse-avoidant styles had a negative relationship, both determined by commitment; at the same time, commitment is influenced by other outside factors.
Abstract: Facial features are frequently used to represent local
properties of a human face image in computer vision applications. In
this paper, we present a fast algorithm that can extract the facial
features online such that they can give a satisfying representation of a
face image. It includes one step for a coarse detection of each facial
feature by AdaBoost and another one to increase the accuracy of the
found points by Active Shape Models (ASM) in the regions of interest.
The resulting facial features are evaluated by matching them against artificial face models in physiognomy applications. The distance between the extracted features and those in the face models from the database is measured by means of the Hausdorff distance. In the experiments, the proposed method shows efficient performance in facial feature extraction and in an online physiognomy system.
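The Hausdorff distance mentioned above can be sketched for two point sets as follows; the feature and model coordinates are hypothetical stand-ins for extracted facial landmarks.

```python
# A minimal sketch of the symmetric Hausdorff distance used to compare an
# extracted facial-feature point set with a model point set. The point
# coordinates below are hypothetical.
import math

def directed_hausdorff(A, B):
    """Max over a in A of the distance from a to its nearest point in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A and B."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

features = [(10, 20), (30, 20), (20, 35)]   # e.g. eye corners and mouth centre
model    = [(11, 21), (29, 19), (20, 36)]
print(hausdorff(features, model))
```

The distance is small only when every point of each set lies near some point of the other, which makes it a natural match score for landmark sets.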
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified as a
CIM metamodel level mapping to a highly expressive subset of DLs
capable of capturing all the semantics of the models. The paper shows
how the proposed mapping can be used for automatic reasoning
about the management information models, as a design aid, by means
of new-generation CASE tools, thanks to the use of state-of-the-art
automatic reasoning systems that support the proposed logic and use
algorithms that are sound and complete with respect to the semantics.
Such a CASE tool framework has been developed by the authors and
its architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: This paper presents a study of the hardness profile of a spur gear heated by an induction heating process as a function of the machine parameters, such as the power (kW), the heating time (s), and the generator frequency (kHz). The overall work is realized by a 3D finite-element simulation of the process, coupling and resolving the electromagnetic field and heat transfer problems, and it was performed in three distinct steps. First, a Comsol 3D model was built using an adequate formulation and taking into account the material properties and the machine parameters. Second, a convergence study was conducted to optimize the mesh. Then, the surface temperatures and case depths were analyzed in depth as a function of the initial current density and the heating time in the medium frequency (MF) and high frequency (HF) heating modes, and the edge effect was studied. Finally, the simulation results are validated against experimental tests.
Abstract: The UK Government has emphasized the role of Local Authorities as key players in its flagship residential energy efficiency strategies, identifying and targeting areas for energy efficiency improvements. Residential energy consumption in England is characterized by significant geographical variation in energy demand, which makes centralized targeting of areas for energy efficiency intervention difficult. This paper draws on research that aims to understand how demographic, social, economic, urban form, and climatic factors influence the geographical variations in English residential gas consumption. The paper reports the findings of a multiple regression model showing that 64% of the geographical variation in residential gas consumption is accounted for by variations in these factors. Results from this study, after further refinement and validation, can be used by Local Authorities to identify areas within their boundaries that have higher than expected gas consumption; these may be prime targets for energy efficiency initiatives.
Abstract: Recently, genetic algorithms (GA) and the particle swarm optimization (PSO) technique have attracted considerable attention among modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly non-linear, mixed-integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming, or collaborative, behavior of biological populations. In this paper, both PSO and GA optimization are employed to find stable reduced-order models of single-input single-output large-scale linear systems. Both techniques guarantee the stability of the reduced-order model if the original high-order model is stable. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
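The ISE objective that the PSO/GA search minimizes can be sketched numerically as a trapezoidal sum of the squared step-response error. The two closed-form step responses below are hypothetical examples, not the paper's systems.

```python
# Illustrative sketch of the ISE objective: the integral of the squared
# difference between the unit-step responses of the original and reduced
# models, approximated by a trapezoidal sum. The two step responses below
# are hypothetical closed-form examples.
import math

def ise(y_full, y_red, t_end=10.0, n=2000):
    """Trapezoidal approximation of the integral of (y_full - y_red)^2 dt."""
    h = t_end / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        e2 = (y_full(t) - y_red(t)) ** 2
        total += e2 if 0 < k < n else e2 / 2   # half-weight the endpoints
    return total * h

# Hypothetical example: step response of G(s) = 2/((s+1)(s+2)) versus a
# first-order reduced-model fit.
y_full = lambda t: 1 - 2 * math.exp(-t) + math.exp(-2 * t)
y_red  = lambda t: 1 - math.exp(-0.8 * t)
print(ise(y_full, y_red))
```

In the optimization loop, the reduced model's parameters would be the PSO particle (or GA chromosome) coordinates, and `ise` the fitness to be minimized.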
Abstract: The above-elbow limb is one of the most commonly amputated or missing limbs. This research addresses modelling techniques for upper limb prostheses and the design of a high-torque, lightweight, and compact elbow actuator. The proposed actuator consists of a DC motor, a planetary gear set, and a harmonic drive. The calculations show that the actuator is suitable for use in a real-life powered prosthetic upper limb or rehabilitation exoskeleton.
Abstract: In this paper, the transversal vibration of buried pipelines under loading induced by underground explosions is analyzed. The pipeline is modeled as an infinite beam on an elastic foundation, so that soil-structure interaction is considered by means of transverse linear springs along the pipeline. The pipeline behavior is assumed to be ideally elasto-plastic, with an ultimate strain value limiting the plastic behavior. The blast loading is considered a point load, accounting for the affected length at some point of the pipeline, whose magnitude decreases exponentially with time. A closed-form solution of the quasi-static problem is derived for both elastic and elastic-perfectly plastic behaviors of the pipe materials. Finally, a comparative study of steel and polyethylene pipes of different sizes, buried in various soil conditions and affected by a predefined underground explosion, is conducted, in which the effect of each parameter is discussed.
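The elastic building block behind such a quasi-static analysis is the classical deflection of an infinite beam on a Winkler foundation under a static point load (Hetenyi's solution). The sketch below evaluates that closed form; it is not the paper's full time-dependent solution, and the pipe and soil values are hypothetical.

```python
# Classical static deflection of an infinite beam on an elastic (Winkler)
# foundation under a point load P:
#   w(x) = (P*beta)/(2*k) * exp(-beta*|x|) * (cos(beta*|x|) + sin(beta*|x|)),
#   beta = (k / (4*E*I)) ** 0.25.
# The pipe and soil values below are hypothetical.
import math

E = 210e9            # steel Young's modulus (Pa)
I = 8.0e-6           # pipe cross-section moment of inertia (m^4)
k = 5.0e6            # soil (Winkler) spring stiffness per unit length (Pa)
P = 1.0e5            # equivalent point load representing the blast (N)

beta = (k / (4 * E * I)) ** 0.25   # characteristic wavenumber (1/m)

def deflection(x):
    """Static deflection (m) of the infinite beam at distance x from the load."""
    bx = beta * abs(x)
    return P * beta / (2 * k) * math.exp(-bx) * (math.cos(bx) + math.sin(bx))

# Peak deflection occurs under the load and decays (with oscillation) away from it.
print(deflection(0.0), deflection(5.0))
```

A quasi-static blast analysis would then let P decay exponentially in time and check the resulting strains against the ultimate strain limit.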
Abstract: Parallel programming models exist as an abstraction of hardware and memory architectures. Several parallel programming models are in common use: the shared memory model, the thread model, the message passing model, the data parallel model, the hybrid model, Flynn's models, the embarrassingly parallel computations model, and the pipelined computations model. These models are not specific to a particular type of machine or memory architecture. This paper presents a model program for a concurrent approach to the data parallel model using Java programming.
Abstract: This study performs a comparative analysis of the 21 Greek universities in terms of the public funding awarded to cover their operating expenditure. First, it introduces a DEA/MCDM model that allocates the funds among four expenditure factors in the most favorable way for each university. Then, it presents a common, consensual assessment model that reallocates the amounts while remaining at the same level of total public budget. The analysis shows that a number of universities cannot justify their public funding in terms of their size and operational workload; for them, a sufficient reduction of the public funding amount is estimated as a future target. Due to the lack of precise data for a number of expenditure criteria, the analysis is based on a mixed crisp-ordinal data set.
Abstract: This paper proposes a method, combining color and layout features, for identifying documents captured with low-resolution handheld devices. On one hand, the document image color density surface is estimated and represented with an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and represented hierarchically. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remaining search space. Our experiments consider slide documents, which are often captured using handheld devices.
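The two-stage search described above can be sketched as follows: prune the database by color similarity, then pick the remaining document with the most similar layout. The descriptors, distance functions, and threshold are hypothetical stand-ins for the ellipse-based color feature and the hierarchical layout feature.

```python
# Simplified sketch of the two-stage identification: color filtering followed
# by layout matching. The descriptors and the threshold are hypothetical.
import math

def color_dist(a, b):
    """Euclidean distance between two color descriptors."""
    return math.dist(a, b)

def layout_dist(a, b):
    """L1 distance between two flattened layout descriptors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def identify(query, database, color_threshold=0.5):
    """query / database values: (color_descriptor, layout_descriptor)."""
    candidates = {d: f for d, f in database.items()
                  if color_dist(query[0], f[0]) <= color_threshold}
    if not candidates:                  # fall back to the whole database
        candidates = database
    return min(candidates, key=lambda d: layout_dist(query[1], candidates[d][1]))

db = {
    "slides_a": ((0.9, 0.1, 0.1), (3, 1, 2)),
    "slides_b": ((0.2, 0.2, 0.8), (2, 2, 2)),
    "slides_c": ((0.8, 0.2, 0.1), (1, 4, 1)),
}
query = ((0.85, 0.15, 0.1), (3, 1, 1))
print(identify(query, db))   # → slides_a
```

The color stage removes `slides_b` from consideration, and the layout stage then separates the two remaining red-dominated candidates.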
Abstract: This paper presents a simplified version of Data Envelopment Analysis (DEA), a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object, the one having the greatest outputs and smallest inputs. This allows an explicit analytical solution to be obtained and is a step toward absolute efficiency. This paper develops the approach further and introduces a DEA model with Partially Perfect Objects. DEA PPO consecutively eliminates the smallest relative inputs or greatest relative outputs and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are then combined into a weighted efficiency score. The computational scheme remains simple, like that of DEA PO, but the advantage of DEA PPO is that it takes into account all of the inputs and outputs of each actual object. Firm evaluation is considered as an example.
Abstract: Blood pulse is an important human physiological signal commonly used for understanding individual physical health. Current methods of non-invasive blood pulse sensing require direct contact with, or access to, the human skin. As such, the performance of these devices tends to vary with time and is subject to human body fluids (e.g. blood, perspiration, and skin oil) and environmental contaminants (e.g. mud, water, etc.). This paper proposes a simulation model for a novel method of non-invasive acquisition of the blood pulse using the disturbance created by blood flowing through a localized magnetic field. The simulation model geometry represents a blood vessel, a permanent magnet, a magnetic sensor, the surrounding tissues, and air in two dimensions. In this model, the velocity and pressure fields in the blood stream are described by the Navier-Stokes equations, and the walls of the blood vessel are assumed to satisfy a no-slip condition. The blood is assumed to have a parabolic velocity profile, corresponding to laminar flow in a major artery near the skin, and the inlet velocity follows a sinusoidal equation. This allows the computational software to compute the interactions between the magnetic vector potential generated by the permanent magnet and the magnetic nanoparticles in the blood. These interactions are simulated based on the Maxwell equations at the location where the magnetic sensor is placed. The simulated magnetic field at the sensor location is found to have sinusoidal waveform characteristics similar to those of the inlet blood velocity. The amplitudes of the simulated waveforms at the sensor location are compared with physical measurements on human subjects and found to be highly correlated.
Abstract: The objective of this paper is to estimate realistic principal extrusion process parameters by means of an artificial neural network. Conventionally, finite element analysis is used to derive process parameters. However, finite element analysis of the extrusion model does not consider the manufacturing process constraints in its modeling; therefore, the process parameters obtained through such an analysis remain highly theoretical. Alternatively, process development in industrial extrusion is to a great extent based on trial and error and often involves full-size experiments, which are both expensive and time-consuming. Artificial neural network-based estimation of the extrusion process parameters prior to plant execution helps to make the actual extrusion operation more efficient, because more realistic parameters can be obtained; it thus bridges the gap between simulation and the real manufacturing execution system. In this work, a suitable neural network is designed and trained using an appropriate learning algorithm. The trained network is then used to predict the manufacturing process parameters.
Abstract: Statistical learning theory was developed by Vapnik. It is a learning theory based on the Vapnik-Chervonenkis dimension and has been used as a good analytical tool in learning models. In general, learning theories suffer from several problems, among them local optima and over-fitting. Statistical learning theory has the same problems, because the kernel type, the kernel parameters, and the regularization constant C are determined subjectively, by the art of the researcher. We therefore propose an evolutionary statistical learning theory to settle these problems of the original statistical learning theory. Our theory is constructed by combining evolutionary computing with statistical learning theory. We verify the improved performance of the evolutionary statistical learning theory using data sets from the KDD Cup.
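The core idea, evolving the kernel parameters and C against a validation objective instead of choosing them by hand, can be sketched with a toy evolutionary loop. The error surface below is a hypothetical smooth stand-in for real cross-validation error; actual use would plug in an SVM's validation score.

```python
# Toy sketch of evolutionary hyperparameter selection: evolve (log C, log gamma)
# to minimize a validation-error objective. The quadratic error surface is a
# hypothetical stand-in for real cross-validation error.
import random

random.seed(0)

def validation_error(log_C, log_gamma):
    """Hypothetical error surface with its minimum at log_C=2, log_gamma=-3."""
    return (log_C - 2.0) ** 2 + (log_gamma + 3.0) ** 2

def evolve(pop_size=20, generations=50, sigma=0.3):
    """Truncation selection plus Gaussian mutation, with implicit elitism."""
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: validation_error(*p))
        parents = pop[:pop_size // 2]                 # keep the best half
        children = [(c + random.gauss(0, sigma),      # Gaussian mutation
                     g + random.gauss(0, sigma)) for c, g in parents]
        pop = parents + children
    return min(pop, key=lambda p: validation_error(*p))

best = evolve()
print(best)
```

Because the parents survive each generation, the best error is monotonically non-increasing, and the search drifts toward the hypothetical optimum near (2, -3).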
Abstract: Software Development Risks Identification (SDRI), using Fault Tree Analysis (FTA), is a proposed technique to identify not only the risk factors but also the causes of the appearance of those risk factors in the software development life cycle. The method is based on analyzing the probable causes of software development failures before they become problems and adversely affect a project. It uses fault tree analysis to determine the probability of particular system-level failures, defined by A Taxonomy for Sources of Software Development Risk, and to deduce a failure analysis in which an undesired state of a system is reached by using Boolean logic to combine a series of lower-level events. The major purpose of this paper is to use the probabilistic calculations of the fault tree analysis approach to determine all possible causes that lead to the occurrence of software development risk.
Abstract: Software estimation accuracy is among the greatest challenges for software developers. This study aimed at building and evaluating a neuro-fuzzy model to estimate software project development time. Forty-one modules developed from ten programs were used as the dataset. Our proposed approach is compared with fuzzy logic and neural network models, and the results show that the value of the MMRE (Mean Magnitude of Relative Error) obtained with the neuro-fuzzy model was substantially lower than the MMRE obtained with fuzzy logic and with the neural network.
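The MMRE metric used for the comparison above is straightforward to compute; the actual and estimated development times below are hypothetical sample values.

```python
# A minimal sketch of the Mean Magnitude of Relative Error (MMRE) used to
# compare effort/time estimation models. The sample values are hypothetical.

def mmre(actual, estimated):
    """Mean of |actual - estimated| / actual over all observations."""
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

actual    = [10.0, 20.0, 15.0, 8.0]   # observed development times
estimated = [12.0, 18.0, 15.0, 10.0]  # model estimates
print(mmre(actual, estimated))
```

Lower MMRE means the estimates are closer, in relative terms, to the observed values; for the sample data the result is 0.1375.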
Abstract: The authors present an algorithm for the order reduction of linear dynamic systems that combines the advantages of the stability equation method and error minimization by a genetic algorithm. The denominator of the reduced-order model is obtained by the stability equation method, and the numerator terms of the lower-order transfer function are determined by minimizing the integral square error between the transient responses of the original and reduced-order models using a genetic algorithm. The reduction procedure is simple and computer-oriented. It is shown that the algorithm has several advantages; for example, the reduced-order models retain the steady-state value and the stability of the original system. The proposed algorithm has also been extended to the order reduction of linear multivariable systems. Two numerical examples, including one of a multivariable system, are solved to illustrate the superiority of the algorithm over some existing ones.