Abstract: Modelling is a widely used tool to facilitate the evaluation of disease management. The interest of epidemiological models lies in their ability to explore hypothetical scenarios and provide decision makers with evidence to anticipate the consequences of disease incursion and impact of intervention strategies.
All models are, by nature, simplifications of more complex systems. Models that involve diseases can be classified into different categories depending on how they treat variability, time, space, and population structure. Approaches range from simple deterministic mathematical models to complex, spatially explicit stochastic simulations.
Thus, epidemiological modelling has become a necessity for epidemiological investigations, surveillance, hypothesis testing, and the generation of the follow-up activities needed to perform complete and appropriate analyses.
The state of the art presented in the following allows readers to identify the most appropriate approaches for an epidemiological study.
Abstract: The aim of the study was to investigate the possible
use of commercial Computational Fluid Dynamics (CFD) software in
the design process of a domestic gas boiler. Because of the limited
computational resources some simplifications had to be made in
order to contribute to the design in a reasonable timescale.
The porous media model was used in order to simulate the
influence of the pressure drop characteristic of particular elements of
a heat transfer system on the water-flow distribution in the system.
Further, a combination of CFD analyses and spreadsheet
calculations was used to solve the flow distribution problem.
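The spreadsheet side of such a flow-distribution calculation can be sketched as follows. Assuming each parallel passage of the heat-transfer system obeys a quadratic pressure-drop law dP = k_i * q_i^2 (the coefficients below are purely illustrative, not taken from the paper), the common pressure drop and the branch flows follow in closed form:

```python
import math

def distribute_flow(total_q, k):
    """Split total_q among parallel branches with dP = k_i * q_i**2.

    All branches share the same pressure drop, so q_i = sqrt(dP / k_i)
    and sum(q_i) = total_q yields dP in closed form.
    """
    inv = [1.0 / math.sqrt(ki) for ki in k]
    dp = (total_q / sum(inv)) ** 2               # common pressure drop
    return dp, [math.sqrt(dp / ki) for ki in k]

# Illustrative loss coefficients for three heat-exchanger passages.
dp, flows = distribute_flow(1.0, [100.0, 200.0, 400.0])
```

The branch with the smallest loss coefficient takes the largest share of the flow, while the pressure drop across every branch is identical.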
Abstract: Adaptive e-learning today gives the student a central
role in his or her own learning process. It allows learners to try
things out, participate in courses like never before, and get more
out of learning than ever. In this paper, an adaptive e-learning
model for logic design, the simplification of Boolean functions, and
related fields is presented. The model selects suitable courses for
each student in a dynamic and adaptive manner using existing
database and workflow technologies. The main objective of this
research is to provide an adaptive e-learning model based on the
learner's personality, using explicit and implicit feedback. To
characterize learners, we develop dimensions that determine each
individual's learning style, in order to accommodate the different
abilities of the users and to develop vital skills. The proposed
model thus becomes more powerful, user friendly, and easy to use and
interpret. Finally, it suggests a learning strategy and appropriate
electronic media that match the learner's preference.
Abstract: In this paper, we propose a Connect6 solver which
adopts a hybrid approach based on a tree-search algorithm and image
processing techniques. The solver must deal with the complicated
computation and provide high performance in order to make real-time
decisions. The proposed approach enables the solver to be
implemented on a single Spartan-6 XC6SLX45 FPGA produced by
XILINX without using any external devices. The compact
implementation is achieved through image processing techniques to
optimize a tree-search algorithm of the Connect6 game. The tree
search is widely used in computer games and the optimal search brings
the best move in every turn of a computer game. Thus, many
tree-search algorithms, such as the Minimax algorithm, and other
artificial intelligence approaches have been proposed in this field.
However, there is one fundamental problem in this area: the
computation time increases rapidly as the game tree grows. For a
hardware solver with highly parallel computation, this means that
the larger the game tree, the larger the circuit size.
Here, this paper aims to reduce the size of a Connect6 game tree using
image processing techniques and its position symmetric property. The
proposed solver is composed of four computational modules: a
two-dimensional checkmate strategy checker, a template matching
module, a skilful-line predictor, and a next-move selector. These
modules work well together in selecting next moves from some
candidates and the total amount of their circuits is small. The details of
the hardware design for an FPGA implementation are described and
the performance of this design is also shown in this paper.
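As a minimal illustration of the tree search named above (and not of the paper's FPGA modules), a depth-limited minimax over an explicit game tree can be written as:

```python
def minimax(node, maximizing=True):
    """Minimax over an explicit game tree.

    A node is either a numeric leaf score (from the maximizing
    player's point of view) or a list of child nodes; players
    alternate at each level of the tree.
    """
    if not isinstance(node, list):       # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: the maximizer picks a branch, the minimizer replies.
tree = [[3, 12], [2, 4], [14, 5]]
best = minimax(tree)   # branch minima are 3, 2, 5, so the result is 5
```

The abstract's point follows directly: the number of leaves, and hence the work, grows exponentially with depth, which is what the image-processing reductions aim to curb.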
Abstract: In this paper a new fast simplification method is
presented. The method realizes Karnaugh maps with a large number
of variables. To accelerate its operation, a new approach for
fast detection of groups of ones is presented. This approach is
implemented in the frequency domain: the search operation relies
on performing cross correlation in the frequency domain rather
than in the time domain. It is proved mathematically and
confirmed practically that the number of computation steps
required by the presented method is less than that needed by
conventional cross correlation. Simulation results using MATLAB
confirm the theoretical computations. Furthermore, a powerful
solution for the realization of complex functions is given. The
simplified functions are implemented using a new design for
neural networks. Neural networks are used because they are fault
tolerant and, as a result, can recognize signals even in the
presence of noise or distortion. This is very useful for logic
functions used in data and computer communications. Moreover, the
implemented functions are realized with a minimum number of
components. This is done by using modular neural nets (MNNs) that
divide the input space into several homogeneous regions. This
approach is applied to implement the XOR function, 16 logic
functions on the one-bit level, and a 2-bit digital multiplier.
Compared to previous non-modular designs, a clear reduction in
the order of computations and in hardware requirements is
achieved.
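The frequency-domain search rests on the correlation theorem: cross-correlation becomes a pointwise product of spectra. A pure-Python sketch of that identity follows (using a naive DFT for clarity; the speed advantage claimed in the abstract requires an FFT and is not reproduced here):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT; an FFT would be used in practice for speed."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_xcorr(a, b):
    """Circular cross-correlation of real sequences via the
    correlation theorem: r = IDFT(conj(DFT(a)) * DFT(b))."""
    fa, fb = dft(a), dft(b)
    prod = [x.conjugate() * y for x, y in zip(fa, fb)]
    return [v.real for v in dft(prod, inverse=True)]

# An impulse in b picks out shifted copies of a:
r = circular_xcorr([1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 0.0, 0.0])
# r is approximately [1.0, 4.0, 3.0, 2.0]
```

Any frequency-domain result can be checked against the direct time-domain sum r[m] = sum_k a[k] * b[(k+m) mod n], which is what the theorem guarantees.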
Abstract: This paper is devoted to predict laminar and turbulent
heating rates around blunt re-entry spacecraft at hypersonic
conditions. Heating calculation of a hypersonic body is normally
performed during the critical part of its flight trajectory. The
procedure is of an inverse method, where a shock wave is assumed,
and the body shape that supports this shock, as well as the flowfield
between the shock and body, are calculated. For simplicity the
normal momentum equation is replaced with a second order pressure
relation; this simplification significantly reduces computation time.
The geometries specified in this research are parabolas and
ellipsoids, which may have conical afterbodies. Excellent agreement
is observed between the results obtained in this paper and those
calculated in other research. Since this method is much faster than
Navier-Stokes solutions, it can be used in the preliminary design
and parametric study of hypersonic vehicles.
Abstract: The indoor airflow with a mixed natural/forced convection
was numerically calculated using laminar and turbulent
approaches. The Boussinesq approximation was adopted to simplify
the mathematical model and the calculations. The results
obtained, such as mean velocity fields, were successfully compared
with experimental PIV flow visualizations. The effect of the distance
between the cooled wall and the heat exchanger on the temperature
and velocity distributions was calculated. In a room with a simple
shape, the computational code OpenFOAM demonstrated an ability to
numerically predict flow patterns. Furthermore, numerical techniques,
boundary type conditions and the computational grid quality were
examined. Calculations using the k-omega turbulence model had a
significant effect on the results, influencing the temperature and
velocity distributions.
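The Boussinesq approximation mentioned above can be stated compactly: density variations are neglected everywhere except in the buoyancy term, where a linearized equation of state is used (this is the standard formulation, not a result specific to this paper):

```latex
% Linearized equation of state about the reference state (rho_0, T_0):
\rho \approx \rho_0 \left[ 1 - \beta \, (T - T_0) \right],
% so the buoyancy force per unit volume in the momentum equation becomes
\rho \, g \;\longrightarrow\; \rho_0 \, g - \rho_0 \, \beta \, (T - T_0) \, g,
% where beta is the thermal expansion coefficient; rho is treated as
% the constant rho_0 in all other terms of the governing equations.
```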
Abstract: The visualization of geographic information on mobile devices has become popular with the widespread use of the mobile Internet. The mobility of these devices brings much convenience to people's lives. Through the add-on location-based services of the devices, people can access timely information relevant to their tasks. However, visual analysis of geographic data on mobile devices presents several challenges due to the small display and restricted computing resources. These limitations on screen size and resources may impair the usability of visualization applications. In this paper, a variable-scale visualization method is proposed to handle the challenge of the small mobile display. By merging multiple scales of information into a single image, the viewer is able to focus on the interesting region while retaining a good grasp of the surrounding context. This is essentially visualizing the map through a fisheye lens. However, the fisheye lens induces undesirable geometric distortion in the periphery, which renders the information there meaningless. The proposed solution is to apply map generalization, which removes excessive information around the periphery, and an automatic smoothing process to correct the distortion while keeping the local topology consistent. The proposed method is applied to both artificial and real geographical data for evaluation.
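The focus-plus-context transform described above can be sketched with a standard radial fisheye function (in the style of Sarkar and Brown's graphical fisheye; the distortion parameter d is illustrative, and the paper's generalization and smoothing steps are not reproduced):

```python
import math

def fisheye(r, d=3.0):
    """Radial magnification g(r) = (d+1)r / (dr+1) for r in [0, 1].

    g(0) = 0 and g(1) = 1; near the focus the map is magnified by
    roughly (d + 1), so the periphery is correspondingly compressed.
    """
    return (d + 1.0) * r / (d * r + 1.0)

def apply_fisheye(x, y, cx, cy, radius, d=3.0):
    """Move (x, y) radially away from the focus (cx, cy) inside the lens."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0 or r >= radius:
        return (x, y)                    # outside the lens: unchanged
    scale = fisheye(r / radius, d) * radius / r
    return (cx + dx * scale, cy + dy * scale)
```

Because g is monotone and fixes both 0 and 1, the lens boundary stays put while interior detail is enlarged, which is exactly why the periphery needs the corrective smoothing the abstract describes.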
Abstract: The main objective of this paper is to provide an efficient tool for delineating brain tumors in three-dimensional magnetic resonance images. To achieve this goal, we use a level-set approach to delineate three-dimensional brain tumors. We then introduce a compression plan for 3D brain structures based on mesh simplification, adapted to the specific time needs of telemedicine and to the restricted capacities of network communication. We present here the main stages of our system and preliminary results, which are very encouraging for clinical practice.
Abstract: Nowadays the population increasingly makes use of
Information Technology (IT). As such, in recent years the Portuguese
government has increased its focus on using IT to improve people's
lives and has developed a set of measures to enable the
modernization of the Public Administration, thereby reducing the gap
between the Public Administration and citizens. Thus the Portuguese
Government launched the Simplex Program. However, these SIMPLEX eGov
measures, which have been implemented over the years, present a
serious challenge: how to forecast their impact on the existing
Information Systems Architecture (ISA). This research therefore
focuses on automating the evaluation of the actual impact of
implementing eGov Simplification and Modernization measures on the
Information Systems Architecture. To realize the evaluation, we
propose a framework supported by key concepts such as Quality
Factors, ISA modeling, a Multicriteria Approach, Polarity Profiles,
and Quality Metrics.
Abstract: We suggest a novel method to incorporate long-term
redundancy (LTR) in time-domain signal compression methods. The
proposal is based on block-sorting and curve simplification and is
illustrated on the ECG signal as a post-processor for the FAN
method. Tests of the resulting FAN+ method on the MIT-BIH database
show a substantial improvement of the compression ratio-distortion
behavior, for a higher-quality reconstructed signal.
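The FAN method itself is not reproduced here, but the curve-simplification idea can be illustrated with the well-known Ramer-Douglas-Peucker routine, which discards samples that deviate from a straight segment by less than a tolerance (the tolerance below is illustrative):

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker polyline simplification.

    A point is kept only if it deviates from the chord between the
    current endpoints by more than eps; otherwise the whole run of
    samples is replaced by a single straight segment.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    seg = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # Perpendicular distance of p to the chord (or to an endpoint
        # if the chord degenerates to a point).
        if seg == 0.0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / seg

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) <= eps:
        return [points[0], points[-1]]          # flat run: keep endpoints
    # Split at the most deviant point and simplify each half.
    return rdp(points[:idx + 1], eps)[:-1] + rdp(points[idx:], eps)
```

A near-flat run collapses to its endpoints, while a genuine peak survives; this discard-what-a-segment-can-represent principle is the same one a fan-based compressor exploits.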
Abstract: We propose a new normalized LMS (NLMS) algorithm, which gives satisfactory performance in certain applications in comparison with the conventional NLMS recursion. This new algorithm can be treated as a block-based simplification of the NLMS algorithm with a significantly reduced number of multiply-and-accumulate as well as division operations. It is also shown that such a recursion can be easily implemented in block floating point (BFP) arithmetic, treating the implementational issues much more efficiently. In particular, the core challenges of a BFP realization of such adaptive filters are mainly considered in this regard. A global upper bound on the step-size control parameter of the new algorithm due to the BFP implementation is also proposed, to prevent overflow in the filtering as well as the weight-updating operations jointly.
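For reference, the conventional NLMS recursion that the proposed algorithm simplifies can be sketched as follows (the step size, regularization constant, and identification task are illustrative; the block-based simplification and the BFP arithmetic are not reproduced):

```python
import random

def nlms(x, d, order, mu=0.5, eps=1e-6):
    """Conventional NLMS adaptive filter.

    x: input samples, d: desired samples, order: number of taps.
    Each step normalizes the LMS update by the input energy.
    Returns the final weight vector.
    """
    w = [0.0] * order
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]          # [x[n], x[n-1], ...]
        y = sum(wi * ui for wi, ui in zip(w, u))  # filter output
        e = d[n] - y                              # a-priori error
        norm = eps + sum(ui * ui for ui in u)     # input energy
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
    return w

# System identification of a known 2-tap channel h = [0.5, -0.3].
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
d = [0.0] + [0.5 * x[n] - 0.3 * x[n - 1] for n in range(1, len(x))]
w = nlms(x, d, order=2)
```

Note the per-sample division by the input energy: it is exactly this kind of division (and the per-tap multiply-accumulates) whose count the block-based simplification reduces.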
Abstract: Shadows add a great amount of realism to a scene, and
many algorithms exist to generate shadows. Recently, shadow
volumes (SVs) have achieved a valuable position in the gaming
industry. In view of this, we concentrate on simple but valuable
initial steps for further optimization of SV generation, namely
model simplification and silhouette edge detection and tracking.
Shadow volume methods usually spend time generating the boundary
silhouettes of the object, and if the object is complex then edge
generation becomes much harder and slower. The challenge gets
stiffer when real-time shadow generation and rendering are
demanded. We investigated a way to use a real-time silhouette edge
detection method, which takes advantage of spatial and temporal
coherence, and to exploit the level-of-detail (LOD) technique to
reduce the silhouette edges of the model, using the simplified
version of the model for shadow generation and thus speeding up the
running time. These steps greatly reduce the execution time of
shadow volume generation in real time and are easily adaptable to
any of the recently proposed SV techniques. Our main focus is to
exploit the LOD and silhouette edge detection techniques, adopting
them to further enhance shadow volume generation for real-time
rendering.
Abstract: The calculation of the buckling length factor (K) for
steel frame columns is a major and governing process in determining
the cross-section dimensions of steel frame columns during design.
The buckling length of steel frame columns has a direct effect on
the cost (weight) of the cross sections used. A new formula is
required to determine the buckling length factor (K) in a
simplified way. In this research a new formula for the buckling
length factor (K) was established, determined by an accurate method
for a limited interval of column end rigidities (GA, GB). The new
formula can easily be used to evaluate the buckling length factor
without the need for complicated equations or difficult charts.
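The paper's new formula is not reproduced here, but for context, the classical sidesway-permitted alignment-chart relation between (GA, GB) and K, which such simplified formulas aim to replace, can be solved numerically by bisection (this is the standard textbook equation, not the authors' result):

```python
import math

def k_sway(ga, gb, lo=1.001, hi=20.0, tol=1e-9):
    """Solve the classical sidesway-permitted alignment-chart equation

        (GA*GB*(pi/K)^2 - 36) / (6*(GA + GB)) = (pi/K) / tan(pi/K)

    for the effective length factor K by bisection. Requires
    GA + GB > 0; K lies in (1, infinity) for sway frames.
    """
    def f(k):
        a = math.pi / k
        return (ga * gb * a * a - 36.0) / (6.0 * (ga + gb)) - a / math.tan(a)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid                     # root bracketed in [lo, mid]
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For GA = GB = 1 the root lies near K = 1.3 and for GA = GB = 10 near K = 3.0, consistent with the alignment-chart values; the transcendental form is exactly the "complicated equation" that a closed-form approximation spares the designer.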
Abstract: The use of mechanical simulation (in particular finite element analysis) requires the management of assumptions in order to analyse a real, complex system. In finite element analysis (FEA), two modeling steps require assumptions before the computations can be carried out and results obtained: the building of the physical model and the building of the simulation model. The simplification assumptions made on the analysed system in these two steps can generate two kinds of errors: physical modeling errors (mathematical model, domain simplifications, material properties, boundary conditions, and loads) and mesh discretization errors. This paper proposes a mesh-adaptive method, based on an h-adaptive scheme in combination with an error estimator, to choose the mesh of the simulation model, allowing us to control both the cost and the quality of the finite element analysis.
Abstract: Internet infrastructures in most places of the world
have been supported by the advancement of optical fiber technology,
most notably wavelength division multiplexing (WDM) system.
Optical technology by means of WDM system has revolutionized
long distance data transport and has resulted in high data capacity,
cost reductions, extremely low bit error rate, and operational
simplification of the overall Internet infrastructure. This paper
analyses and compares the system impairments that occur at data
transmission rates of 2.5 Gb/s and 10 Gb/s per wavelength channel in
our proposed optical WDM system for Internet infrastructure in
Tanzania. The results show that the data transmission rate of 2.5 Gb/s
has minimum system impairments compared with a rate of 10 Gb/s
per wavelength channel, and achieves a sufficient system
performance to provide a good Internet access service.
Abstract: Introducing survivability into an embedded real-time system (ERTS) can improve the system's ability to survive. This paper mainly discusses the survivability of ERTS. The first topic is the origin of survivability in ERTS. The second is survivability analysis: based on a definition of survivability grounded in a survivability specification, and a division of the entire survivability analysis process for ERTS, a survivability analysis profile is presented. The quantitative analysis model of this profile is emphasized and illuminated in detail; the quantitative analysis of the system is shown to be helpful in evaluating system survivability more accurately. The third topic is the platform design of survivability analysis. In terms of the profile, the analysis process is encapsulated and assembled into one platform, on which quantification, standardization, and simplification of survivability analysis are all achieved. The fourth is survivability design. According to the characteristics of ERTS, a strengthened design method is selected to realize the system survivability design. Through the analysis of an embedded mobile video-on-demand system, intrusion-tolerant technology is introduced into the whole survivability design.
Abstract: Using a neural network, we try to model an unknown function f for given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of the crisp neural network produce different values of the weight factors, which are directly affected by changes in different parameters. We propose the idea that, for each neuron in the network, we can obtain quasi-fuzzy weight sets (QFWS) using repeated simulation of the crisp neural network. Such fuzzy weight functions may be applied where we have multivariate crisp input that needs to be adjusted after iterative learning, as in claim amount distribution analysis. As real data are subject to noise and uncertainty, QFWS may be helpful in the simplification of such complex problems. Secondly, these QFWS provide a good initial solution for the training of fuzzy neural networks, with reduced computational complexity.
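The repeated-simulation idea can be sketched on a minimal example: training a single crisp perceptron from several random initializations and recording, per weight, the interval of learned values (the network, task, and number of runs here are illustrative, not the paper's setup):

```python
import random

def train_perceptron(data, epochs=200, lr=0.1, seed=0):
    """Train one threshold neuron with the perceptron rule.

    Returns the learned weights [w1, w2, bias]; the random seed
    controls the initialization, so repeated runs differ.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    for _ in range(epochs):
        for x1, x2, t in data:
            y = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = t - y                  # nonzero only on mistakes
            w = [w[0] + lr * err * x1,
                 w[1] + lr * err * x2,
                 w[2] + lr * err]
    return w

# The AND function is linearly separable, so every run converges.
AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

# Repeat training from 20 random initializations and record, per
# weight, the [min, max] interval of learned values: a crude
# quasi-fuzzy weight set (QFWS).
runs = [train_perceptron(AND, seed=s) for s in range(20)]
qfws = [(min(r[i] for r in runs), max(r[i] for r in runs))
        for i in range(3)]
```

Every run solves the task, yet the learned weights spread over an interval; it is this spread, rather than any single crisp value, that the QFWS idea captures.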
Abstract: The main objective of this paper is to provide an efficient tool for delineating brain tumors in three-dimensional magnetic resonance images and to set up compression-and-transmission schemes to distribute the result to a remote doctor. To achieve this goal, we use a level-set approach to delineate brain tumors in three dimensions. We then introduce a new compression and transmission plan for 3D brain structures based on mesh simplification, adapted to the specific time needs of telemedicine and to the restricted capacities of wireless network communication. We present here the main stages of our system and preliminary results, which are very encouraging for clinical practice.