Abstract: A novel hybrid model of the lumbar spine, allowing
fast static and dynamic simulations of the disc pressure
and the spine mobility, is introduced in this work. Our
contribution is to combine rigid bodies, deformable finite
elements, articular constraints, and springs into a unique model
of the spine. Each vertebra is represented by a rigid body
controlling a surface mesh to model contacts on the facet
joints and the spinous process. The discs are modeled using
a heterogeneous tetrahedral finite element model. The facet
joints are represented as elastic joints with six degrees of
freedom, while the ligaments are modeled using non-linear
one-dimensional elastic elements. The challenge we tackle
is to make these different models interact efficiently while
respecting the principles of anatomy and mechanics.
The mobility, the intradiscal pressure, the facet joint force, and
the instantaneous center of rotation of the lumbar spine are
validated against experimental and theoretical results from the
literature in flexion, extension, lateral bending, and axial
rotation.
Our hybrid model greatly simplifies the modeling task and
dramatically accelerates the simulation of pressure within the
discs, as well as the evaluation of the range of motion and the
instantaneous centers of rotation, without penalizing precision.
These results suggest that for some types of biomechanical
simulations, simplified models allow far easier modeling and
faster simulations compared to usual full-FEM approaches
without any loss of accuracy.
Abstract: Verification and Validation of a simulated process
model is the most important phase of the simulator life cycle.
Evaluation of simulated process models based on Verification and
Validation techniques checks the closeness of each component model
(in a simulated network) to the real system/process with respect to
dynamic behaviour under steady-state and transient conditions. The
process of Verification and Validation helps qualify the process
simulator for its intended purpose, whether that is providing
comprehensive training or design verification. In general, model
verification is carried out by comparing simulated component
characteristics with the original requirements to ensure that each step
in the model development process completely incorporates all the
design requirements. Validation testing is performed by comparing
the simulated process parameters to the actual plant process
parameters, either in standalone or integrated mode.
A Full Scope Replica Operator Training Simulator for PFBR
(Prototype Fast Breeder Reactor), named KALBR-SIM (Kalpakkam
Breeder Reactor Simulator), has been developed at IGCAR,
Kalpakkam, India, wherein the main participants are
engineers/experts from the Modeling Team and the Process Design and
Instrumentation & Control design teams. This paper discusses
the Verification and Validation process in general, the evaluation
procedure adopted for the PFBR operator training simulator, the
methodology followed for verifying the models, and the reference
documents and standards used. It details the importance of
internal validation by design experts, subsequent validation by an
external agency consisting of experts from various fields, model
improvement by tuning based on the experts' comments, final
qualification of the simulator for its intended purpose, and the
difficulties faced while coordinating the various activities.
Abstract: Managing and improving efficiency in today's
highly competitive global automotive industry demands that
companies adopt leaner and more flexible systems. During the past
20 years, the domestic automotive industry in North America has
focused on establishing new management strategies in order to meet
market demands. The lean management process, also known as the
Toyota Production System (TPS) or lean manufacturing,
encompasses tools and techniques that were established in order to
provide the best quality product with the fastest lead time at the
lowest cost. The following paper presents a study focused on
improving labor efficiency at one of the Big Three (Ford, GM,
Chrysler LLC) domestic automotive facilities in North America. The
objective of the study was to utilize several lean management tools in
order to optimize the efficiency and utilization levels at the "Pre-
Marriage" chassis area in a truck manufacturing and assembly
facility. Utilizing three different lean tools (standardization of
work, the 7 wastes, and 5S), this research improved efficiency
by 51% and utilization by 246%, and reduced operations by 14%. The
return on investment calculated from the improvements made
was 284%.
Abstract: Fast-changing knowledge systems on the Internet can
be accessed more efficiently with the help of automatic document
summarization and updating techniques. The aim of multi-document
update summary generation is to construct a summary unfolding the
mainstream of data from a collection of documents, based on the
hypothesis that the user has already read a set of previous documents.
In order to capture more of the semantic information in the documents,
deeper linguistic or semantic analysis of the source documents was
used instead of relying only on document word frequencies to select
important concepts. In order to produce a responsive summary,
meaning-oriented structural analysis is needed. To address this issue,
the proposed system presents a document summarization approach
based on sentence annotation with aspects, prepositions, and named
entities. A semantic element extraction strategy is used to select
important concepts from the documents, which are then used to
generate an enhanced semantic summary.
Abstract: Iris codes contain bits with different entropy. This
work investigates different strategies to reduce the size of iris
code templates with the aim of reducing storage requirements and
computational demand in the matching process. Besides simple
subsampling schemes, a binary multi-resolution representation, as
used in the JBIG hierarchical coding mode, is also assessed. We find that
iris code template size can be reduced significantly while maintaining
recognition accuracy. Besides, we propose a two-stage identification
approach, using small-sized iris code templates in a pre-selection
stage, and full resolution templates for final identification, which
shows promising recognition behaviour.
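The two-stage identification scheme can be sketched as follows. This is a minimal illustration assuming fractional Hamming distance matching and plain bit subsampling; the subsampling factor, shortlist size, and gallery layout are hypothetical parameters, not the paper's settings:

```python
import numpy as np

def hamming_distance(a, b):
    """Fractional Hamming distance between two binary iris codes."""
    return np.count_nonzero(a != b) / a.size

def subsample(code, factor):
    """Simple subsampling: keep every `factor`-th bit of the iris code."""
    return code[::factor]

def two_stage_identify(probe, gallery, factor=4, shortlist=3):
    """Stage 1: rank the gallery with small subsampled templates.
    Stage 2: decide on the full-resolution codes for the shortlist only."""
    small_probe = subsample(probe, factor)
    ranked = sorted(gallery.items(),
                    key=lambda kv: hamming_distance(small_probe,
                                                    subsample(kv[1], factor)))
    candidates = ranked[:shortlist]
    best_id, _ = min(candidates,
                     key=lambda kv: hamming_distance(probe, kv[1]))
    return best_id
```

The pre-selection stage touches only the reduced templates, so most of the gallery is never compared at full resolution; this is the source of the computational saving.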
Abstract: The adsorption efficiency of fired clayey pellets of 5
and 8 mm diameter for Cu(II) and Zn(II) ion removal from a
waste printing developer was studied. Batch-mode experiments
were carried out in order to investigate the influence of contact
time, adsorbent mass, and pellet size on the adsorption efficiency.
Faster uptake of copper ions was obtained with the 5 mm fired clay
pellets, within 30 minutes. The 8 mm pellets showed a longer
equilibrium time (60 to 75 minutes) for copper and zinc ions. The
results indicate that adsorption efficiency increases with
adsorbent mass. The maximal efficiency differs between Cu(II) and
Zn(II) ions depending on pellet size. The 5 mm fired clay pellets
therefore present an effective adsorbent for Cu(II) ion removal
(adsorption efficiency of 63.6%), whereas the 8 mm fired clay
pellets are the better alternative for Zn(II) ion removal
(adsorption efficiency of 92.8%) from a waste printing developer.
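The quoted removal efficiencies follow the standard batch definition, E = (C0 − Ce)/C0 × 100, computed from the initial and equilibrium concentrations (the abstract does not state its formula explicitly, so this is the conventional assumption):

```python
def adsorption_efficiency(c0, ce):
    """Removal efficiency (%) from the initial concentration c0 and
    the equilibrium concentration ce: E = (c0 - ce) / c0 * 100."""
    return (c0 - ce) / c0 * 100.0

# e.g. a 92.8% efficiency corresponds to ce = 0.072 * c0
```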
Abstract: In this paper, a real-time obstacle avoidance approach
for both autonomous and non-autonomous dynamical systems (DS) is
presented. In this approach, the original dynamics of the controller
can be modulated, which allows a safety margin to be determined.
Different common types of DS increase the robot's reactiveness to
uncertainty in the localization of the obstacle, especially
when the robot moves very fast in changing, complex environments.
The method is validated by simulation, including the influence of
different autonomous and non-autonomous DS and of important
characteristics such as limit cycles and unstable DS. Furthermore, the
placement of different obstacles in a complex environment is discussed.
Finally, the verification of avoidance trajectories is described through
different parameters such as a safety factor.
Abstract: Leukaemia is a blood cancer that contributes
to the increase in the mortality rate in Malaysia each year. There are
two main categories of leukaemia: acute and chronic. The
production and development of acute leukaemia cells
occur rapidly and uncontrollably. Therefore, if the identification of
acute leukaemia cells could be done quickly and effectively, proper
treatment and medicine could be delivered. Due to the requirement for
prompt and accurate diagnosis of leukaemia, the current study
proposes unsupervised pixel segmentation based on clustering
algorithms in order to obtain a fully segmented abnormal white blood
cell (blast) in acute leukaemia images. To obtain the segmented
blast, three clustering algorithms, namely k-means, fuzzy c-means,
and moving k-means, have been applied to the saturation component
image. Then, median filtering and seeded region growing area
extraction have been applied, to smooth the region of the segmented
blast and to remove large unwanted regions from the image,
respectively. Comparisons among the three clustering algorithms are
made in order to measure the performance of each clustering
algorithm on segmenting the blast area. Based on the good sensitivity
values obtained, the results indicate that the moving k-means
clustering algorithm successfully produces a fully segmented blast
region in acute leukaemia images. The resultant images could thus
be helpful to haematologists for further analysis of acute leukaemia.
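The clustering step on the saturation component can be sketched with a plain k-means over pixel saturation values. This is an illustrative one-dimensional variant with quantile initialisation; the rule of taking the highest-saturation cluster as the blast candidate is a hypothetical heuristic, and the median filtering and region-growing post-processing steps are omitted:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain k-means on a 1-D feature (e.g. pixel saturation),
    initialised at evenly spaced quantiles for determinism."""
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def segment_blast(saturation, k=3):
    """Label each pixel by its saturation cluster; the cluster with the
    highest mean saturation is taken as the candidate blast region
    (an illustrative rule, not the paper's exact criterion)."""
    flat = saturation.ravel().astype(float)
    labels, centers = kmeans_1d(flat, k)
    blast_cluster = int(np.argmax(centers))
    return (labels == blast_cluster).reshape(saturation.shape)
```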
Abstract: In-memory database systems are becoming popular
due to the availability and affordability of sufficiently large RAM and
processors in modern high-end servers with the capacity to manage
large in-memory database transactions. While fast and reliable
in-memory systems are still being developed to overcome cache misses,
CPU/IO bottlenecks, and distributed transaction costs, disk-based data
stores still serve as the primary persistence. In addition, with the
recent growth in multi-tenancy cloud applications and associated
security concerns, many organisations consider the trade-offs and
continue to require the fast and reliable transaction processing of
disk-based database systems as an available choice. For these
organisations, the only way of increasing throughput is by improving
the performance of disk-based concurrency control. This warrants a
hybrid database system with the ability to selectively apply
enhanced disk-based data management within the context of
in-memory systems, which would help improve overall throughput.
The general view is that in-memory systems substantially
outperform disk-based systems. We question this assumption and
examine how a modified variation of access invariance, which we call
enhanced memory access (EMA), can be used to allow very high
levels of concurrency in the pre-fetching of data in disk-based
systems. We demonstrate how this prefetching in disk-based systems
can yield close to in-memory performance, which paves the way for
improved hybrid database systems. This paper proposes a novel EMA
technique and presents a comparative study between disk-based EMA
systems and in-memory systems running on hardware configurations
of equivalent power in terms of the number of processors and their
speeds. The results of the experiments conducted clearly substantiate
that, when used in conjunction with all concurrency control
mechanisms, EMA can increase the throughput of disk-based systems
to levels quite close to those achieved by in-memory systems. The
promising results of this work show that enhanced disk-based
systems help improve hybrid data management within the
broader context of in-memory systems.
Abstract: A simulation-based VLSI implementation of the
FELICS (Fast Efficient Lossless Image Compression System)
algorithm is proposed to provide lossless image compression,
implemented in a simulation-oriented VLSI (Very Large Scale
Integration) design. The aims are to analyse the performance of
lossless image compression, to reduce the image size without losing
image quality, and to implement the FELICS algorithm in VLSI. The
FELICS algorithm uses a simplified adjusted binary code for image
compression; the compressed image is converted to pixels and then
implemented in the VLSI domain. These measures are used to
achieve high processing speed while minimizing area and power.
The simplified adjusted binary code reduces the number of arithmetic
operations and achieves high processing speed. Colour difference
preprocessing is also proposed to improve coding efficiency with
simple arithmetic operations. The VLSI-based FELICS algorithm
provides an effective hardware architecture design with a regular
pipelined data flow organised in four stages. With two-level
parallelism, consecutive pixels can be classified into even and odd
samples, with an individual hardware engine dedicated to each.
This method can be further enhanced by multi-level parallelism.
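The "adjusted binary code" at the heart of FELICS is a truncated binary code applied to in-range pixels. The sketch below shows the basic truncated code and a simplified in-range coding step; note that full FELICS rotates the code so the central values of the range receive the short codewords, and codes out-of-range pixels with a Golomb code, both of which are omitted here:

```python
import math

def adjusted_binary(v, n):
    """Truncated ("adjusted") binary codeword for symbol v in [0, n).
    When n is not a power of two, the first 2**k - n symbols get
    k-1 bits and the rest get k bits, where k = ceil(log2 n)."""
    k = math.ceil(math.log2(n)) if n > 1 else 1
    u = (1 << k) - n                      # number of short codewords
    if v < u:
        return format(v, f'0{k - 1}b') if k > 1 else '0'
    return format(v + u, f'0{k}b')

def encode_pixel(p, n1, n2):
    """FELICS-style in-range coding sketch: the two neighbours give the
    context interval [L, H]; in-range pixels get a 1-bit flag plus an
    adjusted binary code for their offset within the interval."""
    L, H = min(n1, n2), max(n1, n2)
    if L <= p <= H:
        return '0' + adjusted_binary(p - L, H - L + 1)
    return '1'  # followed by a Golomb-coded excess in the full algorithm
```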
Abstract: Currently, thorium fuel has attracted particular
attention because of its proliferation resistance compared with
long half-life alpha-emitting minor actinides, its breeding
capability in fast and thermal neutron flux, and its mono-isotopic
natural abundance. In recent years, the efficiency of minor actinide
burn-up in PWRs has been investigated. Hence, a minor
actinide-bearing thorium-based fuel matrix can serve both
proliferation resistance and nuclear waste depletion aims. In the
present work, the minor actinide depletion rate in a CANDU-type
nuclear core modelled using the MCNP code has been investigated.
The effects of the minor actinide load, mixed into the thorium fuel
matrix, on the core neutronics have been studied by comparing the
presence and absence of the minor actinide component in the fuel
matrix. The depletion rate of minor actinides in the MA-bearing
fuel has been calculated for different power loads. According to the
computational data obtained, minor actinide loading in the modelled
core results in more negative reactivity coefficients. The
MA-bearing fuel achieves a lower radial peaking factor in the
modelled core. The computational results show that 140 kg of the
464 kg initial minor actinide load is depleted during a 6-year
burn-up at 10 MW power.
Abstract: Particles exhausted from cars have adverse impacts on
human health. This study developed a three-dimensional particle
dispersion numerical model, including particle coagulation, to simulate
the particle concentration distribution under idling conditions in a
residential underground garage. The simulation results demonstrate
that particles disperse much faster in the vertical direction than in
the horizontal direction. The enhancement of particle dispersion in
the vertical direction due to an increase in the number of cars with
engines running is much stronger than that in the car exhaust
direction. Particle dispersion from each pair of adjacent cars has
little influence on each other in this study. The average particle
concentration after 120 seconds of exhaust is 1.8-4.5 times the
initial total particle concentration in the ambient environment.
Particle pollution in the residential underground garage is therefore
severe.
Abstract: An Artificial Neural Network (ANN) can be trained using
back propagation (BP), the most widely used algorithm for
supervised learning with multi-layered feed-forward networks.
Efficient learning by the BP algorithm is required for many practical
applications. The BP algorithm calculates the weight changes of
artificial neural networks, and a common approach is to use a
two-term algorithm consisting of a learning rate (LR) and a momentum
factor (MF). The major drawbacks of the two-term BP learning
algorithm are the problems of local minima and slow convergence
speeds, which limit its scope for real-time applications. Recently, the
addition of an extra term, called a proportional factor (PF), to the
two-term BP algorithm was proposed. The third term increases the
speed of the BP algorithm. However, the PF term also reduces the
convergence of the BP algorithm, and criteria for evaluating
convergence are required to facilitate the application of the
three-term BP algorithm. Although these two aspects seem to be
closely related, as described later, we summarize various
improvements to overcome the drawbacks. Here we compare the
different methods of convergence of the new three-term BP algorithm.
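The three-term weight update described above (learning rate on the gradient, momentum on the previous change, proportional factor on the output error) can be sketched as follows; the coefficient values and the single-weight demonstration are illustrative only, not the paper's settings:

```python
def three_term_update(w, grad, prev_dw, error, lr=0.1, mf=0.9, pf=0.01):
    """One three-term BP update: learning-rate term on the gradient,
    momentum term on the previous weight change, and proportional
    term on the output error (illustrative coefficients)."""
    dw = -lr * grad + mf * prev_dw + pf * error
    return w + dw, dw

# Toy demonstration: fit the target y = 2x with a single weight.
w, prev_dw = 0.0, 0.0
for _ in range(300):
    x, y = 1.0, 2.0
    err = y - w * x
    grad = -err * x          # dE/dw for E = 0.5 * err**2
    w, prev_dw = three_term_update(w, grad, prev_dw, err)
```

With these coefficients the update oscillates slightly (the momentum term introduces a complex eigenvalue pair of modulus below one) but still converges to the target weight.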
Abstract: This paper provides a comparative study of the
performance of standard PID and adaptive PID controllers tested on
the travel angle of a 3-Degree-of-Freedom (3-DOF) Quanser bench-top
helicopter. Quanser, a well-known manufacturer of educational
bench-top helicopters, has developed a Proportional-Integral-
Derivative (PID) controller with a Linear Quadratic Regulator (LQR)
for the travel, pitch, and yaw angles of the bench-top helicopter. The
performance of the PID controller is relatively good; however, it
could be further improved if the controller were combined
with an adaptive element. The objective of this research is to design
an adaptive PID controller and then compare the performance of the
adaptive PID with that of the standard PID. The controller design and
testing focus on travel angle control only. The adaptive method used
in this project is a self-tuning controller, in which the controller's
parameters are updated online. Two adaptive algorithms, pole-placement
and deadbeat, have been chosen as the methods to achieve optimal
controller parameters. Performance comparisons have shown that
the adaptive (deadbeat) PID controller produces more desirable
performance than the standard PID and the adaptive (pole-placement)
controller. The adaptive (deadbeat) PID controller attained a very
fast settling time (5 seconds) and a very small percentage overshoot
(5% to 7.5%) for 10° to 30° step changes of the travel angle.
Abstract: In this work, neural network methods of the MLP type
were applied to a database from an array of six sensors for the
detection of three toxic gases. The choice of the number of hidden
layers and the weight values influences the convergence of the
learning algorithm. In this article, we propose a mathematical
formula to determine the optimal number of hidden layers and good
weight values based on the method of back propagation of errors.
The results of this modeling have improved the discrimination of
these gases and optimized the computation time. The model presented
here has proven to be an effective application for the fast
identification of toxic gases.
Abstract: Ad hoc networks are the future of wireless
technology, as everyone wants fast, accurate, error-free
information. With this in mind, the Bit Error Rate (BER) and power
are optimized in this research paper using a Genetic Algorithm (GA).
The digital modulation techniques used in this paper are Binary
Phase Shift Keying (BPSK), M-ary Phase Shift Keying (M-ary PSK),
and Quadrature Amplitude Modulation (QAM). This work is
implemented on wireless ad hoc networks (WLAN). It is then
analyzed which modulation technique performs best in optimizing
the BER and power of the WLAN.
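As an illustration of the kind of search involved, the sketch below uses a toy GA to find the smallest Eb/N0 (a proxy for transmit power) that keeps the theoretical BPSK bit error rate below a target; the fitness function, population settings, and BER target are hypothetical, not the paper's:

```python
import math
import random

def ber_bpsk(eb_n0_db):
    """Theoretical BPSK bit error rate over AWGN:
    Pb = 0.5 * erfc(sqrt(Eb/N0))."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return 0.5 * math.erfc(math.sqrt(eb_n0))

def ga_min_power(target_ber=1e-5, pop=20, gens=40, seed=1):
    """Toy GA: evolve Eb/N0 values (dB) to minimise power while
    keeping the BPSK BER below target (illustrative fitness)."""
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 15.0) for _ in range(pop)]

    def fitness(x):
        # Penalise candidates that violate the BER constraint.
        return x + (1e3 if ber_bpsk(x) > target_ber else 0.0)

    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]
        # Offspring: mutate a random surviving parent.
        children = [min(15.0, max(0.0, rng.choice(parents) + rng.gauss(0, 0.5)))
                    for _ in range(pop - len(parents))]
        population = parents + children
    return min(population, key=fitness)
```

For BPSK a BER of 1e-5 corresponds to roughly 9.6 dB Eb/N0, so the GA should settle near that constraint boundary.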
Abstract: In this paper, we present preconditioned generalized
accelerated overrelaxation (GAOR) methods for solving certain
nonsingular linear systems. We compare the spectral radii of the
iteration matrices of the preconditioned and the original methods. The
comparison results show that the preconditioned GAOR methods
converge faster than the GAOR method whenever the GAOR method
is convergent. Finally, we give two numerical examples to confirm our
theoretical results.
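The style of comparison can be illustrated numerically: for a convergent splitting, one checks that the preconditioned iteration matrix has the smaller spectral radius. The sketch below uses a Jacobi-type splitting and an elementary preconditioner P = I + S of the kind common in the AOR/GAOR literature; the test matrix and the preconditioner entry are illustrative, not the paper's:

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue modulus of M."""
    return max(abs(np.linalg.eigvals(M)))

def jacobi_matrix(A):
    """Iteration matrix T = I - D^{-1} A of the Jacobi splitting."""
    D = np.diag(np.diag(A))
    return np.eye(len(A)) - np.linalg.inv(D) @ A

# Illustrative system: a diagonally dominant tridiagonal matrix.
A = np.array([[ 4., -1.,  0.],
              [-1.,  4., -1.],
              [ 0., -1.,  4.]])

# Elementary preconditioner P = I + S built from one superdiagonal entry.
S = np.zeros_like(A)
S[0, 1] = -A[0, 1] / A[0, 0]
P = np.eye(len(A)) + S

rho_orig = spectral_radius(jacobi_matrix(A))
rho_prec = spectral_radius(jacobi_matrix(P @ A))
```

Here both iterations converge (spectral radius below one), and the preconditioned matrix has the strictly smaller radius, mirroring the kind of comparison theorem the paper proves for GAOR.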
Abstract: Urban public spaces are saturated with a range of
surveillance and sensor technologies that claim to enable new forms
of ‘data based citizen participation’, but also increase the tendency
for ‘function-creep’, whereby vast amounts of data are gathered,
stored and analysed in a broad application of urban surveillance. This
kind of monitoring and capacity for surveillance connects with
attempts by civic authorities to regulate, restrict, rebrand and reframe
urban public spaces. A direct consequence of the increasingly
security driven, policed, privatised and surveilled nature of public
space is the exclusion or ‘unfavourable inclusion’ of those considered
flawed and unwelcome in the ‘spectacular’ consumption spaces of
many major urban centres. In the name of urban regeneration,
programs of securitisation, ‘gentrification’ and ‘creative’ and ‘smart’
city initiatives refashion public space as sites of selective inclusion
and exclusion. In this context of monitoring and control procedures,
in particular, children and young people’s use of space in parks,
neighbourhoods, shopping malls and streets is often viewed as a
threat to the social order, requiring various forms of remedial action.
This paper suggests that cities, places and spaces and those who
seek to use them, can be resilient in working to maintain and extend
democratic freedoms and processes enshrined in Marshall’s concept
of citizenship, calling sensor and surveillance systems to account.
Such accountability could better inform the implementation of public
policy around the design, build and governance of public space and
also understandings of urban citizenship in the sensor saturated urban
environment.
Abstract: Today’s modern interconnected power system is
highly complex in nature. One of the most important
requirements during the operation of the electric power system is
reliability and security. Power and frequency oscillation damping
mechanisms improve reliability. Because of the slow response of the
power system stabilizer (PSS) to major faults such as three-phase
short circuits, FACTS devices, which can control the network
condition very quickly, are becoming popular. However, the
capability of FACTS devices during a major fault is only apparent
when nonlinear models of the FACTS devices and the power system
equipment are applied. To realize this aim, a model of a
multi-machine power system with a FACTS controller is developed in
MATLAB/SIMULINK using the SimPowerSystems (SPS) blockset.
Among FACTS devices, the static synchronous series compensator
(SSSC), owing to the high-speed change of its reactance
characteristic from inductive to capacitive, is an effective power
flow controller. The tuning of the controller parameters can be
performed using different methods; the capability of the Genetic
Algorithm (GA) motivates its use in the controller parameter tuning
process. In this paper, a POD controller is first used for power
oscillation damping. In this configuration, however, the frequency
oscillation is not properly damped. Therefore, an FOD controller
tuned using the GA is employed, which damps out the frequency
oscillation properly while keeping the power oscillation damping
satisfactory.
Abstract: This paper proposes a backward/forward sweep
method to analyze the power flow in radial distribution systems.
Distribution systems have a radial structure and high R/X ratios,
so the Newton-Raphson and fast decoupled methods fail on such
systems. The proposed method presents a load flow study using the
backward/forward sweep method, which is one of the most effective
methods for load-flow analysis of radial distribution systems. Using
this method, the power losses of each branch and the voltage
magnitude at each bus are determined. The method has been tested on
the IEEE 33-bus radial distribution system, and effective results are
obtained using MATLAB.
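For a single radial feeder, the backward/forward sweep alternates a backward accumulation of branch currents from the loads with a forward update of node voltages from the source. A minimal per-unit sketch follows; the feeder data, tolerances, and chain topology (node i fed through branch i) are illustrative, not the IEEE 33-bus case:

```python
import numpy as np

def backward_forward_sweep(z_branch, s_load, v_source=1.0 + 0j,
                           tol=1e-8, max_iter=50):
    """Backward/forward sweep on a single radial feeder:
    node i is fed through branch i (impedances and complex power
    loads in per unit). Returns the complex node voltages."""
    n = len(z_branch)
    v = np.full(n, v_source, dtype=complex)
    for _ in range(max_iter):
        # Backward sweep: load currents, accumulated from the feeder end,
        # give the current carried by each branch.
        i_load = np.conj(s_load / v)
        i_branch = np.cumsum(i_load[::-1])[::-1]
        # Forward sweep: update voltages node by node from the source.
        v_new = np.empty(n, dtype=complex)
        upstream = v_source
        for k in range(n):
            v_new[k] = upstream - z_branch[k] * i_branch[k]
            upstream = v_new[k]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

Branch power losses then follow as |I_branch|² · R for each branch. Because each sweep only walks the tree once, the method avoids the Jacobian factorisations that make Newton-Raphson fragile on high-R/X radial networks.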