Abstract: Modern management of water distribution systems
(WDS) requires water quality models that can accurately predict
the dynamics of water quality variations within the distribution
system environment. Before water quality models can be applied to
solve system problems, they must be calibrated. Although previous
researchers have used genetic algorithm (GA) solvers to calibrate
the relevant parameters, the long computation time makes this
approach difficult to apply to medium- or large-scale real systems.
This paper presents a new method that combines macro and detailed
models to optimize the water quality parameters. The combined
algorithm uses radial basis function (RBF) metamodeling as a
surrogate for the optimization, reducing the number of
time-consuming water quality simulations, and enables rapid
calibration of the pipe wall reaction coefficients in the chlorine
model of a large-scale WDS. Two case studies show the method to be
efficient and promising, and it merits wider application in the
future.
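The surrogate idea can be illustrated with a toy calibration. This is a minimal sketch, assuming a one-dimensional wall reaction coefficient k_w and a synthetic first-order chlorine decay model standing in for the expensive water quality simulator; the decay model, constants, and names are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize_scalar

# Toy stand-in for an expensive water quality simulation:
# first-order chlorine decay C(t) = C0 * exp(-(k_b + k_w) * t).
T = np.linspace(0.5, 8.0, 12)   # sample times (h)
K_B = 0.05                      # bulk decay coefficient (assumed known)

def simulate(k_w, c0=1.0):
    return c0 * np.exp(-(K_B + k_w) * T)

observed = simulate(0.30)       # "measurements" with true k_w = 0.30

def expensive_sse(k_w):
    r = simulate(k_w) - observed
    return float(r @ r)

# Run the expensive model only at a few design points ...
samples = np.linspace(0.0, 1.0, 9)
sse = np.array([expensive_sse(k) for k in samples])

# ... fit an RBF metamodel to the calibration objective ...
surrogate = RBFInterpolator(samples[:, None], sse)

# ... and optimize the cheap surrogate instead of the simulator.
res = minimize_scalar(lambda k: float(surrogate(np.array([[k]]))[0]),
                      bounds=(0.0, 1.0), method="bounded")
k_w_hat = res.x
```

The expensive model is evaluated only nine times here; all remaining objective evaluations hit the interpolant, which is the source of the speed-up the abstract claims for large networks.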
Abstract: This research explores the links between physical
development and transportation infrastructure around Kumasi,
Ghana. It utilizes census data as well as fieldwork and interviews
carried out during July and December 2005. The results suggest that
there is a weak association between transportation investments and
physical development, and that recent housing has generally occurred
in poorly accessible locations. Road investments have generally
followed physical expansion rather than the reverse. Hence,
policies designed to manage the rapid growth now occurring around
Ghanaian cities should not focus exclusively on improving
transportation infrastructure but should also strengthen the
underlying traditional land management structures and the official
land administration institutions that operate within those
structures.
Abstract: Recently, in some places, optical-fibre access
networks based on GPON technology have been deployed by
organizations (in most cases public bodies) that act as neutral
operators. These operators simultaneously provide network services
to various telecommunications operators that offer integrated voice,
data and television services. This situation creates new problems
related to quality of service, since the interests of the users are
intermingled with the interests of the operators. In this paper, we
analyse this problem and consider solutions that make it possible to
provide guaranteed quality of service for voice over IP, data services
and interactive digital television.
Abstract: Deaths from cardiovascular diseases have decreased substantially over the past two decades, largely as a result of advances in acute care and cardiac surgery. These developments have produced a growing population of patients who have survived a myocardial infarction. These patients need continuous monitoring so that treatment can be initiated within the crucial golden hour. Conventional monitoring methods mostly perform offline analysis and restrict the mobility of these patients to a hospital or room. The aim of this paper is therefore to design a Portable Cardiac Telemedicine System that helps patients regain their independence and return to an active work schedule, thereby improving their psychological well-being. The portable telemedicine system consists of a Wearable ECG Transmitter (WET) and a slightly modified mobile phone with an inbuilt ECG analyzer. The WET is placed on the body of the patient and continuously acquires ECG signals from high-risk cardiac patients, who can move around freely. The WET transmits the ECG to the patient's Bluetooth-enabled mobile phone using Bluetooth technology. The ECG analyzer built into the mobile phone continuously analyzes the heartbeats derived from the received ECG signals. In case of a panic condition, the mobile phone alerts the patient's caretaker by SMS and initiates the transmission of a sample ECG signal to the doctor via the mobile network.
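The phone-side panic trigger can be caricatured in a few lines. This is a minimal sketch, assuming (purely for illustration) that the analyzer derives a heart rate from R-R intervals and flags rates outside a fixed safe band; the thresholds and interval values are invented:

```python
def heart_rate_bpm(rr_intervals_s):
    """Mean heart rate from a list of R-R intervals in seconds."""
    return 60.0 / (sum(rr_intervals_s) / len(rr_intervals_s))

def is_panic(rr_intervals_s, low=40.0, high=150.0):
    """Flag bradycardia/tachycardia outside an assumed safe band (bpm)."""
    bpm = heart_rate_bpm(rr_intervals_s)
    return bpm < low or bpm > high

# Normal sinus rhythm (~75 bpm) vs. a tachycardic episode (~170 bpm).
normal = [0.80, 0.78, 0.82, 0.80]
tachy  = [0.35, 0.36, 0.34, 0.35]
```

A real analyzer would of course use clinically validated criteria and morphology analysis; the sketch only shows where the SMS alert decision would hang off the received ECG stream.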
Abstract: This paper covers a series of key points in 2D-to-3D stereoscopic conversion and presents a stereoscopic conversion approach successfully applied in the current visual effects industry. The purpose of this paper is to describe in detail a workflow and the underlying concepts that have been used successfully in 3D stereoscopic conversion for feature films, thereby clarifying the stereoscopic conversion production process. It aims to give entry-level artists a clear overall understanding of 3D stereoscopy in the digital compositing field, to inform higher education in visual effects, and to inspire further collaboration and participation, particularly between academia and industry.
Abstract: The ElectroEncephaloGram (EEG) is useful for
clinical diagnosis and biomedical research. EEG signals often
contain strong ElectroOculoGram (EOG) artifacts produced
by eye movements and eye blinks especially in EEG recorded
from frontal channels. These artifacts obscure the underlying
brain activity, making its visual or automated inspection
difficult. The goal of ocular artifact removal is to remove
ocular artifacts from the recorded EEG, leaving the underlying
background signals due to brain activity. In recent times,
Independent Component Analysis (ICA) algorithms have
demonstrated superior potential in obtaining the least
dependent source components. In this paper, the independent
components are obtained using the JADE algorithm (a
high-performing separation algorithm) and are classified as either
artifact components or neural components. A neural network is used
to classify the obtained independent components. The neural
network requires input features that accurately represent the true
character of the input signals, so that it can classify the
signals based on the key characteristics that differentiate them.
In this work, Auto-Regressive (AR) coefficients are used as the
input features for classification. Two neural network approaches
are used to learn classification rules from EEG data: first, a
Polynomial Neural Network (PNN) trained by the GMDH (Group Method
of Data Handling) algorithm, and second, a feed-forward neural
network classifier trained by a standard back-propagation
algorithm. The results show that JADE-FNN performs better than
JADE-PNN.
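As a concrete illustration of the AR feature extraction step, the sketch below estimates AR coefficients from a signal by solving the Yule-Walker equations; the model order and the simulated AR(2) signal are illustrative choices, not values from the paper:

```python
import numpy as np

def ar_coefficients(x, order):
    """Estimate AR coefficients of x via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..order].
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|].
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Example: recover the coefficients of a simulated AR(2) process
# x[t] = 0.75 x[t-1] - 0.50 x[t-2] + e[t].
rng = np.random.default_rng(0)
x = np.zeros(5000)
e = rng.standard_normal(5000)
for t in range(2, 5000):
    x[t] = 0.75 * x[t - 1] - 0.50 * x[t - 2] + e[t]
a = ar_coefficients(x, order=2)
```

The resulting coefficient vector is what would be fed, per independent component, to the PNN or FNN classifier as its feature representation.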
Abstract: The objective of our work is to develop a new approach for discovering knowledge from a large mass of data; the result of applying this approach will be an expert system serving as a diagnostic tool for a phenomenon related to a huge information system. We first recall the general problem of learning Bayesian network structure from data and suggest a solution for optimizing the complexity by using organizational and optimization methods on the data. We then propose a new heuristic for learning Multi-Entity Bayesian Network structures. We applied our approach to biological data concerning hereditary complex illnesses, for which the biological literature identifies the variables responsible for those diseases. Finally, we conclude on the limits reached by this work.
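The structure-learning problem can be made concrete with a tiny score-based sketch. Assuming binary variables, it computes the BIC score of a candidate DAG from data and shows that, on data containing a real dependency, the structure encoding that dependency outscores the empty one; the data generator and scoring details are illustrative, not the paper's heuristic:

```python
import math
import random
from collections import Counter
from itertools import product

def bic_score(data, structure):
    """BIC of a DAG over binary variables.
    data: list of dicts {var: 0/1}; structure: {var: (parent, ...)}."""
    n = len(data)
    score = 0.0
    for var, parents in structure.items():
        counts = Counter((tuple(row[p] for p in parents), row[var])
                         for row in data)
        for pa in product((0, 1), repeat=len(parents)):
            n_pa = counts[(pa, 0)] + counts[(pa, 1)]
            for v in (0, 1):
                c = counts[(pa, v)]
                if c:
                    score += c * math.log(c / n_pa)  # log-likelihood
        # BIC penalty: one free parameter per parent configuration.
        score -= 0.5 * math.log(n) * (2 ** len(parents))
    return score

# Synthetic data: B copies A 90% of the time.
random.seed(0)
data = []
for _ in range(2000):
    a = random.randint(0, 1)
    b = a if random.random() < 0.9 else 1 - a
    data.append({"A": a, "B": b})

dependent = {"A": (), "B": ("A",)}
independent = {"A": (), "B": ()}
```

A structure-learning heuristic like the one proposed searches over candidate DAGs by comparing exactly such scores.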
Abstract: This paper proposes a new concept for developing a
collaborative design system. The conceptual framework applies
simulation of supply chain management to collaborative design and
is called the 'SCM-Based Design Tool'. The system is developed
particularly to support design activities and to integrate all
facilities together, with the aim of increasing design
productivity and creativity. Designers and customers can therefore
collaborate through the system from the conceptual design stage
onward. JAG, a Jewelry Art Generator based on artificial
intelligence techniques, is integrated into the system. Moreover,
the proposed system supports users with decision tools and data
propagation, covering the process from raw material supply to
product delivery. Data management and information sharing are
visually supported for designers and customers via a user
interface. The system is developed in a Web-assisted product
development environment. A prototype system is presented as a
demonstration for the Thai jewelry industry, but it is applicable
to other industries.
Abstract: In this paper, we present a user pattern learning
algorithm-based MDSS (Medical Decision Support System) for
ubiquitous environments. Most research focuses on hardware
systems, hospital management, and the overall concept of the
ubiquitous environment, even though these are hard to implement.
The objective of this paper is to design an MDSS framework that
helps patients with medical treatment and with prevention for
high-risk patients (COPD, heart disease, diabetes). The framework
consists of a database, a CAD (Computer-Aided Diagnosis support
system), and a CAP (Computer-Aided user vital sign Prediction
system). It can be applied to develop user pattern learning
algorithm-based MDSS for homecare and silver town services. In
particular, the CAD has sound decision-making competency: it
compares the current vital signs with the user's normal-condition
pattern data. In addition, the CAP computes a vital sign
prediction using the patient's past data. The novel aspect of the
approach is the combination of a neural network method, wireless
vital sign acquisition devices, and a personal computer database
system. An intelligent agent-based MDSS will help elderly people
and high-risk patients prevent sudden death and disease, give
physicians online access to patients' data, and support the
prioritization of medication services (e.g. emergency cases).
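The CAD's comparison of current vital signs against the user's normal-condition pattern can be sketched as a simple personal-baseline check; the z-score threshold and example readings are illustrative assumptions, not the paper's decision rule:

```python
import statistics

def is_abnormal(current, baseline, z_threshold=3.0):
    """Flag a vital sign that deviates from the user's own baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(current - mu) > z_threshold * sigma

# A user's resting heart-rate history (bpm) vs. two new readings.
history = [68, 70, 72, 69, 71, 70, 73, 69]
```

The point of personalizing the baseline, as the abstract suggests, is that a reading perfectly normal for one patient can be an early warning sign for another.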
Abstract: The modern telecommunication industry demands
higher capacity networks with high data rate. Orthogonal frequency
division multiplexing (OFDM) is a promising technique for high data
rate wireless communications at reasonable complexity in wireless
channels. OFDM has been adopted for many types of wireless
systems like wireless local area networks such as IEEE 802.11a, and
digital audio/video broadcasting (DAB/DVB). The proposed research
focuses on a concatenated coding scheme that improves the
performance of OFDM-based wireless communications. It uses a
Redundant Residue Number System (RRNS) code as the outer code
and a convolutional code as the inner code. Here, a direct
conversion of the analog signal to the residue domain is performed
using a sigma-delta based parallel analog-to-residue converter, to
reduce the conversion complexity. The bit error rate (BER)
performances of the proposed
system under different channel conditions are investigated. These
include the effect of additive white Gaussian noise (AWGN),
multipath delay spread, peak power clipping and frame start
synchronization error. The simulation results show that the proposed
RRNS-Convolutional concatenated coding (RCCC) scheme provides
significant improvement in the system performance by exploiting the
inherent properties of RRNS.
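The residue-domain representation at the heart of RRNS can be illustrated with a plain RNS round trip; the moduli set below is an illustrative choice, and the redundant moduli used for error detection/correction in the paper are omitted:

```python
from math import prod

MODULI = (7, 11, 13)   # pairwise coprime; dynamic range 7*11*13 = 1001

def to_residues(x, moduli=MODULI):
    """Forward conversion: integer -> residue digits."""
    return tuple(x % m for m in moduli)

def from_residues(residues, moduli=MODULI):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M
```

Residue arithmetic is carry-free: adding two numbers digit-wise modulo each modulus matches ordinary addition modulo the dynamic range, which is one of the inherent RRNS properties the scheme exploits.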
Abstract: Selection of maize (Zea mays) hybrids with wide adaptability across diverse farming environments is important prior to recommending them, in order to achieve a high rate of hybrid adoption. Grain yield of 14 maize hybrids, tested in a randomized complete-block design with four replicates across 22 environments in Iran, was analyzed using the site regression (SREG) stability model. The biplot technique facilitates a visual evaluation of superior genotypes, which is useful for cultivar recommendation and mega-environment identification. The objectives of this study were (i) to identify suitable hybrids with both high mean performance and high stability, and (ii) to determine mega-environments for maize production in Iran. Biplot analysis identified two mega-environments in this study. The first mega-environment included KRM, KSH, MGN, DZF A, KRJ, DRB, DZF B, SHZ B, and KHM, where hybrid G10 was the best-performing hybrid. The second mega-environment included ESF B, ESF A, and SHZ A, where hybrid G4 was the best hybrid. According to the ideal-hybrid biplot, hybrid G10 was better than all other hybrids, followed by hybrids G1 and G3. These hybrids were identified as the best hybrids, combining high grain yield with high yield stability. GGE biplot analysis provided a framework for identifying target testing locations that discriminate genotypes that are high yielding and stable.
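The SREG/GGE decomposition behind such biplots can be sketched directly with an SVD; the small synthetic genotype-by-environment yield matrix below is illustrative, not the trial data:

```python
import numpy as np

def gge_scores(Y, n_pc=2):
    """Environment-center a genotype x environment yield matrix and
    return genotype and environment scores for the first n_pc PCs."""
    G = Y - Y.mean(axis=0)   # remove environment main effects (SREG)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    # Symmetric singular-value partitioning between the two score sets.
    gen = U[:, :n_pc] * np.sqrt(s[:n_pc])
    env = Vt[:n_pc].T * np.sqrt(s[:n_pc])
    return gen, env

# Toy yields: 5 genotypes x 4 environments, a mean plus two GxE patterns.
Y = (6.0
     + np.outer([1.0, 0.0, -1.0, 2.0, -2.0], [0.5, -0.5, 0.2, 0.1])
     + np.outer([0.0, 1.0, 1.0, -1.0, 0.0], [0.1, 0.3, -0.2, 0.4]))
gen, env = gge_scores(Y)
```

Plotting the two columns of `gen` and `env` on shared axes gives the biplot; genotypes projecting furthest along an environment's vector are its best performers, which is how mega-environments and "which-won-where" patterns are read off.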
Abstract: The operating performance of a valveless micropump
is strongly dependent on the shape of the connected nozzle/diffuser
and on the Reynolds number. The aims of the present work are to
compare the performance curves of the micropump with the original
straight nozzle/diffuser and with a contoured nozzle/diffuser
under different back pressure conditions. The tested valveless
micropumps are assembled from five patterned PMMA plates using a
hot-embossing technique. The structures of the central chamber,
the inlet/outlet reservoirs, and the connected nozzle/diffuser are
fabricated with a laser cutting machine. The micropump is actuated
with a circular-type PZT film bonded to the bottom of the central
chamber. The deformation of the PZT membrane at various input
voltages is measured with a laser displacement probe. A simple
testing facility is also constructed to evaluate the performance
curves for comparison.
In order to observe the evolution of the low-Reynolds-number
multiple-vortex flow patterns within the micropump during the
suction and pumping modes, the unsteady, incompressible, laminar,
three-dimensional Reynolds-averaged Navier-Stokes equations are
solved. The working fluid is DI water with constant thermo-physical
properties. The oscillating behavior of the PZT film is modeled as
a moving boundary wall by means of a UDF program. With the dynamic
mesh method, the instantaneous pressure and velocity fields are
obtained and discussed. Results indicate that the volume flow rate
does not increase monotonically with the oscillating frequency of
the PZT film, regardless of the shape of the nozzle/diffuser. The
present micropump can generate a maximum volume flow rate of
13.53 ml/min when the operating frequency is 64 Hz and the input
voltage is 140 volts. The micropump with the contoured
nozzle/diffuser can provide a 7 ml/min flow rate even when the
back pressure is up to 400 mm-H2O. CFD results reveal that the
central chamber is occupied by multiple pairs of counter-rotating
vortices during the suction and pumping modes. The net volume flow
rate over a complete oscillating period of the PZT
Abstract: In this paper we present a novel approach to human
body configuration based on the silhouette. We propose to address
this problem within the Bayesian framework. We use an effective
model-based MCMC (Markov Chain Monte Carlo) method to solve the
configuration problem, in which the best configuration is defined
as the MAP (maximum a posteriori probability) estimate in the
Bayesian model. The model-based MCMC utilizes the human body model
to drive the MCMC sampling over the solution space. It converts
the original high-dimensional space into a restricted sub-space
constructed from the human model and uses a hybrid sampling
algorithm. We choose an explicit human model and carefully select
the likelihood functions to identify the best configuration
solution. The experiments show that this method can obtain an
accurate configuration in a timesaving manner for different humans
from multiple views.
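The MCMC search for a MAP configuration can be illustrated on a toy two-parameter posterior; the Gaussian log-posterior and plain random-walk proposal below are stand-ins for the paper's body model and hybrid sampler:

```python
import math
import random

def log_posterior(x, y):
    """Toy log-posterior with its mode at (1.0, -2.0)."""
    return -((x - 1.0) ** 2 + (y + 2.0) ** 2) / 0.5

def mh_map(n_iter=20000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings, tracking the best sample (MAP)."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    lp = log_posterior(x, y)
    best, best_lp = (x, y), lp
    for _ in range(n_iter):
        xn = x + rng.gauss(0.0, step)   # random-walk proposal
        yn = y + rng.gauss(0.0, step)
        lpn = log_posterior(xn, yn)
        # Metropolis acceptance rule.
        if lpn >= lp or rng.random() < math.exp(lpn - lp):
            x, y, lp = xn, yn, lpn
            if lp > best_lp:
                best, best_lp = (x, y), lp
    return best

mode = mh_map()
```

In the paper's setting, the state would be a full body-pose parameter vector, the log-posterior would combine the silhouette likelihood with model priors, and the human model would restrict and drive the proposals.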
Abstract: This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were used for the pulping application. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^-1 Ax = M^-1 b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on PCG-based training methods that originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), with Polak-Ribiere update (PCGP), and with Powell-Beale restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
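A minimal sketch of the preconditioned conjugate gradient iteration itself on the linear problem M^-1 Ax = M^-1 b, using a Jacobi (diagonal) preconditioner M = diag(A); this is a simpler preconditioner than the training variants the paper compares, chosen only for illustration:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for SPD A using Jacobi-preconditioned CG."""
    M_inv = 1.0 / np.diag(A)        # M = diag(A), applied elementwise
    x = np.zeros_like(b)
    r = b - A @ x                   # residual
    z = M_inv * r                   # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # Fletcher-Reeves-style beta
        rz = rz_new
    return x

# Small SPD test system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b)
```

Preconditioning pulls the eigenvalues of M^-1 A toward 1, so each CG step makes more progress than on the raw system, which is exactly the convergence argument the abstract gives.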
Abstract: PCCI engines can reduce NOx and PM emissions
simultaneously without sacrificing thermal efficiency, but a low
combustion temperature resulting from early fuel injection, and
ignition occurring prior to TDC, can cause higher THC and CO
emissions and fuel consumption. In conclusion, it was found that the
PCCI combustion achieved by the 2-stage injection strategy with
optimized calibration factors (e.g. EGR rate, injection pressure, swirl
ratio, intake pressure, injection timing) can reduce NOx and PM
emissions simultaneously. This research work is expected to
provide valuable information conducive to the development of an
innovative combustion engine that can meet upcoming stringent
emission standards.
Abstract: One of the main advantages of the LO paradigm is
that it makes good-quality, shareable learning material available
through the Web. The effectiveness of the retrieval process
requires a formal description of the resources (metadata) that
closely fits the user's search criteria; in spite of the huge
international efforts in this field, educational metadata schemata
often fail to fulfil this requirement. This work aims to improve
the situation through the definition of a metadata model capturing
specific didactic features of shareable learning resources. It
classifies LOs into "teacher-oriented" and "student-oriented"
categories, in order to describe the role an LO is to play when it
is integrated into the educational process. This article describes
the model and a first experimental validation process that has
been carried out in a controlled environment.
Abstract: In any distributed system, process scheduling plays a
vital role in determining the efficiency of the system. Process
scheduling algorithms are used to ensure that the components of
the system can maximize their utilization and complete all the
assigned processes within a specified period of time. This paper
focuses on the development of a comparative simulator for
distributed process scheduling algorithms. The objectives of the
work carried out include the development of the comparative
simulator, as well as a comparative study of three distributed
process scheduling algorithms: sender-initiated,
receiver-initiated, and hybrid sender-receiver-initiated
algorithms. The comparative study was based on the Average Waiting
Time (AWT) and Average Turnaround Time (ATT) of the processes
involved. The simulation results show that the performance of the
algorithms depends on the number of nodes in the system.
Abstract: This work considers the thermodynamic feasibility
of scrubbing volatile organic compounds (VOCs) into biodiesel with
a view to designing a gas treatment process with this absorbent. A
detailed vapour-liquid equilibrium investigation was performed
using the original UNIFAC group contribution method. The four
biodiesels studied in this work are methyl oleate, methyl
palmitate, methyl linolenate, and ethyl stearate. The original
UNIFAC procedure was used to estimate the infinite dilution
activity coefficients of 13 selected volatile organic compounds in
the biodiesels. The calculations were done at a VOC mole fraction
of 9.213x10^-8. Ethyl stearate gave the most favourable phase
equilibrium. Close agreement was found between the infinite
dilution activity coefficient of toluene found in this work and
those reported in the literature. Thermodynamic models can
efficiently be used to calculate a vast amount of phase
equilibrium behaviour from a limited number of experimental data.
Abstract: Recent years have seen a growing trend towards the
integration of multiple information sources to support large-scale
prediction of protein-protein interaction (PPI) networks in model
organisms. Despite advances in computational approaches, the
combination of multiple "omic" datasets representing the same type
of data, e.g. different gene expression datasets, has not been
rigorously studied. Furthermore, there is a need to further
investigate the inference capability of powerful approaches, such
as fully-connected Bayesian networks, in the context of the
prediction of PPI networks. This paper addresses these limitations
by proposing a Bayesian approach to integrate multiple datasets,
some of which encode the same type of "omic" data, to support the
identification of PPI networks. The case study reported involved
the combination of
three gene expression datasets relevant to human heart failure (HF).
In comparison with two traditional methods, naive Bayesian and
maximum likelihood ratio approaches, the proposed technique can
accurately identify known PPIs and can be applied to infer
potentially novel interactions.
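The evidence-integration idea behind the baseline methods can be caricatured with a naive Bayes combination of per-dataset likelihood ratios; the ratios and prior below are invented numbers for illustration only, not values from the HF case study:

```python
import math

def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent evidence sources in log space (naive Bayes)."""
    log_odds = math.log(prior_odds) + sum(math.log(lr)
                                          for lr in likelihood_ratios)
    return math.exp(log_odds)

def posterior_prob(prior_odds, likelihood_ratios):
    odds = posterior_odds(prior_odds, likelihood_ratios)
    return odds / (1.0 + odds)

# Three expression datasets each weakly favour an interaction (LR > 1).
prior = 0.001 / 0.999          # interactions are rare a priori
p = posterior_prob(prior, [8.0, 5.0, 12.0])
```

The limitation the paper targets is visible here: treating three expression datasets as independent sources overstates the evidence when they encode the same type of "omic" data, which is what the proposed Bayesian integration corrects for.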
Abstract: Photovoltaic power generation forecasting is an
important task in renewable energy power system planning and
operation. This paper explores the application of neural networks
(NN) to the design of photovoltaic power generation forecasting
systems for one week ahead, using weather databases that include
the global irradiance and temperature of Ghardaia city (southern
Algeria), collected with a data acquisition system. Simulations
were run and the results are discussed, showing that the neural
network technique is capable of decreasing the photovoltaic power
generation forecasting error.
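The forecasting model can be sketched with a minimal one-hidden-layer network trained by gradient descent on synthetic irradiance-to-power data; the data, architecture, and learning rate are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: normalized irradiance -> normalized PV power.
X = rng.uniform(0.0, 1.0, (200, 1))
y = 0.8 * X + 0.05 * rng.normal(size=(200, 1))  # near-linear PV response

# One hidden layer (tanh), linear output, full-batch gradient descent.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
n = len(X)
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagation of the mean squared error.
    g2 = h.T @ err / n
    g1 = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ g1 / n
    W2 -= lr * g2; b2 -= lr * err.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * g1.mean(axis=0)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

A week-ahead forecaster of the kind described would use the same structure with lagged irradiance and temperature values as inputs and future power output as the target.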