Abstract: Modern management of water distribution systems (WDS)
requires water quality models that can accurately predict the
dynamics of water quality variations within the distribution system
environment. Before water quality models can be applied to solve
system problems, they must be calibrated. Although previous
researchers have used genetic algorithm (GA) solvers to calibrate
the relevant parameters, this approach is difficult to apply to
medium- and large-scale real systems because of its long
computational time. In this paper a new method is designed that
combines a macro model with a detailed model to optimize the water
quality parameters. This combined algorithm uses radial basis
function (RBF) metamodeling as a surrogate to be optimized, which
reduces the number of time-consuming water quality simulations and
enables rapid calibration of the pipe wall reaction coefficients of
the chlorine model of a large-scale WDS. Two case studies show the
method to be efficient and promising, and it deserves wider
application in the future.
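The surrogate idea described above can be sketched as follows: fit an RBF metamodel to a few evaluations of an expensive simulation, then optimize the cheap metamodel instead. The quadratic objective below is a hypothetical stand-in for the paper's water-quality simulator, and the use of SciPy is an assumption, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Hypothetical stand-in for the expensive water-quality simulation:
# maps a vector of wall-reaction coefficients to a calibration error.
def expensive_simulation(k):
    return float(np.sum((k - 0.3) ** 2))

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(30, 2))   # sampled coefficient vectors
errors = np.array([expensive_simulation(k) for k in samples])

surrogate = RBFInterpolator(samples, errors)    # cheap RBF metamodel

# Optimize the surrogate instead of calling the simulator repeatedly.
res = minimize(lambda k: surrogate(k.reshape(1, -1))[0],
               x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print(res.x)  # close to the true optimum [0.3, 0.3]
```

Only the 30 sampling runs invoke the expensive simulator; every optimizer iteration evaluates the fast RBF interpolant.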
Abstract: In any distributed system, process scheduling plays a
vital role in determining the efficiency of the system. Process
scheduling algorithms are used to ensure that the components of the
system maximize their utilization and complete all assigned
processes in a specified period of time. This paper focuses on the
development of a comparative simulator for distributed process
scheduling algorithms. The objectives of the work include the
development of the comparative simulator and a comparative study of
three distributed process scheduling algorithms: sender-initiated,
receiver-initiated and hybrid sender-receiver-initiated algorithms.
The comparative study was based on the Average Waiting Time (AWT)
and Average Turnaround Time (ATT) of the processes involved. The
simulation results show that the performance of the algorithms
depends on the number of nodes in the system.
Abstract: Vision-based solutions in intelligent vehicle applications often need large memory to handle the video stream and image processing, which increases the complexity of the hardware and software. In this paper, we present an FPGA implementation of a vision-based lane departure warning system. From each video frame, the line gradient is estimated and the lane marks are found. By analyzing the position of the lane marks, departure of the vehicle is detected in time. The design has been implemented on a Xilinx Spartan-6 FPGA. The lane departure warning system uses 39% of the logic resources and none of the memory of the device. The average availability is 92.5%, and the frame rate is more than 30 frames per second (fps).
Abstract: Many existing studies use Markov decision processes
(MDPs) to model optimal route choice in stochastic, time-varying
networks. However, taking large volumes of variable traffic data
and transforming them into optimal route decisions is a
computational challenge when MDPs are employed in real
transportation networks. In this paper we model finite-horizon MDPs
using directed hypergraphs. It is shown that the problem of route
choice in stochastic, time-varying networks can be formulated as a
minimum cost hyperpath problem, which can be solved in linear time.
We finally demonstrate the significant computational advantages of
the introduced methods.
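The hypergraph formulation itself is the paper's contribution and is not reproduced here, but the recursion underneath finite-horizon route choice is plain backward induction. A minimal sketch on an illustrative acyclic network with expected arc costs (not the paper's data):

```python
# Backward induction for route choice on a small acyclic network.
# arcs[state] = list of (next_state, expected_travel_cost).
arcs = {
    "A": [("B", 2.0), ("C", 4.0)],
    "B": [("D", 5.0), ("C", 1.0)],
    "C": [("D", 1.0)],
    "D": [],  # destination
}

def min_expected_cost(origin, dest):
    # value[s] = minimum expected cost-to-go from s; one backward pass
    # in reverse topological order suffices on an acyclic network.
    value = {dest: 0.0}
    for s in ["C", "B", "A"]:           # reverse topological order
        value[s] = min(c + value[t] for t, c in arcs[s])
    return value[origin]

print(min_expected_cost("A", "D"))  # A -> B -> C -> D = 2 + 1 + 1 = 4.0
```

The hyperpath machinery generalizes this recursion so that stochastic, time-varying arc costs can be handled in linear time in the size of the hypergraph.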
Abstract: With the availability of inexpensive 32-bit floating-point digital signal processors on the market, many computationally intensive algorithms such as the Kalman filter have become feasible to implement in real time. Dynamic simulation of a self-excited DC motor using a second-order state variable model, and implementation of the Kalman filter on a floating-point DSP, the TMS320C6713, are presented in this paper with the objective of introducing such an algorithm to beginners. A fractional-hp DC motor is simulated in both Matlab® and the DSP, and the results are included. A step-by-step approach for the simulation of the DC motor in Matlab® and the "C" routines in CC Studio® is also given, and CC Studio® project file details and environment settings are addressed. This tutorial can be used with the 6713 DSK, which is based on a floating-point DSP, and CC Studio in either hardware mode or simulation mode.
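The tutorial above targets C on the DSP; the predict/update cycle it implements can be sketched compactly in Python. The 2-state model below is illustrative, not the paper's DC-motor model:

```python
import numpy as np

# Minimal discrete-time Kalman filter for a 2-state linear system.
A = np.array([[1.0, 0.1], [0.0, 0.9]])   # state transition (illustrative)
H = np.array([[1.0, 0.0]])               # measure the first state only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.05]])                   # measurement noise covariance

def kalman_step(x, P, z):
    # predict
    x = A @ x
    P = A @ P @ A.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
x_true = np.array([1.0, 0.5])
x_est, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = A @ x_true
    z = H @ x_true + rng.normal(0, 0.05, size=1)   # noisy measurement
    x_est, P = kalman_step(x_est, P, z)
print(np.abs(x_est - x_true))  # estimation error stays small
```

On a fixed-point or floating-point DSP the same loop runs per sample; for a 2-state system the matrix operations reduce to a handful of multiply-accumulates.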
Abstract: These days people love to travel around the world.
Regardless of location and time, Muslims in particular still need
to perform their prayers. Travelers normally need to bring maps and
a compass, and Muslim travelers even have to bring a Qibla pointer;
it remains difficult to determine the Qibla direction and the time
for each prayer. As technology advances, many PDAs are equipped
with maps and GPS for localization. In this paper we present a new
electronic device, the Mobile Qibla and Prayer Time Finder, which
locates the Qibla direction and determines each prayer time based
on the user's current location using a PDA. The device uses a PIC
microcontroller equipped with a digital compass; it communicates
with the PDA over Bluetooth and automatically displays the exact
Qibla direction and prayer times at any place in the world. The
device is reliable and accurate in determining the Qibla direction
and prayer time.
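The abstract does not state which formula the device evaluates; the standard great-circle Qibla bearing, which a device like this would plausibly compute from GPS coordinates, is:

```python
import math

# Great-circle bearing from an observer to the Kaaba, measured
# clockwise from true north. Kaaba coordinates are public values;
# the test location (Kuala Lumpur) is illustrative.
KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # degrees

def qibla_bearing(lat, lon):
    phi, lam = math.radians(lat), math.radians(lon)
    phi_k, lam_k = math.radians(KAABA_LAT), math.radians(KAABA_LON)
    d_lam = lam_k - lam
    x = math.cos(phi) * math.tan(phi_k) - math.sin(phi) * math.cos(d_lam)
    bearing = math.degrees(math.atan2(math.sin(d_lam), x))
    return bearing % 360.0

print(qibla_bearing(3.1390, 101.6869))  # Kuala Lumpur -> about 292.5
```

The digital compass supplies the device's current heading, so the display only needs the difference between this bearing and the heading.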
Abstract: Resins are used in nuclear power plants for water
ultrapurification. Two approaches are considered in this work:
column experiments and simulations. A software package called
OPTIPUR was developed, tested and used. It simulates
one-dimensional reactive transport in a porous medium, with
convective-dispersive transport between particles and diffusive
transport within the boundary layer around the particles. The
transfer limitation in the boundary layer is characterized by the
mass transfer coefficient (MTC). The influences on the MTC were
measured experimentally: varying the inlet concentration does not
influence the MTC, whereas the Darcy velocity does. This is
consistent with results obtained using the correlation of Dwivedi
and Upadhyay. Given the MTC, the number of exchange sites and the
relative affinity, OPTIPUR can simulate the column outlet
concentration versus time. The duration of use of the resins can
then be predicted under conditions of binary exchange.
Abstract: This paper presents an optimized model to investigate the
effects of peak current, pulse-on time and pulse-off time on the
material removal rate (MRR) in EDM of a titanium alloy, using a
copper-tungsten electrode with positive polarity. The experiments
are carried out on Ti6Al4V by varying the peak current, pulse-on
time and pulse-off time. A mathematical model is developed to
correlate the influence of these variables with the material
removal rate of the workpiece. Design of experiments (DOE) and
response surface methodology (RSM) techniques are implemented. The
validity of the fit and the adequacy of the proposed models are
tested through analysis of variance (ANOVA). The results show that
the material removal rate increases as peak current and pulse-on
time increase, while the effect of pulse-off time on MRR changes
with the peak current. The optimum machining conditions for
material removal rate are estimated and verified against the
proposed optimized results. The developed model is within the
limits of acceptable error (about 4%) when compared with the
experimental results, leading to a desirable material removal rate
and economical industrial machining with optimized input
parameters.
Abstract: The RR interval series is non-stationary and unevenly
spaced in time. Estimating its power spectral density (PSD) with
traditional techniques such as the FFT requires resampling at
uniform intervals, and researchers have used various interpolation
techniques as resampling methods. All of these resampling methods
introduce a low-pass filtering effect into the power spectrum. The
Lomb transform is a means of obtaining PSD estimates directly from
an irregularly sampled RR interval series, thus avoiding
resampling. In this work, the superiority of the Lomb transform
method over the FFT-based approach, with linear and cubic-spline
interpolation as resampling methods, is established in terms of the
reproduction of exact frequency locations as well as the relative
magnitudes of each spectral component.
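The direct estimation described above is available off the shelf; a minimal sketch with a synthetic unevenly sampled series standing in for an RR tachogram (SciPy is an assumption, not the authors' tooling):

```python
import numpy as np
from scipy.signal import lombscargle

# Lomb periodogram of an unevenly sampled signal: no resampling, so
# no interpolation-induced low-pass effect.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 400))   # irregular sample times (s)
f_true = 0.25                           # Hz (HF band of HRV)
y = np.sin(2 * np.pi * f_true * t)
y -= y.mean()                           # Lomb assumes zero-mean input

freqs = np.linspace(0.01, 0.5, 500)     # Hz
pgram = lombscargle(t, y, 2 * np.pi * freqs)  # expects angular freqs
print(freqs[np.argmax(pgram)])          # peak recovered near 0.25 Hz
```

The exact frequency location is recovered because the estimator is evaluated at arbitrary frequencies directly from the irregular samples.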
Abstract: Research on damage in gears and gear pairs using
vibration signals remains very attractive, because vibration
signals from a gear pair are complex in nature and not easy to
interpret. Predicting gear pair defects by analyzing changes in the
vibration signal of gear pairs in operation is a very reliable
method. A suitable vibration signal processing technique is
therefore necessary to extract defect information that is generally
obscured by noise from the dynamic factors of other gear pairs.
This article presents the value of cepstrum analysis in vehicle
gearbox fault diagnosis. The cepstrum represents the overall power
content of a whole family of harmonics and sidebands, even when
more than one family of sidebands is present at the same time. The
concepts of the measurement and analysis involved in using the
technique are briefly outlined. Cepstrum analysis is used to detect
an artificial pitting defect in a vehicle gearbox loaded at
different speeds and torques. The test stand is equipped with three
dynamometers: the input dynamometer serves as the internal
combustion engine, and the output dynamometers introduce the load
on the flanges of the output joint shafts. The pitting defect is
manufactured on the tooth side of a gear of the fifth speed on the
secondary shaft. A method for fault diagnosis of gear faults based
on the order cepstrum is also presented. The procedure is
illustrated with experimental vibration data from the vehicle
gearbox. The results show the effectiveness of cepstrum analysis in
the detection and diagnosis of the gear condition.
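The collapsing of a whole family of spectral lines into one cepstral peak can be sketched on a synthetic signal (periodic fault impacts in noise, not the paper's gearbox data):

```python
import numpy as np

# Real cepstrum: a family of spectral lines spaced at the fault
# frequency shows up as a single cepstral peak at the fault period.
fs = 8000                       # Hz
n = 2 * fs                      # 2 s of signal
f_fault = 40.0                  # impact rate -> period 25 ms
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(n)
x[:: int(fs / f_fault)] += 1.0  # one impact every 200 samples

def real_cepstrum(sig):
    # inverse FFT of the log magnitude spectrum
    return np.fft.irfft(np.log(np.abs(np.fft.rfft(sig)) + 1e-12))

c = real_cepstrum(x)
quefrency = np.arange(len(c)) / fs
# search the plausible fault-period range, 5-35 ms, skipping the
# low-quefrency region dominated by the spectral envelope
lo, hi = int(0.005 * fs), int(0.035 * fs)
peak_q = quefrency[lo + np.argmax(c[lo:hi])]
print(peak_q)                   # about 1/40 = 0.025 s
```

All the 40 Hz-spaced lines in the spectrum contribute to this single rahmonic, which is why the cepstrum summarizes a sideband family so compactly.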
Abstract: This paper aims to develop an algorithm for a finite
capacity material requirement planning (FCMRP) system for a
multi-stage assembly flow shop. The developed FCMRP system has two
main stages. The first stage is to allocate operations to the first and
second priority work centers and also determine the sequence of the
operations on each work center. The second stage is to determine the
optimal start time of each operation by using a linear programming
model. Real data from a factory is used to analyze and evaluate the
effectiveness of the proposed FCMRP system and also to guarantee a
practical solution to the user. There are five performance measures,
namely, the total tardiness, the number of tardy orders, the total
earliness, the number of early orders, and the average flow-time. The
proposed FCMRP system offers an adjustable solution that is a
compromise among the conflicting performance measures. The user can
adjust the weight of each performance measure to obtain the desired
performance. The results show that the combination of FCMRP NP3 and
EDD outperforms the other combinations in terms of the overall
performance index. The calculation time of the proposed FCMRP
system is about 10 minutes, which is practical for the planners of
the factory.
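The second stage described above (optimal start times via linear programming) can be sketched at toy scale. The durations, due dates, weights, and the use of SciPy's `linprog` are illustrative assumptions, not the paper's factory model:

```python
from scipy.optimize import linprog

# Two operations on one work center: operation 2 must follow
# operation 1; choose start times minimizing weighted tardiness.
# Decision variables: [s1, s2, T1, T2] (start times, tardiness).
d1, d2 = 3.0, 4.0          # processing times
due1, due2 = 2.0, 8.0      # due dates
w1, w2 = 1.0, 1.0          # tardiness weights

c = [0.0, 0.0, w1, w2]     # minimize w1*T1 + w2*T2
A_ub = [
    [1.0, -1.0, 0.0, 0.0],  # s1 + d1 <= s2        (precedence)
    [1.0, 0.0, -1.0, 0.0],  # s1 + d1 - due1 <= T1 (tardiness 1)
    [0.0, 1.0, 0.0, -1.0],  # s2 + d2 - due2 <= T2 (tardiness 2)
]
b_ub = [-d1, due1 - d1, due2 - d2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.fun)  # 1.0: operation 1 cannot avoid one unit of tardiness
```

Earliness and flow-time terms enter the same way, as extra nonnegative variables bounded below by linear expressions in the start times, which is what makes the weighted multi-measure objective an LP.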
Abstract: One of the main research directions in the CAD/CAM
machining area is the reduction of machining time.
Feedrate scheduling is one of the advanced techniques that keeps
the uncut chip area constant and, as a consequence, keeps the main
cutting force constant. There are two main approaches to feedrate
optimization. The first consists of cutting force monitoring, which
requires complex equipment for force measurement and then sets the
feedrate according to the cutting force variation. The second is to
optimize the feedrate by keeping the material removal rate constant
for the given cutting conditions.
This paper proposes a new approach using an extended database that
replaces the system model.
The feedrate schedule is determined based on the identification of
the reconfigurable machine tool, with the feed value determined
from the uncut chip section area, the contact length between tool
and blank, and the geometrical roughness.
The first stage consists of monitoring the blank and the tool to
determine their actual profiles. The next stage is the
determination of the programmed tool path that yields the target
profile of the piece.
The graphic representation environment models the tool and blank
regions, and the tool model is then positioned relative to the
blank model according to the programmed tool path. For each of
these positions the geometrical roughness value, the uncut chip
area and the contact length between tool and blank are calculated.
Each of these parameters is compared with its admissible value, and
the feed value is set accordingly.
This approach has the following advantages: the cutting force can
be predicted even for complex cutting processes; the real cutting
profile, which deviates from the theoretical profile, is taken into
account; the blank-tool contact length can be limited; and the
programmed tool path can be corrected so that the target profile is
obtained.
Applying this method yields data sets that allow feedrate
scheduling such that the uncut chip area, and as a result the
cutting force, is constant, which makes better use of the machine
tool and reduces machining time.
Abstract: An innovative fuzzy estimator is used in this study to
estimate the ground motion acceleration acting on a retaining
structure. The method has two main components: a Kalman filter
without the input term, and a fuzzy weighting recursive least
squares estimator. The innovation vector produced by the Kalman
filter is fed to the fuzzy weighting recursive least squares
estimator to estimate the acceleration input over time. The
performance of the estimator is demonstrated by comparing different
weighting functions and distinct levels of the measurement noise
covariance and the initial process noise covariance. The
availability and precision of the proposed method are verified by
comparing the actual value with the one obtained by numerical
simulation.
Abstract: This paper presents a new high-speed simulation methodology to address the long simulation times of CMOS image sensor pixel matrices. Generally, to integrate the pixel matrix into an SOC and simulate the system performance, designers model the pixel in languages such as VHDL-AMS, SystemC or Matlab. We introduce an alternative method based on a SPICE model in the Cadence design platform to achieve accuracy while reducing simulation time. The simulation results indicate that the maximum error in the pixel output voltage is 0.7812%, and the simulation time is reduced from 2.2 days to 13 minutes, a speed-up of about 240X for a 256x256 pixel matrix.
Abstract: In this study we present a system capable of delivering a
proxy-based differentiated service. It helps a carrier service node
sell a prepaid service to clients and limit its use to a particular
mobile device or devices for a certain time. The system includes a
software and hardware architecture for a mobile device with
moderate computational power, and a secure protocol for
communication between the device and its carrier service node. On
the carrier service node, a proxy runs on a centralized server
capable of implementing cryptographic algorithms, while the mobile
device contains a simple embedded processor capable of executing
simple algorithms. One prerequisite for the system to run
efficiently is the presence of a Global Trusted Verification
Authority (GTVA), which is equivalent to a certifying authority in
IP networks. The system appears to be of great interest for many
commercial transactions, business-to-business electronic and mobile
commerce, and military applications.
Abstract: Outsourcing, a management practice strongly consolidated
within the area of Information Systems, is currently going through
a stage of unstoppable growth. This paper proposes the main reasons
that may lead firms to adopt Information Systems outsourcing, and
equally analyses the potential risks that IS clients are likely to
face. An additional objective is to assess these reasons and risks
in the case of large Spanish firms, while simultaneously examining
their evolution over time.
Abstract: Some of students' problems with writing skills stem from
inadequate preparation for the writing assignment. Students should
be taught how to write well once they arrive in language classes.
In one strategy, having selected a topic, the students examine and
explore the theme from as large a variety of viewpoints as their
background and imagination make possible. In another, the students
prepare an outline before writing the paper. A comparison between
these two thought-provoking techniques was carried out between two
class groups, students of the Islamic Azad University of Dezful who
were taking "Writing 2" as their main course. Each class group was
assigned to write five compositions separately at different times.
A t-test for each pair of exams between the two class groups then
showed that the t-observed in each pair was greater than the
t-critical. Consequently, the first hypothesis, which states that
those who use brainstorming as a thought-provoking technique in the
prewriting phase are more successful than those who outline their
papers before writing, was verified.
Abstract: Three sulphonic acid-doped polyanilines were synthesized
through chemical oxidation at low temperature (0-5 °C), and the
potential of these polymers as sensing agents for O2 gas detection
via fluorescence quenching was studied. Sulphuric acid,
dodecylbenzene sulphonic acid (DBSA) and camphor sulphonic acid
(CSA) were used as doping agents. All polymers obtained were dark
green powders. They were characterized by Fourier transform
infrared spectroscopy, ultraviolet-visible absorption spectroscopy,
thermogravimetric analysis, elemental analysis, differential
scanning calorimetry and gel permeation chromatography. These
characterizations showed that the polymers were successfully
synthesized, with mass recoveries for sulphuric acid-doped
polyaniline (SPAN), DBSA-doped polyaniline (DBSA-doped PANI) and
CSA-doped polyaniline (CSA-doped PANI) of 71.40%, 75.00% and
39.96%, respectively. The doping levels of SPAN, DBSA-doped PANI
and CSA-doped PANI, determined by elemental analysis, were 32.86%,
33.13% and 53.96%, respectively. Sensing tests were carried out on
polymer samples in both solution and film form using a fluorescence
spectrophotometer. Both polymer solutions and polymer films showed
a positive response towards O2 exposure. All polymer solutions and
films were fully regenerated using N2 gas within a 1-hour period. A
photostability study showed that all samples of polymer solutions
and films were stable towards light when continuously exposed to a
xenon lamp for 9 hours. The relative standard deviation (RSD)
values for repeatability for the SPAN, DBSA-doped PANI and
CSA-doped PANI solutions were 0.23%, 0.64% and 0.76%, respectively,
while the RSD values for reproducibility were 2.36%, 6.98% and
1.27%, respectively. The SPAN, DBSA-doped PANI and CSA-doped PANI
films showed the same pattern, with repeatability RSD values of
0.52%, 4.05% and 0.90%, respectively, and reproducibility RSD
values of 2.91%, 10.05% and 7.42%, respectively. The effect of the
flow rate on the response time was studied at three rates: 0.25
mL/s, 1.00 mL/s and 2.00 mL/s. The results showed that the higher
the flow rate, the shorter the response time.
Abstract: Many studies have shown that parallelization decreases efficiency [1], [2], and there are many reasons for these decrements. This paper investigates those that appear in the context of parallel data integration. Integration processes generally cannot be allocated to packages of identical size (i.e., tasks of identical complexity), because unknown, heterogeneous input data result in variable task lengths. The process delay is defined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study shows that while the process delay initially increases with the introduction of more nodes, it ultimately decreases again after a certain point. The example makes use of the cloud computing platform Hadoop and is run inside Amazon's EC2 compute cloud. A stochastic model is set up that can explain this effect.
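The basic mechanism above (the slowest node sets the makespan when task lengths vary) can be sketched with a toy load balancer; the task lengths and assignment rule are illustrative and are not the paper's stochastic model:

```python
import random

# Makespan of variable-length tasks spread over n nodes: the total
# processing time is set by the most heavily loaded (slowest) node.
def makespan(task_lengths, n_nodes):
    # greedy longest-processing-time-first assignment
    loads = [0.0] * n_nodes
    for t in sorted(task_lengths, reverse=True):
        loads[loads.index(min(loads))] += t
    return max(loads)

random.seed(0)
tasks = [random.uniform(1, 10) for _ in range(200)]
for n in (1, 4, 16, 64):
    span = makespan(tasks, n)
    delay = span - sum(tasks) / n   # excess over a perfectly balanced split
    print(n, round(span, 1), round(delay, 2))
```

The `delay` column is the imbalance penalty the paper studies: it is zero on one node, appears as soon as tasks are split, and its relative weight changes as nodes are added.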
Abstract: An additive fuzzy system comprising m rules with n inputs
and p outputs in each rule has at least m(2n + 2p + 1) parameters
that need to be tuned. The system consists of a large number of
if-then fuzzy rules and takes a long time to tune its parameters,
especially for a large amount of training data samples. In this
paper, a new learning strategy is investigated to cope with this
obstacle: parameters that tend toward constant values during the
learning process are fixed early and are not tuned for the rest of
the learning time. Experiments on applications of the additive
fuzzy system to function approximation demonstrate that the
proposed approach reduces the learning time and hence improves
convergence speed considerably.
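The count m(2n + 2p + 1) is consistent with one plausible parameterization, assumed here for illustration: a center and a width per input membership function (2n), a center and a width (or volume) per output set (2p), plus one rule weight:

```python
# Parameter count of an additive fuzzy system under the assumed
# per-rule breakdown: 2n antecedent + 2p consequent + 1 rule weight.
def n_params(m, n, p):
    return m * (2 * n + 2 * p + 1)

print(n_params(30, 4, 1))  # 30 rules, 4 inputs, 1 output -> 330
```

Even this modest configuration already has hundreds of tunable parameters, which is why freezing the early-converging ones pays off.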