Abstract: Sedimentation resulting from soil erosion in a water
basin, especially in arid and semi-arid regions where poor
vegetation cover on the upstream mountain slopes contributes to
sediment formation, not only causes considerable change in the
morphology of the river and its hydraulic characteristics but also
poses a major challenge for the operation and maintenance of the
canal networks which depend on the water flow to meet the
stakeholders' requirements.
For this reason, mathematical modeling can be used to simulate the
factors affecting scour, sediment transport and settling along
waterways. This is particularly important behind reservoirs, where
it enables operators to estimate the useful life of
these hydraulic structures. The aim of this paper is to simulate the
sedimentation and erosion in the eastern and western water intake
structures of the Dez Diversion weir using GSTARS-3 software. This
is done to estimate the sedimentation and investigate the ways in
which to optimize the process and minimize the operational
problems. Results indicated that at the furthest point upstream of
the diversion weir, the coarser sediment grains tended to settle.
The reason for this is the construction of the phantom bridge and
the protruding rocks just upstream of the structure: these
obstacles along the river course reduce the momentum needed to
carry the sediment load, allowing it to settle wherever the river
regime permits. Results further indicated a trend in sediment size:
as the focus of study shifts downstream, the grain size becomes
smaller, and vice versa. It
was also found that the findings of GSTARS-3 closely matched the
observed data. This suggests that the software is a powerful
analytical tool which can be applied in river engineering projects
at minimal cost and with relatively accurate results.
Abstract: The quality of short-term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to the trial-and-error approach. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process, i.e. optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimal large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a stochastic optimization method based on swarm intelligence with a strong global optimization capability. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be easily optimized. The proposed method is applied to STLF for a local utility. Data are clustered according to differences in their characteristics. Special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed method optimized by PSO speeds up network learning and improves forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to calculate, but also practical and effective. It provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
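The swarm update loop the abstract relies on can be sketched in a few lines. The toy objective below (a sphere function) merely stands in for the ANN training error surface; the swarm size, inertia weight `w` and acceleration coefficients `c1`, `c2` are illustrative assumptions, not the paper's configuration.

```python
import random

def pso(objective, dim, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over R^dim with a basic inertia-weight PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the ANN training error (sphere function, minimum 0).
best, best_val = pso(lambda x: sum(v * v for v in x), dim=4)
```

In the paper's setting, `objective` would instead evaluate the forecasting error of a candidate network structure and weight vector.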
Abstract: Discretization of spatial derivatives is an important
issue in meshfree methods, especially when the derivative terms
contain non-linear coefficients. In this paper, various methods used
for discretization of second-order spatial derivatives are investigated
in the context of Smoothed Particle Hydrodynamics. Three popular
forms (i.e. "double summation", "second-order kernel derivation",
and "difference scheme") are studied using one-dimensional unsteady
heat conduction equation. To assess these schemes, the transient
response to a step-function initial condition is considered. Due to
the parabolic nature of the heat equation, one can expect smooth
and monotone solutions. This paper shows, however, that regardless
of the type of kernel function used and the size of the smoothing
radius, the double summation discretization form leads to
non-physical oscillations which persist in the solution. Results
also show that when a second-order kernel derivative is used, a
high-order kernel function must be employed such that the distance
of its inflection point from the origin is less than the
nearest-particle distance. Otherwise, solutions may exhibit
oscillations near discontinuities, unlike the "difference scheme",
which unconditionally produces monotone results.
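A minimal sketch of the "difference scheme" singled out above, applied to the same 1-D unsteady heat-conduction test with a step initial condition. The cubic spline kernel, smoothing length h = 1.2 dx, and explicit Euler time step are assumptions for illustration, not the paper's exact setup.

```python
def w_prime(r, h):
    # Derivative of the 1-D cubic spline kernel (sigma = 2/(3h)); <= 0.
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma / h * (-3.0 * q + 2.25 * q * q)
    if q < 2.0:
        return sigma / h * (-0.75 * (2.0 - q) ** 2)
    return 0.0

def sph_heat_step(x, T, h, dx, dt):
    """One explicit Euler step of dT/dt = d2T/dx2, with the Laplacian
    approximated by the SPH 'difference scheme' (Brookshaw-type form)."""
    lap = [0.0] * len(x)
    for i in range(len(x)):
        for j in range(len(x)):
            r = abs(x[i] - x[j])
            if j == i or r >= 2.0 * h:
                continue
            # Non-negative weight 2*(m/rho)*|W'(r)|/r; m/rho = dx (uniform).
            lap[i] += dx * 2.0 * (-w_prime(r, h)) / r * (T[j] - T[i])
    Tn = [T[i] + dt * lap[i] for i in range(len(x))]
    Tn[0], Tn[-1] = T[0], T[-1]          # fixed-value ends
    return Tn

# Step-function initial condition on [0, 1]
N, dx = 51, 1.0 / 50
x = [i * dx for i in range(N)]
h, dt = 1.2 * dx, 0.1 * dx * dx
T = [1.0 if xi < 0.5 else 0.0 for xi in x]
for _ in range(200):
    T = sph_heat_step(x, T, h, dx, dt)
```

Because every weight is non-negative and the time step keeps each update a convex combination of neighbouring values, the solution stays bounded by the initial extremes, which is the monotone behaviour claimed for this form.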
Abstract: An attempt has been made to investigate the
machinability of zirconia toughened alumina (ZTA) inserts while
turning AISI 4340 steel. The insert was prepared by a powder
metallurgy process route, and the machining experiments were
performed based on a Response Surface Methodology (RSM) design
called Central Composite Design (CCD). Mathematical models of flank
wear, cutting force and surface roughness have been developed using
second-order regression analysis. The adequacy of the models has
been checked using Analysis of Variance (ANOVA) techniques. It can
be concluded that cutting speed and feed rate are the two most
influential factors for flank wear and cutting force prediction.
For surface roughness, both cutting speed and depth of cut make
significant contributions. The effect of the key parameters on each
response is also presented as graphical contours for choosing the
operating parameters precisely. An 83% desirability level has been
achieved at the optimized condition.
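The second-order regression step described above can be sketched as an ordinary least-squares fit of a full quadratic model in two coded factors (standing in for cutting speed and feed rate). The synthetic response data below are purely illustrative, not the paper's measurements.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*v + b2*f + b3*v^2 + b4*f^2 + b5*v*f
    via the normal equations (Gaussian elimination, pure Python)."""
    rows = [[1.0, v, f, v * v, f * f, v * f] for v, f in xs]
    n = 6
    # Build X^T X and X^T y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= m * A[c][k]
            b[r] -= m * b[c]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][k] * coef[k] for k in range(r + 1, n))) / A[r][r]
    return coef

# Synthetic response generated from a known quadratic (illustrative only)
true = [2.0, 0.5, -1.0, 0.3, 0.2, -0.4]
pts = [(v, f) for v in (-1, -0.5, 0, 0.5, 1) for f in (-1, -0.5, 0, 0.5, 1)]
ys = [true[0] + true[1]*v + true[2]*f + true[3]*v*v + true[4]*f*f + true[5]*v*f
      for v, f in pts]
coef = fit_quadratic(pts, ys)
```

On real CCD data the fit would of course not be exact; the ANOVA step then judges whether each term's contribution is significant.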
Abstract: In this paper, a periodic surveillance scheme is
proposed for any convex region using mobile wireless sensor nodes.
A sensor network typically consists of a fixed number of sensor
nodes which report measurements of sensed data such as
temperature, pressure and humidity in their immediate proximity
(the area within their sensing range). To sense an area of
interest, an adequate number of fixed sensor nodes is required to
cover the entire region; the number needed depends on the sensing
range of the sensors as well as the deployment strategy employed.
Here the sensors are assumed to be mobile within the region of
surveillance, e.g. mounted on moving bodies such as robots or
vehicles. Therefore, in our scheme, the surveillance time period
determines the number of sensor nodes required to be deployed in
the region of interest.
The proposed scheme comprises three algorithms, namely
Hexagonalization, Clustering, and Scheduling. The first algorithm
partitions the coverage area into fixed-size hexagons that
approximate the sensing range (cell) of an individual sensor node.
The clustering algorithm groups the cells into clusters, each of
which will be covered by a single sensor node. The last algorithm
determines a schedule for each sensor to serve its respective
cluster. Each sensor node traverses all the cells belonging to its
assigned cluster, oscillating between the first and the last cell
for the duration of its lifetime. Simulation results show that our
scheme provides full coverage within a given period of time using
few sensors with minimal movement, lower power consumption,
and relatively low infrastructure cost.
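A back-of-envelope version of the hexagonalization argument: a regular hexagon inscribed in the sensing circle of radius r has area (3√3/2)r², which gives the number of cells, and, given how many cells one mobile node can visit per surveillance period, the number of sensors. The `visit_time` parameter below is an illustrative assumption, not a quantity from the paper.

```python
import math

def cells_needed(area, sensing_range):
    """Number of regular hexagonal cells (inscribed in the sensing circle
    of radius `sensing_range`) needed to tile a region of size `area`."""
    hex_area = 3.0 * math.sqrt(3.0) / 2.0 * sensing_range ** 2
    return math.ceil(area / hex_area)

def sensors_needed(area, sensing_range, period, visit_time):
    """With mobile sensors, one node can serve a cluster of
    floor(period / visit_time) cells per surveillance period."""
    cells = cells_needed(area, sensing_range)
    per_sensor = max(1, period // visit_time)
    return math.ceil(cells / per_sensor)

# e.g. a 100 m^2 region, 2 m sensing range, 10 s period, 2 s per cell
```

This is only the counting step; the clustering and oscillating-schedule algorithms then decide which cells each node serves and in what order.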
Abstract: The design and implementation of a novel B-ACOSD CFAR algorithm is presented in this paper. It is proposed for detecting radar targets in a log-normal clutter environment. The B-ACOSD detector automatically detects the number of interfering targets in the reference cells and detects the real target with an adaptive threshold. The detector is implemented as a System on Chip on an FPGA (Altera Stratix II) using parallelism and pipelining techniques. For a reference window of 16 cells, the experimental results showed that the processor works properly at a processing speed of up to 115.13 MHz with a processing time of 0.29 µs, and thus meets the real-time requirements of a typical radar system.
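For orientation, a plain ordered-statistic CFAR sketch in Python (threshold = scaled k-th order statistic of the reference window). This is not the B-ACOSD censoring logic itself, and the window sizes and scale factor `alpha` are illustrative assumptions.

```python
def os_cfar(samples, guard=2, ref=8, k=12, alpha=6.0):
    """Ordered-statistic CFAR: threshold each cell by the k-th smallest
    of the 2*ref reference cells (guard cells excluded), scaled by alpha."""
    detections = []
    n = len(samples)
    for i in range(n):
        left = samples[max(0, i - guard - ref): max(0, i - guard)]
        right = samples[i + guard + 1: i + guard + 1 + ref]
        window = sorted(left + right)
        if len(window) < k:
            continue                      # not enough reference cells
        threshold = alpha * window[k - 1]
        if samples[i] > threshold:
            detections.append(i)
    return detections

# Flat clutter with one strong target in cell 16 (illustrative values)
samples = [1.0] * 32
samples[16] = 50.0
hits = os_cfar(samples)
```

Choosing an order statistic rather than the cell average is what makes this family of detectors robust to interfering targets in the reference window, the problem the B-ACOSD censoring step addresses automatically.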
Abstract: This paper presents a solution for the behavioural
animation of autonomous virtual agent navigation in virtual
environments. We focus on using the Dempster-Shafer Theory of
Evidence to develop a visual sensor for the virtual agent. The role
of the visual sensor is to capture information about the virtual
environment and to identify which parts of an obstacle can be seen
from the position of the virtual agent. This information is
required for the virtual agent to coordinate navigation in the
virtual environment. The virtual agent uses a fuzzy controller as
its navigation system and a fuzzy α-level for the action selection
method. The results clearly demonstrate that the path produced is
reasonably smooth, even though it contains some sharp turns, and
does not divert too far from the potential shortest path. This
indicates the benefit of our method, whereby more reliable and
accurate paths are produced during the navigation task.
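The evidence-fusion step that such a visual sensor builds on is Dempster's rule of combination, sketched below. The frame {'V', 'O'} (visible/occluded) and the mass values are hypothetical, chosen only to show the mechanics, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets; mass on empty intersections (conflict) is renormalised away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two evidence sources about whether a surface is visible (V) or occluded (O)
m1 = {frozenset({'V'}): 0.6, frozenset({'V', 'O'}): 0.4}
m2 = {frozenset({'V'}): 0.7, frozenset({'O'}): 0.1, frozenset({'V', 'O'}): 0.2}
m = dempster_combine(m1, m2)
```

Mass left on the full frame {'V', 'O'} represents ignorance, which is what distinguishes this representation from a plain probability assignment.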
Abstract: It is important to remove manganese from water
because of its effects on humans and the environment. Human
activities are among the biggest contributors to excessive
manganese concentrations in the environment. The proposed method
removes manganese from aqueous solution by adsorption onto carbon
nanotubes (CNTs), varying four parameters: CNT dosage, pH,
agitation speed and contact time. The pH values tested are 6.0,
6.5, 7.0, 7.5 and 8.0; the CNT dosages are 5 mg, 6.25 mg, 7.5 mg,
8.75 mg and 10 mg; the contact times are 10 min, 32.5 min, 55 min,
87.5 min and 120 min; and the agitation speeds are 100 rpm, 150
rpm, 200 rpm, 250 rpm and 300 rpm. The parameter settings were
chosen using a Central Composite Design generated in Design
Expert 6.0, with 4 parameters, 5 levels and
2 replications. Based on the results, the conditions pH 7.0,
agitation speed 300 rpm, CNT dosage 7.5 mg and contact time 55
minutes give the highest removal, 75.5%. ANOVA analysis in Design
Expert 6.0 shows that the residual concentration is most strongly
affected by pH and CNT dosage. The initial manganese concentration
is 1.2 mg/L, while the lowest residual concentration achieved is
0.294 mg/L, which almost satisfies the DOE Malaysia Standard B
requirement. Therefore, further experiments must be done to remove
manganese from model water to the required standard (0.2 mg/L)
with the initial concentration set to 0.294 mg/L.
Abstract: The application of e-health solutions has brought superb advancements to the health care industry. E-health solutions have already been embraced in the industrialized countries. In an effort to catch up, developing countries have strived to revolutionize their healthcare industries through information technology in different ways. Based on a technology assessment carried out in Kenya – one of the developing countries – and using multiple case studies in Nyanza Province, this work investigates how five rural hospitals are adapting to the technology shift. The issues examined include the ICT infrastructure and e-health technologies in place, the participants' knowledge of the benefits gained through the use of ICT, and the challenges posing barriers to the use of ICT technologies in these hospitals. The results reveal that the ICT infrastructure in place is inadequate for e-health implementations owing to the various challenges that exist. Consequently, suggestions on how to tackle these challenges are addressed in this paper.
Abstract: Conventional WBL is effective for meaningful learners, but rote learners, who learn by repetition without thinking or trying to understand, cannot gain the full benefit from conventional WBL. Understanding rote students' intention, and what influences it, therefore becomes important. A poorly designed user interface will discourage rote students' cultivation and intention to use WBL. Thus, user interface design is an important factor, especially when WBL is used as a comprehensive replacement for conventional teaching. This research proposes the influencing factors that can enhance students' intention to use the system. An enhanced TAM is used for evaluating the proposed factors. The research results point out that the factors influencing rote students' intention are Perceived Usefulness of Homepage Content Structure, Perceived User-Friendly Interface, Perceived Hedonic Component, and Perceived (Homepage) Visual Attractiveness.
Abstract: Mixed convection in a two-dimensional shallow rectangular enclosure is considered. The top hot wall moves with constant velocity while the cold bottom wall is stationary. Simulations are performed for Richardson numbers ranging from Ri = 0.001 to 100, with the Reynolds number kept fixed at Re = 408.21. Under these conditions the cavity encompasses three regimes: dominant forced, mixed, and free convection flow. The Prandtl number is set to 6, and the effects of cavity inclination on the flow and heat transfer are studied for different Richardson numbers. With increasing inclination angle, interesting behavior of the flow and thermal fields is observed. The streamline and isotherm plots and the variation of the Nusselt number on the hot wall are presented. The average Nusselt number is found to increase with cavity inclination for Ri ≥ 1. It is also shown that the average Nusselt number changes mildly with cavity inclination in the dominant forced convection regime but increases considerably in the regime with dominant natural convection.
Abstract: In this paper, an automatic QRS complex detection
algorithm was applied to analyze ECG recordings, and five criteria
for diagnosing dangerous arrhythmias were applied in a prototype
automatic arrhythmia diagnosis system. The detection algorithm
locates the QRS complexes in the ECG recordings and derives related
information, such as heart rate and RR interval. In this
investigation,
twenty sampled ECG recordings of patients with different pathologic
conditions were collected for off-line analysis. A combination of
four digital filters was proposed as pre-processing to improve the
ECG signal quality and raise the QRS detection rate. Both hardware
filters and digital filters were applied to eliminate the different
types of noise mixed with the ECG recordings. An automatic
detection algorithm was then applied to verify the distribution of
QRS complexes. Finally, the quantitative clinical criteria for
diagnosing arrhythmias were programmed into a practical application
for automatic arrhythmia diagnosis as a post-processor. The
diagnoses produced by the automatic dangerous-arrhythmia system
were compared with off-line diagnoses by experienced clinical
physicians, and the comparison showed a matching rate of 95%
against an experienced physician's diagnoses.
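A toy version of the QRS-detection and heart-rate step, using simple amplitude thresholding with a refractory period on a synthetic spike train. This is a stand-in for the paper's detection algorithm; the threshold, sampling rate and refractory period are assumptions.

```python
def detect_r_peaks(signal, fs, threshold, refractory_s=0.2):
    """Mark local maxima above `threshold`, enforcing a refractory period
    so one QRS complex yields a single detected peak."""
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

def rr_and_rate(peaks, fs):
    """RR intervals (s) and mean heart rate (beats/min) from peak indices."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    rate = 60.0 / (sum(rr) / len(rr)) if rr else 0.0
    return rr, rate

# Synthetic ECG-like trace: one spike every 0.8 s at fs = 250 Hz (75 bpm)
fs = 250
signal = [0.0] * (10 * fs)
for k in range(0, 10 * fs, int(0.8 * fs)):
    signal[k] = 1.0
peaks = detect_r_peaks(signal, fs, threshold=0.5)
rr, rate = rr_and_rate(peaks, fs)
```

Real ECG would of course first pass through the band-pass filtering stage the abstract describes before any thresholding is reliable.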
Abstract: When the failure function is monotone, monotonic reliability methods can greatly simplify and facilitate reliability computations. However, these methods often work in a transformed iso-probabilistic space. To this end, a monotonic simulator or transformation is needed so that the transformed failure function is still monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents conditions under which the transformed function is still monotone in the newly obtained space; these conditions concern copulas and dependence concepts. In many engineering applications, Gaussian copulas are used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with a Gaussian copula.
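In its simplest (normal-marginals) form, the transformation discussed here maps an independent standard-normal vector to a dependent one by multiplication with the Cholesky factor of the covariance matrix; each output coordinate is then nondecreasing in each input exactly when the relevant factor entries are nonnegative, which is the kind of condition under which a monotone failure function stays monotone. A minimal sketch (the 2×2 covariance is illustrative):

```python
import math

def cholesky(cov):
    """Lower-triangular Cholesky factor L of a covariance matrix (cov = L L^T)."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def to_dependent(z, L):
    """Map an independent standard-normal vector z to a correlated vector x = L z."""
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(z))]

cov = [[1.0, 0.5], [0.5, 1.0]]   # illustrative correlation structure
L = cholesky(cov)
x = to_dependent([1.0, 0.0], L)
```

With non-normal marginals the same idea is applied through the marginal CDFs (a Nataf-type construction), but the monotonicity question reduces to the signs of the entries of L in the same way.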
Abstract: The objective of the current study is to investigate the
differences between winning and losing teams in terms of goal
scoring and passing sequences. A total of 31 matches from UEFA
EURO 2012 were analyzed; 5 matches were excluded from the analysis
because they ended in draws. Two groups of variables were used in
the study: i. goal scoring variables and ii. passing sequence
variables. Data were analyzed using the Wilcoxon matched-pairs rank
test with the significance level set at p < 0.05. The current study
found that goal scoring was significantly higher for the winning
team in the 1st half (Z=-3.416, p=.001) and the 2nd half (Z=-3.252,
p=.001). The scoring frequency was also found to increase as time
progressed, and the last 15 minutes of the game was the interval in
which the most goals were scored. The indicators that differed
significantly between the winning and losing teams were goals
scored (Z=-4.578, p=.000), headers (Z=-2.500, p=.012), right-foot
goals (Z=-3.788, p=.000), corners (Z=-2.126, p=.033), open play
(Z=-3.744, p=.000), goals inside the penalty box (Z=-4.174,
p=.000), attackers (Z=-2.976, p=.003) and midfielders (Z=-3.400,
p=.001). Regarding the passing sequences, there was a significant
difference between the teams in short passing sequences (Z=-4.141,
p=.000), while for long passing sequences there was no significant
difference (Z=-1.795, p=.073). The data gathered in the present
study can be used by coaches to construct detailed training
programs based on their objectives.
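The Wilcoxon matched-pairs signed-rank statistic used throughout this analysis can be computed by hand as follows; the per-match goal counts below are hypothetical, not EURO 2012 data.

```python
def wilcoxon_w(x, y):
    """Wilcoxon matched-pairs signed-rank statistic W = min(W+, W-).
    Zero differences are dropped; tied absolute differences get average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1            # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical per-match goal counts for winners vs losers (illustrative)
winners = [2, 3, 1, 2, 4, 1, 3, 2]
losers  = [0, 1, 0, 1, 1, 0, 1, 2]
w = wilcoxon_w(winners, losers)
```

The statistic (and its normal approximation, which yields the Z values reported above) is also available off the shelf, e.g. as `scipy.stats.wilcoxon`.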
Abstract: The BRI-STARS (BRIdge Stream Tube model for Alluvial
River Simulation) program was used to investigate the scour depth
around bridge piers in some of the major river systems in Iran.
Model calibration was performed by collecting different field data.
The field data are catalogued into three categories: first, bridges
whose river beds are formed of fine material; second, bridges whose
river beds are formed of sand; and finally, bridges whose river
beds are formed of gravel or cobble material. Verification was
performed with field data from Fars Province. Results show that for
wide piers, the computed scour depth is greater than the measured
one. In gravel-bed streams, the computed scour depth is also
greater than the measured scour depth; the reason is the formation
of an armor layer on the channel bed. Once this layer is eroded,
the computed scour depth is close to the measured one.
Abstract: The three-species food web model proposed and investigated by Gakkhar and Naji is known to exhibit chaotic behaviour for a certain choice of parameters. An attempt has been made to synchronize the chaos in the model using bidirectional coupling. Numerical simulations are presented to demonstrate the effectiveness and feasibility of the analytical results. Numerical results show that for higher values of the coupling strength, chaotic synchronization is achieved. Chaos can thus be controlled to achieve stable synchronization in natural systems.
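The effect of bidirectional coupling strength on synchronization can be illustrated with two symmetrically coupled copies of a simple chaotic map, a stand-in for the Gakkhar-Naji food web (whose equations are not given here); the coupling form and parameter values are assumptions.

```python
def step(x, r=3.9):
    """Chaotic logistic map, a simple stand-in for the food-web dynamics."""
    return r * x * (1.0 - x)

def coupled_run(x0, y0, eps, n=500):
    """Bidirectionally (symmetrically) coupled pair:
    x' = (1-eps)*f(x) + eps*f(y),  y' = (1-eps)*f(y) + eps*f(x).
    Returns the final synchronization error |x - y|."""
    x, y = x0, y0
    for _ in range(n):
        fx, fy = step(x), step(y)
        x = (1.0 - eps) * fx + eps * fy
        y = (1.0 - eps) * fy + eps * fx
    return abs(x - y)

weak = coupled_run(0.2, 0.7, eps=0.01)    # weak coupling: no synchronization
strong = coupled_run(0.2, 0.7, eps=0.4)   # strong coupling: errors contract
```

For this coupling the error obeys x' - y' = (1 - 2·eps)(f(x) - f(y)), so once (1 - 2·eps) times the map's Lipschitz constant drops below 1 the difference contracts to zero, mirroring the abstract's observation that synchronization sets in at higher coupling strengths.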
Abstract: This article describes the problems of city centres with regard to the possibilities of their delimitation in a GIS environment. First, the definitions and delimitations of a city centre currently in use are reviewed; then a chosen case study (the historical centre of the city of Olomouc in the Czech Republic) is employed to describe the delimitation methods in use. In addition to describing the current state, the article also deals with the possibilities of delimiting a city centre in a GIS environment by means of several chosen approaches. The authors describe, compare and discuss the chosen methods, and assess the achieved results as well as the applicability of the designed methods to other cities.
Abstract: Laser Doppler flowmetry is a modern method of
noninvasive microcirculation investigation. The aim of our study
was to use this method in the examination of patients with
secondary lymphedema of the lower extremities and obliterating
atherosclerosis of the lower extremities. In the analysis of the
amplitude-frequency spectrum of secondary lymphedema patients we
identified remarkable changes. To describe the changes we used a
special amplitude rate. In both patient groups this rate was
significantly (p
Abstract: The hydrothermal behavior of a bed consisting of
admixtures of magnetic and shale oil particles under the effect of
a transverse magnetic field is investigated. The phase diagram and
bed void fraction are studied over a wide range of operating
conditions, i.e., gas velocity, magnetic field intensity and
fraction of magnetic particles. It is found that the range of the
stabilized regime shrinks as the magnetic fraction decreases. In
addition, the bed voidage at the onset of fluidization decreases as
the magnetic fraction decreases. On the other hand, the Nusselt
number, and consequently the heat transfer coefficient, is found to
increase as the magnetic fraction decreases. An empirical equation
is developed to relate the effects of gas velocity, magnetic field
intensity and magnetic particle fraction to the heat transfer
behavior in the bed.
Abstract: In this paper, we carry over some of the results which
are valid on a certain class of Moufang-Klingenberg planes M(A)
coordinatized by a local alternative ring A := A(ε) = A + Aε of
dual numbers to the finite projective Klingenberg plane M(A)
obtained by taking the local ring Zq (where q = p^k is a prime
power) instead of A. We show that the collineation group of M(A)
acts transitively on 4-gons, and that any 6-figure corresponds to
only one invertible m ∈ A.