Abstract: Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction, since it is not immediately obvious how many input or hidden nodes should be used in the model. In this paper, a neural network model is used as a forecasting tool. The major aim is to evaluate a suitable neural network model for monthly precipitation mapping of Myanmar. Using 3-layered neural network models, 100 cases are tested by varying the number of input and hidden nodes from 1 to 10 each, with a single output node. The optimum model, with the suitable number of nodes, is selected according to the minimum forecast error. Measuring network performance by Root Mean Square Error (RMSE), the experimental results show that a 3-input, 10-hidden, 1-output architecture gives the best prediction of monthly precipitation in Myanmar.
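The 10 × 10 architecture search described above can be sketched as follows. This is a hypothetical illustration: the synthetic seasonal series stands in for the Myanmar rainfall data, and the random-hidden-layer network with a least-squares output layer (an extreme-learning-machine-style shortcut) stands in for the backpropagation-trained 3-layer networks of the paper; only the RMSE-based selection loop mirrors the described procedure.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error used to rank candidate architectures."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def make_lagged(series, n_inputs):
    """Build (X, y) pairs: n_inputs past months predict the next month."""
    X = np.array([series[i:i + n_inputs] for i in range(len(series) - n_inputs)])
    y = series[n_inputs:]
    return X, y

def fit_predict(X_train, y_train, X_test, n_hidden, rng):
    """Stand-in for a trained 3-layer network: random hidden weights plus a
    least-squares output layer (NOT the paper's training method)."""
    W = rng.normal(size=(X_train.shape[1], n_hidden))
    H_train = np.tanh(X_train @ W)
    H_test = np.tanh(X_test @ W)
    beta, *_ = np.linalg.lstsq(H_train, y_train, rcond=None)
    return H_test @ beta

rng = np.random.default_rng(0)
# Synthetic 10-year monthly series with a seasonal cycle plus noise
series = np.sin(np.arange(120) * 2 * np.pi / 12) + 0.1 * rng.normal(size=120)

best = None
for n_inputs in range(1, 11):          # 1..10 input nodes
    X, y = make_lagged(series, n_inputs)
    split = int(0.8 * len(y))
    for n_hidden in range(1, 11):      # 1..10 hidden nodes -> 100 cases
        pred = fit_predict(X[:split], y[:split], X[split:], n_hidden, rng)
        err = rmse(y[split:], pred)
        if best is None or err < best[0]:
            best = (err, n_inputs, n_hidden)

print(best)  # (minimum RMSE, chosen input nodes, chosen hidden nodes)
```

The selection rule is exactly the one stated in the abstract: among all 100 candidate architectures, keep the one with the smallest forecast RMSE on held-out data.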
Abstract: This research aims to create a suitable model of distance training for community leaders in the upper northeastern region of Thailand. The research process is divided into four steps: the first step analyzes relevant documents; the second step consists of in-depth interviews with experts; the third step constructs the model; and the fourth step validates the model through expert assessments. The findings reveal two important components for constructing an appropriate model of distance training for community leaders in the upper northeastern region. The first component is the context of technology management, e.g., principles, policy and goals. The second component can be viewed in two ways: firstly, elements comprising input, process, output and feedback; secondly, sub-components covering the steps and process of training. The expert assessments indicate that the researcher's constructed model is consistent, suitable and, overall, the most appropriate.
Abstract: Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems in reformulating questions. Moreover, answer processing is an emerging topic in QA systems, which are often required to rank and validate candidate answers. These techniques, aimed at finding short and precise answers, are often based on semantic relations and co-occurring keywords. This paper discusses a new model for question answering that improves two main modules, question processing and answer processing, both of which affect the evaluation of system operation. Two components form the basis of question processing. The first is question classification, which specifies the types of question and answer. The second is reformulation, which converts the user's question into one understandable by the QA system in a specific domain. The objective of an answer validation task is thus to judge the correctness of an answer returned by a QA system, according to the text snippet given to support it. For validating answers we apply candidate answer filtering and candidate answer ranking, followed by a final validation step based on user voting. The paper also describes the new architecture of the question and answer processing modules, along with the modeling, implementation and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it better suited to finding exact answers. Evaluation on a set of 50 questions shows that the model improves the system's decisions in 92% of cases.
Abstract: The paper presents optimization results for several
electrical machines dedicated to powered electric wheelchairs. The
optimization, using the Hooke-Jeeves algorithm, was based on a
design approach that takes the road conditions into consideration.
The analytical approach was also validated through numerical
simulations based on the finite element method. The optimization
approach gave satisfactory results, and the best suited variant was
chosen for the motorization of the wheelchair.
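For illustration, a minimal sketch of the Hooke-Jeeves pattern search is given below. The quadratic cost function is a hypothetical stand-in for the paper's road-condition-dependent machine design objective, which is not reproduced here.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal Hooke-Jeeves pattern search: exploratory moves along each
    coordinate axis, then a pattern move in the improving direction."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        # Exploratory move around the current base point
        xe, fe = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:
            # Pattern move: extrapolate along the successful direction
            x_new = xe + (xe - x)
            fn = f(x_new)
            x, fx = (x_new, fn) if fn < fe else (xe, fe)
        else:
            step *= shrink            # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

# Hypothetical stand-in objective with known optimum at (3, -1)
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
x_opt, f_opt = hooke_jeeves(cost, [0.0, 0.0])
```

The method is derivative-free, which is one reason it suits design objectives evaluated through simulation.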
Abstract: In the present paper the displacement-based non-conforming quadrilateral affine thin plate bending finite element ARPQ4 is presented, derived directly from the non-conforming quadrilateral thin plate bending finite element RPQ4 proposed by Wanji and Cheung [19]. It is found, however, that element RPQ4 is only conditionally unisolvent. The new element is shown to be inherently unisolvent. This convenient property makes element ARPQ4 more robust and thus better suited for computations than its predecessor. Convergence is proved and the rate of convergence estimated. The mathematically rigorous proof of convergence presented in the paper is based on Stummel's generalized patch test and on the element approximability condition, which together are necessary and sufficient for convergence.
Abstract: This paper addresses control of the commutation of a switched reluctance (SR) motor without the use of a physical position detector. Rotor position detection schemes for SR motors based on the magnetisation characteristics of the motor use either normal excitation or applied current/voltage pulses; the resulting schemes are referred to as passive or active methods, respectively. The research effort lies in realizing an economical sensorless SR rotor position detector that is accurate, reliable and robust enough to suit a particular application. An effective and reliable means of generating commutation signals for an SR motor, based on the inductance profile of its stator windings determined using an active probing technique, is presented. The scheme has been validated online using a 4-phase 8/6 SR motor and an 8-bit processor.
Abstract: This study investigated the feasibility of producing
fiberboard from durian rind using latex with phenolic resin as the
binding agent. The durian rind was boiled with NaOH [7], [8], and
the resulting durian fiber was formed into fiberboard by heat
pressing. Durian rind could thus replace plywood in the plywood
industry, with durian fiber serving as a composite material combined
with an adhesive. First, the durian rind was split, exposed to
light, boiled and steamed in order to obtain durian fiber. Then,
fiberboard was tested at densities of 600 kg/m3 and 800 kg/m3 in
order to find a suitable ratio of durian fiber to latex. Afterwards,
mechanical properties were tested according to the ASTM and JIS
A5905-1994 standards. Once the suitable ratio was known, the test
results were compared with medium density fiberboard (MDF) and other
related research studies. According to the results, fiberboard made
of durian rind with latex and phenolic resin at a density of 800
kg/m3 and a ratio of 1:1 had a moisture content of 5.05%, a specific
gravity (ASTM D 2395-07a) of 0.81 and a density (JIS A 5905-1994) of
0.88 g/cm3; its tensile strength, hardness (ASTM D2240) and
flexibility or elongation at break yielded values similar to those
of medium density fiberboard (MDF).
Abstract: One very interesting field of research in Pattern Recognition that has gained much attention in recent times is Gesture Recognition. In this paper, we consider a form of dynamic hand gestures characterized by total movement of the hand (arm) in space. For these types of gestures, the shape of the hand (palm) during gesturing bears no significance. In our work, we propose a model-based method for tracking hand motion in space, thereby estimating the hand motion trajectory. We employ the dynamic time warping (DTW) algorithm for time alignment and normalization of the spatio-temporal variations that exist among samples belonging to the same gesture class. During training, one template trajectory and one prototype feature vector are generated for every gesture class. Features used in our work include both static and dynamic motion trajectory features. Recognition is accomplished in two stages. In the first stage, all unlikely gesture classes are eliminated by comparing the input gesture trajectory to all the template trajectories. In the next stage, the feature vector extracted from the input gesture is compared to all the class prototype feature vectors using a distance classifier. Experimental results demonstrate that our proposed trajectory estimator and classifier are suitable for a Human Computer Interaction (HCI) platform.
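As a minimal sketch (assuming scalar trajectory features for brevity; the paper uses richer static and dynamic features), the DTW comparison of an input trajectory against a class template can be written as:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D trajectories,
    aligning samples non-linearly to absorb temporal variation."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A slowed-down copy of a gesture trajectory should still match closely
g = np.sin(np.linspace(0, np.pi, 20))
g_slow = np.sin(np.linspace(0, np.pi, 35))
print(dtw_distance(g, g), dtw_distance(g, g_slow))
```

Because the warping path may advance through one sequence faster than the other, a gesture performed more slowly than its template still accumulates only a small distance, which is what makes DTW suitable for the first-stage elimination described above.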
Abstract: An accident is an unexpected and unplanned situation
that happens and affects a human being in a negative way. An
accident can cause injury to a human biological organism. Thus,
providing initial care for an illness or injury is a very important
step in preparing patients/victims before they are sent to a doctor.
In this paper, a First Aid Application is developed to give
directions for the preliminary care of a patient/victim via an
Android mobile device. A navigation function using the Google Maps
API is also implemented for finding a suitable path to the nearest
hospital. In an emergency, this function can therefore be activated
to navigate patients/victims to the hospital along the shortest
path.
Abstract: This paper investigates the problem of exponential stability for a class of uncertain discrete-time stochastic neural networks with time-varying delays. By constructing a suitable Lyapunov-Krasovskii functional and combining stochastic stability theory with the free-weighting matrix method, a delay-dependent exponential stability criterion is obtained in terms of LMIs. Compared with some previous results, the new conditions obtained in this paper are less conservative. Finally, two numerical examples are given to show the usefulness of the derived results.
Abstract: The approach based on the wavelet transform has
been widely used for image denoising due to its multi-resolution
nature, its ability to produce high levels of noise reduction and
the low level of distortion introduced. However, by removing noise,
high frequency components belonging to edges are also removed, which
leads to blurring of the signal features. This paper proposes a new
method of image noise reduction based on local variance and edge
analysis. The analysis is performed by dividing an image into 32 x
32 pixel blocks and transforming the data into the wavelet domain. A
fast lifting wavelet spatial-frequency decomposition and
reconstruction is developed that is computationally efficient and
minimizes boundary effects. Adaptive thresholding based on local
variance estimation and edge strength measurement can effectively
reduce image noise while preserving the features of the original
image corresponding to the boundaries of objects. Experimental
results demonstrate that the method performs well for images
contaminated by natural and artificial noise, and can be adapted to
different classes of images and types of noise. The proposed
algorithm lends itself to parallel computation, offering a potential
solution for real-time and embedded system applications.
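A 1-D sketch of the ingredients described above (a Haar lifting step plus blockwise soft thresholding driven by local variance) is shown below. The block size, the BayesShrink-style threshold and the piecewise test signal are illustrative assumptions, not the authors' exact 2-D scheme with edge strength measurement.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: predict then update.
    x must have even length; returns (approximation, detail)."""
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict step
    approx = even + detail / 2     # update step (preserves the local mean)
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Exact inverse of haar_lift_forward."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(2 * len(approx))
    x[0::2], x[1::2] = even, odd
    return x

def soft_threshold(d, noise_var, block=32):
    """Blockwise soft thresholding: the threshold adapts to each block's
    local variance, so blocks dominated by strong edges are thresholded
    more gently than pure-noise blocks."""
    out = d.copy()
    for s in range(0, len(d), block):
        seg = d[s:s + block]
        local_var = max(seg.var() - noise_var, 1e-12)  # signal variance estimate
        t = noise_var / np.sqrt(local_var)             # BayesShrink-style threshold
        out[s:s + block] = np.sign(seg) * np.maximum(np.abs(seg) - t, 0.0)
    return out

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, 0.0, -3.0], 64)      # piecewise signal with edges
noisy = clean + rng.normal(scale=0.5, size=clean.size)
a, d = haar_lift_forward(noisy)
# 0.5 here is the noise variance in the detail band (2 * sigma^2 for Haar
# differences of i.i.d. noise with sigma = 0.5) -- an assumed, known value
denoised = haar_lift_inverse(a, soft_threshold(d, noise_var=0.5))
```

Lifting is used for the reasons given in the abstract: it is in-place, cheap, and its perfect-reconstruction property is guaranteed by construction.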
Abstract: Water Sensitive Urban Design (WSUD) features are
increasingly used to treat and manage polluted stormwater runoff in
urbanised areas. After these features are constructed and put into
operation, it is important to monitor and evaluate the effectiveness
of the infrastructure in achieving its intended performance targets
over time. The paper presents the various methods of analysis used
to assess the effectiveness of in-situ WSUD features, such as
on-site visual inspections during operational and non-operational
periods, maintenance audits and periodic water quality testing. The
results will contribute to a better understanding of the operational
and maintenance needs of in-situ WSUD features and assist in
providing recommendations to better manage life cycle performance.
Abstract: The scarcity of resources for biodiversity conservation gives rise to the need for strategic investment, with priorities given to the cost of conservation. While the literature provides abundant methodological options for biodiversity conservation, estimating the true cost of conservation remains abstract and simplistic, without recognising the dynamic nature of this cost. Some recent works demonstrate the prominence of economic theory in informing biodiversity decisions, particularly regarding the costs and benefits of biodiversity; however, the integration of the concept of true cost into biodiversity actions and planning has been very slow, especially at the farm level. Conservation planning studies often use area as a proxy for cost, neglecting differences in land values as well as protected areas. These studies consider only heterogeneous benefits while treating land costs as homogeneous. Analysis under the assumption of cost homogeneity yields biased estimates: not only does it fail to capture the true total cost of biodiversity actions and plans, it also fails to screen out lands that are more (or less) expensive and/or difficult (or more suitable) for biodiversity conservation purposes, hindering the validity and comparability of the results. "Economies of scope" is another of the most neglected aspects in the conservation literature. The concept of economies of scope refers to the existence of cost complementarities within a multiple-output production system, and it implies a lower cost when multiple outputs are produced concurrently by a given farm. If there are, indeed, economies of scope, then a simplistic representation of costs will tend to overestimate the true cost of conservation, leading to suboptimal outcomes. The aim of this paper, therefore, is to provide a first broad review of the various theoretical ways in which economies of scope are likely to occur in conservation.
Consequently, the paper identifies gaps that have to be filled in future analyses.
Abstract: In this study three commercial semiconductor devices
were characterized in the laboratory for computed tomography
dosimetry: one photodiode and two phototransistors. Four responses
to irradiation were evaluated: dose linearity, energy dependence,
angular dependence and loss of sensitivity after X-ray exposure. The
results showed that the three devices have responses proportional to
the air kerma; the energy dependence displayed by each device
suggests that a calibration factor should be applied to each one;
and the angular dependence showed a similar pattern among the three
electronic components. With respect to the fourth parameter, one
phototransistor has the highest sensitivity, but it also showed the
greatest loss of sensitivity with accumulated dose. The photodiode
was the device with the smallest sensitivity to radiation; on the
other hand, its loss of sensitivity after irradiation is negligible.
Since high accuracy is a desired feature of a dosimeter, the
photodiode may be the most suitable of the three devices for
dosimetry in tomography. The phototransistors can also be used for
CT dosimetry, but a correction factor would be necessary owing to
the loss of sensitivity with accumulated dose.
Abstract: Nanotechnology based on epitaxial systems involves
single or arranged misfit dislocations. In general, whatever the
type of dislocation or the geometry of the array formed by the
dislocations, it is important for experimental studies to know the
stress distribution exactly, for which there is no analytical
expression [1], [2]. This work uses numerical analysis to study the
relaxation of epitaxial layers having at their interface a periodic
network of edge misfit dislocations. The stress distribution is
estimated using isotropic elasticity. The results show that the
thickness of the two sheets is a crucial parameter in the stress
distributions and hence in the profile of the two sheets. A
comparative study between the case of a single dislocation and that
of a parallel network shows that the layers relax better when the
interface is covered by a parallel arrangement of misfit
dislocations. Consequently, a single dislocation at the interface
produces a significant stress field, which can be reduced by
inserting a parallel network of dislocations with a suitable
periodicity.
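For the single-dislocation reference case, closed-form isotropic-elasticity expressions do exist (it is the periodic-network field that requires numerical evaluation). The classical plane-strain stress field of a straight edge dislocation, with Burgers vector b along x and line along z, is:

```latex
% G: shear modulus, b: Burgers vector magnitude, \nu: Poisson's ratio
\sigma_{xx} = -D\,\frac{y\,(3x^{2} + y^{2})}{(x^{2} + y^{2})^{2}}, \qquad
\sigma_{yy} = D\,\frac{y\,(x^{2} - y^{2})}{(x^{2} + y^{2})^{2}}, \qquad
\sigma_{xy} = D\,\frac{x\,(x^{2} - y^{2})}{(x^{2} + y^{2})^{2}},
\qquad D = \frac{Gb}{2\pi(1 - \nu)}
```

These expressions provide the single-dislocation baseline against which the relaxation achieved by the parallel network can be compared.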
Abstract: For investigating electromagnetic field distributions in
biological structures by the Finite Element Method (FEM), a method
for automatic 3D model building of human anatomical objects is
developed. Models consist of meshed structures with specific
electromagnetic material properties for each tissue type. The mesh
is built according to specific FEM criteria for achieving good
solution accuracy. Several FEM models of anatomical objects are
built. A formulation using the magnetic vector potential and the
electric scalar potential (A-V, A) is used for modeling
electromagnetic fields in human tissue objects. The developed
models are suitable for investigating electromagnetic field
distributions in human tissues exposed to external fields during
magnetic stimulation, defibrillation, impedance tomography, etc.
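The (A-V, A) formulation mentioned above is based on the standard potential definitions and quasi-static field equations (the notation J_s for the impressed source current density is an assumption here):

```latex
% Potential definitions
\mathbf{B} = \nabla \times \mathbf{A}, \qquad
\mathbf{E} = -\frac{\partial \mathbf{A}}{\partial t} - \nabla V
% Quasi-static field equations (\nu: reluctivity, \sigma: conductivity)
\nabla \times (\nu \nabla \times \mathbf{A})
  + \sigma\!\left(\frac{\partial \mathbf{A}}{\partial t} + \nabla V\right) = \mathbf{J}_s
\nabla \cdot \left[\sigma\!\left(\frac{\partial \mathbf{A}}{\partial t} + \nabla V\right)\right] = 0
```

Discretizing these two coupled equations over the meshed tissue models yields the FEM system solved for the field distribution in each tissue type.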
Abstract: This paper derives some new sufficient conditions for
the stability of a class of neutral-type neural networks with
discrete time delays by employing a suitable Lyapunov functional.
The obtained conditions can be easily verified, as they are
expressed in terms of the network parameters only. It is shown that
the results presented in this paper establish a new set of stability
criteria for neutral-type delayed neural networks, and can therefore
be considered as alternatives to previously published results. A
numerical example is also given to demonstrate the applicability of
the proposed stability criterion.
Abstract: The human knee joint has a three-dimensional geometry
with multiple body articulations that produce complex mechanical
responses under the loads that occur in everyday life and sports
activities. Various menisci and ligaments provide the joint
compliance and stability needed for optimal daily function, with
muscle forces contributing to the same effect. Therefore, knowledge
of the complex mechanical interactions of these load-bearing
structures is necessary when the treatment of relevant diseases is
evaluated and assisting devices are designed.
Numerical tools such as finite element analysis are suitable for
modeling such joints in order to understand their physics. They
have been used in the current study to develop an accurate model of
the human knee joint and to simulate its mechanical behavior. To
evaluate the efficacy of this articulated model, static load cases
were used for comparison with previous, experimentally verified
modeling works drawn from the literature.
Abstract: Cardiac pulse-related artifacts in the EEG recorded
simultaneously with fMRI are complex and highly variable. Their
effective removal is an unsolved problem. Our aim is to develop an
adaptive removal algorithm based on the matching pursuit (MP)
technique and to compare it to established methods using a visual
evoked potential (VEP). We recorded the VEP inside the static
magnetic field of an MR scanner (with artifacts) as well as in an
electrically shielded room (artifact free). The MP-based artifact
removal outperformed average artifact subtraction (AAS) and
optimal basis set removal (OBS) in terms of restoring the EEG field
map topography of the VEP. Subsequently, a dipole model was fitted
to the VEP under each condition using a realistic boundary element
head model. The source location of the VEP recorded inside the MR
scanner was closest to that of the artifact free VEP after cleaning
with the MP-based algorithm as well as with AAS. While none of the
tested algorithms offered complete removal, MP showed promising
results due to its ability to adapt to variations of latency, frequency
and amplitude of individual artifact occurrences while still utilizing a
common template.
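A minimal sketch of the core matching pursuit step is given below. The Gaussian-pulse dictionary and the synthetic artifact are illustrative assumptions, not the authors' templates; the actual algorithm additionally adapts latency, frequency and amplitude per artifact occurrence while working from a common template.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: repeatedly subtract the dictionary atom
    that best correlates with the residual. Atoms are unit-norm columns."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # correlation with each atom
        k = np.argmax(np.abs(corr))             # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # peel it off the residual
    return coeffs, residual

# Hypothetical dictionary of shifted/scaled Gaussian pulses, standing in
# for cardiac pulse-artifact shapes (not the authors' atoms)
t = np.arange(128)
atoms = np.stack([np.exp(-0.5 * ((t - c) / w) ** 2)
                  for c in range(8, 128, 8) for w in (3.0, 6.0)], axis=1)
atoms /= np.linalg.norm(atoms, axis=0)

# Synthetic "artifact" built from two known atoms
artifact = 2.0 * atoms[:, 5] - 1.2 * atoms[:, 12]
coeffs, residual = matching_pursuit(artifact, atoms)
```

The estimated artifact (the sum of selected atoms) would then be subtracted from the recorded EEG, leaving the residual as the cleaned signal; the greedy selection is what lets the decomposition adapt to occurrence-to-occurrence variability.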
Abstract: A dead leg is a typical subsea production system
component. CFD is required to model heat transfer within the dead
leg; unfortunately, its solution is time-consuming and thus not
suitable for fast prediction or repeated simulations. There is
therefore a need for a thermal FEA model mimicking the heat flows
and temperatures seen in CFD cool-down simulations.
This paper describes the conventional way of tuning such a model
and a new automated way using parametric model order reduction
(PMOR) together with an optimization algorithm. The tuned FE
analyses replicate the steady-state CFD parameters within a maximum
heat flow error of 6% and 3% using the manual and PMOR methods,
respectively. During cool-down, the relative temperature error of
the tuned FEA models with respect to the CFD is below 5%. In
addition, the PMOR method obtained the correct FEA setup five times
faster than manual tuning.