Abstract: Recently, Automatic Speech Recognition (ASR) systems have been used to assist children in language acquisition because of their ability to detect human speech signals. Despite the benefits offered by ASR systems, there is a lack of ASR systems for Malay-speaking children. One contributing factor is the lack of a continuous speech database for the target users. Although cross-lingual adaptation is a common solution for developing ASR systems for under-resourced languages, it is not viable for children, as very limited speech databases are available as source models. In this research, we propose a two-stage adaptation for developing an ASR system for Malay-speaking children using a very limited database. The two-stage adaptation comprises cross-lingual adaptation (first stage) and cross-age adaptation (second stage). In the first stage, a well-known phonetically rich and balanced speech database is adapted to a medium-sized database of Malay adult speech using supervised MLLR. The second stage uses the acoustic model generated by the first adaptation, with a small database of the target users as the target. We measured the performance of the proposed technique using word error rate and compared it with a conventional benchmark adaptation. The two-stage adaptation proposed in this research achieves better recognition accuracy than the benchmark adaptation in recognizing children’s speech.
Abstract: We present a family of data-reusing and affine projection algorithms. For identification of a noisy linear finite impulse response channel, partial knowledge of the channel, especially of the noise, can be used to improve the performance of the adaptive filter. Motivated by this fact, the proposed scheme incorporates an estimate of the noise. A constraint, called the adaptive noise constraint, estimates the unknown noise information. By imposing this constraint on the cost function of the data-reusing and affine projection algorithms, a new cost function based on the adaptive noise constraint and a Lagrange multiplier is defined. Minimizing the new cost function leads to the adaptive noise constrained (ANC) data-reusing and affine projection algorithms. Experimental results comparing the proposed schemes to standard data-reusing and affine projection algorithms clearly indicate their superior performance.
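The ANC algorithms are built on top of the standard affine projection update. Below is a minimal NumPy sketch of that standard baseline only (not the proposed ANC scheme), assuming a white-noise input and a hypothetical FIR channel; the tap count, projection order, and step size are illustrative choices.

```python
import numpy as np

def apa_identify(x, d, num_taps=8, order=4, mu=0.5, delta=1e-3):
    """Standard affine projection algorithm (APA) for FIR system
    identification. `order` is the projection order P; `delta` is a small
    fixed regularization term. This is the baseline the ANC variants
    modify, not the proposed algorithm itself."""
    w = np.zeros(num_taps)
    for k in range(num_taps + order - 1, len(x)):
        # The P most recent input vectors form the columns of X (taps x P)
        X = np.column_stack([x[k - p - num_taps + 1:k - p + 1][::-1]
                             for p in range(order)])
        e = d[k - order + 1:k + 1][::-1] - X.T @ w   # a priori error vector
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(order), e)
    return w

# Identify a hypothetical noisy FIR channel from white input
rng = np.random.default_rng(0)
h = np.array([0.9, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w = apa_identify(x, d)   # w should approach h
```

The projection order P trades convergence speed against complexity; the ANC variants replace the fixed cost function above with one constrained by the noise estimate.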
Abstract: This paper presents a SAC-OCDMA code with a zero cross-correlation property, the New Zero Cross Correlation (NZCC) code, to minimize Multiple Access Interference (MAI); it is found to be more scalable than other existing SAC-OCDMA codes. The NZCC code is constructed from an address segment and a data segment. In this work, the proposed NZCC code is implemented in an optical system using the OptiSystem software for the spectral amplitude coded optical code-division multiple-access (SAC-OCDMA) scheme. The main contribution of the proposed NZCC code is its zero cross-correlation, which reduces both MAI and PIIN noise. The proposed NZCC code offers minimum cross-correlation, flexibility in selecting code parameters, and support for a large number of users, combined with high data rates and longer fiber lengths. Simulation results reveal that the optical code-division multiple-access system based on the proposed NZCC code accommodates the maximum number of simultaneous users with higher data-rate transmission, lower Bit Error Rates (BER), and longer travelling distances without signal quality degradation, compared with existing SAC-OCDMA codes.
Abstract: This study investigates the relationship between external debt and military spending in the case of India over the period 1970–2012. We apply structural break unit root tests to examine the stationarity properties of the variables. The Auto-Regressive Distributed Lag (ARDL) bounds testing approach is used to test whether cointegration exists in the presence of structural breaks in the series. Our results indicate cointegration among external debt, military spending, debt servicing, and economic growth. Moreover, military spending and debt servicing add to external debt, while economic growth helps lower it. The Vector Error Correction Model (VECM) analysis and Granger causality test reveal that military spending and economic growth cause external debt. A feedback effect also exists between external debt and debt servicing in the case of India.
Abstract: Fuzzy regression models are useful for investigating the relationship between explanatory variables and responses in fuzzy environments. To overcome the deficiencies of previous models and increase the explanatory power of fuzzy data, the graded mean integration (GMI) representation is applied to determine representative crisp regression coefficients. A fuzzy regression model is constructed based on the modified dissemblance index (MDI), which can precisely measure the actual total error. Compared with previous studies, results from commonly used test examples, evaluated with the proposed MDI and a distance criterion, show that the proposed fuzzy linear regression model has higher explanatory power and forecasting accuracy.
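For a triangular fuzzy number, the GMI representation reduces to a fixed weighted average of its three parameters (a formulation commonly attributed to Chen and Hsieh); a minimal sketch of that defuzzification step only, not of the fuzzy regression fitting itself:

```python
def gmi_triangular(l, m, u):
    """Graded mean integration (GMI) representation of a triangular
    fuzzy number (l, m, u): P(A) = (l + 4*m + u) / 6.
    The crisp value produced here is the kind of representative
    coefficient the abstract refers to."""
    return (l + 4 * m + u) / 6

# A symmetric triangular fuzzy number defuzzifies to its mode:
print(gmi_triangular(1.0, 2.0, 3.0))  # 2.0
# A right-skewed one is pulled toward the longer tail:
print(gmi_triangular(0.0, 1.0, 5.0))  # 1.5
```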
Abstract: The purposes of this study are 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction using coded indirect corrective feedback and writing error treatments. The sample comprised 28 second-year English major students of the Faculty of Education, Suan Sunandha Rajabhat University. The experimental tool was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the data collection tool was a set of 4 short-text writing tests. The research findings disclose that the frequent English writing errors found in this course comprise 7 types of grammatical errors, namely sentence fragments, subject-verb agreement, wrong verb tense forms, singular or plural noun endings, run-on sentences, wrong verb pattern forms, and lack of parallel structure. Moreover, the results of writing error correction using coded indirect corrective feedback and error treatment reveal an overall reduction of the frequent English writing errors and an increase in students’ achievement in writing short texts, significant at the .05 level.
Abstract: An Evolutionary Fuzzy PID speed controller for a Permanent Magnet Synchronous Motor (PMSM) is developed to achieve closed-loop speed control of the PMSM and to deal with transients. Consider a fuzzy PID control design problem based on common control engineering knowledge: if the transient error is large, good transient performance can be obtained by increasing the P and I gains and decreasing the D gain. To autotune the control parameters of the fuzzy PID controller, Evolutionary Algorithms (EA) are employed. The EA-based fuzzy PID controller provides better speed control and guarantees closed-loop stability. The Evolutionary Fuzzy PID controller can be implemented in real-time applications without concern about instabilities that lead to system failure or damage.
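The stated design rule (raise the P and I gains and lower the D gain when the transient error is large) can be illustrated with a crude linear gain schedule. This is an illustrative simplification with hypothetical gain ranges, not the paper's EA-tuned fuzzy controller, which would use membership functions and a rule base:

```python
def scheduled_pid_gains(error, e_max=1.0,
                        kp_range=(1.0, 5.0),
                        ki_range=(0.1, 1.0),
                        kd_range=(0.05, 0.5)):
    """Gain scheduling following the rule in the abstract: as |error|
    grows, Kp and Ki increase while Kd decreases. All ranges here are
    hypothetical placeholders; a fuzzy controller would blend them
    through fuzzy rules rather than a single linear interpolation."""
    a = min(abs(error) / e_max, 1.0)                    # normalized error in [0, 1]
    kp = kp_range[0] + a * (kp_range[1] - kp_range[0])  # grows with error
    ki = ki_range[0] + a * (ki_range[1] - ki_range[0])  # grows with error
    kd = kd_range[1] - a * (kd_range[1] - kd_range[0])  # shrinks with error
    return kp, ki, kd

print(scheduled_pid_gains(0.0))  # small error: low Kp, Ki; high Kd
print(scheduled_pid_gains(2.0))  # large error: high Kp, Ki; low Kd
```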
Abstract: We propose two affine projection algorithms (APA) with a variable regularization parameter. The proposed algorithms dynamically update the regularization parameter, which is fixed in the conventional regularized APA (R-APA), using a gradient descent based approach. By introducing a normalized gradient, the proposed algorithms yield an efficient and robust update scheme for the regularization parameter. Through experiments we demonstrate that the proposed algorithms outperform the conventional R-APA in terms of convergence rate and misadjustment error.
Abstract: We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike the conventional NLMS with a fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.
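The conventional NLMS baseline that this abstract improves on can be sketched as follows; the fixed `delta` below is precisely the parameter the proposed algorithm would adapt (the gradient-based update rule itself is not reproduced here). The channel and step size are hypothetical.

```python
import numpy as np

def nlms(x, d, num_taps=8, mu=0.5, delta=1e-2):
    """Conventional NLMS with a *fixed* regularization parameter delta,
    which prevents division by a near-zero input energy. The proposed
    robust-regularization algorithm replaces this fixed delta with a
    dynamically updated one."""
    w = np.zeros(num_taps)
    for k in range(num_taps - 1, len(x)):
        u = x[k - num_taps + 1:k + 1][::-1]   # current input vector
        e = d[k] - u @ w                      # a priori error
        w += mu * e * u / (delta + u @ u)     # normalized update
    return w

# Identify a hypothetical FIR system
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w = nlms(x, d)   # w should approach h
```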
Abstract: Accurate software reliability prediction not only enables developers to improve the quality of software but also provides useful information to help them plan valuable resources. This paper examines the performance of three well-known data mining techniques (CART, TreeNet, and Random Forest) for predicting software reliability. We evaluate and compare the performance of the proposed models with a Cascade Correlation Neural Network (CCNN) using sixteen empirical databases from the Data and Analysis Center for Software. The goal of our study is to help project managers concentrate their testing efforts to minimize software failures and thus improve the reliability of software systems. Two performance measures, Normalized Root Mean Squared Error (NRMSE) and Mean Absolute Error (MAE), show that the CART model is more accurate than the models built using Random Forest, TreeNet, and CCNN on all datasets used in our study. Finally, we conclude that such methods can help in reliability prediction using real-life failure datasets.
Abstract: Oil palm, or Elaeis guineensis, is considered the golden crop in Malaysia. However, the oil palm industry in this country now faces its most devastating disease, Ganoderma basal stem rot. The objective of this paper is to analyze the economic loss due to this disease. Three commercial oil palm sites were selected for collecting the data required for the economic analysis. The yield parameter used to measure the loss was the total weight of fresh fruit bunches over six months. The predictors include disease severity, change in disease severity, number of infected neighboring palms, age of palm, planting generation, topography, and first-order interaction variables. The yield loss estimation model was identified using a backward elimination based regression method. Diagnostic checking was conducted on the residuals of the best yield loss model. The mean absolute percentage error (MAPE) was used to measure the forecast performance of the model. The best yield loss model was then used to estimate the economic loss using the current monthly price of fresh fruit bunches at the mill gate.
Abstract: This article presents a numerical analysis of the turbulent flow past the DTMB 4119 marine propeller by means of the RANS approach; the propeller was designed at the David Taylor Model Basin in the USA. The purpose of this study is to predict the hydrodynamic performance of the marine propeller and to compare the results with experiments carried out in open water tests. A periodic computational domain was created to reduce the size of the generated unstructured mesh. The standard k-ω turbulence model was selected for the simulation, and the results were in good agreement with experiment: the errors were estimated at 1.3% and 5.9% for KT and KQ, respectively.
Abstract: The objective of this research is to forecast the monthly exchange rate between the Thai baht and the US dollar and to compare two forecasting methods: the Box-Jenkins method and Holt's method. Results show that the Box-Jenkins method is the more suitable method for the monthly exchange rate between the Thai baht and the US dollar. The suitable forecasting model is ARIMA(1,1,0) without a constant, and the forecasting equation is Yt = Yt-1 + 0.3691(Yt-1 - Yt-2), where Yt is the value of the time series at time t.
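The fitted equation can be applied directly for a one-step-ahead forecast: the most recent change is damped by the AR(1) coefficient and added to the latest value. The exchange-rate figures in the example below are hypothetical, not from the study.

```python
def arima110_forecast(y_prev, y_prev2, phi=0.3691):
    """One-step-ahead forecast from the fitted ARIMA(1,1,0) model in the
    abstract: Y_t = Y_{t-1} + 0.3691 * (Y_{t-1} - Y_{t-2}).
    Differencing once removes the trend; the AR(1) term carries forward
    a damped fraction of the last observed change."""
    return y_prev + phi * (y_prev - y_prev2)

# Hypothetical example: 34.80 THB/USD two months ago, 35.00 last month
print(arima110_forecast(35.00, 34.80))  # 35.00 + 0.3691 * 0.20 ≈ 35.07
```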
Abstract: The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. The PCM calculation assumes that the sub-compressors' outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum, and continuity equations to obtain the needed parameters, replacing the equal static pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Predictions of their performance under non-uniform inlet conditions are produced with the revised calculation method and used to evaluate its efficiency. Validation against experimental data shows that, despite small deviations, the calculated results agree well with measurements, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method offers clear advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
Abstract: This research studied the influence of soot blowing operations and geometrical variables on the stress characteristics of water wall tubes located in soot blowing areas, which caused the boilers of the Mae Moh power plant to lose generation hours. The research method is divided into two parts: (a) measuring the strain on water wall tubes using three-element rosette strain gauges during full-capacity plant operation and during soot blowing operations, and (b) creating a finite element model to calculate the stresses on the tubes and validating the model with experimental data from steady-state plant operation. The geometrical variables in the model were then changed to study the stresses on the tubes. The results revealed that the stress was not affected by the soot blowing process, and the finite element model gave results within 1.24% of the experiment. The geometrical variables did influence the stress, and the optimal tube design found in this research reduced the average stress by 31.28% compared with the present design.
Abstract: In this paper we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region with a neural network, and finding the exact contours with a greedy snake algorithm. The proposed method uses both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network that separates the target from the background. Its output is then used for exact calculation of the size and center of the target, and as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database that contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
Abstract: Human hand localization is revisited using radar cross section (RCS) measurements with a minimum root mean square (RMS) error matching algorithm on a touchless keypad mock-up model. RCS and frequency transfer function measurements are carried out in an indoor environment over the frequency range from 3.0 to 11.0 GHz to cover Federal Communications Commission (FCC) standards. The touchless keypad model is tested at two different distances between the hand and the keypad. The initial distance of 19.50 cm is identical to the heights of the transmitting (Tx) and receiving (Rx) antennas, while the second distance is 29.50 cm from the keypad. Moreover, the effects of the Rx antenna angle relative to the human hand are considered. The RCS input parameters are compared with power loss parameters at each frequency. The results show that the performance of the RCS input parameters at the second distance, 29.50 cm, at 3 GHz is better than the others.
Abstract: Groundwater is a vital water resource in many areas of the world, particularly in the Middle East, where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources have become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict flow patterns and assess water resource security, as well as groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred on Al-Najaf City, which is located in mid-western Iraq adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model also includes the distribution of 69 wells in the area, with predefined steady hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity measured at the well locations is interpolated for model use. The model is calibrated with the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement: the standard error of estimate (SEE), root mean square error (RMSE), normalized RMSE, and correlation coefficient are 0.297 m, 2.087 m, 6.899%, and 0.971, respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year. Hydraulic conductivity is found to be another parameter that can affect the results significantly, and it therefore requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with observations and the gradient of the ground surface. It is found that with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping shows that the Euphrates River supplies approximately 11759 m3/day to the groundwater, instead of gaining approximately 11178 m3/day from the groundwater when there is no pumping from the wells. The results obtained from this study are expected to provide important information for the sustainable and effective planning and management of the regional groundwater resources of Al-Najaf City.
Abstract: We present a new subband adaptive filter (R-SAF) that is robust against impulsive noise in system identification. To address the vulnerability of adaptive filters based on the L2-norm optimization criterion to impulsive noise, the R-SAF is derived from an L1-norm optimization criterion with a constraint on the energy of the weight update. Minimizing the L1 norm of the a posteriori error in each subband, subject to a minimum-disturbance constraint, gives rise to robustness against impulsive noise along with capable convergence performance. Experimental results clearly demonstrate that the proposed R-SAF outperforms classical adaptive filtering algorithms when impulsive noise as well as background noise is present.
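The L1-norm criterion echoes classical sign-type algorithms. As an illustration only, here is a fullband sign-error NLMS (not the subband R-SAF), which shows why an L1-derived update resists impulses: replacing the raw error with its sign bounds every weight change, so a single huge noise sample cannot derail the filter. The channel, step size, and impulse model are hypothetical.

```python
import numpy as np

def sign_nlms(x, d, num_taps=8, mu=0.01, delta=1e-2):
    """Fullband sign-error normalized LMS, a classical algorithm arising
    from an L1-norm error criterion. Each update has norm at most mu,
    regardless of how large the error e becomes, which is the source of
    robustness to impulsive noise."""
    w = np.zeros(num_taps)
    for k in range(num_taps - 1, len(x)):
        u = x[k - num_taps + 1:k + 1][::-1]
        e = d[k] - u @ w
        w += mu * np.sign(e) * u / np.sqrt(delta + u @ u)
    return w

# Hypothetical FIR channel with background noise plus sparse large impulses
rng = np.random.default_rng(2)
h = np.array([0.4, -0.3, 0.2, -0.1, 0.05, 0.0, 0.0, 0.0])
x = rng.standard_normal(20000)
impulses = (rng.random(len(x)) < 0.01) * 10 * rng.standard_normal(len(x))
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x)) + impulses
w = sign_nlms(x, d)   # converges near h despite the impulses
```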
Abstract: Hydrologic models are increasingly used as tools to
predict stormwater quantity and quality from urban catchments.
However, due to a range of practical issues, most models produce
gross errors in simulating complex hydraulic and hydrologic systems.
Difficulty in finding a robust approach for model calibration is one of
the main issues. Though automatic calibration techniques are
available, they are rarely used in common commercial hydraulic and
hydrologic modelling software e.g. MIKE URBAN. This is partly
due to the need for a large number of parameters and large datasets in
the calibration process. To overcome this practical issue, a
framework for automatic calibration of a hydrologic model was
developed in R platform and presented in this paper. The model was
developed based on the time-area conceptualization. Four calibration
parameters, including initial loss, reduction factor, time of
concentration and time-lag were considered as the primary set of
parameters. Using these parameters, automatic calibration was
performed using Approximate Bayesian Computation (ABC). ABC is
a simulation-based technique for performing Bayesian inference
when the likelihood is intractable or computationally expensive to
compute. To test the performance and usefulness, the technique was
used to simulate three small catchments in Gold Coast. For
comparison, simulation outcomes from the same three catchments
using commercial modelling software, MIKE URBAN were used.
The graphical comparison shows strong agreement: the MIKE URBAN results fall within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using
ABC over MIKE URBAN is that ABC provides a posterior
distribution for runoff flow prediction, and therefore associated
uncertainty in predictions can be obtained. In contrast, MIKE
URBAN provides only a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
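As background, rejection-sampling ABC, the simplest form of the technique named above, can be sketched as follows. The toy normal model, prior, tolerance, and all parameter values below are stand-ins for illustration, not the paper's time-area hydrologic model or its four calibration parameters.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_draws=10000, rng=None):
    """Rejection-sampling Approximate Bayesian Computation (ABC).
    Draw parameters from the prior, run the simulator, and keep only the
    draws whose simulated output lies within eps of the observation.
    The kept draws approximate the posterior without evaluating a
    likelihood, which is why ABC suits simulators whose likelihood is
    intractable."""
    rng = rng or np.random.default_rng()
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal model from its sample mean
rng = np.random.default_rng(3)
true_mu = 2.0
observed = rng.normal(true_mu, 1.0, size=100).mean()
post = abc_rejection(
    observed,
    simulate=lambda mu, r: r.normal(mu, 1.0, size=100).mean(),
    prior_sample=lambda r: r.uniform(-5, 5),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
    rng=rng,
)
# `post` is a sample from the approximate posterior; its spread is the
# prediction uncertainty that a point-estimate calibration cannot give.
```

Tightening `eps` sharpens the approximation at the cost of more rejected draws, which is the practical trade-off when the simulator is expensive.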