Abstract: The Audio-lingual Method (ALM) is a teaching approach that is often claimed to be ineffective for teaching second/foreign languages, because some linguists and second/foreign language teachers believe that ALM amounts to rote learning. However, this study proceeds from the belief that ALM can help solve Thai learners' English speaking problems. This paper reports findings on teaching English speaking to adult learners with an "adapted ALM", one distinctive feature of which is the use of Thai as the medium of instruction. The participants consisted of 9 adult learners. They were allowed to speak English more freely, using both the materials presented in class and their background knowledge of English. By the end of the course, they spoke English more fluently and more confidently, to the extent that they applied what they had learnt both inside and outside the class.
Abstract: Estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people classified as poor within a population. In this paper, we present this poverty indicator and propose the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. Results of these simulation studies indicate that the logistic regression estimator can be more accurate than the customary estimator under these sampling designs. The stratified sampling design can also provide more accurate results.
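As an illustration of the indicator itself, the sketch below computes the customary (design-weighted) estimate of the low income proportion. It assumes the common EU-SILC convention of a poverty line at 60% of median income; the function name, the synthetic data, and the simplified unweighted median are illustrative choices, not the paper's.

```python
# Illustrative sketch: a customary (design-weighted) estimator of the low
# income proportion, with the poverty line at a fixed fraction of the median
# income (0.6 here, following common EU-SILC practice).
from statistics import median

def low_income_proportion(incomes, weights=None, threshold_factor=0.6):
    """Estimated proportion of people with income below
    threshold_factor * median income, using optional sampling weights."""
    if weights is None:
        weights = [1.0] * len(incomes)
    # Unweighted median for simplicity; a weighted median would be used
    # in practice when the design weights are unequal.
    line = threshold_factor * median(incomes)
    poor = sum(w for y, w in zip(incomes, weights) if y < line)
    return poor / sum(weights)
```

With equal weights this reduces to the plain sample fraction of incomes below the poverty line; the paper's contribution replaces this customary estimator with a model-assisted logistic regression version.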
Abstract: This paper aims to introduce finite automata theory and the different ways of describing regular languages, and to create a program implementing the subset construction algorithm for converting a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. It reads an FA's 5-tuple from a text file and then classifies it as either a DFA or an NFA. For a DFA, the program reads a string w and decides whether or not it is accepted; if it is, the program saves and reports the tracking path. When the automaton is an NFA, the program converts it to a DFA so that it is easy to track and can decide whether or not w belongs to the regular language.
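The subset construction the program implements can be sketched as follows. This is a minimal Python illustration (without epsilon moves), not the paper's C++ code; the NFA is assumed to be given as a dict mapping (state, symbol) to a set of successor states.

```python
# Subset construction: each DFA state is a frozenset of NFA states.
def nfa_to_dfa(alphabet, delta, start, accepting):
    """Convert an NFA (transition dict delta) to a DFA."""
    start_set = frozenset([start])
    dfa_delta, accept = {}, set()
    frontier, seen = [start_set], {start_set}
    while frontier:
        current = frontier.pop()
        if current & accepting:          # any accepting NFA state inside?
            accept.add(current)
        for symbol in alphabet:
            nxt = frozenset(q for s in current
                            for q in delta.get((s, symbol), ()))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return start_set, dfa_delta, accept

def accepts(start, dfa_delta, accept, w):
    """Run the resulting DFA on string w and report acceptance."""
    state = start
    for ch in w:
        state = dfa_delta[(state, ch)]
    return state in accept
```

For example, an NFA accepting strings over {a, b} that end in "ab" converts to a three-state DFA on which membership of w is decided by a single deterministic pass.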
Abstract: This research paper aims to identify, analyze and rank factors affecting labor productivity in Spain with respect to their relative importance. Using a selected set of 35 factors, a structured questionnaire survey was utilized to collect data from companies. The target population is a random representative sample of practitioners in the Spanish construction industry. Findings reveal that the top five ranked factors are as follows: (1) shortage or late supply of materials; (2) clarity of the drawings and project documents; (3) clear and daily task assignment; (4) tool or equipment shortages; (5) level of skill and experience of laborers. Additionally, this research also aims to provide simple and comprehensive recommendations that could be implemented by construction managers for effective management of construction labor forces.
Abstract: The paper presents the results of clustering by Kohonen self-organizing maps (SOM) applied to the analysis of an array of Raman spectra of multi-component solutions of inorganic salts, with the aim of determining the types of salts present in the solution. It is demonstrated that SOM is a promising method for solving clustering and classification problems in the spectroscopy of multi-component objects, as attributing a pattern to some cluster may be used to recognize the component composition of the object.
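As a rough illustration of the approach, and assuming nothing about the paper's actual SOM configuration, a minimal one-dimensional Kohonen map might look like this; short synthetic vectors stand in for the much longer Raman spectra, and all hyperparameters are arbitrary.

```python
# Minimal 1-D Kohonen self-organizing map (illustrative sketch only).
import math, random

def best_matching_unit(weights, x):
    """Index of the unit whose weight vector is closest to x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train_som(data, n_units=4, epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)              # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.1  # shrinking neighbourhood radius
        for x in data:
            bmu = best_matching_unit(weights, x)
            for i, w in enumerate(weights):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                for d in range(dim):
                    w[d] += lr * h * (x[d] - w[d])
    return weights
```

After training, spectra of different salt compositions should map to different best-matching units, which is the clustering behaviour the paper exploits.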
Abstract: Frequent pattern mining is the process of finding patterns (sets of items, subsequences, substructures, etc.) that occur frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, for example which products are often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that, as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER can produce a speed-up of 3.1 times for frequent itemset mining compared to the original algorithm while maintaining an accuracy of 71%.
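FASTER itself combines entropy-based record reduction with rough-set feature selection; as a point of reference, here is a minimal sketch of the baseline task it pre-processes, Apriori-style frequent itemset mining. This is the textbook algorithm, not the paper's implementation.

```python
# Apriori-style frequent itemset mining (baseline sketch).
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support_count} for all itemsets with
    support_count >= min_support."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = [frozenset([i]) for i in items]   # size-1 candidates
    result, k = {}, 1
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
        # Candidate generation: join frequent (k-1)-sets, keep size-k unions.
        keys = list(frequent)
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == k})
    return result
```

The exponential blow-up the abstract mentions comes from this candidate lattice, which is why shrinking records and attributes beforehand pays off.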
Abstract: The relationship between eigenstructure (eigenvalues
and eigenvectors) and latent structure (latent roots and latent vectors)
is established. In control theory, eigenstructure is associated with the state space description of a dynamic multi-variable system and latent structure is associated with its matrix fraction description.
Beginning with block controller and block observer state space forms
and moving on to any general state space form, we develop the
identities that relate eigenvectors and latent vectors in either direction.
Numerical examples illustrate this result. A brief discussion of the
potential of these identities in linear control system design follows.
Additionally, we present a consequent result: a quick and easy
method to solve the polynomial eigenvalue problem for regular matrix
polynomials.
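The "quick and easy" route to the polynomial eigenvalue problem can be sketched via the standard companion (block-controller form) linearization: the eigenvalues of the companion matrix of a regular matrix polynomial are its latent roots. The code below is a generic illustration of that standard construction, not necessarily the paper's exact identities.

```python
# Latent roots of P(s) = A0 + A1*s + ... + Al*s^l (Al invertible) via the
# block companion matrix of the monic polynomial Al^{-1} P(s).
import numpy as np

def polyeig(coeffs):
    """coeffs = [A0, A1, ..., Al], square arrays; returns latent roots."""
    l = len(coeffs) - 1
    n = coeffs[0].shape[0]
    Al_inv = np.linalg.inv(coeffs[-1])
    C = np.zeros((n * l, n * l))
    C[:-n, n:] = np.eye(n * (l - 1))          # block super-diagonal identities
    for j in range(l):                         # last block row: -Al^{-1} Aj
        C[-n:, j * n:(j + 1) * n] = -Al_inv @ coeffs[j]
    return np.linalg.eigvals(C)
```

For a scalar quadratic such as s^2 - 3s + 2, the companion matrix is 2x2 and its eigenvalues recover the roots 1 and 2.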
Abstract: One of the most important tasks in urban remote
sensing is the detection of impervious surfaces (IS), such as roofs and
roads. However, detection of IS in heterogeneous areas remains one of the most challenging tasks. In this study, an object-based approach to detecting concrete roofs is proposed. A new rule-based classification was developed to detect concrete roof tiles. The proposed rule-based classification was applied to a WorldView-2 image, and the results showed that the proposed rules have good potential to identify concrete roof material in WorldView-2 images, with 85% accuracy.
Abstract: Boron gypsum is a waste produced in the boric acid production process. In this study, the boron content of this waste is evaluated for use in the synthesis of magnesium borates, since putting this kind of waste to use is more beneficial than storage or disposal. Magnesium borates, a sub-class of boron minerals, are useful additive materials for industry due to their remarkable thermal and mechanical properties. Magnesium borates were obtained hydrothermally at different temperatures. The novelty of this study is the investigation of the effect of solution density on the magnesium borate synthesis process, with the aim of increasing the possibility of using boron gypsum as a raw material. After the synthesis process, the products were subjected to XRD and FT-IR to identify and characterize their crystal structure, respectively.
Abstract: A generalized vortex lattice method for complex
lifting surfaces with flap and aileron deflection is formulated. The
method is not restricted by the linearized theory assumption and
accounts for all standard geometric lifting surface parameters:
camber, taper, sweep, washout, dihedral, in addition to flap and
aileron deflection. Thickness is not accounted for since the physical
lifting body is replaced by a lattice of panels located on the mean
camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. The code is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed. This discrepancy is mainly due to the linearized theory assumption underlying the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients is also carried out, and it is found that the choice of wake shape has very little effect on the results.
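The building block of any vortex lattice method is the Biot-Savart law for a straight vortex segment. The sketch below implements one standard form of that formula (as found in Katz and Plotkin's textbook treatment), computing the velocity induced at a point p by a segment of circulation gamma running from a to b; it is a generic illustration, not the paper's MATLAB code.

```python
# Biot-Savart induced velocity of a straight vortex segment (sketch).
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def induced_velocity(p, a, b, gamma):
    """Velocity induced at point p by segment a->b with circulation gamma."""
    r1 = tuple(pi - ai for pi, ai in zip(p, a))
    r2 = tuple(pi - bi for pi, bi in zip(p, b))
    r1xr2 = cross(r1, r2)
    sq = dot(r1xr2, r1xr2)
    if sq < 1e-12:                  # point on the segment axis: no velocity
        return (0.0, 0.0, 0.0)
    n1, n2 = math.sqrt(dot(r1, r1)), math.sqrt(dot(r2, r2))
    r0 = tuple(bi - ai for bi, ai in zip(b, a))
    k = gamma / (4 * math.pi * sq) * (dot(r0, r1) / n1 - dot(r0, r2) / n2)
    return tuple(k * c for c in r1xr2)
```

As a sanity check, a very long segment reproduces the two-dimensional line-vortex result gamma / (2*pi*d) at distance d.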
Abstract: This paper clarifies the role of ICT capital in economic growth. Although ICT contributes remarkably to economic growth, there are few studies of ICT capital in the ICT sector from a theoretical point of view. In this paper, a production function for ICT, which is used as an intermediate input in both the final-good and ICT sectors, is incorporated into our model. In this setting, we analyze the role of ICT on the balanced growth path and show the possibility of general equilibrium solutions for this model. Through simulation of the equilibrium solutions, we find that for ICT to impact the economy and increase economic growth, increases in efficiency in the ICT sector and in the accumulation of non-ICT and ICT capital must occur simultaneously.
Abstract: One of the major goals of Spoken Dialog Systems (SDS) is to understand what the user utters. In the SDS domain, the Spoken Language Understanding (SLU) module classifies user utterances by means of predefined conceptual knowledge, and it is able to recognize only meanings previously included in its knowledge base. Due to the vastness of that knowledge, storing the information is a very expensive process. Updating and managing the knowledge base are time-consuming and error-prone processes because of the rapidly growing number of entities such as proper nouns and domain-specific nouns. This paper proposes a solution to the problem of Named Entity Recognition (NER) applied to the SDS domain. The proposed solution attempts to automatically recognize the meaning associated with an utterance by using the PANKOW (Pattern-based Annotation through Knowledge On the Web) method at runtime. The proposed method extracts information from the Web to augment the SLU module's knowledge and to reduce the development effort. In particular, the Google Search Engine is used to extract information from the Facebook social network.
Abstract: This paper presents voltage problem location classification using the Least Squares Support Vector Machine (LS-SVM) and Learning Vector Quantization (LVQ) in an electrical power system, implemented on the IEEE 39-bus New England test system. The data were collected from time domain simulations using the Power System Analysis Toolbox (PSAT). Simulation outputs such as voltage, phase angle, real power and reactive power were taken as inputs to estimate voltage stability at particular buses based on the Power Transfer Stability Index (PTSI). The simulations were carried out on the IEEE 39-bus test system considering increased load at the buses. To verify the proposed LS-SVM, its performance was compared to that of LVQ. The results showed that LS-SVM is faster and more accurate than LVQ: LS-SVM achieved 0% misclassification, whereas LVQ had 7.69% misclassification.
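One reason LS-SVM trains quickly is that, in Suykens' formulation, training reduces to solving a single linear system rather than a quadratic program. The sketch below shows that formulation with a linear kernel on toy data; it is a generic illustration, and the kernel choice, gamma value, and data are assumptions, not the paper's configuration.

```python
# LS-SVM binary classification: solve [[0, y^T], [y, Omega + I/gamma]]
# [b; alpha] = [0; 1], with Omega_ij = y_i y_j K(x_i, x_j).
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    n = len(y)
    K = X @ X.T                                # linear kernel
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)              # one linear solve = training
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(X, y, b, alpha, x_new):
    """Decision: sign(sum_i alpha_i y_i K(x_new, x_i) + b)."""
    return np.sign(np.sum(alpha * y * (X @ x_new)) + b)
```

For well-separated classes, the resulting classifier labels unseen points on either side correctly, which is the behaviour compared against LVQ in the paper.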
Abstract: Consumer-to-Consumer (C2C) e-commerce has been growing very rapidly in recent years. Since identical or nearly identical products compete with one another through keyword search in C2C e-commerce, some sellers describe their products with spam keywords that are popular but unrelated to their products. Although such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead the consumers and waste their time. This problem has been reported in many commercial services such as eBay and Taobao, but there has been little research on solving it. As a solution, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is observed commonly in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is in general determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined separately for each layer. Hence, reliability degrees from the different layers of a hierarchical category become features for keywords, and they are used together with features derived only from specifications to classify the keywords. Support Vector Machines are adopted as the basic classifier using these features, since they are powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated on a gold-standard dataset from Yi-han-wang, a Chinese C2C e-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified using the hierarchical category in C2C e-commerce.
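The per-layer reliability features can be sketched as follows. This is my reading of the description above, not the authors' exact definition: a keyword's reliability at a given category layer is taken as the fraction of products sharing that category prefix whose specification mentions the keyword, and the function and data names are hypothetical.

```python
# One reliability score per layer of the product's hierarchical category
# (illustrative interpretation of the feature described in the abstract).
def reliability_features(keyword, product_category, catalog):
    """catalog: list of (category_path, specification_words) pairs.
    product_category: tuple of category names, general to specific."""
    feats = []
    for depth in range(1, len(product_category) + 1):
        prefix = tuple(product_category[:depth])
        peers = [spec for path, spec in catalog
                 if tuple(path[:depth]) == prefix]
        with_kw = sum(1 for spec in peers if keyword in spec)
        feats.append(with_kw / len(peers) if peers else 0.0)
    return feats
```

Deeper layers restrict the peer set to more specific product kinds, so the same keyword can score differently layer by layer; these scores would then be fed, together with specification-only features, into the SVM classifier.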
Abstract: In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions that can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector, which is of special importance in real-world applications of such algorithms. A fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and to extract features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA with Polynomial, Gaussian, and Laplacian kernels in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
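The dimension-reduction step the paper studies can be sketched generically: in KPCA the feature-vector dimension is simply the number of leading eigenvectors of the centred kernel matrix that are kept. The sketch below uses a Gaussian kernel (one of the three the paper considers) with an arbitrary bandwidth; it is a textbook illustration, not the paper's pipeline.

```python
# Kernel PCA with a Gaussian (RBF) kernel: project training data onto the
# n_components leading eigenvectors of the doubly-centred kernel matrix.
import numpy as np

def kpca(X, n_components, sigma=1.0):
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # double centring
    vals, vecs = np.linalg.eigh(Kc)                  # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]      # keep the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12)) # normalise projections
    return Kc @ alphas                               # projected training data
```

Choosing `n_components` here is exactly the fixed feature-vector dimension the abstract discusses; the downstream classifier then operates on these projections.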
Abstract: Biological conversion of biomass to methane has
received increasing attention in recent years. Grasses have been
explored for their potential anaerobic digestion to methane. In this
review, extensive literature data have been tabulated and classified.
The influences of several parameters on the potential of these
feedstocks to produce methane are presented. Lignocellulosic
biomass represents a mostly unused source for biogas and ethanol
production. Many factors, including lignin content, crystallinity of
cellulose, and particle size, limit the digestibility of the hemicellulose
and cellulose present in lignocellulosic biomass. Pretreatments have been used to improve the digestibility of lignocellulosic biomass. Each pretreatment has its own effects on cellulose, hemicellulose and lignin, the three main components of lignocellulosic biomass. Solid-state anaerobic digestion (SS-AD) generally occurs at solid concentrations higher than 15%. In contrast, liquid anaerobic digestion (AD) handles feedstocks with solid concentrations between 0.5% and 15%. Animal manure, sewage sludge, and food waste are generally treated by liquid AD, while organic fractions of municipal solid waste (OFMSW) and lignocellulosic biomass such as crop residues and energy crops can be processed through SS-AD. An increase in operating temperature can improve both the biogas yield and the production efficiency; other practices, such as using AD digestate or leachate as an inoculant or decreasing the solid content, may increase biogas yield but have a negative impact on production efficiency. Focus is placed on substrate pretreatment in anaerobic digestion (AD) as a means of increasing biogas yields using today's diversified substrate sources.
Abstract: The bureaucracy reform program is driving the Indonesian government to change its management practices to enhance organizational performance. Information technology has become one of the strategic areas that organizations try to improve. A knowledge management system is an information system that supports the implementation of knowledge management in government; it is categorized under the people perspective, because such a system depends heavily on human interaction and participation. A strategic plan for developing a knowledge management system can be determined using one of several information system strategy methods. This research was conducted to identify the types of information system strategy methods, the stages of activity in each method, and their strengths and weaknesses. Literature review methods were used to identify and classify the strategy methods, differentiate the method types, and categorize their common activities, strengths and weaknesses. The result of this research is a characterization and comparison of six strategic information system methods; the Balanced Scorecard and Risk Analysis are believed to be the most commonly used strategic methods and to have the greatest strengths.
Abstract: Disasters are frequently experienced in our days. They are caused by floods, landslides, and building fires, the last of which is the main focus of this study. To cope with these unexpected events, precautions must be taken to protect human lives. The emphasis of this work is on resolving the evacuation problem in the case of a no-notice disaster. The evacuation problem is cast as a dynamic network flow problem. In particular, we model it as an earliest arrival flow problem with load-dependent transit times. This problem is classified as NP-hard. Our challenge here is to propose a metaheuristic solution for the evacuation problem. We define our objective as maximizing the number of evacuees during the earliest periods of a time horizon T, so that persons are evacuated as soon as possible. We performed an experimental study on emergency evacuation from the Tunisian children's hospital. This work prompts us to look for evacuation plans corresponding to several situations in which the network changes dynamically.
Abstract: Steganalysis is one of the most challenging and attractive research interests arising from the development of information hiding techniques. It is the procedure for detecting hidden information in a stego object created by a known steganographic algorithm. In this paper, a novel feature-based image steganalysis technique is proposed. Various statistical moments are used along with some similarity metrics. The proposed steganalysis technique is designed around transformations in four wavelet domains: Haar, Daubechies, Symlets and Biorthogonal. Each domain is subjected to various classifiers, namely K-nearest-neighbor, the K* classifier, locally weighted learning, the naive Bayes classifier, neural networks, decision trees and support vector machines. The experiments are performed on a large set of freely available images from image databases. The system also makes predictions for different message length definitions.
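The feature-extraction stage can be sketched for the simplest of the four wavelet families: one level of the 2-D Haar transform followed by first- and second-order moments of each subband. This uses the unnormalized averaging/differencing variant of Haar and a minimal moment set, as an illustration rather than the paper's exact feature definition.

```python
# One-level 2-D Haar transform + per-subband moments as steganalysis features.
def haar2d(img):
    """Return LL (approximation) and LH, HL, HH (detail) subbands of an
    even-sized 2-D list, using averaging/differencing Haar filters."""
    def rows_pass(m):
        lo, hi = [], []
        for row in m:
            lo.append([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
            hi.append([(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)])
        return lo, hi
    def cols_pass(m):
        t = [list(c) for c in zip(*m)]          # transpose, filter, transpose
        lo, hi = rows_pass(t)
        return [list(c) for c in zip(*lo)], [list(c) for c in zip(*hi)]
    L, H = rows_pass(img)
    LL, LH = cols_pass(L)
    HL, HH = cols_pass(H)
    return LL, LH, HL, HH

def moments(band):
    flat = [v for row in band for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return mean, var

def steg_features(img):
    """Mean and variance of each Haar subband: 8 features per image."""
    return [m for band in haar2d(img) for m in moments(band)]
```

Embedding a message perturbs mainly the detail subbands, so moments of LH, HL and HH are the informative features a classifier would consume.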
Abstract: In remote sensing, shadow causes problems in many applications, such as change detection and classification. Shadows are cast by elevated objects and can directly affect the accuracy of extracted information. For these reasons, it is very important to detect shadows, particularly in urban high spatial resolution imagery, where they pose a significant problem. This paper focuses on automatic shadow detection based on a new spectral index for multispectral imagery, known as the Shadow Detection Index (SDI). The new spectral index was tested on different areas of WorldView-2 images, and the results demonstrated that it has great potential to extract shadows effectively and automatically, with an accuracy of 94%. Furthermore, the new shadow detection index improved road extraction accuracy from 82% to 93%.