Abstract: This participatory policy action research explores the roles of Thai government units during fiscal year 2010 in adding value to the recycling business in central Thailand. The research aims to a) study how the government supports the business, and the problems and obstacles it faces in doing so, and b) design strategic actions (short-, medium-, and long-term plans) to add value to the recycling business, particularly in local full-loop companies/organizations licensed by the Wongpanit Waste Separation Plant as well as those licensed by the Department of Provincial Administration. A mixed-methods research design, i.e., a combination of quantitative and qualitative methods, is used in both data collection and analysis. Quantitative data were analyzed by frequencies, percentages, mean scores, and standard deviations to identify trends and support generalization. Qualitative data were collected via semi-structured and focus group interviews to explore the operators' in-depth views. The sample comprised 1,079 operators in eight provinces of central Thailand.
Abstract: Particle Swarm Optimization (PSO) with elite PSO parameters has been developed for power flow analysis under practical constrained situations. Multiple solutions of the power flow problem are useful in voltage stability assessment of a power system. A method for determining multiple power flow solutions is presented using a hybrid of PSO and a local search technique. The learning factors of the PSO algorithm are formulated as functions of the node power mismatch values so that they adapt to the power flow problem. The local search is applied to the pbest solution obtained by the PSO algorithm in each iteration. The proposed algorithm performs reliably and provides multiple solutions when applied to standard and ill-conditioned systems. The test results show that the performance of the proposed algorithm under critical conditions is better than that of conventional methods.
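To make the hybrid's PSO component concrete, the following is a minimal global-best PSO sketch in Python. It omits the paper's mismatch-adaptive learning factors and the local search on pbest, and the quadratic objective is only a toy stand-in for a node power mismatch function.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO. The paper adapts c1/c2 to the node power
    mismatches and refines pbest with a local search, both omitted here."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia plus cognitive (pbest) and social (gbest) pulls.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a power-mismatch objective, with minimum at (1, 2).
best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```

In the paper's setting, `f` would be the squared node power mismatch, and each iteration's pbest would additionally be polished by the local search.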
Abstract: Many existing partially blind signature schemes are based on a single hard problem, such as factoring, discrete logarithm, residuosity, or elliptic curve discrete logarithm. However, such systems become broken and vulnerable as soon as the underlying factoring or discrete logarithm problem is cracked. This paper proposes a secure partially blind signature scheme based on both the factoring (FAC) problem and the elliptic curve discrete logarithm (ECDL) problem. Because the proposed scheme rests on the FAC and ECDL hard problems simultaneously, it has a solid structure: an intruder would have to solve both hard problems at the same time, which is considered infeasible. To assess the security level of the proposed scheme, a performance analysis has been conducted. The results show that the proposed scheme effectively satisfies the partial blindness, randomization, unlinkability, and unforgeability properties. In addition, we have investigated the computation cost of the proposed scheme. The new scheme is robust, and it is difficult for malicious attackers to break it.
Abstract: The POD-assisted projective integration method based on the equation-free framework is presented in this paper. The method essentially relies on the slow manifold governing the given system. We have applied two variants, the "on-line" and "off-line" methods, to solve the one-dimensional viscous Burgers' equation. In the on-line method, the slow manifold is computed by extracting the POD modes on-the-fly along the projective integration process, without assuming prior knowledge of the underlying slow manifold. In contrast, in the off-line method the underlying slow manifold must be computed before the projective integration process. The projective step is performed by the forward Euler method. Numerical experiments show that, for a non-periodic system, the on-line method is more efficient than the off-line method. Moreover, the on-line approach is more practical when applying the POD-assisted projective integration method to arbitrary systems. The critical value of the projective time step, which directly limits the efficiency of both methods, is also shown.
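As an illustration of the projective step alone, here is a minimal projective forward Euler sketch on a toy stiff scalar ODE. It assumes a single fast time scale and a known right-hand side, and it does not include the POD mode extraction that the abstract describes; the step sizes below are illustrative choices, not the paper's critical values.

```python
import math

def projective_euler(f, u0, t0, t_end, h_inner=1.5e-3, k=6, H=0.05):
    """Projective forward Euler: k small inner steps damp the fast modes
    onto the slow manifold, then one large extrapolation step of size H."""
    t, u = t0, u0
    while t < t_end - 1e-12:
        for _ in range(k):          # inner integrator: plain forward Euler
            u += h_inner * f(t, u)
            t += h_inner
        u += H * f(t, u)            # projective (outer) step along the slow slope
        t += H
    return t, u

# Toy stiff problem: u' = -1000(u - cos t) - sin t, slow solution u ~ cos t.
rhs = lambda t, u: -1000.0 * (u - math.cos(t)) - math.sin(t)
t_f, u_f = projective_euler(rhs, u0=1.0, t0=0.0, t_end=1.0)
```

The inner steps must damp the fast components enough that the large projective step remains stable; choosing H too large is precisely the efficiency limit the abstract refers to.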
Abstract: Following the loss of NASA's Space Shuttle
Columbia in 2003, it was determined that problems in the agency's
organization created an environment that led to the accident. One
component of the proposed solution resulted in the formation of the
NASA Engineering Network (NEN), a suite of information retrieval
and knowledge-sharing tools. This paper describes the
implementation of communities of practice, which are formed along
engineering disciplines. Communities of practice enable engineers to
leverage their knowledge and best practices, collaborate, and take what they learn back to their jobs, embedding it into the
procedures of the agency. This case study offers insight into using
traditional engineering disciplines for virtual collaboration, including
lessons learned during the creation and establishment of NASA's communities.
Abstract: Evolutionary Algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, many real-life optimization problems require finding the optimum of complex, high-dimensional, multimodal functions whose fitness evaluations are computationally very expensive. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta-models, i.e., approximations of the actual fitness functions, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available for building such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks that use meta-models for fitness evaluation. The first, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time through controlled use of meta-models (in this case, approximate models generated by Support Vector Machine regression) to partially replace actual function evaluations with approximate ones. However, DAFHEA assumes that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
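The controlled use of a meta-model can be sketched as follows. This is not the DAFHEA algorithm itself: for simplicity the SVM regressor is replaced by an inverse-distance-weighted interpolant, and the evolution control is reduced to one exact evaluation of the surrogate-best child per generation.

```python
import random

def idw_surrogate(samples, power=2.0):
    """Inverse-distance-weighted interpolation over exactly evaluated
    samples - a cheap stand-in for the SVM-regression meta-model."""
    def predict(x):
        num = den = 0.0
        for xs, fs in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, xs))
            if d2 < 1e-12:
                return fs
            w = 1.0 / d2 ** (power / 2)
            num += w * fs
            den += w
        return num / den
    return predict

def surrogate_ea(f_true, dim=2, pop=20, gens=60, seed=0):
    """Toy surrogate-assisted EA: rank offspring with the surrogate and
    spend one exact evaluation per generation on the surrogate-best child."""
    rng = random.Random(seed)
    parents = [[rng.uniform(-3, 3) for _ in range(dim)] for _ in range(pop)]
    samples = [(p[:], f_true(p)) for p in parents]   # initial exact evaluations
    for _ in range(gens):
        surrogate = idw_surrogate(samples)
        children = [[g + rng.gauss(0, 0.3) for g in rng.choice(parents)]
                    for _ in range(pop)]
        children.sort(key=surrogate)                 # cheap approximate ranking
        best = children[0]
        samples.append((best[:], f_true(best)))      # controlled exact evaluation
        parents = children[:pop // 2] + parents[:pop // 2]
        parents.sort(key=surrogate)
    return min(samples, key=lambda s: s[1])

# Sphere function as a stand-in for an expensive fitness function.
best_x, best_f = surrogate_ea(lambda x: sum(g * g for g in x))
```

The point of the sketch is the budget: the true function is called only once per generation plus once per initial individual, while all ranking is done on the cheap surrogate.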
Abstract: In this research, an aerobic composting method is studied for reusing organic waste from a rubber factory as soil fertilizer, and the effect of a cellulolytic microbial activator (CMA) on composting the rubber factory waste is examined. The performance of the composting process was monitored as a function of carbon and organic matter decomposition rate, temperature, and moisture content. The results indicate that the rubber factory waste composts better with water hyacinth and sludge than alone. In addition, the CMA is more effective when mixed with the rubber factory waste, water hyacinth, and sludge, since a good fertilizer is then obtained. When CMA is added to rubber factory waste composted alone, the finished product does not meet the fertilizer standard, especially in its C/N ratio. Finally, the finished products of composting rubber factory waste with water hyacinth and sludge (both with and without CMA) can be an environmentally friendly alternative for solving the disposal problems of rubber factory waste, since the C/N ratio, pH, moisture content, temperature, and nutrients of the finished products are acceptable for agricultural use.
Abstract: This paper studies questions of continuous data dependence and uniqueness for solutions of initial boundary value problems in linear micropolar thermoelastic mixtures. Logarithmic convexity arguments are used to establish results with no definiteness assumptions upon the internal energy.
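The general shape of a logarithmic convexity argument (with a generic energy-type measure, not the paper's exact functional for micropolar thermoelastic mixtures) can be sketched as follows:

```latex
% w denotes the difference of two solutions of the initial boundary
% value problem; F is an energy-type measure of w.
\[
  F(t) = \int_{\Omega} |w(x,t)|^2 \, dx .
\]
% The structure of the equations is used to establish the convexity inequality
\[
  F(t)\,F''(t) - \bigl(F'(t)\bigr)^2 \ge 0 ,
\]
% so \log F is convex on [0,T], which yields the interpolation bound
\[
  F(t) \;\le\; F(0)^{\,1-t/T}\, F(T)^{\,t/T}, \qquad 0 \le t \le T .
\]
```

If $F(0)=0$ (the two solutions share the same data), the bound forces $F\equiv 0$ on $[0,T]$, giving uniqueness; for small nonzero $F(0)$ it gives Hölder-type continuous dependence on the data. Crucially, this route needs only the convexity inequality, not sign-definiteness of the internal energy, which is why such arguments suit the indefinite setting of the abstract.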
Abstract: Parsing is important in linguistics and Natural Language Processing for understanding the syntax and semantics of a natural language grammar. Parsing natural language text is challenging because of problems such as ambiguity and inefficiency. Moreover, the interpretation of natural language text depends on context-based techniques. A probabilistic component is essential to resolve ambiguity in both syntax and semantics, thereby increasing the accuracy and efficiency of the parser. The Tamil language has some inherent features that make parsing even more challenging. To obtain solutions, a lexicalized and statistical approach is applied in parsing with the aid of a language model. Statistical models mainly capture the semantics of the language and suit large-vocabulary tasks, whereas structural methods focus on syntax and model small-vocabulary tasks. A trigram-based statistical language model for Tamil with a medium vocabulary of 5,000 words has been built. Although statistical parsing performs well through trigram probabilities and a large vocabulary, it has disadvantages such as a focus on semantics rather than syntax and a lack of support for free word order and long-distance relationships. To overcome these disadvantages, a structural component is incorporated into the statistical language model, leading to a hybrid language model. This paper builds a phrase-structured hybrid language model that resolves the disadvantages mentioned above. In developing the hybrid model, a new part-of-speech tag set for Tamil with more than 500 tags and wide coverage has been created. A phrase-structured treebank covering 326 Tamil sentences and more than 5,000 words has been developed, and the hybrid language model has been trained on this treebank using the immediate-head parsing technique. A lexicalized and statistical parser employing this hybrid language model and immediate-head parsing gives better results than pure grammar-based and trigram-based models.
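The trigram component can be sketched in a few lines. The corpus below is a toy English stand-in for the paper's 5,000-word Tamil data, and add-one smoothing is used as an illustrative choice (the abstract does not specify the smoothing scheme).

```python
from collections import defaultdict

def train_trigram(sentences):
    """Collect trigram and bigram-context counts with <s> padding."""
    tri, bi = defaultdict(int), defaultdict(int)
    vocab = set()
    for s in sentences:
        toks = ["<s>", "<s>"] + s + ["</s>"]
        vocab.update(toks)
        for i in range(2, len(toks)):
            tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
            bi[(toks[i - 2], toks[i - 1])] += 1
    return tri, bi, vocab

def p_add1(tri, bi, vocab, w2, w1, w):
    """Add-one smoothed trigram probability P(w | w2 w1)."""
    return (tri[(w2, w1, w)] + 1) / (bi[(w2, w1)] + len(vocab))

# Toy stand-in corpus; the paper trains on Tamil sentences.
corpus = [["the", "dog", "runs"], ["the", "dog", "sleeps"], ["a", "cat", "runs"]]
tri, bi, vocab = train_trigram(corpus)
```

A hybrid model in the paper's sense would combine such trigram probabilities with phrase-structure (treebank-derived) probabilities rather than use the trigram alone.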
Abstract: An E-Appointment Scheduling (EAS) system has been developed to handle appointments for UMP students, lecturers in the Faculty of Computer Systems & Software Engineering (FCSSE), and the Student Medical Center. The schedules are based on the timetable and university activities. Constraint Logic Programming (CLP) has been applied to solve the scheduling problem by recommending to users the available slots in the lecturers' and doctors' timetables. Using this system avoids wasted time and cost because appointments are generated automatically. In addition, the system helps lecturers and doctors decide whether to approve or reject the appointments.
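The core constraint (an appointment slot must be free in every party's timetable) can be illustrated with a plain-Python analogue; the slot names and busy sets below are hypothetical, and a real CLP formulation would state the same condition declaratively.

```python
def free_slots(slots, *timetables):
    """Return the slots free in every timetable - a plain-Python analogue
    of the CLP constraint 'the appointment slot is occupied by no party'."""
    busy = set().union(*timetables)
    return [s for s in slots if s not in busy]

# Hypothetical weekly slots and busy sets for a lecturer and a student.
slots = ["Mon09", "Mon10", "Tue09", "Tue10", "Wed09"]
lecturer_busy = {"Mon09", "Tue09"}
student_busy = {"Mon10", "Tue09"}
recommended = free_slots(slots, lecturer_busy, student_busy)
```

The recommended list is exactly what the system would present to the user before auto-generating the appointment.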
Abstract: This study presents a new approach to automatic data clustering and classification problems in large and complex databases while, at the same time, deriving specific types of explicit rules
describing each cluster. The method works well in both sparse and
dense multidimensional data spaces. The members of the data space
can be of the same nature or represent different classes. A number
of N-dimensional ellipsoids are used for enclosing the data clouds.
Due to the geometry of an ellipsoid and its free rotation in space
the detection of clusters becomes very efficient. The method is based
on genetic algorithms that are used for the optimization of location,
orientation and geometric characteristics of the hyper-ellipsoids. The
proposed approach can serve as a basis for the development of
general knowledge systems for discovering hidden knowledge and
unexpected patterns and rules in various large databases.
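The building blocks of such a method can be sketched as an ellipsoid membership test and a toy enclosure fitness. For simplicity the ellipsoid here is axis-aligned, whereas the paper's GA also optimizes orientation (free rotation); the volume penalty weight is an illustrative assumption.

```python
def inside(point, center, axes):
    """Axis-aligned ellipsoid membership: sum((x_i - c_i)^2 / a_i^2) <= 1.
    The paper's hyper-ellipsoids may additionally be rotated."""
    return sum((p - c) ** 2 / a ** 2
               for p, c, a in zip(point, center, axes)) <= 1.0

def enclosure_fitness(points, center, axes):
    """Toy GA fitness: fraction of points enclosed, minus a volume proxy
    so the ellipsoid shrinks around the actual data cloud."""
    covered = sum(inside(p, center, axes) for p in points)
    volume_proxy = 1.0
    for a in axes:
        volume_proxy *= a
    return covered / len(points) - 0.01 * volume_proxy

cloud = [(0.1, 0.2), (-0.3, 0.1), (0.2, -0.2), (5.0, 5.0)]  # one outlier
fit = enclosure_fitness(cloud, center=(0.0, 0.0), axes=(1.0, 1.0))
```

A genetic algorithm would evolve the `center` and `axes` (and, in the paper, rotation) parameters to maximize such a fitness for each cluster.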
Abstract: In this work, the plate bending formulation of the boundary element method (BEM), based on Reissner's hypothesis, is extended to the analysis of plates reinforced by beams, taking membrane effects into account. The formulation is derived by assuming a zoned body in which each sub-region defines a beam or a slab, all of them represented by a chosen reference surface. Equilibrium and compatibility conditions are automatically imposed by the integral equations, which treat this composed structure as a single body. In order to reduce the number of degrees of freedom, the problem values defined on the interfaces are written in terms of their values on the beam axis. Separate equations are first derived for the bending and stretching problems, but in the final system of equations the two problems are coupled and cannot be treated separately. Finally, numerical examples with known analytical results are presented to show the accuracy of the proposed model.
Abstract: The use of artificial neural network (ANN) modeling for prediction and forecasting of variables in water resources engineering is increasing rapidly. Applications of ANN in terms of selection of inputs, network architecture, training algorithms, and training parameters in the different types of neural networks used in water resources engineering have been reported. ANN modeling of water resources engineering variables (river sediment and discharge) published in high-impact journals from 2002 to 2011 is examined and presented in this review. ANN is a powerful technique for developing relationships between input and output variables and is able to capture the complex behavior of water resources variables such as river sediment and discharge. It can produce robust predictions for many water resources engineering problems through appropriate learning from a set of examples. It is important to gain a good understanding of the input and output variables from a statistical analysis of the data before network modeling, as this helps in designing an efficient network. An appropriately trained ANN model can capture the physical relationship between the variables and may generate more accurate results than conventional prediction techniques.
Abstract: Nowadays, organizing a repository of documents and learning resources in a specialized field such as Information Technology (IT), together with search techniques based on domain knowledge or document content, is an urgent practical need in teaching, learning, and research. There have been several works on methods of organization and content-based search; however, the results are still limited and insufficient to meet users' demand for semantic document retrieval. This paper presents a solution for organizing a repository that supports semantic representation and processing in search. The proposed solution is a model integrating components such as an ontology describing domain knowledge, a database for the document repository, semantic representations of documents, and a file system, together with semantic processing techniques and advanced search techniques based on measuring semantic similarity. The solution is applied to build an IT learning materials management system for a university, with a semantic search function serving students, teachers, and managers alike. The application has been implemented and tested at the University of Information Technology, Ho Chi Minh City, Vietnam, and has achieved good results.
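One common way to measure semantic similarity between a document and a query is cosine similarity over concept-weight vectors; the abstract does not specify the paper's exact measure, so the function and the concept weights below are illustrative assumptions.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity of sparse concept-weight dicts: the closer the
    concept profiles of two items, the closer the score is to 1."""
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical ontology-concept weights for a document and a query.
doc = {"sorting": 0.8, "algorithm": 0.6}
query = {"algorithm": 1.0}
sim = cosine_sim(doc, query)
```

In an ontology-based system the keys would be concept identifiers resolved through the domain ontology rather than raw keywords.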
Abstract: Knowledge management (KM) is the process of taking whatever steps are needed to get the most out of available knowledge resources. KM involves several steps: capturing knowledge, discovering new knowledge, sharing knowledge, and applying knowledge in decision making. In applying knowledge, it is not necessary for the individual who uses it to fully comprehend it, as long as the available knowledge guides the decision making and actions. When an expert is called and provides a step-by-step procedure for solving the caller's problem, the expert is transferring knowledge or giving directions to the caller, and the caller is 'applying' the knowledge by following the expert's instructions. An appropriate mechanism is needed to ensure effective knowledge transfer, which in this case occurs by telephone or email. The problem with email and telephone is that the knowledge is not fully circulated and disseminated to all users. In this paper, drawing on the experience of a local university help desk, we propose the use of Information Technology (IT) to effectively support knowledge transfer in the organization. The issues covered include the existing knowledge, related work, the methodology used to define the knowledge management requirements, and an overview of the prototype.
Abstract: It is well known that Logistic Regression is the gold standard method for predicting clinical outcomes, especially risk of mortality. In this paper, the Decision Tree method is proposed for problems that commonly use Logistic Regression as a solution. The Biochemistry and Haematology Outcome Model (BHOM) dataset, obtained from Portsmouth NHS Hospital for 1 January to 31 December 2001, was divided into four subsets. One subset of training data was used to generate a model, and the resulting model was then applied to the three test datasets. The performance of each model from both methods was compared using calibration (the χ² test) and discrimination (area under the ROC curve, or c-index). The experiments showed that both methods give reasonable results in terms of the c-index; however, in some cases the calibration value (χ²) was quite high. After conducting the experiments and weighing the advantages and disadvantages of each method, we conclude that Decision Trees can be seen as a worthy alternative to Logistic Regression in the area of Data Mining.
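The discrimination measure used above, the c-index (area under the ROC curve), has a simple rank interpretation that can be computed directly; the risk scores and outcomes below are illustrative values, not BHOM data.

```python
def c_index(scores, labels):
    """c-index / AUC: the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative predicted mortality risks and observed outcomes (1 = died).
auc = c_index([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])
```

A c-index of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why both models in the abstract are judged "reasonable" on this scale; calibration (the χ² comparison of predicted and observed event counts) is assessed separately.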
Abstract: Since prestressed concrete members rely on the tensile strength of the prestressing strands to resist loads, the loss of even a few of them could be catastrophic. It is therefore important to measure the present residual prestress force. Although there are techniques for obtaining the present prestress force, some problems remain. One method is to install a load cell in front of the anchor head, but this may increase cost. A load cell is a transducer that exploits elastic material properties; since the anchor head is also elastic, it can itself be used to monitor the present prestress force. Features of fiber optic sensors, such as small size, high sensitivity, and high durability, make it possible to give the anchor head a sensing function. This paper presents the concept of a smart anchor head that acts as a load cell, together with experiments on its applicability. Test results showed that the smart anchor head worked well, with a strong linear relationship between load and response.
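The reported linear load-response relationship implies a simple least-squares calibration of the smart anchor head; the load and response values below are hypothetical, purely to show the fit.

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a*x + b - the kind of calibration
    implied by a strong linear load-response relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical applied loads (kN) vs. fiber optic sensor response (a.u.).
loads = [0.0, 10.0, 20.0, 30.0, 40.0]
resp = [0.1, 5.0, 10.1, 15.0, 20.0]
a, b = linear_fit(loads, resp)
```

Once `a` and `b` are known, inverting the fit converts a measured sensor response into an estimate of the present prestress force.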
Abstract: The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate predefined classes, such as patients with a particular disease versus healthy controls. However, most existing research focuses on a specific dataset, and there is a lack of generic comparisons between classifiers that might guide biologists or bioinformaticians in selecting the proper algorithm for a new dataset. In this study, we compare the performance of the popular classifiers Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest on mock datasets. We mimic common biological scenarios by simulating various proportions of truly discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by SVM's ability to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, depend strongly on the proportion of discriminators and perform better with a higher number of discriminators.
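A mock dataset in the spirit described (a chosen fraction of truly discriminating features with a given effect size, the rest pure noise) can be generated as follows; the default proportions and the Gaussian model are illustrative assumptions, since the abstract does not give the exact simulation recipe.

```python
import random

def mock_dataset(n_per_class=50, n_features=100, frac_disc=0.1,
                 effect=1.0, seed=42):
    """Two-class mock data: the first frac_disc of the features carry a
    mean shift of size `effect` in class 1; all other features are noise."""
    rng = random.Random(seed)
    n_disc = int(n_features * frac_disc)
    X, y = [], []
    for label in (0, 1):
        for _ in range(n_per_class):
            shift = effect if label == 1 else 0.0
            row = [rng.gauss(shift if j < n_disc else 0.0, 1.0)
                   for j in range(n_features)]
            X.append(row)
            y.append(label)
    return X, y

X, y = mock_dataset()
```

Varying `frac_disc` and `effect` reproduces the scenarios of the comparison: few versus many discriminating biomarkers, and weak versus strong effect sizes.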
Abstract: Urban road network traffic has become one of the most studied research topics of recent decades, mainly because of the growth of cities and the growing number of motor vehicles traveling on the road network. One of the most sensitive problems is verifying whether the network is congestion-free; a related problem is automatically reconfiguring the network, without building new roads, to alleviate congestion. These problems require an accurate traffic model to determine the steady state of the system. An alternative is to simulate the traffic to see whether congestion arises, and when and where it occurs. One key issue is finding an adequate model for road intersections. Once the model is established, either a large-scale model is built, or the intersection is represented by its performance measures and simulation is used for analysis. In both cases, it is important to choose a queueing model that represents the road intersection well. In this paper, we propose to model the road intersection as a BCMP queueing network, and we compare this analytical model against a simulation model for validation.
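The appeal of BCMP networks is their product-form solution: each node can be analyzed in isolation given its effective arrival rate. As a minimal illustration, here are the standard M/M/1 performance measures for a single FCFS node of such a network; the arrival and service rates below are hypothetical, not from the paper.

```python
def mm1_measures(lam, mu):
    """M/M/1 node measures. In an open BCMP/Jackson network with
    product-form solution, each node is evaluated independently
    with its effective arrival rate."""
    if lam >= mu:
        raise ValueError("unstable node: arrival rate >= service rate")
    rho = lam / mu               # utilization
    L = rho / (1 - rho)          # mean number of vehicles at the node
    W = 1 / (mu - lam)           # mean time at the node (Little's law)
    return rho, L, W

# Hypothetical approach lane: 0.4 veh/s arriving, served at 0.5 veh/s.
rho, L, W = mm1_measures(0.4, 0.5)
```

The congestion-free check in the abstract corresponds to requiring every node's utilization `rho` to stay below 1; as `rho` approaches 1, the mean queue length `L` grows without bound.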
Abstract: Routing plays an important role in determining the quality of service in wireless networks, and the routing methods currently adopted in wireless networks have many drawbacks. This paper reviews the current routing methods used in wireless networks and proposes a solution to overcome their problems, aimed at improving the Quality of Service. The solution differs from others in that it involves the reuse of parts of the virtual circuits. This improvement in quality of service is especially important for the propagation of multimedia applications such as video and animation. There is thus a pressing need for a new solution that improves the quality of service in wireless ATM networks for multimedia applications, especially in this era of multimedia-based applications.