Abstract: Although many studies on bridge assembly technology
have dealt mostly with the pier, girder, or deck of the bridge, studies
on the prefabricated barrier have rarely been performed. To
understand the structural characteristics and application of the
concrete barrier in the modular bridge, which is an assembly of
structural members, a static loading test was performed. The
structural performance as a road barrier of three methods,
conventional cast-in-place (ST), vertical bolt connection (BVC), and
horizontal bolt connection (BHC), was evaluated and compared
through analyses of load-displacement curves, steel strain curves,
concrete strain curves, and the visual appearance of crack patterns.
The vertical bolt connection (BVC) method demonstrated
performance comparable to the conventional cast-in-place (ST)
method while providing all the advantages of prefabrication
technology. The need for future improvement in nut fastening, as
well as in legal standards and regulations, is also addressed.
Abstract: Dengue disease is an infectious vector-borne viral
disease that is commonly found in tropical and sub-tropical regions,
especially in urban and semi-urban areas around the world, including
Malaysia. There is currently no available vaccine or chemotherapy
for the prevention or treatment of dengue disease. Therefore,
prevention and treatment of the disease depend on vector
surveillance and control measures. Disease risk mapping has been
recognized as an important tool in the prevention and control
strategies for diseases. The choice of statistical model used for
relative risk estimation is important as a good model will
subsequently produce a good disease risk map. Therefore, the aim of
this study is to estimate the relative risk for dengue disease based
initially on the most common statistic used in disease mapping called
Standardized Morbidity Ratio (SMR) and one of the earliest
applications of Bayesian methodology called Poisson-gamma model.
This paper begins by providing a review of the SMR method, which
we then apply to dengue data of Perak, Malaysia. We then fit an
extension of the SMR method, which is the Poisson-gamma model.
Both results are displayed and compared using graphs, tables, and
maps. The results of the analysis show that the latter method gives
better relative risk estimates than the SMR. The Poisson-gamma
model has been demonstrated to overcome the problem of the SMR
when there are no observed dengue cases in certain regions.
However, covariate adjustment in this model is difficult, and there is
no possibility of allowing for spatial correlation between risks in
adjacent areas. The drawbacks of this model have motivated many
researchers to propose other alternative methods for estimating the
risk.
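The two estimators compared above can be sketched in a few lines. The observed and expected counts below are hypothetical illustrative values, not the Perak data, and the gamma prior parameters a and b are likewise assumptions.

```python
# Two relative-risk estimators per region i, with observed counts O_i
# and expected counts E_i.  All numbers here are made-up illustrations.

def smr(observed, expected):
    """Standardized Morbidity Ratio: O_i / E_i."""
    return [o / e for o, e in zip(observed, expected)]

def poisson_gamma(observed, expected, a=1.0, b=1.0):
    """Posterior mean risk under O_i ~ Poisson(E_i * theta_i) with a
    Gamma(a, b) prior on theta_i: E[theta_i | O_i] = (a + O_i) / (b + E_i)."""
    return [(a + o) / (b + e) for o, e in zip(observed, expected)]

O = [12, 0, 7]          # the second region has zero observed cases
E = [8.0, 3.5, 6.2]

print(smr(O, E))            # SMR is exactly 0 where O_i = 0
print(poisson_gamma(O, E))  # shrunk toward the prior, never exactly 0
```

The zero-count region illustrates the drawback noted in the abstract: the SMR collapses to zero there, while the Poisson-gamma posterior mean stays positive by borrowing strength from the prior.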
Abstract: In recent years, a number of works proposing the
combination of multiple classifiers to produce a single
classification have been reported in remote sensing literature. The
resulting classifier, referred to as an ensemble classifier, is
generally found to be more accurate than any of the individual
classifiers making up the ensemble. As accuracy is the primary
concern, much of the research in the field of land cover
classification is focused on improving classification accuracy. This
study compares the performance of four ensemble approaches
(boosting, bagging, DECORATE and random subspace) with a
univariate decision tree as the base classifier. Two training datasets,
one without any noise and the other with 20 percent noise, were used
to judge the performance of the different ensemble approaches.
Results with the noise-free dataset suggest an improvement of about
4% in classification accuracy with all ensemble approaches in
comparison to the results provided by the univariate decision tree
classifier. The highest classification accuracy of 87.43% was
achieved by the boosted decision tree. A comparison of results with
the noisy dataset suggests that the bagging, DECORATE, and random
subspace approaches work well with these data, whereas the
performance of the boosted decision tree degrades and a
classification accuracy of 79.7% is achieved, which is even lower
than that achieved (i.e., 80.02%) by the unboosted decision tree
classifier.
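The bagging approach discussed above can be sketched as a pure-Python bootstrap ensemble with majority voting. The one-feature threshold "stump" base classifier and the tiny dataset are illustrative assumptions, not the univariate decision tree or the land cover data used in the study.

```python
import random

def fit_stump(X, y):
    """Choose the threshold/polarity on feature 0 minimising training error."""
    best_t, best_pol, best_err = None, 1, 1.1
    for t in sorted({x[0] for x in X}):
        for pol in (1, -1):
            preds = [pol if x[0] > t else -pol for x in X]
            err = sum(p != yi for p, yi in zip(preds, y)) / len(y)
            if err < best_err:
                best_t, best_pol, best_err = t, pol, err
    return best_t, best_pol

def predict_stump(stump, x):
    t, pol = stump
    return pol if x[0] > t else -pol

def bagging_fit(X, y, n_estimators=15, seed=0):
    """Fit each stump on a bootstrap resample of the training data."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in X]
        models.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return models

def bagging_predict(models, x):
    """Majority vote over the bootstrap ensemble."""
    return 1 if sum(predict_stump(m, x) for m in models) >= 0 else -1

X = [[1], [2], [3], [7], [8], [9]]
y = [-1, -1, -1, 1, 1, 1]
ensemble = bagging_fit(X, y)
```

Because each stump sees a different bootstrap sample, the vote averages out the variance of individual base classifiers, which is one reason bagging tolerates label noise better than boosting, whose reweighting concentrates on (possibly mislabeled) hard examples.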
Abstract: With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is exploited for either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced component platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Service-Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of the individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates the repurposing of Web contents (enhanced browsing, Web Services location and access, etc.).
Abstract: In this paper, first, a characterization of spherical
pseudo null curves in semi-Euclidean space is given. Then, to
investigate the position vector of a pseudo null curve, a system of
differential equations whose solution gives the components of the
position vector of a pseudo null curve on the Frenet axis is
established by means of the Frenet equations. Additionally, in view of
some special solutions of the mentioned system, characterizations of
some special pseudo null curves are presented.
Abstract: The ability of UML to handle the modeling process of complex industrial software applications has increased its popularity to the extent of becoming the de facto language serving the design purpose. Although its rich graphical notation, naturally oriented towards the object-oriented concept, facilitates understandability, it hardly succeeds in reporting all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, resulting in many obstacles to the building of tool support and thus to its application in industry. For this reason, much research has been conducted to formalize OCL expressions using a more rigorous approach. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we mainly succeed in rigorously expressing OCL predefined functions.
Abstract: There are three approaches to complete Bayesian
Network (BN) model construction: total expert-centred, total
data-centred, and semi data-centred. These three approaches constitute the
basis of the empirical investigation undertaken and reported in this
paper. The objective is to determine, amongst these three
approaches, which is the optimal approach for the construction of a
BN-based model for the performance assessment of students'
laboratory work in a virtual electronic laboratory environment. BN
models were constructed using all three approaches, with respect to
the focus domain, and compared using a set of optimality criteria. In
addition, the impact of the size and source of the training data on the
performance of total data-centred and semi data-centred models was
investigated. The results of the investigation provide additional
insight for BN model constructors and contribute to literature
providing supportive evidence for the conceptual feasibility and
efficiency of structure and parameter learning from data. In addition,
the results highlight other interesting themes.
Abstract: Not only is the municipal pattern the institutional foundation of urban management, but it also determines the forms of the management results. There is a considerable possibility of bankruptcy for China's current municipal pattern, as it is in fact an overdraft of land deals. Based on an analysis of China's current municipal pattern, this paper proposes a new pattern whose legitimacy is verified by conceptual as well as econometric models. The conclusion is: the surplus value added by investment in public goods is not included in China's current municipal pattern, but hidden in rising housing prices; we should introduce a housing tax or municipal tax to optimize the municipal pattern, to correct the behavior of local governments, and to ensure the regular development of China's urbanization.
Abstract: Document clustering has become an essential technology
with the popularity of the Internet, which means that fast and
high-quality document clustering techniques play a core role. Text
clustering or shortly clustering is about discovering semantically
related groups in an unstructured collection of documents. Clustering
has been very popular for a long time because it provides unique
ways of digesting and generalizing large amounts of information.
One of the issues of clustering is to extract proper feature (concept)
of a problem domain. The existing clustering technology mainly
focuses on term weight calculation. To achieve more accurate
document clustering, more informative features including concept
weight are important. Feature selection is important for the
clustering process because irrelevant or redundant features may
misguide the clustering results. To counteract this issue, the proposed
system presents concept weighting for a text clustering system
developed based on a k-means algorithm in accordance with the
principles of ontology, so that the importance of the words of a
cluster can be identified by the weight values. To a certain extent,
this has resolved the semantic problem in specific areas.
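The idea of concept-weighted k-means can be sketched minimally as follows. The documents, the concept-weight table (standing in for weights derived from an ontology), and the choice of two clusters are hypothetical illustrative assumptions, not the proposed system itself.

```python
import math

# Tiny corpus and a hypothetical ontology-derived concept-weight table:
# terms mapping to domain concepts get a boost over plain terms.
docs = ["cat dog pet", "dog pet animal",
        "stock market trade", "market trade price"]
concept_weight = {"pet": 2.0, "animal": 2.0, "market": 2.0, "trade": 2.0}

vocab = sorted({w for d in docs for w in d.split()})

def vectorize(doc):
    """Term frequency scaled by concept weight (default weight 1.0)."""
    words = doc.split()
    return [words.count(w) * concept_weight.get(w, 1.0) for w in vocab]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans2(vecs, iters=10):
    """k-means with k = 2 and deterministic farthest-point initialisation."""
    centers = [vecs[0], max(vecs, key=lambda v: dist(v, vecs[0]))]
    for _ in range(iters):
        labels = [0 if dist(v, centers[0]) <= dist(v, centers[1]) else 1
                  for v in vecs]
        for c in (0, 1):
            members = [v for v, l in zip(vecs, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

labels = kmeans2([vectorize(d) for d in docs])
```

Boosting the concept terms stretches the vector space along the dimensions that carry domain meaning, so documents sharing concepts are pulled into the same cluster even when their plain term overlap is small.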
Abstract: In recent years, the world has witnessed significant work in the field of manufacturing. Special efforts have been made in the implementation of new technologies and management and control systems, among many others, which have all advanced the field. Closely following all this, due to the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e., moving toward more intelligent manufacturing, the present paper emerges with the main aim of contributing to the analysis and a few customization issues of a new iCIM 3000 system at the IPSAM. In this process, special emphasis is placed on the material flow problem. For this, besides offering a description and analysis of the system and its main parts, some tips on how to define other possible alternative material flow scenarios and a partial analysis of the combinatorial nature of the problem are offered as well. All this is done with the intention of relating it to the use of simulation tools, which have been briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a few figures and expressions which help in obtaining the necessary data. Such data and others will be used in the future, when simulating the scenarios in search of the best material flow configurations.
Abstract: In this paper we present the semantic assistant agent
(SAA), an open source digital library agent which takes a user query
for finding information in the digital library, retrieves resource
metadata, and stores it semantically. The SAA uses the Semantic Web
to improve browsing and searching for resources in the digital library. All
metadata stored in the library are available in RDF format for
querying and processing by SemanSreach which is a part of SAA
architecture. The architecture includes a generic RDF-based model
that represents relationships among objects and their components.
Queries against these relationships are supported by an RDF triple
store.
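The triple-pattern queries that such an RDF store supports can be illustrated with a minimal in-memory sketch. The resource names and predicates below are hypothetical examples, not part of the SAA implementation.

```python
# An in-memory list of (subject, predicate, object) triples and a
# wildcard pattern matcher, standing in for the RDF triple store.
def match(triples, s=None, p=None, o=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

store = [
    ("lib:Book42", "dc:title", "Semantic Digital Libraries"),
    ("lib:Book42", "dc:creator", "lib:Author7"),
    ("lib:Author7", "foaf:name", "A. Example"),
]

# "Which resources were created by Author7?" -- a relationship query
works = match(store, p="dc:creator", o="lib:Author7")
```

Fixing the predicate and object while leaving the subject as a wildcard is exactly the kind of relationship query described in the abstract; a real triple store answers the same patterns with indexes rather than a linear scan.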
Abstract: Group-III nitride materials, particularly AlxGa1-xN, are
promising optoelectronic materials required for short-wavelength
devices. To achieve high-quality AlxGa1-xN films for a high
performance of such devices, AlN nucleation layers are an
important factor. To improve the AlN nucleation layers with a
variation of Ga addition, XRD measurements were conducted to
analyze the crystalline quality of the subsequent Al0.1Ga0.9N with the
minimum ω-FWHMs of (0002) and (10-10) reflections of 425 arcsec
and 750 arcsec, respectively. SEM and AFM measurements were
performed to observe the surface morphology and TEM
measurements to identify the microstructures and orientations.
Results showed that the optimized Ga atoms in the Al(Ga)N
nucleation layers improved the surface diffusion to form crystallites
more uniform in structure and size, with better alignment of each
crystallite and better homogeneity of island distribution. This, hence,
improves the orientation of epilayers on the Si-surface and finally
improves the crystalline quality and reduces the residual strain of
subsequent Al0.1Ga0.9N layers.
Abstract: The approach of subset selection in polynomial
regression model building assumes that the chosen fixed full set of
predefined basis functions contains a subset that is sufficient to
describe the target relation well. However, in most cases
the necessary set of basis functions is not known and needs to be
guessed – a potentially non-trivial (and long) trial and error process.
In our research we consider a potentially more efficient approach –
Adaptive Basis Function Construction (ABFC). It lets the model
building method itself construct the basis functions necessary for
creating a model of arbitrary complexity with adequate predictive
performance. However, there are two issues that to some extent
plague the methods of both the subset selection and the ABFC,
especially when working with relatively small data samples: the
selection bias and the selection instability. We try to correct these
issues by model post-evaluation using Cross-Validation and model
ensembling. To evaluate the proposed method, we empirically
compare it to ABFC methods without ensembling, to a widely used
method of subset selection, as well as to some other well-known
regression modeling methods, using publicly available data sets.
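The role of cross-validation in post-evaluating candidate basis functions can be illustrated with a small sketch. The synthetic data, the candidate bases x^d, and the 3-fold split are assumptions for illustration; this is not the ABFC method itself, only the model-selection idea it builds on.

```python
# Select a single basis function x**d for the model y = a + b*x**d
# by 3-fold cross-validation on synthetic data (true relation: cubic).

def fit_simple(xs, ys):
    """Closed-form least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def cv_error(xs, ys, degree, folds=3):
    """Mean squared validation error of y = a + b*x**degree."""
    err, n = 0.0, len(xs)
    for f in range(folds):
        tr = [i for i in range(n) if i % folds != f]
        va = [i for i in range(n) if i % folds == f]
        a, b = fit_simple([xs[i] ** degree for i in tr],
                          [ys[i] for i in tr])
        err += sum((ys[i] - (a + b * xs[i] ** degree)) ** 2 for i in va)
    return err / n

xs = [0.5 * i for i in range(1, 13)]
ys = [x ** 3 for x in xs]            # the true relation is cubic
best = min([1, 2, 3, 4], key=lambda d: cv_error(xs, ys, d))
```

Because the error is measured on held-out folds rather than on the training fit, the over-flexible candidate (d = 4) gains nothing from fitting noise-free training points it never sees at validation time, which is the bias-correcting effect the abstract appeals to.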
Abstract: This policy participation action research explores the
roles of Thai government units during its 2010 fiscal year on how to
create value added to recycling business in the central part of
Thailand. The research aims to a) study how the government plays a
role in supporting the business, and the problems and obstacles in
supporting it, and b) design a strategic action plan – with short,
medium, and long term plans – to create value added for the recycling
business, particularly in local full-loop companies/organizations
licensed by the Wongpanit Waste Separation Plant as well as those
licensed by the Department of Provincial Administration. A mixed
method research design, i.e., a combination of quantitative and
qualitative methods is utilized in the present study in both data
collection and analysis procedures. Quantitative data was analyzed
by frequency, percent value, mean scores, and standard deviation,
and aimed to note trends and generalizations. Qualitative data was
collected via semi-structured interviews/focus group interviews to
explore in-depth views of the operators. The sampling included 1,079
operators in eight provinces in the central part of Thailand.
Abstract: This paper applies Bayesian Networks to support
information extraction from unstructured, ungrammatical, and
incoherent data sources for semantic annotation. A tool has been
developed that combines ontologies, machine learning, and
information extraction and probabilistic reasoning techniques to
support the extraction process. Data acquisition is performed with the
aid of knowledge specified in the form of an ontology. Due to the
variable size of information available on different data sources, it is
often the case that the extracted data contains missing values for
certain variables of interest. It is desirable in such situations to
predict the missing values. The methodology, presented in this paper,
first learns a Bayesian network from the training data and then uses it
to predict missing data and to resolve conflicts. Experiments have
been conducted to analyze the performance of the presented
methodology. The results look promising as the methodology
achieves a high degree of precision and recall for information
extraction and reasonably good accuracy for predicting missing
values.
Abstract: This paper presents the experimental results on
artificial ageing test of 22 kV XLPE cable for distribution system
application in Thailand. XLPE insulating material of 22 kV cable
was sliced to 60-70 μm in thickness and was subjected to ac high
voltage at 23 °C, 60 °C, and 75 °C. The testing voltage was
constantly applied to the specimen until breakdown. Breakdown
voltage and time to breakdown were used to evaluate the lifetime of
the insulating material. Furthermore, the physical model by J. P.
Crine for predicting the lifetime of XLPE insulating material was
adopted as the lifetime model and was calculated in order to compare
with the experimental results. Acceptable lifetime results were
obtained from Crine's model compared with the experimental
results. In addition, Fourier transform infrared
spectroscopy (FTIR) for chemical analysis and scanning electron
microscope (SEM) for physical analysis were conducted on tested
specimens.
Abstract: Average current analysis, which checks the impact of
current flow, is very important to guarantee the reliability of
semiconductor systems. As semiconductor process technologies
improve, coupling capacitances often become larger than
self-capacitances. In this paper, we propose an analytic technique for
analyzing average current on interconnects in multi-conductor
structures. The proposed technique has been shown to yield
acceptable errors compared to HSPICE results while providing
computational efficiency.
Abstract: The spiral angle of the elementary cellulose fibril in
the wood cell wall is often called the microfibril angle (MFA). The
microfibril angle in hardwood is one of the key determinants of solid
timber performance due to its strong influence on the stiffness,
strength, shrinkage, swelling, thermal and dynamic mechanical
properties, and dimensional stability of wood. Variation of the MFA
(degrees) in the S2
layer of the cell walls among Acacia mangium trees was determined
using small-angle X-ray scattering (SAXS). The length and
orientation of the microfibrils of the cell walls in the irradiated
volume of the thin samples are measured using SAXS and optical
microscope for 3D surface measurement. The undetermined
parameters in the analysis are the MFA, (M) and the standard
deviation (σФ) of the intensity distribution arising from the wandering
of the fibril orientation about the mean value. Nine separate pairs of
values are determined for nine different values of the angle of the
incidence of the X-ray beam relative to the normal to the radial
direction in the sample. The results show good agreement. The
distribution curve of scattered intensity for the real cell wall structure
is compared with that calculated for an assembly of rectangular
cells with the same ratio of transverse to radial cell wall length. It is
demonstrated that for β = 45°, the peaks in the curve intensity
distribution for the real and the rectangular cells coincide. If this
peak position is Ф45, then the MFA can be determined from the
relation M = tan-1 (tan Ф45 / cos 45°), which is precise for rectangular
cells. It was found that 92.93% of the variation of MFA can be
attributed to the distance from pith to bark. Here we present our
results on the MFA in the cell wall with respect to its shape, structure,
and the distance from pith to bark, as a fast and yet accurate check of
the quality of wood, its uses, and applications.
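The relation M = tan-1(tan Ф45 / cos 45°) quoted above is straightforward to evaluate numerically; the peak position of 20° used below is a hypothetical illustrative value, not a measurement from the study.

```python
import math

def mfa_from_peak(phi45_deg):
    """M = arctan(tan(phi45) / cos 45 deg), the rectangular-cell relation."""
    phi45 = math.radians(phi45_deg)
    return math.degrees(math.atan(math.tan(phi45) / math.cos(math.radians(45.0))))

M = mfa_from_peak(20.0)   # hypothetical peak position of 20 degrees
```

Since cos 45° < 1, the recovered MFA is always slightly larger than the raw peak position Ф45, reflecting the geometric correction for the inclined cell walls.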
Abstract: e-Government structures permit the government to operate in a more transparent and accountable manner, which increases the power of the individual in relation to that of the government. This paper identifies the factors that determine customers' attitudes towards e-Government services using a theoretical model based on the Technology Acceptance Model. Data relating to the constructs were collected from 200 respondents. The research model was tested using Structural Equation Modeling (SEM) techniques via the Analysis of Moment Structures (AMOS 16) computer software. SEM is a comprehensive approach to testing hypotheses about relations among observed and latent variables. The proposed model fits the data well. The results demonstrated that e-Government services acceptance can be explained in terms of compatibility and attitude towards e-Government services. If e-Government services are set up to be compatible with the way users work, users are more likely to adopt them owing to their familiarity with the Internet for various official, personal, and recreational uses. In addition, managerial implications for government policy makers, government agencies, and system developers are also discussed.
Abstract: The common bean is the most important grain legume for direct human consumption in the world, and BCMV is one of the world's most serious bean diseases, which can reduce the yield and quality of the harvested product. To determine the best tolerance index to BCMV and to identify tolerant genotypes, two experiments were conducted under field conditions. Twenty-five common bean genotypes were sown in two separate RCB designs with three replications under contamination and non-contamination conditions. On the basis of the correlations among the indices, GMP, MP, and HARM were determined to be the most suitable tolerance indices. The results of principal components analysis indicated that the first two components together explained 98.52% of the variation among the data. The first and second components were named potential yield and stress susceptibility, respectively. Based on the results of the BCMV tolerance indices assessment and biplot analysis, WA8563-4, WA8563-2, and Cardinal were the genotypes that exhibited potential seed yield under both contamination and non-contamination conditions.