Abstract: With the rapid development of the life sciences and the flood of
genomic information, the need for faster and more scalable search methods has
become urgent. One approach that has been investigated is indexing. Indexing
methods have been categorized into three groups: length-based index
algorithms, transformation-based algorithms and mixed-technique algorithms.
In this research, we focused on the transformation-based methods. We embedded
the N-gram method into the transformation-based method to build an inverted
index table. We then applied parallel methods to speed up index building and
to reduce the overall retrieval time when querying the genomic database. Our
experiments show that the N-gram transformation algorithm is an economical
solution; it saves both time and space. The results show that the index is
smaller than the dataset when the N-gram size is 5 or 6. The results of the
parallel N-gram transformation algorithm indicate that the use of parallel
programming with large datasets is promising and can be improved further.
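The N-gram inverted index described above can be sketched minimally in Python; the function names and the toy DNA fragment are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

def build_ngram_index(sequence, n):
    """Inverted index: map each length-n substring (N-gram) to its start positions."""
    index = defaultdict(list)
    for i in range(len(sequence) - n + 1):
        index[sequence[i:i + n]].append(i)
    return index

def query(index, pattern, n):
    """Candidate start positions whose indexed N-gram matches the pattern prefix."""
    return list(index.get(pattern[:n], []))

seq = "ACGTACGTGACG"              # toy genomic fragment
idx = build_ngram_index(seq, 5)   # N-gram sizes 5 and 6 gave the smallest index in the paper
hits = query(idx, "ACGTG", 5)     # positions where the query's first N-gram occurs
```

Each N-gram key points at candidate positions, so a query only verifies a short posting list instead of scanning the whole sequence; building the table is also trivially parallelizable by splitting the sequence into chunks.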
Abstract: In recent years, a number of works proposing the
combination of multiple classifiers to produce a single
classification have been reported in remote sensing literature. The
resulting classifier, referred to as an ensemble classifier, is
generally found to be more accurate than any of the individual
classifiers making up the ensemble. As accuracy is the primary
concern, much of the research in the field of land cover
classification is focused on improving classification accuracy. This
study compares the performance of four ensemble approaches
(boosting, bagging, DECORATE and random subspace) with a
univariate decision tree as the base classifier. Two training datasets, one
without any noise and the other with 20 percent noise, were used to judge the
performance of the different ensemble approaches. Results with the noise-free
dataset suggest an improvement of about 4% in classification accuracy with
all ensemble approaches in comparison to the results provided by the
univariate decision tree classifier. The highest classification accuracy,
87.43%, was achieved by the boosted decision tree. A comparison of results
with the noisy dataset suggests that the bagging, DECORATE and random
subspace approaches work well with these data, whereas the performance of the
boosted decision tree degrades: a classification accuracy of 79.7% is
achieved, which is even lower than that achieved (i.e. 80.02%) by the
unboosted decision tree classifier.
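The bagging member of the ensemble family compared above can be illustrated with a minimal sketch: fit base learners on bootstrap resamples and combine them by majority vote. The 1-NN base learner and the toy land-cover points are hypothetical stand-ins (the study uses univariate decision trees as base classifiers):

```python
import random

def knn1_predict(train, x):
    """1-nearest-neighbour label on (features, label) pairs."""
    nearest = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def bagging_predict(train, x, n_models=15, seed=0):
    """Majority vote over base learners fit on bootstrap resamples."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]   # bootstrap resample
        votes.append(knn1_predict(sample, x))
    return max(set(votes), key=votes.count)

# toy land-cover training points: (band values, class label)
train = [((0.0, 0.0), "water"), ((0.2, 0.1), "water"),
         ((1.0, 1.0), "forest"), ((0.9, 1.1), "forest")]
label = bagging_predict(train, (0.95, 1.0))
```

Because each base learner sees a slightly different resample, the vote averages out individual errors, which is also why bagging tends to tolerate label noise better than boosting, as the abstract reports.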
Abstract: In this paper, full state feedback controllers capable of
regulating speed and tracking a speed trajectory are presented. A
fourth-order nonlinear mean value model of a 448 kW turbocharged diesel
engine published earlier is used for the purpose. For controller design, the
nonlinear model is linearized and represented in state-space form. Full state
feedback controllers capable of meeting the varying speed demands of drivers
are presented. The main focus here is to investigate the sensitivity of the
controller to perturbations in the parameters of the original nonlinear
model. The suggested controller is shown to be highly insensitive to
parameter variations. This indicates that the controller is likely to perform
with the same accuracy even after significant wear and tear of the engine due
to years of use.
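The full state feedback idea (a control law u = -Kx applied to a linearized plant) can be sketched on a toy second-order system; the plant, the gains and the units are illustrative assumptions, not the fourth-order engine model from the paper:

```python
def simulate(x, K, dt=0.01, steps=1000):
    """Forward-Euler simulation of a toy linearized plant x1' = x2, x2' = u
    under the full state feedback law u = -K x."""
    for _ in range(steps):
        u = -(K[0] * x[0] + K[1] * x[1])      # state feedback control input
        x = (x[0] + dt * x[1], x[1] + dt * u)  # Euler integration step
    return x

x0 = (1.0, 0.0)                  # initial speed-error state (hypothetical units)
xT = simulate(x0, K=(1.0, 2.0))  # these gains place both closed-loop poles at s = -1
```

With both poles at s = -1 the error state decays to essentially zero within the ten simulated seconds; sensitivity studies such as the paper's amount to perturbing the plant coefficients and checking that the same K still drives the state to zero.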
Abstract: The ability of UML to handle the modeling of complex industrial software applications has increased its popularity to the extent of becoming the de facto language for design. Although its rich graphical notation, naturally oriented towards object-oriented concepts, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, resulting in many obstacles to the building of tool support and thus to its application in industry. For this reason, much research has been devoted to formalizing OCL expressions using more rigorous approaches. Our contribution joins this work in a complementary way, since it focuses specifically on the OCL predefined properties that constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in rigorously expressing the OCL predefined functions.
Abstract: The number of features required to represent an image can be very
large. Using all available features to recognize objects can suffer from the
curse of dimensionality. Feature selection and extraction are the
pre-processing steps of image mining. The main issues in analyzing images are
the effective identification of features and their extraction. The mining
problem addressed here is the grouping of features for different shapes.
Experiments have been conducted using the shape outline as the feature. Shape
outline readings are put through normalization and a dimensionality reduction
process using an eigenvector-based method to produce a new set of readings.
After this pre-processing step, the data are grouped by shape. Through
statistical analysis of these readings together with peak measures, a robust
classification and recognition process is achieved. Tests showed that the
suggested methods are able to automatically recognize objects through their
shapes. Finally, experiments also demonstrate the system's invariance to
rotation, translation, scale, reflection and, to a small degree, distortion.
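The eigenvector-based dimensionality reduction step can be sketched as power iteration on the covariance matrix of the outline readings, projecting the data onto its dominant axis; the toy 2-D data and function name are assumptions, not the paper's actual method:

```python
def principal_axis(points, iters=100):
    """Dominant eigenvector of the 2x2 covariance matrix via power iteration."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # covariance matrix entries
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    v = (1.0, 1.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)          # renormalize each iteration
    return v

# shape-outline readings that vary mostly along the x direction
outline = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 0.05), (4.0, 0.0)]
axis = principal_axis(outline)
```

Projecting every reading onto the leading eigenvectors discards the low-variance directions, which is what makes the reduced representation insensitive to small distortions of the outline.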
Abstract: There are three approaches to complete Bayesian Network (BN) model
construction: total expert-centred, total data-centred, and semi
data-centred. These three approaches constitute the basis of the empirical
investigation undertaken and reported in this paper. The objective is to
determine which of these three approaches is optimal for the construction of
a BN-based model for the performance assessment of students' laboratory work
in a virtual electronic laboratory environment. BN models were constructed
using all three approaches, with respect to the focus domain, and compared
using a set of optimality criteria. In addition, the impact of the size and
source of the training data on the performance of the total data-centred and
semi data-centred models was investigated. The results of the investigation
provide additional insight for BN model constructors and contribute to the
literature by providing supportive evidence for the conceptual feasibility
and efficiency of structure and parameter learning from data. In addition,
the results highlight other interesting themes.
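The parameter-learning part of the data-centred approaches (fitting a BN node's conditional probability table from records) can be sketched as maximum-likelihood counting; the variable names and records below are hypothetical, not the study's assessment model:

```python
from collections import Counter

def learn_cpt(records, child, parent):
    """Maximum-likelihood CPT P(child | parent) estimated by counting records."""
    joint = Counter((r[parent], r[child]) for r in records)
    marginal = Counter(r[parent] for r in records)
    return {(p, c): joint[(p, c)] / marginal[p] for (p, c) in joint}

# hypothetical assessment records: preparation level -> lab-work performance
data = [{"prep": "high", "perf": "pass"}, {"prep": "high", "perf": "pass"},
        {"prep": "high", "perf": "fail"}, {"prep": "low", "perf": "fail"}]
cpt = learn_cpt(data, "perf", "prep")
```

A total data-centred approach learns both the structure and these tables from data, a semi data-centred one fixes the expert-given structure and learns only the tables, and a total expert-centred one elicits both from experts.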
Abstract: This paper investigates the spatial structure of employment in the Jakarta Metropolitan Area (JMA), with reference to the concept of the Southeast Asian extended metropolitan region (EMR). A combination of factor analysis and local Getis-Ord (Gi*) hot-spot analysis is used to identify clusters of employment in the region, including those of the urban and agriculture sectors. Spatial statistical analysis is further used to probe the spatial association of the identified employment clusters with their surroundings on several dimensions, including the association of the central business district (CBD) in Jakarta city with employment density in the region, the spatial impacts of urban expansion on population growth and the degree of urban-rural interaction. The degree of spatial interaction for the whole JMA is measured by the patterns of commuting trips destined for the various employment clusters. Results reveal the strong role of the urban core of Jakarta, and the regional CBD, as the centre for mixed job sectors such as retail, wholesale, services and finance. Manufacturing and local government services, on the other hand, form corridors radiating out of the urban core, reaching out to the agriculture zones in the fringes. Strong associations between the urban expansion corridors and population growth, and the urban-rural mix, are revealed particularly in the eastern and western parts of the JMA. Metropolitan-wide commuting patterns are focussed on the urban core of Jakarta and the CBD, while relatively local commuting patterns are shown to be prevalent for the employment corridors.
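The local Getis-Ord Gi* statistic used for the hot-spot analysis can be sketched directly from its standard formula; the one-dimensional toy corridor and binary contiguity weights below are illustrative assumptions:

```python
import math

def gi_star(values, w_row):
    """Local Getis-Ord Gi* statistic; w_row[j] is the spatial weight between
    the focal unit and unit j (self-weight included, the 'star' variant)."""
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - xbar ** 2)
    sw = sum(w_row)
    sw2 = sum(w * w for w in w_row)
    num = sum(w * x for w, x in zip(w_row, values)) - xbar * sw
    den = s * math.sqrt((n * sw2 - sw ** 2) / (n - 1))
    return num / den

density = [1.0, 1.0, 9.0, 1.0, 1.0]       # employment density along a toy corridor
w = [[1, 1, 0, 0, 0], [1, 1, 1, 0, 0], [0, 1, 1, 1, 0],
     [0, 0, 1, 1, 1], [0, 0, 0, 1, 1]]    # binary contiguity incl. self
hot = gi_star(density, w[2])              # the peak zone
cold = gi_star(density, w[0])             # a low-density edge zone
```

A significantly positive Gi* flags a unit surrounded by high values (an employment hot spot), a negative one a cold spot, which is how the clusters and corridors in the abstract are delineated.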
Abstract: Document clustering has become an essential technology with the
popularity of the Internet. This also means that fast, high-quality document
clustering techniques play a core role. Text clustering, or simply
clustering, is about discovering semantically related groups in an
unstructured collection of documents. Clustering has been very popular for a
long time because it provides unique ways of digesting and generalizing large
amounts of information. One of the issues in clustering is extracting proper
features (concepts) of a problem domain. Existing clustering technology
mainly focuses on term weight calculation. To achieve more accurate document
clustering, more informative features, including concept weights, are
important. Feature selection is important for the clustering process because
irrelevant or redundant features may mislead the clustering results. To
counteract this issue, the proposed system introduces concept weights into a
text clustering system developed on the basis of the k-means algorithm and
the principles of ontology, so that the important words of a cluster can be
identified by their weight values. To a certain extent, this resolves the
semantic problem in specific areas.
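The concept-weighted k-means idea, scaling each feature by a weight so that heavily weighted concepts dominate cluster assignment, can be sketched as follows; the vectors, weights and seeding are illustrative assumptions, not the proposed ontology-based system:

```python
def wdist(a, b, weights):
    """Squared Euclidean distance with per-feature (concept) weights."""
    return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b))

def weighted_kmeans(vectors, centroids, weights, iters=20):
    """Plain k-means, except that distances use concept weights, so heavily
    weighted features dominate cluster assignment."""
    k = len(centroids)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda c: wdist(v, centroids[c], weights))
            groups[j].append(v)
        # recompute each centroid as the mean of its group (keep it if empty)
        centroids = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids, groups

# toy document vectors: feature 0 is a high-weight concept, feature 1 a noisy term
docs = [(0, 0), (0, 5), (1, 0), (1, 5), (10, 0), (10, 5), (11, 0), (11, 5)]
cents, groups = weighted_kmeans(docs, [docs[0], docs[-1]], weights=(1.0, 0.1))
```

With the concept dimension weighted ten times more than the noisy term, the clusters split along the concept axis; an ontology would supply such weights per concept rather than per raw term.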
Abstract: This paper proposes a method, combining color and layout features, for identifying documents captured with low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented by an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and represented hierarchically. Our identification method first uses the color information in the documents in order to narrow the search space to documents having a similar color distribution, and then selects the document having the most similar layout structure in the remainder of the search space.
Abstract: In the past years, the world has witnessed significant work in the field of manufacturing. Special efforts have been made in the implementation of new technologies and management and control systems, among many others, which have all evolved the field. Closely following all this, and given the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e. to move toward more intelligent manufacturing, the present paper emerges with the main aim of contributing to the analysis and to a few customization issues of a new iCIM 3000 system at the IPSAM. In this process, special emphasis is made on the material flow problem. For this, besides offering a description and analysis of the system and its main parts, some tips on how to define other possible alternative material flow scenarios and a partial analysis of the combinatorial nature of the problem are offered as well. All this is done with the intention of relating it to the use of simulation tools, which have been briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a few figures and expressions which help in obtaining the necessary data. These data and others will be used in the future, when simulating the scenarios in search of the best material flow configurations.
Abstract: Lately, significant work in the area of Intelligent Manufacturing
has become public, mainly applied within the frame of industrial purposes.
Special efforts have been made in the implementation of new technologies and
management and control systems, among many others, which have all evolved the
field. Aware of all this, and given the scope of new projects and the need to
turn existing flexible ideas into more autonomous and intelligent ones, i.e.
Intelligent Manufacturing, the present paper emerges with the main aim of
contributing to the design and analysis of the material flow in systems,
cells or workstations under this new “intelligent” denomination. For this,
besides offering a conceptual basis for some of the key points to be taken
into account and some general principles to consider in the design and
analysis of the material flow, some tips on how to define other possible
alternative material flow scenarios and a classification of the states of a
system, cell or workstation are offered as well. All this is done with the
intention of relating it to the use of simulation tools, which have been
briefly addressed with a special focus on the Witness simulation package. For
better comprehension, the previous elements are supported by a detailed
layout, other figures and a few expressions which could help in obtaining the
necessary data. These data and others will be used in the future, when
simulating the scenarios in search of the best material flow configurations.
Abstract: Auckland has a temperate climate with comfortably warm, dry summers and mild, wet winters. Auckland house design should focus not only on winter thermal performance and indoor thermal conditions, but also on indoor moisture control, which is closely related to indirect health effects such as dust mites, fungi, etc. Most Auckland houses are designed to use temporary heating for winter indoor thermal comfort. Based on field study data on the indoor microclimate conditions of two Auckland townhouses, one with a whole-home mechanical ventilation system and the other with a passive wind-directional skylight vent, this study evaluates and compares the indoor moisture conditions of two insulated townhouses using only temporary heating with different ventilation systems.
Abstract: In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation using the static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∩ D(mk) = { } and therefore | D(mj) ∩ D(mk) | = 0 [19]. Similarly, D(Mj) ∩ D(Mk) = { } and hence | D(Mj) ∩ D(Mk) | = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time than Boolean factorization operations. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and subsequently more power. However, this leads to a drawback in terms of the design-for-test attribute associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing such functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This consequently alters the testability properties of such circuits, i.e. it may increase, decrease or maintain the number of test input vectors needed for their exhaustive testability, subsequently affecting their generalized test vector computation.
We do not consider the issue of design-for-testability here but instead focus on the power consumption of the final logic implementation, after realization with a conventional CMOS process technology (the 0.35 micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals of 39.66% and 12.98% respectively, in comparison with the other factored RM forms.
Abstract: This policy participation action research explores the roles of
Thai government units during the 2010 fiscal year in creating value added for
the recycling business in the central part of Thailand. The research aims a)
to study how the government plays a role in supporting the business, and the
problems and obstacles in supporting it, and b) to design strategic actions,
i.e. short-, medium-, and long-term plans, to create value added for the
recycling business, particularly in local full-loop companies/organizations
licensed by the Wongpanit Waste Separation Plant as well as those licensed by
the Department of Provincial Administration. A mixed-method research design,
i.e., a combination of quantitative and qualitative methods, is utilized in
the present study in both the data collection and analysis procedures.
Quantitative data were analyzed by frequencies, percentages, mean scores, and
standard deviations, and aimed to note trends and generalizations.
Qualitative data were collected via semi-structured interviews and
focus-group interviews to explore the in-depth views of the operators. The
sample included 1,079 operators in eight provinces in the central part of
Thailand.
Abstract: This study determines the effect of naked and heparin-based
superparamagnetic iron oxide nanoparticles (SPIO-NPs) on the human cancer
cell line A2780. Doxorubicin (DOX) was used as the anticancer drug, entrapped
in the SPIO-NPs. This study aimed to decorate the nanoparticles with heparin,
a molecular ligand for 'active' targeting of cancerous cells, and to apply
the modified nanoparticles in cancer treatment. The nanoparticles containing
the anticancer drug DOX were prepared by a solvent evaporation and
emulsification cross-linking method. The physicochemical properties of the
nanoparticles were characterized by various techniques, and uniform
nanoparticles with an average particle size of 110±15 nm and high
encapsulation efficiencies (EE) were obtained. Additionally, sustained
release of DOX from the SPIO-NPs was achieved. Cytotoxicity tests showed that
the SPIO-DOX-HP had higher cell toxicity than HP alone, and confocal
microscopy analysis confirmed excellent cellular uptake efficiency. These
results indicate that HP-based SPIO-NPs have potential uses as anticancer
drug carriers and also have an enhanced anticancer effect.
Abstract: Recently, many partially blind signature schemes have been based on a single hard problem, such as the factoring, discrete logarithm, residuosity or elliptic curve discrete logarithm problems. However, sooner or later these systems will become broken and vulnerable if the factoring or discrete logarithm problem is cracked. This paper proposes a secure partially blind signature scheme based on both the factoring (FAC) problem and the elliptic curve discrete logarithm (ECDL) problem. As the proposed scheme relies on both hard problems, it has a solid structure and leaves the intruder with little chance of success, since it is very unlikely that the two hard problems can be solved simultaneously. In order to assess the security level of the proposed scheme, a performance analysis has been conducted. The results show that the proposed scheme effectively provides the partial blindness, randomization, unlinkability and unforgeability properties. Apart from this, we have also investigated the computation cost of the proposed scheme. The new scheme is robust, and it is difficult for malevolent attacks to break it.
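The blinding and unblinding mechanics common to blind signature schemes can be illustrated with the classic Chaum RSA construction, using insecure textbook-sized parameters; the paper's FAC+ECDL scheme itself is different and is not reproduced here:

```python
# Chaum-style RSA blinding sketch (toy parameters, not secure key sizes)
p, q = 61, 53
n, e, d = p * q, 17, 2753           # textbook RSA key pair: e*d = 1 mod (p-1)(q-1)

m = 42                              # message representative, m < n
r = 7                               # blinding factor, gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n    # requester blinds the message
signed_blinded = pow(blinded, d, n) # signer signs without ever seeing m
s = (signed_blinded * pow(r, -1, n)) % n   # requester removes the blinding
ok = pow(s, e, n) == m              # standard RSA verification of the signature
```

Unblinding works because (m · r^e)^d = m^d · r mod n, so multiplying by r^-1 leaves a valid signature m^d on the original message; a partially blind scheme additionally embeds agreed public information into the signed value.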
Abstract: The Internet is the global data communications
infrastructure based on the interconnection of both public and private
networks using protocols that implement Internetworking on a global
scale. Hence the control of protocol and infrastructure development,
resource allocation and network operation are crucial and interlinked
aspects. Internet Governance is the hotly debated and contentious
subject that refers to the global control and operation of key Internet
infrastructure such as domain name servers and resources such as
domain names. It is impossible to separate technical and political
positions as they are interlinked. Furthermore the existence of a
global market, transparency and competition impact upon Internet
Governance and related topics such as network neutrality and
security. Current trends and developments in Internet governance, with a
focus on the policy-making process, security and control, have been observed
to evaluate their current and future implications for the Internet. The
multi-stakeholder approach to Internet Governance discussed in this paper
presents a number of opportunities, issues and developments that will affect
the future direction of the Internet. Internet operation, maintenance and
advisory organisations such as the Internet Corporation for Assigned Names
and Numbers (ICANN) or the Internet Governance Forum (IGF) are currently in
the process of formulating policies for future Internet Governance. Given the
controversial nature of the issues at stake and the current lack of
agreement, it is predicted that institutional as well as market governance
will remain present for network access and content.
Abstract: Many difficulties are faced in the process of learning computer
programming. This paper proposes a system framework intended to reduce
cognitive load in learning programming. The first section focuses on the
process of learning and the shortcomings of current approaches to learning
programming. Finally, the proposed prototype is presented along with its
justification. In the proposed prototype, the concept map is used as a
visualization metaphor. Concept maps are similar to the mental schemas in
long-term memory and hence can reduce cognitive load well. In addition,
another method, the part-code method, is also proposed in this framework to
reduce cognitive load.
Abstract: Many footbridges have natural frequencies that
coincide with the dominant frequencies of the pedestrian-induced
load and therefore they have a potential to suffer excessive vibrations
under dynamic loads induced by pedestrians. Some of the design
standards introduce load models for pedestrian loads applicable for
simple structures. Load modeling for more complex structures, on the
other hand, is most often left to the designer. The main focus of this
paper is on the human induced forces transmitted to a footbridge and
on the ways these loads can be modeled to be used in the dynamic
design of footbridges. Design criteria and load models proposed by widely
used standards are also introduced and compared. The dynamic analysis of the
suspension bridge in Kolin in the Czech Republic was performed on a detailed
FEM model using the ANSYS program system. An attempt to model the load
imposed by a single person and by a crowd of pedestrians resulted in
displacements and accelerations that are compared with serviceability
criteria.
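A commonly used single-pedestrian walking load model from the literature, static weight plus the first few harmonics of the pacing frequency, can be sketched as follows; the coefficients below are typical textbook values, not those of the specific standards compared in the paper:

```python
import math

def pedestrian_force(t, G=700.0, fp=2.0,
                     alphas=(0.4, 0.1, 0.1),
                     phases=(0.0, math.pi / 2, math.pi / 2)):
    """Harmonic walking-load model: static weight G (N) plus the first few
    harmonics of the pacing frequency fp (Hz), with typical dynamic load
    factors alphas and phase lags phases."""
    dynamic = sum(a * math.sin(2 * math.pi * (i + 1) * fp * t - ph)
                  for i, (a, ph) in enumerate(zip(alphas, phases)))
    return G * (1.0 + dynamic)

# scan one second of walking to find the peak vertical force
peak = max(pedestrian_force(k / 1000.0) for k in range(1000))
```

Applying this time history (or its crowd-synchronized multiples) at the walking path nodes of an FEM model is the usual way the resulting displacements and accelerations are obtained for comparison with serviceability criteria.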
Abstract: This paper deals with the conceptual design of the
new aeroelastic demonstrator for the whirl flutter simulation. The
paper gives a theoretical background of the whirl flutter phenomenon
and describes the events of the whirl flutter occurrence in the
aerospace practice. The second part is focused on the experimental
research of whirl flutter on aeroelastically similar models. Finally, the
concept of the new aeroelastic demonstrator is described. The
demonstrator represents the wing and engine of the twin turboprop
commuter aircraft, including a driven propeller. It allows changes
of the main structural parameters influencing the whirl flutter
stability characteristics. It is intended for the experimental
investigation of the whirl flutter in the wind tunnel. The results will
be utilized for validation of analytical methods and software tools.