Abstract: The number of features required to represent an image
can be very large. Using all available features to recognize objects
can suffer from the curse of dimensionality. Feature selection and
extraction is the pre-processing step of image mining. The main
issues in analyzing images are the effective identification of
features and their extraction. The mining problem addressed here is
the grouping of features for different shapes. Experiments have been
conducted using shape outlines as the features. Shape outline
readings are put through normalization and a dimensionality
reduction process using an eigenvector-based method to produce a
new set of readings. After this pre-processing step, the data are
grouped by their shapes. Through statistical analysis of these
readings together with peak measures, a robust classification and
recognition process is achieved. Tests showed that the suggested
methods are able to automatically recognize objects through their
shapes. Finally, experiments also demonstrate the system's invariance
to rotation, translation, scale, reflection and, to a small degree,
distortion.
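The eigenvector-based dimensionality reduction described above is essentially principal component analysis. The following is a minimal sketch of that idea; the function name `pca_reduce` and the normalization details are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def pca_reduce(readings, k):
    """Normalize shape-outline readings, then project them onto the top-k
    eigenvectors of their covariance matrix (classic PCA)."""
    X = np.asarray(readings, dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # normalization step
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # k largest-eigenvalue vectors
    return X @ top                              # the new, reduced readings
```

Each row of the result is a lower-dimensional "reading" on which the grouping step can then operate.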
Abstract: In order to make surfing the Internet faster, and to save redundant processing load on each request for the same web page, many caching techniques have been developed to reduce the latency of retrieving data on the World Wide Web. In this paper we give a quick overview of existing web caching techniques used for dynamic web pages, then we introduce a design and implementation model that takes advantage of the "URL Rewriting" feature in some popular web servers, e.g. Apache, to provide an effective approach to caching dynamic web pages.
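The core of a URL-rewriting cache can be sketched as a store keyed on a canonicalized URL, so that requests differing only in query-parameter order hit the same entry. The names `canonical_key` and `PageCache` are hypothetical; an actual deployment would normalize URLs with the web server's rewrite rules (e.g. Apache mod_rewrite) rather than in application code:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

def canonical_key(url):
    """Reduce a dynamic URL to a stable cache key: sorted query parameters,
    fragment dropped -- mimicking what a rewrite rule would normalize."""
    parts = urlsplit(url)
    query = urlencode(sorted(parse_qsl(parts.query)))
    return parts.path + "?" + query

class PageCache:
    """Cache of rendered pages keyed on the canonicalized URL."""
    def __init__(self):
        self._store = {}

    def get_or_render(self, url, render):
        key = canonical_key(url)
        if key not in self._store:      # cache miss: render the page once
            self._store[key] = render(url)
        return self._store[key]
```

With this keying, `/news?id=7&lang=en` and `/news?lang=en&id=7` are served from the same cached rendering.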
Abstract: There are three approaches to complete Bayesian
Network (BN) model construction: total expert-centred, total
data-centred, and semi data-centred. These three approaches
constitute the basis of the empirical investigation undertaken and
reported in this paper. The objective is to determine which of these
three approaches is optimal for the construction of a BN-based model
for the performance assessment of students' laboratory work in a
virtual electronic laboratory environment. BN models were constructed
using all three approaches, with respect to the focus domain, and
compared using a set of optimality criteria. In addition, the impact
of the size and source of the training data on the performance of the
total data-centred and semi data-centred models was investigated. The
results of the investigation provide additional insight for BN model
constructors and contribute to the literature by providing supportive
evidence for the conceptual feasibility and efficiency of structure
and parameter learning from data. In addition, the results highlight
other interesting themes.
Abstract: This paper demonstrates a bus location system for
route buses through experiments in a real environment. A bus
location system is a system that provides information such as bus
delays and positions. The system uses the actual service and
position data of buses, and this information should match the data
in the database. The system has two possible problems. First,
preparing devices to obtain bus positions could be costly. Second,
it could be difficult to match the service data of buses. To avoid
these problems, we developed the system at low cost and in a short
time by using smartphones with GPS and the existing bus route
system. The system realizes path planning that considers bus delays,
and displays the positions of buses on a map. The bus location system
was demonstrated on route buses with smartphones for two months.
Abstract: The modeling paradigm places models at the center of the development process. These models are represented by languages such as UML, the language standardized by the OMG, which has become essential for development. Similarly, the ontology engineering paradigm places ontologies at the center of the development process; in this paradigm, OWL is the principal language for knowledge representation. Building ontologies from scratch is generally a difficult task. The bridging between UML and OWL has appeared in several respects, such as classes and associations. In this paper, we profit from the convergence between UML and OWL to propose an approach, based on meta-modelling and graph grammars and registered in the MDA architecture, for the automatic generation of OWL ontologies from UML class diagrams. The transformation is based on transformation rules; the level of abstraction in these rules is close to the application in order to obtain usable ontologies. We illustrate this approach with an example.
Abstract: Smart Dust particles are small smart materials used for generating weather maps. We investigate the question of the optimal number of Smart Dust particles necessary for generating precise, computationally feasible and cost-effective 3-D weather maps. We also give an optimal matching algorithm for the generalized scenario in which there are N Smart Dust particles and M ground receivers.
Abstract: Computerized alarm systems have been applied
increasingly to nuclear power plants. For existing plants, an add-on
computer alarm system is often installed in the control rooms. Alarm
avalanches during plant transients are a major problem with the
alarm systems in nuclear power plants. Computerized alarm systems
can process alarms to reduce their number during plant transients.
This paper describes various alarm processing methods, an alarm
cause tracking function, and various alarm presentation schemes for
showing alarm information to the operators effectively. These were
considered during the development of several computerized alarm
systems for Korean nuclear power plants and were found to be helpful
to the operators.
Abstract: The complex hybrid and nonlinear nature of many processes that are met in practice causes problems with both structure modelling and parameter identification; therefore, obtaining a model that is suitable for MPC is often a difficult task. The basic idea of this paper is to present an identification method for a piecewise affine (PWA) model based on a fuzzy clustering algorithm. First we introduce the PWA model. Next, we tackle the identification method. We treat the fuzzy clustering algorithm, deal with the projections of the fuzzy clusters into the input space of the PWA model and explain the estimation of the parameters of the PWA model by means of a modified least-squares method. Furthermore, we verify the usability of the proposed identification approach on a hybrid nonlinear batch reactor example. The results suggest that the batch reactor can be efficiently identified and thus formulated as a PWA model, which can eventually be used for model predictive control purposes.
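As a rough sketch of the pipeline described (not the paper's exact algorithm), one can cluster input-output data with plain fuzzy c-means, project the clusters onto the input space by hard assignment, and fit one affine map per cluster; the modified least-squares method of the paper is replaced here by ordinary `numpy.linalg.lstsq`, and all names are illustrative:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: returns the membership matrix U and the centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

def identify_pwa(x, y, c=2):
    """Cluster (input, output) pairs, project clusters onto the input space by
    hard assignment, and fit one affine local model y ~ a*x + b per cluster."""
    U, _ = fcm(np.column_stack([x, y]), c)
    labels = U.argmax(axis=1)
    models = []
    for k in range(c):
        xs, ys = x[labels == k], y[labels == k]
        A = np.column_stack([xs, np.ones_like(xs)])
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        models.append(coef)                    # (slope, offset) of local model
    return models
```

On data generated by two affine regimes, the recovered local models approximate the slopes and offsets of each regime.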
Abstract: Property investment in the real estate industry carries
high risk, due to the uncertainty factors that affect the decisions
made, and high cost. The analytic hierarchy process has existed for
some time, in which an expert's opinion is used to measure the
uncertainty of the risk factors for risk analysis. However,
different levels of expert experience create different opinions and
lead to conflict among the experts in the field. The objective of
this paper is to propose a new technique to measure the uncertainty
of the risk factors, based on a multidimensional data model and data
mining techniques, as a deterministic approach. The proposed
technique consists of a basic framework which includes four modules:
user, technology, end-user access tools and applications. The
property investment risk analysis is defined as a micro-level
analysis, as the features of the property are considered in the
analysis in this paper.
Abstract: We have developed a distributed asynchronous Web-based
training system. In order to improve the scalability and robustness
of this system, all contents and functions are realized on
mobile agents. These agents are distributed to computers, and they
can use a peer-to-peer network based on a modified Content-Addressable
Network. In this system, all computers offer the functions and
exercises by themselves. However, a system in which all computers
behave identically is not realistic. In this paper, as a solution to
this issue, we present an e-Learning system that is composed of
computers of different participation types. Enabling computers of
different participation types improves the convenience of the system.
Abstract: This paper presents a dynamic adaptation scheme for
the frequency of inter-deme migration in distributed genetic
algorithms (GAs), and its VLSI hardware design. A distributed GA,
or multi-deme-based GA, uses multiple populations which evolve
concurrently. The purpose of dynamic adaptation is to improve
convergence performance so as to obtain better solutions. Through
simulation experiments, we show that our scheme achieves better
performance than fixed-frequency migration schemes.
Abstract: Target tracking and localization are important applications
in wireless sensor networks. In these applications, sensor nodes
collectively monitor and track the movement of a target. They have
limited energy supplied by batteries, so energy efficiency is essential
for sensor networks. Most existing target tracking protocols need to
wake up sensors periodically to perform tracking. Some unnecessary
energy waste is thus introduced. In this paper, an energy efficient
protocol for target localization is proposed. In order to preserve
energy, the protocol fixes the number of sensors for target tracking,
but it retains the quality of target localization at an acceptable
level. By selecting a set of sensors for target localization, the other
sensors can sleep rather than periodically wake up to track the target.
Simulation results show that the proposed protocol saves a significant
amount of energy and also prolongs the network lifetime.
Abstract: This paper presents the use of anti-sway angle control
approaches for a two-dimensional gantry crane with disturbance
effects in the dynamic system. A delayed feedback signal (DFS) and a
proportional-derivative (PD)-type fuzzy logic controller are the
techniques used in this investigation to actively control the sway
angle of the rope of the gantry crane system. A nonlinear overhead
gantry crane system is considered, and the dynamic model of the
system is derived using the Euler-Lagrange formulation. A complete
analysis of the simulation results for each technique is presented in
the time domain and frequency domain respectively. The performances
of both controllers are examined in terms of sway angle suppression
and disturbance cancellation. Finally, a comparative assessment of
the impact of each controller on the system performance is presented
and discussed.
Abstract: In this paper, we present a comparative study between two computer vision systems for object recognition and tracking. These algorithms describe two different approaches based on regions, constituted by sets of pixels, which parameterize objects in shot sequences. For image segmentation and object detection, the FCM technique is used; the overlap between cluster distributions is minimized by the use of a suitable color space (other than the RGB one). The first technique takes into account a priori probabilities governing the computation of the various clusters to track objects. A Parzen kernel method is described that allows identifying the players in each frame; we also show the importance of the choice of the standard deviation of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
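The SVD-based correspondence step can be sketched as follows, assuming a Gaussian proximity matrix built from Mahalanobis distances between region descriptors; the function name, the shared covariance matrix and the `sigma` parameter are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def match_regions(desc_a, desc_b, cov, sigma=1.0):
    """Pair region descriptors of two frames: build a Gaussian proximity
    matrix from Mahalanobis distances, then use an SVD step to enforce
    one-to-one pairing (proximity and exclusion), in the style of the
    Scott and Longuet-Higgins correspondence algorithm."""
    inv = np.linalg.inv(cov)
    diff = desc_a[:, None, :] - desc_b[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, inv, diff)   # squared Mahalanobis
    G = np.exp(-d2 / (2.0 * sigma ** 2))                # proximity matrix
    U, _, Vt = np.linalg.svd(G)
    P = U @ np.eye(*G.shape) @ Vt        # nearest "permutation-like" matrix
    # a correspondence (i, j) must dominate both its row and its column
    return [(i, j) for i in range(P.shape[0]) for j in range(P.shape[1])
            if P[i, j] == P[i].max() and P[i, j] == P[:, j].max()]
```

Replacing the singular values by ones yields a matrix whose dominant entries satisfy both the proximity and the exclusion principle at once.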
Abstract: Document clustering has become an essential technology
with the popularity of the Internet, which means that fast and
high-quality document clustering techniques are a core topic. Text
clustering, or simply clustering, is about discovering semantically
related groups in an unstructured collection of documents. Clustering
has been very popular for a long time because it provides unique
ways of digesting and generalizing large amounts of information.
One of the issues in clustering is extracting the proper features
(concepts) of a problem domain. Existing clustering technology mainly
focuses on term weight calculation. To achieve more accurate
document clustering, more informative features, including concept
weights, are important. Feature selection is important for the
clustering process because irrelevant or redundant features may
misguide the clustering results. To counteract this issue, the
proposed system presents concept weights for a text clustering system
developed on the basis of a k-means algorithm in accordance with the
principles of ontology, so that the important words of a cluster can
be identified by their weight values. To a certain extent, this has
resolved the semantic problem in specific areas.
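A minimal sketch of k-means with per-feature concept weights in the distance is given below. The weighting scheme and all names are illustrative; in the system described, the weights would be derived from an ontology rather than supplied by hand:

```python
import random

def weighted_kmeans(docs, k, weights, iters=20, seed=0):
    """k-means where each feature (concept) contributes to the distance in
    proportion to its weight, so heavily weighted concepts dominate the
    cluster assignment."""
    rng = random.Random(seed)
    centers = rng.sample(docs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for d in docs:
            # assign to the center with the smallest weighted squared distance
            i = min(range(k), key=lambda c: sum(
                w * (a - b) ** 2 for w, a, b in zip(weights, d, centers[c])))
            clusters[i].append(d)
        # recompute centers; keep the old center if a cluster went empty
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```

Setting a concept's weight to zero makes that feature invisible to the clustering, which is how irrelevant or redundant features can be suppressed.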
Abstract: When reconstructing a scenario, it is necessary to
know the structure of the elements present in the scene in order to
have an interpretation. In this work we link 3D scene reconstruction
to evolutionary algorithms through stereo vision theory. We consider
stereo vision as a method that provides the reconstruction of a
scene using only a couple of images of the scene and performing
some computation. Through several images of a scene, captured from
different positions, stereo vision can give us an idea of the
three-dimensional characteristics of the world. Stereo vision usually
requires two cameras, in analogy to the mammalian vision system. In
this work we employ only one camera, which is translated along a
path, capturing images at regular intervals. As we cannot perform
all the computations required for an exhaustive reconstruction, we
employ an evolutionary algorithm to partially reconstruct the scene
in real time. The algorithm employed is the fly algorithm, which
uses "flies" to reconstruct the principal characteristics of the
world following certain evolutionary rules.
Abstract: This paper proposes a method, combining color and layout features, for identifying documents captured from low-resolution handheld devices. On one hand, the document image color density surface is estimated and represented with an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and hierarchically represented. Our identification method first uses the color information in the documents in order to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remainder of the search space.
Abstract: In this paper we present the semantic assistant agent
(SAA), an open source digital library agent which takes a user query
for finding information in the digital library, takes resources'
metadata and stores it semantically. SAA uses the Semantic Web to
improve browsing and searching for resources in the digital library.
All metadata stored in the library are available in RDF format for
querying and processing by SemanSreach, which is part of the SAA
architecture. The architecture includes a generic RDF-based model
that represents relationships among objects and their components.
Queries against these relationships are supported by an RDF triple
store.
Abstract: The belief K-modes method (BKM) is a new clustering
technique handling uncertainty in the attribute values of objects
in both the cluster construction task and the classification one.
As in the standard version of the method, the BKM results depend on
the chosen initial modes. Therefore, a selection method for initial
modes is developed in this paper, aiming at improving the performance
of the BKM approach. Experiments with several real data sets show
that, with the developed initial-mode selection method, the
clustering algorithm produces more accurate results.
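One common way to select initial modes for a k-modes-style algorithm is density-based: score each object by the frequency of its attribute values and greedily pick dense, mutually dissimilar objects. The sketch below illustrates that general idea only; it is not necessarily the selection method developed in the paper:

```python
from collections import Counter

def select_initial_modes(objects, k):
    """Pick k initial modes for a k-modes-style algorithm: score each object
    by the summed frequency (density) of its attribute values, then greedily
    add high-density objects that are far, in simple matching distance,
    from the modes already chosen."""
    freqs = [Counter(obj[j] for obj in objects) for j in range(len(objects[0]))]
    def density(o):
        return sum(freqs[j][v] for j, v in enumerate(o)) / len(objects)
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))   # simple matching distance
    modes = [max(objects, key=density)]            # densest object first
    while len(modes) < k:
        modes.append(max(objects,
                         key=lambda o: density(o) * min(dist(o, m) for m in modes)))
    return modes
```

Scaling density by the distance to the modes already chosen keeps the initial modes both representative and spread out, which is what makes the subsequent clustering less sensitive to initialization.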
Abstract: Clustering in high-dimensional space is a difficult
problem which is recurrent in many fields of science and
engineering, e.g., bioinformatics, image processing, pattern
recognition and data mining. In a high-dimensional space, some of
the dimensions are likely to be irrelevant, thus hiding the possible
clustering. In very high dimensions it is common for all the objects
in a dataset to be nearly equidistant from each other, completely
masking the clusters. Hence, the performance of the clustering
algorithm decreases.
In this paper, we propose an algorithmic framework which
combines the reduct concept of rough set theory with the k-means
algorithm to remove the irrelevant dimensions of a high-dimensional
space and obtain appropriate clusters. Our experiment on test data
shows that this framework increases the efficiency of the clustering
process and the accuracy of the results.
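A toy sketch of the framework's two stages follows, with a crude pairwise discernibility count standing in for a proper rough-set reduct; the discretization, the scoring and the function names are illustrative assumptions:

```python
import random

def discernibility(objects, j):
    """Count the object pairs that attribute j distinguishes after a coarse
    discretization -- a crude proxy for a rough-set reduct criterion."""
    vals = [round(o[j]) for o in objects]
    return sum(a != b for i, a in enumerate(vals) for b in vals[i + 1:])

def reduct_then_kmeans(objects, k, keep, iters=20, seed=0):
    """Keep the `keep` most discerning dimensions, then run plain k-means
    on the reduced data."""
    dims = sorted(range(len(objects[0])),
                  key=lambda j: -discernibility(objects, j))[:keep]
    X = [[o[j] for j in dims] for o in objects]      # drop irrelevant dims
    rng = random.Random(seed)
    centers = rng.sample(X, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in X:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return dims, groups
```

A constant (irrelevant) dimension distinguishes no object pairs, so it is discarded before k-means runs, which is the effect the framework aims for.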