Abstract: Model mapping and transformation are important processes in high-level system abstraction, and form the cornerstone of model-driven architecture (MDA) techniques. Considerable research in this field has been devoted to static system abstraction, despite the fact that most systems are dynamic, with high-frequency changes in behavior. In this paper we provide an overview of work on behavior model mapping and transformation, based on: (1) the completeness of the platform independent model (PIM); (2) the semantics of behavioral models; (3) languages supporting behavior model transformation processes; and (4) an evaluation of model composition to identify the best approach to describing large, highly complex systems.
Abstract: Image fusion aims to enhance the perception
of a scene by combining important information captured by
different sensors. The Dual-Tree Complex Wavelet Transform (DT-CWT) has been
thoroughly investigated for image fusion, since it offers approximate shift
invariance and direction selectivity. However, it can only handle limited
directional information. To allow a more flexible directional expansion for
images, we propose a novel fusion scheme, referred to as the complex contourlet
transform (CCT). It successfully incorporates directional filter banks (DFB)
into the DT-CWT. As a result, it efficiently deals with images containing
contours and textures, while retaining the property of shift invariance.
Experimental results demonstrate that the method delivers high-quality fusion
performance and can facilitate many image processing applications.
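The fusion rule behind such transform-domain schemes can be sketched briefly. The CCT itself is not reproduced here; as a hedged illustration, the sketch below uses a single-level 1-D Haar transform as a stand-in for the multiresolution decomposition, together with the common average-approximation / maximum-magnitude-detail fusion rule. All function names are illustrative.

```python
import math

def haar_forward(x):
    """Single-level orthonormal Haar transform of an even-length signal."""
    s = math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from the approximation/detail coefficients."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

def fuse(x, y):
    """Average the approximation coefficients, pick max-magnitude details."""
    ax, dx = haar_forward(x)
    ay, dy = haar_forward(y)
    a = [(u + v) / 2 for u, v in zip(ax, ay)]
    d = [u if abs(u) >= abs(v) else v for u, v in zip(dx, dy)]
    return haar_inverse(a, d)
```

Fusing an image with itself reconstructs it exactly, which is a quick sanity check on any such rule.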
Abstract: Iris recognition technology is the most accurate, fastest and least
invasive compared with other biometric techniques that use, for example,
fingerprints, face, retina, hand geometry, voice or signature patterns. The
system developed in this study has the potential to play a key role in areas of
high-risk security, and can provide organizations with a fast and secure means
of granting access to such areas only to authorized personnel. The aim of this
paper is to perform iris region detection and localization of the iris inner
and outer boundaries. The system was implemented on the Windows platform using
the Visual C# programming language, an easy and efficient tool for image
processing that achieves good performance and accuracy. In particular, the
system includes two main parts. The first preprocesses the iris images using
Canny edge detection, segments the iris region from the rest of the image, and
determines the location of the iris boundaries by applying the Hough transform.
The proposed system was tested on 756 iris images from 60 eyes in the CASIA
iris database.
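For readers unfamiliar with the boundary-localization step, the circular Hough transform lets each edge point vote for candidate circle centres in an accumulator. The sketch below is a minimal fixed-radius illustration on synthetic edge points, not the authors' implementation; the grid size and the test circle are assumptions.

```python
import math

def hough_circle_center(points, radius, grid):
    """Fixed-radius circular Hough transform: each edge point votes for
    every candidate centre lying at ~`radius` from it; the centre with
    the most votes wins."""
    votes = {}
    for (px, py) in points:
        for a in range(grid):
            for b in range(grid):
                if round(math.hypot(px - a, py - b)) == radius:
                    votes[(a, b)] = votes.get((a, b), 0) + 1
    return max(votes, key=votes.get)

# Synthetic "edge map": 36 points on a circle of radius 5 centred at (10, 10).
edges = [(10 + 5 * math.cos(t * math.pi / 18), 10 + 5 * math.sin(t * math.pi / 18))
         for t in range(36)]
```

In practice the radius is also searched over a range, giving the inner and outer iris boundaries as the two strongest circles.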
Abstract: This paper presents a conceptual model of agreement options for
negotiation support in civil engineering decision making. The negotiation
support facilitates the solving of group choice decision-making problems in
civil engineering in order to reduce the impact of the mud volcano disaster in
Sidoarjo, Indonesia. The approach is based on the application of the analytic
hierarchy process (AHP) method for multi-criteria decisions on three levels of
the decision hierarchy. Decisions for reducing the impact are very complicated,
since many parties are involved within a critical time frame. Where a number of
stakeholders are involved in choosing a single alternative from a set of
solution alternatives, there are differing concerns caused by differing
stakeholder preferences, experiences, and backgrounds. Therefore, a group
choice decision support is required to enable each stakeholder to evaluate and
rank the solution alternatives before engaging in negotiation with the other
stakeholders. Such civil engineering solutions as alternatives are referred to
as agreement options, which are determined by identifying the possible
stakeholder choices, followed by determining the optimal solution for each
group of stakeholders.
Determination of the optimal solution is based on a game theory
model of n-person general sum game with complete information that
involves forming coalitions among stakeholders.
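As a rough illustration of the AHP step, a priority vector can be derived from a pairwise-comparison matrix; the geometric-mean (row) method shown below is one common variant and is not necessarily the one used by the authors. The example matrix is made up.

```python
import math

def ahp_priorities(matrix):
    """Priority vector from a pairwise-comparison matrix via the
    geometric-mean (row) method, normalised to sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# A consistent 3x3 comparison matrix: criterion A is judged 2x as
# important as B and 4x as important as C.
pairwise = [[1.0, 2.0, 4.0],
            [0.5, 1.0, 2.0],
            [0.25, 0.5, 1.0]]
```

For a perfectly consistent matrix like this one, the method recovers the exact weights 4/7, 2/7 and 1/7.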
Abstract: Operational safety of critical systems, such as nuclear power plants, industrial chemical processes and means of transportation, is a major concern for system engineers and operators. One means of assuring it is on-line safety monitors that deliver three safety tasks: fault detection and diagnosis, alarm annunciation, and fault control. While current monitors deliver these tasks, both benefits and limitations of their approaches have been highlighted. Drawing on those benefits, this paper develops a distributed monitor based on semi-independent agents, i.e. a multiagent system, and on monitoring knowledge derived from a safety assessment model of the monitored system. Agents are deployed hierarchically and provided with knowledge portions and collaboration protocols to reason about and integrate the operational conditions of the components of the monitored system. The monitor aims to address limitations arising from the large scale, complicated behaviour and distributed nature of monitored systems, and to deliver the aforementioned three monitoring tasks effectively.
Abstract: Random Access Memory (RAM) is an important device in a computer
system. It can represent a snapshot of how the computer has been used. With
the growth of its importance, computer memory has become a topic of discussion
in digital forensics. A number of tools have been developed to retrieve
information from memory. However, most of these tools are limited in their
ability to retrieve important information from computer memory. Hence, this
paper discusses the limitations and setbacks of two main techniques, process
signature search and process enumeration. Then, a new hybrid approach is
presented to minimize the setbacks of both individual techniques. This new
approach combines both techniques with the purpose of retrieving information
from the process block and other objects in computer memory. In addition, the
basic theory of address translation for x86 platforms is demonstrated in this
paper.
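The x86 address-translation theory referred to above rests on the standard 10-10-12 bit split of a 32-bit virtual address under non-PAE paging, which can be sketched as:

```python
def split_vaddr_x86(vaddr):
    """Decompose a 32-bit x86 virtual address (non-PAE paging) into the
    page-directory index (bits 31-22), page-table index (bits 21-12)
    and page offset (bits 11-0)."""
    pdi = (vaddr >> 22) & 0x3FF      # selects one of 1024 page-directory entries
    pti = (vaddr >> 12) & 0x3FF      # selects one of 1024 page-table entries
    offset = vaddr & 0xFFF           # byte offset inside the 4 KiB page
    return pdi, pti, offset
```

A memory-forensics tool walks the page directory and page table named by these indices to map a process's virtual addresses onto the raw memory dump.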
Abstract: A prototype of an anomaly detection system was developed to automate
the process of recognizing anomalies in roentgen images by utilizing fuzzy
histogram hyperbolization image enhancement and a back-propagation artificial
neural network. The system consists of image acquisition, a pre-processor, a
feature extractor, a response selector and output. Fuzzy histogram
hyperbolization is chosen to improve the quality of the roentgen image; its
steps consist of fuzzification, modification of the values of the membership
functions, and defuzzification. Image features are extracted after the quality
of the image is improved, and the extracted features are input to the
artificial neural network for anomaly detection. The number of nodes in the
proposed ANN layers was kept small. Experimental results indicate that the
fuzzy histogram hyperbolization method can be used to improve the quality of
the image, and that the system is capable of detecting anomalies in roentgen
images.
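The three fuzzy histogram hyperbolization steps can be sketched with one common formulation of the method (a power-law membership modification and a hyperbolic defuzzification mapping); the authors' exact parameters are not given, so the beta value here is an assumption.

```python
import math

def fuzzy_histogram_hyperbolization(pixels, beta=1.0, levels=256):
    """Fuzzification -> membership modification (power beta) ->
    defuzzification with the hyperbolic mapping
    g' = (L-1)/(e^-1 - 1) * (exp(-mu(g)^beta) - 1)."""
    gmin, gmax = min(pixels), max(pixels)
    span = (gmax - gmin) or 1
    scale = (levels - 1) / (math.exp(-1.0) - 1.0)
    out = []
    for g in pixels:
        mu = (g - gmin) / span                           # fuzzification
        mu_mod = mu ** beta                              # membership modification
        out.append(scale * (math.exp(-mu_mod) - 1.0))    # defuzzification
    return out
```

The mapping stretches the gray levels so the darkest input maps to 0 and the brightest to L-1, preserving order in between.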
Abstract: A logic model for analyzing the stability of complex systems is very
useful in many areas of science. In the real world, we are enlightened by
natural phenomena such as the "biosphere", the "food chain" and "ecological
balance". Through research and practice, and taking advantage of the
orthogonality and symmetry defined by the theory of multilateral matrices, we
put forward a logic analysis model of the stability of complex systems with
three relations, and prove it by means of mathematics. This logic model is
usually successful in analyzing the stability of a complex system. The
structure of the logic model is not only clear and simple, but can also be
easily used to research and solve many stability problems of complex systems.
As an application, some examples are given.
Abstract: Ethnicity identification of face images is of interest in
many areas of application, but existing methods are few and limited.
This paper presents a fusion scheme that uses block-based uniform
local binary patterns and Haar wavelet transform to combine local
and global features. In particular, the LL subband coefficients of the
whole face are fused with the histograms of uniform local binary
patterns from block partitions of the face. We applied principal component
analysis to the fused features and reduced the dimensionality of the feature
space from 536 down to around 15 without sacrificing much accuracy. We have
conducted a number
of preliminary experiments using a collection of 746 subject face
images. The test results show good accuracy and demonstrate the
potential of fusing global and local features. The fusion approach is
robust, making it easy to further improve the identification at both
feature and score levels.
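As a minimal illustration of the local feature used above: a uniform local binary pattern compares each pixel with its eight neighbours, forms an 8-bit code, and keeps only codes whose circular bit string has at most two 0/1 transitions. The neighbour ordering below is an arbitrary choice.

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code of pixel (r, c): each neighbour whose value is
    >= the centre contributes one bit, clockwise from the top-left."""
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(nbrs):
        if img[r + dr][c + dc] >= img[r][c]:
            code |= 1 << bit
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular 8-bit string has at most
    two 0/1 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

The 58 uniform codes plus one bin for all non-uniform codes give the compact per-block histograms that are fused with the wavelet coefficients.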
Abstract: A big organization may have multiple branches spread across different locations. Processing data from these branches becomes a huge task when innumerable transactions take place. Also, branches may be reluctant to forward their data for centralized processing, but are ready to pass on their association rules. Local mining may also generate a large number of rules. Further, it is not practically possible for all local data sources to be of the same size. A model is proposed for discovering valid rules from different-sized data sources, where the valid rules are the highly weighted rules. These rules can be obtained from the high-frequency rules generated by each of the data sources. A data source selection procedure is considered in order to synthesize rules efficiently. Support equalization is another proposed method, which focuses on eliminating low-frequency rules at the local sites themselves, thus reducing the number of rules by a significant amount.
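The weighting idea can be sketched as follows: each source's rules are weighted by the source's relative size and summed into a synthesized support, and only rules above a validity threshold survive. This is a hedged illustration of the general scheme, not the paper's exact formulas; the threshold and data are made up.

```python
def synthesize_rules(sources, min_support=0.2):
    """Weight each data source by its relative size, then compute the
    weighted (synthesized) support of every rule the sources report.
    `sources` is a list of (size, {rule: local_support}) pairs."""
    total = sum(size for size, _ in sources)
    synthesized = {}
    for size, rules in sources:
        w = size / total
        for rule, supp in rules.items():
            synthesized[rule] = synthesized.get(rule, 0.0) + w * supp
    # keep only highly weighted (valid) rules
    return {r: s for r, s in synthesized.items() if s >= min_support}
```

A rule frequent only in a small branch is naturally down-weighted, which is the point of accounting for source size.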
Abstract: Methods for organizing web data into groups in order to analyze
web-based hypertext data and facilitate data availability are very important
given the number of documents available online. Thus, the task of clustering
web-based document structures has many applications, e.g., improving
information retrieval on the web, better understanding user navigation
behavior, improving the servicing of web users' requests, and increasing web
information accessibility. In this paper we investigate a new approach for
clustering web-based hypertexts on the basis of their graph structures. The
hypertexts are represented as so-called generalized trees, which are more
general than the usual directed rooted trees, e.g., DOM trees. As an important
preprocessing step, we measure the structural similarity between the
generalized trees on the basis of a similarity measure d. Then we apply
agglomerative clustering to the obtained similarity matrix in order to create
clusters of hypertext graph patterns representing navigation structures. In the
present paper we run our approach on a data set of hypertext structures and
obtain good results in web structure mining. Furthermore, we outline the
application of our approach in web usage mining as future work.
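Agglomerative clustering on a precomputed distance matrix can be sketched briefly; the paper does not state its linkage criterion, so single linkage is assumed here, and the stopping rule (a fixed number of clusters) is also an assumption.

```python
def agglomerative(dist, k):
    """Single-linkage agglomerative clustering on a symmetric distance
    matrix `dist`: repeatedly merge the two closest clusters until only
    k clusters remain."""
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(dist[a][b] for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters
```

Given pairwise structural distances between generalized trees, the result groups hypertexts with similar navigation structure.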
Abstract: The application of Information Technology (IT) has revolutionized
the functioning of business all over the world. Its impact has been felt most
strongly among information-dependent industries, and tourism is one such
industry. The conceptual framework in this study represents an innovative
travel information searching system on mobile devices, used as a tool to
deliver travel information (such as hotels, restaurants, tourist attractions
and souvenir shops) to each user through traveler segmentation based on data
mining techniques: tourists' behavior patterns are segmented and then matched
with tourism products and services. This system innovation is designed for
incremental knowledge learning, and serves as a marketing strategy to support
businesses in responding to travelers' demands effectively.
Abstract: Resource discovery is one of the chief services of a grid. A new approach to discovering resources in a grid through learning automata is propounded in this article. The objective of this resource-discovery service is to select resources based upon the user's applications and on economic criteria, that is to say, opting for a provider that can accomplish the user's tasks in the most economical manner. The service proceeds in two phases. First, we offer an application-based categorization by means of an artificial neural network: the user sets his or her application as the input vector of the network, and the output vector describes the suitability of each resource for the presented task. Second, the most economical option among those put forward in the previous stage that can fulfill the task in question is picked out. The resource choice is carried out by means of the presented algorithm, based upon learning automata.
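The learning-automata component can be illustrated with the standard linear reward-inaction (L_R-I) update, in which the action-probability vector moves toward the chosen action only when the environment rewards it; the learning rate below is an assumption, and this is a generic scheme rather than the paper's specific algorithm.

```python
def lri_update(probs, chosen, rewarded, a=0.1):
    """Linear reward-inaction (L_R-I) update: on reward, shift probability
    mass toward the chosen action; on penalty, leave the vector unchanged."""
    if not rewarded:
        return list(probs)
    return [p + a * (1 - p) if i == chosen else p * (1 - a)
            for i, p in enumerate(probs)]
```

Repeated rewards for the same resource drive its selection probability toward 1, which is how the automaton converges on the most economical provider.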
Abstract: When a small H/W IP is designed, we can develop an appropriate
verification environment by observing the simulated signal waveforms, or by
using serial test vectors for a fixed output. In the case of the design and
verification of a massive parallel processor with multiple IPs, it is
difficult to build a verification system within the existing common
verification environment, and to verify each partial IP. A TestDrive
verification environment can build an easy and reliable verification system
that produces highly intuitive results by applying ModelSim and
SystemVerilog's DPI. It shows many advantages; for example, a high-level
design of a GPGPU processor can be migrated to an FPGA board immediately.
Abstract: A new Markovianity approach is introduced in this paper. This
approach reduces the response time of the classic Markov Random Fields
approach. First, one region is determined by a clustering technique and
excluded from the study. The remaining pixels form the study zone and are
selected for a Markovian segmentation task. With the Selective Markovianity
approach, the segmentation process is faster than the classic one.
Abstract: A new automatic system for the recognition and reconstruction of rescaled and/or rotated partially occluded objects is presented. The objects to be recognized are described by 2D views, and each view is occluded by several half-planes. The whole object views and their visible parts (linear cuts) are then stored in a database. To establish whether a region R of an input image represents a possibly occluded object, the system generates a set of linear cuts of R and compares them with the elements in the database. Each linear cut of R is associated with the most similar database linear cut. R is recognized as an instance of the object O if the majority of the linear cuts of R are associated with a linear cut of views of O. In the case of recognition, the system reconstructs the occluded part of R and determines the scale factor and the orientation in the image plane of the recognized object view. The system has been tested on two different datasets of objects, showing good performance both in terms of recognition and reconstruction accuracy.
Abstract: Coloured Petri nets (CPNs) have been widely adopted in various areas of Computer Science, including protocol specification, performance evaluation, distributed systems and coordination in multi-agent systems. A CPN provides a graphical representation of a system and has a strong mathematical foundation for proving various properties. This paper proposes a novel representation of a coloured Petri net using an extension of logic programming called abductive logic programming (ALP), which is purely based on classical logic. Under such a representation, an implementation of a CPN can be directly obtained, in which every inference step can be treated as a kind of equivalence-preserving transformation. We describe how to implement a CPN under such a representation using common meta-programming techniques in Prolog. We call our framework CPN-LP and illustrate its applications in modeling an intelligent agent.
Abstract: This paper presents a general trainable framework for fast and
robust upright human face and non-human object detection and verification in
static images. To enhance the performance of the detection process, the
technique we develop is based on the combination of a fast neural network
(FNN) and a classical neural network (CNN). In the FNN, a useful correlation
between the input image and the weights of the hidden neurons is exploited to
sustain a high level of detection accuracy. This enables the use of the
Fourier transform, which significantly speeds up detection time. The CNN is
responsible for verifying the face region. A bootstrap algorithm is used to
collect non-human objects, adding false detections to the training process for
human and non-human objects. Experimental results on test images with both
simple and complex backgrounds demonstrate that the proposed method achieves a
high detection rate and a low false positive rate in detecting both human
faces and non-human objects.
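The Fourier-based speed-up relies on the correlation theorem: cross-correlation in the spatial domain becomes a pointwise product in the frequency domain. The sketch below verifies this equivalence with a naive O(n^2) DFT on a toy 1-D signal (a real detector would use a 2-D FFT); all names are illustrative.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (stand-in for an FFT)."""
    n = len(x)
    sign = 2j if inverse else -2j
    out = [sum(x[t] * cmath.exp(sign * cmath.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def correlate_freq(x, h):
    """Circular cross-correlation via the correlation theorem:
    corr(x, h) = IDFT(conj(DFT(x)) * DFT(h))."""
    X, H = dft(x), dft(h)
    prod = [xc.conjugate() * hc for xc, hc in zip(X, H)]
    return [v.real for v in dft(prod, inverse=True)]

def correlate_direct(x, h):
    """Reference: circular cross-correlation computed directly."""
    n = len(x)
    return [sum(x[t] * h[(t + k) % n] for t in range(n)) for k in range(n)]
```

With an FFT, the frequency-domain route costs O(n log n) per window instead of O(n^2), which is the source of the claimed speed-up.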
Abstract: Wireless sensor networks consist of small battery-powered devices
with limited energy resources. Once deployed, the small sensor nodes are
usually inaccessible to the user, and thus replacement of the energy source is
not feasible. Hence, one of the most important issues that needs to be
addressed in order to improve the life span of the network is energy
efficiency. To overcome this limitation, much research has been done;
clustering is one of the representative approaches. In clustering, the cluster
heads gather data from nodes and send them to the base station. In this paper,
we introduce a dynamic clustering algorithm using a genetic algorithm. This
algorithm takes different parameters into consideration to increase the
network lifetime. To prove the efficiency of the proposed algorithm, we
simulated it and compared it with the LEACH algorithm using MATLAB.
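A genetic-algorithm formulation of cluster-head selection can be sketched as a bit string (one gene per node, marking it as a cluster head) evolved against a fitness function. The toy fitness below, combining residual node energy with a penalty on the head count, is purely illustrative and is not the paper's objective function; all parameters are assumptions.

```python
import random

def fitness(chromosome, energy):
    """Toy fitness: total residual energy of the chosen cluster heads,
    penalised when the head count strays from a target of 2."""
    heads = [i for i, g in enumerate(chromosome) if g]
    return sum(energy[i] for i in heads) - 5.0 * abs(len(heads) - 2)

def evolve(energy, pop_size=20, generations=30, seed=7):
    """Elitist GA: keep the top half, refill with one-point crossover
    children and occasional single-bit mutation."""
    rng = random.Random(seed)
    n = len(energy)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, energy), reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:        # mutation
                j = rng.randrange(n)
                child[j] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, energy))

node_energy = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3]  # residual energy per node
```

In a real deployment the fitness would also fold in node-to-head distances and expected transmission cost, which is what "takes different parameters into consideration" refers to.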
Abstract: The rapid expansion of the web is causing constant growth of
information, leading to several problems such as increased difficulty in
extracting potentially useful knowledge. Web content mining confronts this
problem by gathering explicit information from different web sites for access
and knowledge discovery. Query interfaces of web databases share common
building blocks. After extracting information with a parsing approach, we use
a new data mining algorithm to match a large number of database schemas at a
time. Using this algorithm increases the speed of information matching. In
addition, instead of simple 1:1 matching, it performs complex (m:n) matching
between query interfaces. In this paper we present a novel correlation mining
algorithm that matches correlated attributes at smaller cost. This algorithm
uses the Jaccard measure to distinguish positively and negatively correlated
attributes. After that, the system matches the user query against the
different query interfaces in a specific domain and finally chooses the query
interface nearest to the user query to answer it.
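The Jaccard measure referred to above is the ratio of intersection to union of the sets of interfaces in which two attributes occur; a minimal sketch follows, where the decision threshold is an assumption.

```python
def jaccard(a, b):
    """Jaccard measure between the sets of query interfaces in which
    two attributes occur."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def correlation_sign(a, b, threshold=0.5):
    """Positively correlated attributes co-occur often (high Jaccard);
    negatively correlated ones rarely co-occur."""
    return "positive" if jaccard(a, b) >= threshold else "negative"
```

Grouping positively correlated attributes is what enables the complex (m:n) matches between query interfaces.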