Abstract: The Petri net tool INA is well known in the Petri net community. However, it lacks a graphical environment to create and analyse INA models. Building a modelling tool for design and analysis from scratch (for the INA tool, for example) is generally a prohibitive task. The meta-modelling approach is useful for such problems since it allows the modelling of the formalisms themselves. In this paper, we propose an approach based on the combined use of meta-modelling and graph grammars to automatically generate a visual modelling tool for INA for analysis purposes. In our approach, the UML class diagram formalism is used to define a meta-model of INA models. The meta-modelling tool ATOM3 is used to generate a visual modelling tool according to the proposed INA meta-model. We have also proposed a graph grammar to automatically generate the INA description of graphically specified Petri net models, allowing the user to avoid the errors that arise when this description is written manually. The INA tool is then used to simulate and analyse the resulting INA description. Our environment is illustrated through an example.
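A minimal sketch of the kind of model-to-text transformation the graph grammar automates, assuming a Petri net held as plain Python data; the output format below is purely illustrative and is not the actual INA input syntax.

```python
# Emit a textual description of a place/transition net. The keywords
# PLACE/TRANSITION/MARKING below are invented for illustration only;
# the real INA file format differs.

def net_to_description(places, transitions, arcs):
    """places: {name: initial_marking}; transitions: list of names;
    arcs: list of (source, target, weight) tuples."""
    lines = []
    for p, marking in places.items():
        pre = [(s, w) for s, t, w in arcs if t == p]    # input arcs
        post = [(t, w) for s, t, w in arcs if s == p]   # output arcs
        lines.append(f"PLACE {p} MARKING {marking} PRE {pre} POST {post}")
    for t in transitions:
        lines.append(f"TRANSITION {t}")
    return "\n".join(lines)

# Example: a producer/consumer-style net fragment.
print(net_to_description({"buffer": 0, "idle": 1},
                         ["produce", "consume"],
                         [("idle", "produce", 1), ("produce", "buffer", 1),
                          ("buffer", "consume", 1)]))
```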
Abstract: Due to the availability of powerful image processing software and the improvement of human computer skills, it has become easy to tamper with images. Manipulation of digital images in fields such as courts of law and medical imaging creates a serious problem nowadays. Copy-move forgery is one of the most common types of forgery: part of an image is copied and pasted onto another part of the same image, typically to cover an important scene. In this paper, a copy-move forgery detection method based on the Fourier transform is proposed. First, the image is divided into equally sized blocks and the Fourier transform is computed for each block. Similarity between the Fourier transforms of different blocks provides an indication of a copy-move operation. The experimental results show that the proposed method runs in reasonable time and works well for both grayscale and colour images. The computational complexity is reduced by using the Fourier transform.
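As a hedged illustration of the pipeline described above, the sketch below divides a grayscale NumPy image into fixed-size blocks, takes the magnitude of each block's 2-D FFT as its feature, and flags block pairs with near-identical spectra; the block size and threshold are assumptions, not the paper's parameters.

```python
import numpy as np

def detect_copy_move(img, block=16, threshold=1e-3):
    """img: 2-D grayscale array. Returns pairs of suspicious blocks."""
    h, w = img.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # FFT magnitude as a (shift-insensitive) block signature
            f = np.abs(np.fft.fft2(img[y:y+block, x:x+block]))
            feats.append(f.ravel())
            coords.append((y, x))
    feats = np.array(feats)
    matches = []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = np.mean((feats[i] - feats[j]) ** 2)
            if d < threshold:
                matches.append((coords[i], coords[j]))
    return matches  # block-position pairs with near-identical spectra
```

In practice the feature vectors would be sorted lexicographically so that only neighbouring rows are compared, avoiding the quadratic pairwise scan used here for brevity.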
Abstract: The Emergency Department of a medical center in Taiwan collaborated in this research. A predictive model for the triage system is constructed, covering the procedure from parameter selection to sample screening. 2,000 patient records were chosen randomly by computer. After applying three data mining classification methods (Multi-group Discriminant Analysis, Multinomial Logistic Regression, Back-propagation Neural Networks), it is found that Back-propagation Neural Networks can best distinguish the patients' degree of emergency, with an accuracy rate as high as 95.1%. The Back-propagation Neural Network with the highest accuracy rate is incorporated into the triage acuity expert system developed in this research. With data mining applied to it, the predictive model of the triage acuity expert system can be updated regularly, both to improve the system and for education and training, and is not affected by subjective factors.
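A minimal sketch of the back-propagation classification stage, using scikit-learn's MLPClassifier as a stand-in for the paper's network; the six input features and the five-level triage coding are assumptions, and the data below is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))      # e.g. vitals, age, pain score (assumed)
y = rng.integers(1, 6, size=2000)   # triage levels 1..5 (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)                 # back-propagation training
print("accuracy:", clf.score(X_te, y_te))  # ~chance on synthetic labels
```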
Abstract: The study of proteomics has reached unexpected levels of interest, as a direct consequence of its discovered influence on complex biological phenomena, such as problematic diseases like cancer. This paper presents the authors' latest achievements regarding the analysis of networks of proteins (interactome networks) by computing the betweenness centrality measure more efficiently. The paper introduces the concept of betweenness centrality and then describes how betweenness computation can help interactome network analysis. Current sequential implementations of betweenness computation do not perform satisfactorily in terms of execution times. The paper's main contribution is a speedup technique for betweenness computation, based on modified shortest path algorithms for sparse graphs. Three optimized generic algorithms for betweenness computation are described and implemented, and their performance is tested against real biological data from the IntAct dataset.
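As background for the speedup discussion, the sketch below implements Brandes' algorithm, the standard sequential baseline for betweenness centrality on unweighted sparse graphs; it is not the authors' optimized variant.

```python
from collections import deque

def betweenness(adj):
    """adj: {node: iterable of neighbours} for an unweighted graph."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:                  # first visit
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:       # shortest path via v
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc  # halve the values for undirected graphs if desired
```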
Abstract: The work presented in this paper focuses on Knowledge Management services enabling CSCW (Computer Supported Cooperative Work) applications to provide an appropriate adaptation to the user and the situation in which the user is working. In this paper, we explain how a knowledge management system can be designed to support users in different situations by exploiting contextual data, users' preferences, and profiles of involved artifacts (e.g., documents, multimedia files, mockups...). The presented work is rooted in the experience we gained in the MILK project and early steps made in the MAIS project.
Abstract: With the development of ubiquitous computing, current user interaction approaches based on keyboard, mouse and pen are no longer sufficient. The limitations of these devices also limit the usable command set. The direct use of hands as an input device is an attractive method for providing natural Human Computer Interaction, which has evolved from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to fully fledged multi-participant Virtual Environment (VE) systems. Imagine the human-computer interaction of the future: a 3D application where you can move and rotate objects simply by moving and rotating your hand, all without touching any input device. In this paper a review of vision-based hand gesture recognition is presented. The existing approaches are categorized into 3D model-based approaches and appearance-based approaches, highlighting their advantages and shortcomings and identifying the open issues.
Abstract: Sequential mining methods efficiently discover all frequent sequential patterns included in sequential data. These methods evaluate frequency using the support, the conventional criterion that satisfies the Apriori property. However, the discovered patterns do not always correspond to the interests of analysts, because the patterns are common and the analysts cannot gain new knowledge from them. This paper proposes a new criterion, sequential interestingness, to discover sequential patterns that are more attractive to analysts. The paper shows that the criterion satisfies the Apriori property and how it is related to the support. The paper also proposes an efficient sequential mining method based on the proposed criterion. Lastly, the paper shows the effectiveness of the proposed method by applying it to two kinds of sequential data.
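As a hedged illustration of the support criterion that the new measure is compared against, the sketch below counts the fraction of sequences containing a pattern as a subsequence; the sequential-interestingness formula itself is not reproduced here.

```python
# Support of a sequential pattern: the fraction of sequences that
# contain it as a (possibly non-contiguous) subsequence.
def contains(seq, pattern):
    it = iter(seq)                       # consume seq left to right
    return all(item in it for item in pattern)

def support(data, pattern):
    return sum(contains(s, pattern) for s in data) / len(data)

data = [list("abcde"), list("acd"), list("bce"), list("abd")]
print(support(data, list("ad")))  # 0.75; the Apriori property guarantees
# support(p) >= support(p + suffix) for any extension of p.
```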
Abstract: This paper presents and evaluates a new classification method that aims to improve classifier performance and speed up the training process. The proposed approach, called labeled classification, seeks to improve convergence of the BP (back propagation) algorithm through the addition of an extra feature (a label) to all training examples. To classify a new example, tests are carried out for each label. The simplicity of implementation is the main advantage of this approach, because no modifications are required in the training algorithms; it can therefore be combined with other acceleration and stabilization techniques. In this work, two models of labeled classification are proposed: the LMLP (Labeled Multi Layered Perceptron) and the LNFC (Labeled Neuro Fuzzy Classifier). These models are tested on the Iris, wine, texture and human thigh databases to evaluate their performance.
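A hedged sketch of the labeled-classification idea as described above: each training example is duplicated with every candidate label appended as an extra feature, and a new example is classified by testing each label and keeping the best-scoring one. The match/mismatch encoding and the acceptance criterion are illustrative assumptions, not the paper's LMLP or LNFC models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Training: augment each example with a candidate label; target = 1
# when the appended label matches the true class, 0 otherwise.
Xa, ya = [], []
for xi, yi in zip(X, y):
    for c in classes:
        Xa.append(np.append(xi, c))
        ya.append(int(c == yi))
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                    random_state=0).fit(np.array(Xa), ya)

def predict(x):
    # Test every label and keep the one the network scores highest.
    scores = [net.predict_proba([np.append(x, c)])[0][1] for c in classes]
    return classes[int(np.argmax(scores))]

print(predict(X[0]), y[0])
```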
Abstract: In the literature, there are metrics for identifying the quality of reusable components, but a framework that uses these metrics to precisely predict the reusability of software components still needs to be worked out. If these reusability metrics are identified in the design phase, or even in the coding phase, they can help reduce rework by improving the quality of reuse of the software component, and hence improve productivity through a probabilistic increase in the reuse level. Since the CK metric suite is the most widely used set of metrics for extracting the structural features of object-oriented (OO) software, in this study a tuned CK metric suite (WMC, DIT, NOC, CBO and LCOM) is used to obtain the structural analysis of OO-based software components. An algorithm has been proposed in which the tuned metric values of an OO software component are given as input to a K-Means clustering system, and a decision tree is formed under 10-fold cross validation to evaluate the component in terms of a linguistic reusability value. The developed reusability model has produced high precision results, as desired.
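A minimal sketch of the clustering step, assuming each component is represented by its tuned CK metric vector (WMC, DIT, NOC, CBO, LCOM); the metric values, cluster count and cluster interpretation below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per software component; columns: WMC, DIT, NOC, CBO, LCOM.
metrics = np.array([[12, 2, 1,  4, 0.3],
                    [45, 5, 0, 15, 0.9],
                    [ 8, 1, 3,  2, 0.1],
                    [30, 4, 1, 10, 0.7]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(metrics)
print(km.labels_)  # e.g. cluster 0 = "high reusability", 1 = "low"
```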
Abstract: Scene interpretation systems need to match (often ambiguous)
low-level input data to concepts from a high-level ontology.
In many domains, these decisions are uncertain and benefit greatly
from proper context. This paper demonstrates the use of decision
trees for estimating class probabilities for regions described by feature
vectors, and shows how context can be introduced in order to improve
the matching performance.
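A minimal sketch of class-probability estimation for feature-described regions with a decision tree; the feature values and ontology concepts below are invented stand-ins.

```python
from sklearn.tree import DecisionTreeClassifier

X = [[0.2, 0.9], [0.3, 0.8], [0.7, 0.1], [0.8, 0.2]]  # region features
y = ["sky", "sky", "road", "road"]                    # ontology concepts
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(tree.classes_)
print(tree.predict_proba([[0.25, 0.85]]))  # per-concept probabilities
```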
Abstract: Document retrieval in Information Retrieval Systems (IRS) is generally about understanding the information in the documents concerned. The better a system is able to understand the contents of documents, the more effective the retrieval outcomes will be. But understanding the contents is a very complex task. Conventional IRS apply algorithms that can only approximate the meaning of document contents through a keyword approach using the vector space model. Keywords may be unstemmed or stemmed. When keywords are stemmed and conflated during retrieval, we are a step forward in applying semantic technology in IRS. Word stemming is a process in morphological analysis under natural language processing, preceding syntactic and semantic analysis. We have developed stemming algorithms for Malay and Arabic and incorporated stemming into our experimental systems in order to measure retrieval effectiveness. The results show that retrieval effectiveness increases when stemming is used in the systems.
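A hedged sketch of stemmed keywords feeding a vector space model; the toy suffix-stripping stemmer below merely stands in for the authors' Malay and Arabic stemmers, which are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def stem(word):  # placeholder stemmer: strip a few common suffixes
    for suf in ("ing", "ed", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[:-len(suf)]
    return word

docs = ["indexing documents", "indexed document collection"]
query = ["index document"]
vec = TfidfVectorizer(preprocessor=lambda t: " ".join(
    stem(w) for w in t.lower().split()))
D = vec.fit_transform(docs)          # stemmed document vectors
Q = vec.transform(query)             # stemmed query vector
print(cosine_similarity(Q, D))       # conflated variants now match
```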
Abstract: In this paper, we study color transformation methods for website images for the color blind. The most common category of color blindness is red-green color blindness, in which the affected colors are perceived as beige. By transforming the colors of images, the color blind can improve their color visibility and have a better view when browsing websites. To transform the colors of website images, we study two algorithms: conversion from the RGB color space to the HSV color space, and self-organizing color transformation. The comparative study focuses on criteria based on ease of use, quality, accuracy and efficiency. The outcome of the study leads to the enhancement of website images to meet the color blind's vision requirements for perceiving image detail.
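A minimal sketch of the first studied technique, RGB-to-HSV conversion, here used to shift hues away from a red-green confusion range; the shift amount is an illustrative choice, not the paper's transformation.

```python
import colorsys

def shift_hue(r, g, b, shift=0.3):
    """r, g, b in [0, 1]; returns a hue-shifted (r, g, b)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)      # RGB -> HSV
    return colorsys.hsv_to_rgb((h + shift) % 1.0, s, v)  # back to RGB

print(shift_hue(0.8, 0.2, 0.2))  # a confusable red remapped elsewhere
```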
Abstract: In this contribution, an innovative platform is presented that integrates intelligent agents and evolutionary computation techniques into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting:
I) various assessment agents for e-learning environments,
II) a specific resource retrieval agent for the provision of
additional information from Internet sources matching the
needs and profile of the specific user and
III) a genetic algorithm designed to extract efficient information (classifying rules) based on the students' answering input data.
The agents are implemented in order to provide intelligent
assessment services based on computational intelligence techniques
such as Bayesian Networks and Genetic Algorithms.
The proposed Genetic Algorithm (GA) is used to extract efficient information (classifying rules) from the students' answering input data. The idea of using a GA to fulfil this difficult task comes from the fact that GAs have been widely used in applications involving the classification of unknown data.
The utilization of new and emerging technologies like web services allows integrating the provided services into any web-based legacy e-learning environment.
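A hedged sketch of the rule-extraction idea described above: a GA evolving rules with don't-care positions over binary answer features, with rule confidence as fitness. The chromosome encoding, fitness function and operators are assumptions for illustration, not the platform's actual design.

```python
import random
random.seed(0)

# Synthetic student records: 5 binary answer features + outcome (1 = pass).
data = [([random.randint(0, 1) for _ in range(5)], random.randint(0, 1))
        for _ in range(100)]

def matches(rule, feats):  # rule gene: 0/1 = required value, 2 = don't care
    return all(r == 2 or r == f for r, f in zip(rule, feats))

def fitness(rule):  # rule confidence: P(pass | rule matches)
    covered = [out for feats, out in data if matches(rule, feats)]
    return sum(covered) / len(covered) if covered else 0.0

pop = [[random.randint(0, 2) for _ in range(5)] for _ in range(20)]
for _ in range(30):  # elitist selection + mutation only, for brevity
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [[g if random.random() > 0.2 else random.randint(0, 2)
                       for g in r] for r in pop[:10]]
best = max(pop, key=fitness)
print(best, fitness(best))
```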
Abstract: In this paper we compare the accuracy of data mining methods for classifying students in order to predict a student's class grade. These predictions are useful for identifying weak students and assisting management in taking remedial measures at early stages, so as to produce excellent graduates who finish with at least second class upper honours. First we examine the accuracy of single classifiers on our data set and choose the best one, and then ensemble it with a weak classifier using a simple voting method. The results presented show that combining different classifiers outperforms single classifiers for predicting student performance.
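A minimal sketch of the ensemble step using scikit-learn's VotingClassifier with hard (simple) voting; the two classifiers and the synthetic data are stand-ins for the paper's best single classifier, weak classifier and student records.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for student records with 3 grade classes.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=5,
                           random_state=0)
vote = VotingClassifier([("strong", DecisionTreeClassifier(random_state=0)),
                         ("weak", GaussianNB())], voting="hard")
print(cross_val_score(vote, X, y, cv=5).mean())  # simple-voting accuracy
```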
Abstract: This paper reports the results of a study on neural network training algorithms based on numerical optimization techniques for multiface detection in static images. The training algorithms involved are scaled conjugate gradient backpropagation, conjugate gradient backpropagation with Polak-Ribière updates, conjugate gradient backpropagation with Fletcher-Reeves updates, one-step secant backpropagation and resilient backpropagation. The final results of each training algorithm for the multiface detection application are also discussed and compared.
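A hedged sketch of conjugate-gradient-based network training, using SciPy's nonlinear 'CG' minimizer (a Polak-Ribière-type method) on a tiny hand-rolled network; this merely stands in for the MATLAB-style training functions compared in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic binary targets

def loss(w):  # one hidden layer of 4 tanh units, squared error
    W1 = w[:12].reshape(3, 4)
    W2 = w[12:]
    out = np.tanh(X @ W1) @ W2
    return np.mean((out - y) ** 2)

w0 = rng.normal(scale=0.1, size=16)
res = minimize(loss, w0, method="CG")  # conjugate gradient training
print(res.fun)                          # final training error
```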
Abstract: Information systems with incomplete attribute values and fuzzy decisions commonly exist in practical problems. On the basis of the notion of the variable precision rough set model for incomplete information systems and the rough set model for incomplete and fuzzy decision information systems, a variable precision rough set model for incomplete and fuzzy decision information systems is constructed, which generalizes both of these models. A knowledge reduction and a heuristic algorithm, built on the method and theory of precision reduction, are proposed.
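For orientation, one common formulation of the variable precision approximations (Ziarko's, over an equivalence relation R with partition U/R) is sketched below; the paper's generalization to incomplete and fuzzy decision systems is not reproduced here.

```latex
% Relative misclassification error of an equivalence class E w.r.t. X,
% and the beta-lower/upper approximations of variable precision rough sets.
\[
  c(E, X) = 1 - \frac{|E \cap X|}{|E|}, \qquad 0 \le \beta < 0.5,
\]
\[
  \underline{R}_{\beta}X = \bigcup \{\, E \in U/R : c(E, X) \le \beta \,\},
  \qquad
  \overline{R}_{\beta}X = \bigcup \{\, E \in U/R : c(E, X) < 1 - \beta \,\}.
\]
```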
Abstract: Grid scheduling is the process of mapping grid jobs to resources over multiple administrative domains. Traditionally, application-level schedulers have been tightly integrated with the application itself and were not easily applied to other applications. The design presented here is generic: it decouples the scheduler core (the search procedure) from the application-specific components (e.g. application performance models) and platform-specific components (e.g. collection of resource information) used by the search procedure. In this decoupled approach the application details are not revealed completely to the broker; instead, the customer gives the application to the resource provider for execution. In a decoupled approach, apart from scheduling, the resource selection can be performed independently in order to achieve scalability.
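A minimal sketch of the decoupled design: the scheduler core sees only two pluggable callables, a platform-specific resource-information source and an application-specific performance model. All names below are hypothetical, for illustration only.

```python
def schedule(jobs, get_resources, perf_model):
    """Map each job to the resource the performance model predicts will
    finish it soonest; discovery and modelling stay pluggable."""
    mapping = {}
    for job in jobs:
        resources = get_resources()          # platform-specific component
        mapping[job] = min(resources,
                           key=lambda r: perf_model(job, r))  # app-specific
    return mapping

# Example plug-ins: a static resource list; runtime proportional to load.
print(schedule(["j1", "j2"],
               lambda: [("siteA", 0.2), ("siteB", 0.7)],
               lambda job, r: r[1]))
```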
Abstract: The automatic discrimination of seismic signals is an important practical goal for earth-science observatories due to the large amount of information that they receive continuously. An essential discrimination task is to allocate an incoming signal to a group associated with the kind of physical phenomenon producing it. In this paper, two classes of seismic signals recorded routinely in the geophysical laboratory of the National Center for Scientific and Technical Research in Morocco are considered: signals associated with local earthquakes and with chemical explosions. The approach adopted for the development of an automatic discrimination system is a modular system composed of three blocks: 1) representation, 2) dimensionality reduction and 3) classification. The originality of our work lies in the use of a new wavelet, called the "modified Mexican hat wavelet", in the representation stage. For the dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
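A minimal sketch of the dimensionality-reduction block, chaining a Gaussian random projection with PCA; the synthetic feature matrix stands in for the wavelet representations of the seismic signals, and the component counts are illustrative.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 1024))   # one row per signal (synthetic)
reduced = GaussianRandomProjection(n_components=64,
                                   random_state=0).fit_transform(features)
compact = PCA(n_components=10).fit_transform(reduced)
print(compact.shape)  # (200, 10): input for the classification block
```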
Abstract: The need for reputation assessment is particularly strong in peer-to-peer (P2P) systems, because the peers' personal site autonomy is amplified by the inherent technological decentralization of the environment. However, this decentralization makes the problem of designing a peer-to-peer based reputation assessment substantially harder in P2P networks than in centralized settings. Existing reputation systems tackle the reputation assessment process in an ad-hoc manner; there is no systematic and coherent way to derive measures and analyze the current reputation systems. In this paper, we propose a reputation assessment process and use it to classify the existing reputation systems. Simulation experiments are conducted, focused on the different methods of selecting the recommendation sources and retrieving the recommendations, two phases that contribute significantly to the overall performance due to communication cost and coverage.
Abstract: Super resolution (SR) technologies are now being applied to video to improve resolution, and some TV sets are now equipped with SR functions. However, it is not known whether super resolution image reconstruction (SRR) really works for TV. Super resolution with non-linear signal processing (SRNL) has recently been proposed. SRR and SRNL are the only methods for processing video signals in real time. The results from subjective assessments of SRR and SRNL are described in this paper. SRR video was produced in simulations with quarter-pixel precision motion vectors and 100 iterations, which are ideal conditions for SRR. We found that the image quality of SRNL is better than that of SRR even though SRR was processed under these ideal conditions.
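For orientation, the sketch below shows classic iterative back-projection, the kind of SRR iteration such simulations run (here single-image, known 2x decimation, motion compensation omitted); the step size and iteration count are illustrative choices.

```python
import numpy as np
from scipy.ndimage import zoom

def srr(low, scale=2, iters=100, step=0.5):
    high = zoom(low, scale, order=1)             # initial upsampled estimate
    for _ in range(iters):
        simulated = zoom(high, 1 / scale, order=1)  # forward imaging model
        error = low - simulated                     # observation mismatch
        high += step * zoom(error, scale, order=1)  # back-project the error
    return high

low = np.random.rand(16, 16)
print(srr(low).shape)  # (32, 32)
```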