Abstract: A software metric is a measure of some property of a piece of software or its specification. The aim of this paper is to present an application of evolutionary decision trees in software engineering to classify software modules that do or do not have one or more reported defects. To this end, several metrics are used to separate modules with defects from those without.
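The kind of rule such a tree encodes can be sketched as a single split over common software metrics (the metrics, thresholds and labels below are hypothetical illustrations, not the paper's evolved trees):

```python
# Minimal sketch of one decision-tree rule over software metrics, of the
# kind an evolutionary algorithm might evolve. Thresholds are invented
# for illustration only.

def classify_module(metrics):
    """Label a module 'defective' or 'clean' from two common metrics."""
    # Hypothetical rule: high cyclomatic complexity, or large size
    # combined with a low comment ratio, signals likely defects.
    if metrics["cyclomatic_complexity"] > 10:
        return "defective"
    if metrics["loc"] > 300 and metrics["comment_ratio"] < 0.1:
        return "defective"
    return "clean"

modules = [
    {"loc": 120, "cyclomatic_complexity": 4,  "comment_ratio": 0.25},
    {"loc": 450, "cyclomatic_complexity": 8,  "comment_ratio": 0.05},
    {"loc": 200, "cyclomatic_complexity": 15, "comment_ratio": 0.30},
]
labels = [classify_module(m) for m in modules]
print(labels)  # ['clean', 'defective', 'defective']
```

An evolutionary approach would search over such split variables and thresholds rather than fixing them by hand.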
Abstract: A model of user behaviour based on automated planning is introduced in this work. The behaviour of users of interactive web systems can be described in terms of a planning domain encapsulating the timed action patterns that represent the intended user profile. User behaviour recognition is then posed as a planning problem whose goal is to parse a given sequence of user logs of the observed activities while reaching a final state.
A general technique is introduced for transforming a timed finite state automaton description of the behaviour into a numerical-parameter planning model.
Experimental results show that the performance of a planning-based behaviour model is effective and scalable for real-world applications. A major advantage of the planning-based approach is that it represents plan recognition, plan synthesis, and plan optimisation problems in a single automated reasoning framework.
Abstract: This paper presents the idea of a rough controller, with an application to the control of an overhead traveling crane system. The structure of such a controller is based on the concept of a fuzzy logic controller. A measure of fuzziness in rough sets is introduced, and a comparison between the fuzzy logic controller and the rough controller is carried out. The results of a simulation comparing the performance of both controllers are shown; from these results we infer that the performance of the proposed rough controller is satisfactory.
Abstract: This paper presents an information retrieval model for XML documents based on tree matching. Queries and documents are represented by extended trees. An extended tree is built from the original tree by adding weighted virtual links between each node and its indirect descendants, so that every descendant can be reached directly; only one level then separates each node from its indirect descendants. This allows the user query and the document to be compared flexibly while respecting the structural constraints of the query. Since the content of each node is decisive in judging whether a document element is relevant, content must be taken into account in the retrieval process. We separate the structure-based and the content-based retrieval processes. The content-based score of each node is commonly based on the well-known Tf × Idf criterion. In this paper, we compare this criterion with another one we call Tf × Ief. The comparison is based on experiments on a dataset provided by INEX, showing the effectiveness of our approach on the one hand and that of both weighting functions on the other.
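The contrast between the two weighting schemes can be sketched as follows; Tf × Idf counts documents containing a term, while Tf × Ief replaces that with a count over XML elements (the exact formulas and smoothing used with the INEX data may differ, and the counts below are toy values):

```python
import math

# Sketch of the two term-weighting criteria compared in the abstract.
# tf: term frequency in the scored node; df/ef: number of documents /
# XML elements containing the term.

def tf_idf(tf, n_docs, df):
    """Classic Tf*Idf: rarity measured over whole documents."""
    return tf * math.log(n_docs / df)

def tf_ief(tf, n_elements, ef):
    """Tf*Ief: rarity measured over indexed XML elements instead."""
    return tf * math.log(n_elements / ef)

# Toy collection: 100 documents containing 1000 indexed elements.
# A term occurring 3 times in a node, present in 10 docs and 25 elements:
print(round(tf_idf(3, 100, 10), 3))   # 6.908
print(round(tf_ief(3, 1000, 25), 3))  # 11.067
```

Because elements are finer-grained than documents, Ief can discriminate terms that Idf treats as equally common.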
Abstract: This paper examines the modeling and analysis of a
cruise control system using a Petri net based approach, task graphs,
invariant analysis and behavioral properties. It shows how the
structures used can be verified and optimized.
Abstract: In the last few years, the Semantic Web has gained scientific acceptance as a means of identifying relationships in knowledge bases, widely known as semantic associations. Querying complex relationships between entities is a strong requirement for many applications in analytical domains; in bioinformatics, for example, it is critical to extract interactions between proteins. Currently, the usual result of such queries is a set of paths between connected entities in the data graph. However, these results do not always satisfy a user who needs the best association, or a limited set of best associations, because they enumerate all existing paths while ignoring path evaluation. In this paper, we present an approach for supporting association-discovery queries. Our proposal includes (i) a query language, PmSPRQL, which provides multiparadigm query expressions for association extraction, and (ii) quantification measures that ease the process of association ranking. The originality of our proposal is demonstrated by a performance evaluation of our approach on real-world datasets.
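The ranking idea behind such association queries can be sketched as path enumeration plus a path-evaluation measure (the graph, the entity names, and the product-of-weights measure below are hypothetical illustrations; the paper's PmSPRQL language and measures are not reproduced):

```python
# Toy sketch of semantic-association discovery: enumerate paths between
# two entities in a weighted data graph and rank them by a simple
# association measure (product of edge weights).

graph = {  # hypothetical labelled edges: (target, association weight)
    "proteinA": [("complex1", 0.9), ("pathwayX", 0.4)],
    "complex1": [("proteinB", 0.8)],
    "pathwayX": [("proteinB", 0.7)],
}

def ranked_associations(src, dst, max_len=4):
    """Return (weight, path) pairs from src to dst, best first."""
    results = []
    def dfs(node, path, weight):
        if node == dst:
            results.append((weight, path))
            return
        if len(path) > max_len:
            return
        for nxt, w in graph.get(node, []):
            if nxt not in path:          # keep paths simple (no cycles)
                dfs(nxt, path + [nxt], weight * w)
    dfs(src, [src], 1.0)
    return sorted(results, reverse=True)

best = ranked_associations("proteinA", "proteinB")[0]
print(best[1])  # ['proteinA', 'complex1', 'proteinB']
```

Returning only the top-ranked paths, instead of all paths, is exactly the gap the abstract identifies in existing approaches.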
Abstract: In today's new technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) to compute. Although parallelization speeds up computation, the time required by many applications can still be considerable, so the reliability of the cluster becomes a very important issue and the implementation of a fault-tolerance mechanism becomes essential. The difficulty of designing a fault-tolerant cluster system grows with the variety of possible failures. A key requirement is that an algorithm which handles a simple failure in the system must also tolerate more severe failures. In this paper, we implement a watchdog timer in a parallel environment to take care of failures. The implementation of this simple algorithm handles different types of failures; consequently, we found that the reliability of the cluster improves.
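The watchdog-timer idea applied here can be sketched as follows (a minimal illustration, not the paper's implementation; timeout, node names, and the recovery hook are invented):

```python
import threading
import time

# Illustrative watchdog timer: a monitored worker sends heartbeats;
# if none arrives within the timeout, a failure handler is invoked
# (in a cluster, this could trigger task migration or node restart).

class Watchdog:
    def __init__(self, timeout, on_failure):
        self.timeout = timeout
        self.on_failure = on_failure
        self._last_beat = time.monotonic()
        self._lock = threading.Lock()

    def heartbeat(self):
        """Called by the monitored worker to signal liveness."""
        with self._lock:
            self._last_beat = time.monotonic()

    def check(self):
        """Return True if the worker is alive; fire the handler if not."""
        with self._lock:
            alive = time.monotonic() - self._last_beat < self.timeout
        if not alive:
            self.on_failure()
        return alive

failures = []
wd = Watchdog(timeout=0.05, on_failure=lambda: failures.append("node-1"))
wd.heartbeat()
assert wd.check()        # heartbeat just arrived: alive
time.sleep(0.1)          # worker 'hangs' past the timeout
assert not wd.check()    # watchdog fires the failure handler
print(failures)          # ['node-1']
```

In a parallel environment one such watchdog per worker lets a simple mechanism cover many failure types, from hung processes to dead nodes.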
Abstract: The demand for higher-performance graphics continues to grow because of the incessant push towards realism, and rapid advances in fabrication technology have enabled us to build several processor cores on a single die. Hence, it is important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two important tasks necessary for maximally exploiting the parallelism provided by the architecture, namely partitioning and placement of graphics data, which affect load balance and communication costs, respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement that significantly reduce the search space. We also present heuristics for identifying near-optimal placement, since the search space for placement is impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches through the analysis of example scenes; simulation results show considerable search-space reductions, and our placement heuristics perform close to optimal – the average ratio of communication overheads between our heuristics and the optimum was 1.05. Our uniform partitioning showed an average load-balance ratio of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.
Abstract: In this paper, we introduce a novel platform encryption method that modifies its keys and random number generators step by step during the encryption algorithm. Owing to the complexity of the proposed algorithm, it offers greater security than comparable methods.
Abstract: The objective of the paper is twofold: first, to develop a formal framework for planning for mobile agents, and second, to design a planning algorithm for such tasks. A logical language based on a temporal logic is proposed that can express a type of task which often arises in network management. The aim of this paper is to study the importance of finding plans for mobile agents. Although there has been a lot of research on mobile agents, not much work has been done to incorporate planning ideas for such agents; this paper makes an attempt in this direction. A theoretical study of finding plans for mobile agents is undertaken, a planning algorithm (based on the paradigm of mobile computing) is proposed, and its space, time, and communication complexity are analyzed. The algorithm is illustrated by working out an example in detail.
Abstract: This paper presents an improved variable ordering method to obtain the minimum number of nodes in Reduced Ordered Binary Decision Diagrams (ROBDD). The proposed method uses the graph topology to find the best variable ordering. To this end, the input Boolean function is converted into a directed graph. Three levels of graph parameters are used to increase the probability of finding a good variable ordering. The initial level uses the total number of nodes (NN) in all the paths, the total number of paths (NP), and the maximum number of nodes among all paths (MNNAP). The second and third levels use two extra parameters: the shortest path between two variables (SP) and the sum of the shortest paths from one variable to all the other variables (SSP). A permutation of the graph parameters is performed at each level for each variable order, and the number of nodes is recorded. Experimental results are promising: the proposed method is found to be more effective at finding a good variable ordering for the majority of benchmark circuits.
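Why variable ordering matters for ROBDD size can be illustrated by counting nodes under two orders (a brute-force sketch via Shannon expansion with structural hashing; the paper's method instead scores orders with graph parameters such as NN, NP and MNNAP, and the example function is ours):

```python
# Count ROBDD nodes of a Boolean function under a given variable order:
# recursive Shannon expansion, merging equal cofactors (redundant-test
# elimination) and hashing identical subgraphs (node sharing).

def robdd_size(f, n_vars, order):
    nodes = set()

    def build(level, assign):
        if level == n_vars:
            return f(assign)          # terminal: True or False
        var = order[level]
        lo = build(level + 1, {**assign, var: 0})
        hi = build(level + 1, {**assign, var: 1})
        if lo == hi:                  # redundant test: skip the node
            return lo
        node = (var, lo, hi)          # identical tuples share one node
        nodes.add(node)
        return node

    build(0, {})
    return len(nodes)

# f = (a AND b) OR (c AND d): keeping each pair adjacent is optimal.
f = lambda a: bool((a["a"] and a["b"]) or (a["c"] and a["d"]))
print(robdd_size(f, 4, ["a", "b", "c", "d"]))  # 4 nodes (good order)
print(robdd_size(f, 4, ["a", "c", "b", "d"]))  # 6 nodes (worse order)
```

Since exhaustively evaluating all n! orders this way is infeasible for real circuits, heuristic scoring of orders, as in the proposed method, is essential.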
Abstract: Real-time embedded systems should benefit from component-based software engineering to handle complexity and address dependability. In these systems, applications should not only be logically correct but also behave within time windows. However, among current component-based software engineering approaches, few component models handle time properties in a manner that allows efficient analysis and checking at the architectural level. In this paper, we present a meta-model for component-based software description that integrates timing issues. To achieve a complete functional model of software components, our meta-model focuses on four functional aspects: interface, static behavior, dynamic behavior, and interaction protocol. With each aspect we explicitly associate a time model, which can be used to check a component's design against certain properties and to compute the timing properties of component assemblies.
Abstract: In this work, a Modified Functional Link Artificial Neural Network (M-FLANN) is proposed which is simpler than a Multilayer Perceptron (MLP) and improves upon the universal approximation capability of the Functional Link Artificial Neural Network (FLANN). MLP and its variants – the Direct Linear Feedthrough Artificial Neural Network (DLFANN), FLANN and M-FLANN – have been implemented to model a simulated Water Bath System and a Continuously Stirred Tank Heater (CSTH), and their convergence speed and generalization ability have been compared. The networks have been tested for their interpolation and extrapolation capability using noise-free and noisy data. The results show that M-FLANN, which is computationally cheap, performs better and has greater generalization ability than the other networks considered in this work.
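The core idea that makes a FLANN simpler than an MLP is a fixed functional expansion followed by a single linear layer, with no hidden layer to train. A minimal sketch using the classic trigonometric expansion (the paper's M-FLANN modification may use a different expansion):

```python
import math

# Sketch of the functional expansion at the heart of a FLANN: the input
# is mapped through fixed basis functions, then a single trainable
# linear layer combines them (weights shown untrained).

def expand(x):
    """Trigonometric functional expansion of a scalar input."""
    return [x,
            math.sin(math.pi * x), math.cos(math.pi * x),
            math.sin(2 * math.pi * x), math.cos(2 * math.pi * x)]

def flann_output(x, weights, bias=0.0):
    """Linear combination of expanded features: no hidden layer."""
    return sum(w * f for w, f in zip(weights, expand(x))) + bias

features = expand(0.5)
print(len(features))  # 5 basis features from one scalar input
```

Because only the output weights are learned, training reduces to a linear problem over the expanded features, which is where the computational cheapness claimed for (M-)FLANN comes from.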
Abstract: In this article we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function used in cryptography; more precisely, we describe formal properties of this implementation that we prove by computer. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities), given in the formal language of the proof system Isabelle/HOL, and we computer-prove Bayes' formula. We also describe an application of the presented formalized probability distributions to cryptography. Furthermore, this article shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives: they describe a basis for computer verification in cryptography, and computer verification can be applied to further problems in cryptographic research if the corresponding basic mathematical knowledge is available in a database.
Abstract: Main-memory database systems (MMDB) store their data in main physical memory and provide very high-speed access. Conventional database systems are optimized for the particular characteristics of disk storage mechanisms; memory-resident systems, on the other hand, use different optimizations to structure and organize data, as well as to make it reliable.
This paper provides a brief overview of MMDBs and of one memory-resident system, FastDB, and compares the processing time of this system with that of a typical disk-resident database, based on the results of implementing a TPC benchmark environment on both.
Abstract: Load balancing is the process of improving the performance of a parallel and distributed system by redistributing load among the processors [1] [5]. In this paper we present a performance analysis of various load balancing algorithms based on different parameters, considering the two typical load balancing approaches: static and dynamic. The analysis indicates that both static and dynamic algorithms have advantages as well as weaknesses relative to each other; the choice of algorithm depends on the type of parallel application to be solved. The main purpose of this paper is to aid the future design of new algorithms by studying the behavior of various existing algorithms.
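The static/dynamic distinction can be made concrete with a toy comparison: static round-robin fixes assignments in advance, while a dynamic scheme assigns each arriving task to the currently least-loaded processor (task costs below are invented; real analyses also account for the dynamic scheme's runtime overhead):

```python
# Toy comparison of the two load balancing approaches in the abstract.

def static_round_robin(tasks, n_procs):
    """Static: assignment fixed by task index, ignoring actual load."""
    loads = [0.0] * n_procs
    for i, cost in enumerate(tasks):
        loads[i % n_procs] += cost
    return loads

def dynamic_least_loaded(tasks, n_procs):
    """Dynamic: each task goes to the currently least-loaded processor."""
    loads = [0.0] * n_procs
    for cost in tasks:
        loads[loads.index(min(loads))] += cost
    return loads

tasks = [9, 1, 9, 1]                      # uneven task costs
print(static_round_robin(tasks, 2))       # [18.0, 2.0]
print(dynamic_least_loaded(tasks, 2))     # [10.0, 10.0]
```

The dynamic scheme balances better here, but it needs up-to-date load information at runtime, which is the overhead the static scheme avoids; this trade-off is exactly why neither approach dominates.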
Abstract: Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory requirements can be quite prohibitive, which justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, three feature selection methods are evaluated: Random Selection, Information Gain (IG), and Support Vector Machine feature selection (called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
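Of the three methods, Information Gain is the most self-contained to illustrate: a term scores highly when knowing its presence reduces uncertainty about the class, IG(t) = H(C) − H(C | t). A sketch with toy counts (not from the paper's experiments):

```python
import math

# Information Gain feature scoring for text feature selection.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(n, class_counts, term_class_counts):
    """n: total docs; class_counts[c]: docs per class;
    term_class_counts[c]: docs of class c containing the term."""
    n_t = sum(term_class_counts)
    n_rest = n - n_t
    h_c = entropy([c / n for c in class_counts])
    h_t = entropy([c / n_t for c in term_class_counts]) if n_t else 0.0
    rest = [c - t for c, t in zip(class_counts, term_class_counts)]
    h_not_t = entropy([c / n_rest for c in rest]) if n_rest else 0.0
    return h_c - (n_t / n) * h_t - (n_rest / n) * h_not_t

# 10 docs in two classes of 5; the term appears in 4 docs, all class 0:
ig = information_gain(10, [5, 5], [4, 0])
print(round(ig, 3))  # ≈ 0.61: a strongly class-indicative term
```

Ranking all terms by this score and keeping the top k gives the reduced feature vector; SVM_FS instead ranks features by the weights of a trained linear SVM.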
Abstract: Stochastic models of biological networks are well established in systems biology, where the computational treatment of such models often focuses on solving the so-called chemical master equation via stochastic simulation algorithms. In contrast, the development of storage-efficient model representations that are directly suitable for computer implementation has received significantly less attention. Instead, a model is usually described in terms of a stochastic process or a "higher-level paradigm" with a graphical representation such as, e.g., a stochastic Petri net. A serious problem then arises from the exponential growth of the model's state space, which is in fact a main reason for the popularity of stochastic simulation, since simulation suffers less from state space explosion than non-simulative numerical solution techniques. In this paper we present transition class models for the representation of biological network models, a compact mathematical formalism that circumvents state space explosion. Transition class models can also serve as an interface between different higher-level modeling paradigms, stochastic processes, and the implementation coded in a programming language. Besides, the compact model representation provides the opportunity to apply non-simulative solution techniques, while preserving the possible use of stochastic simulation. Illustrative examples of transition class representations are given for an enzyme-catalyzed substrate conversion and a part of the bacteriophage λ lysis/lysogeny pathway.
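A transition class in this sense pairs a state-change vector with a propensity function, and a Gillespie-type stochastic simulation can consume that representation directly without ever enumerating the state space. A minimal sketch (species names, rate constant, and the single-reaction model are illustrative, not the paper's examples):

```python
import random

# One transition class: substrate S converts to product P, with
# propensity proportional to the current substrate count.
transition_classes = [
    {"change": {"S": -1, "P": +1}, "propensity": lambda x: 0.5 * x["S"]},
]

def gillespie(state, classes, t_end, rng):
    """Stochastic simulation driven directly by transition classes."""
    t = 0.0
    while t < t_end:
        props = [c["propensity"](state) for c in classes]
        total = sum(props)
        if total == 0:                 # no reaction can fire
            break
        t += rng.expovariate(total)    # time to the next reaction
        r = rng.uniform(0, total)      # roulette-wheel class selection
        for c, p in zip(classes, props):
            if r < p:
                for species, delta in c["change"].items():
                    state[species] += delta
                break
            r -= p
    return state

rng = random.Random(1)
final = gillespie({"S": 100, "P": 0}, transition_classes, 50.0, rng)
print(final["S"] + final["P"])  # 100: the conversion conserves molecules
```

Only the transition classes are stored, so memory stays constant regardless of how many states the process could in principle visit; the same structure could equally feed a non-simulative solver.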
Abstract: Mobile agents have motivated the creation of a new methodology for parallel computing. We introduce a methodology for the creation of parallel applications on the network. The proposed mobile-agent parallel processing framework uses multiple Java mobile agents, each of which can travel to a specified machine in the network to perform its tasks. We also introduce the concept of a master agent, a Java object capable of implementing a particular task of the target application; the master agent dynamically assigns tasks to the mobile agents. We have developed and tested a prototype application, Mobile Agent Based Parallel Computing. Boosted by the inherited benefits of using Java and mobile agents, our proposed methodology breaks the barriers between environments and could potentially exploit, in parallel, all the available computational resources on the network. This paper elaborates on the performance issues of mobile agents for parallel computing.
Abstract: This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it uses graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
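A linear word-to-grapheme segmentation of the kind the abstract mentions can be sketched as a left-to-right longest-match scan over a grapheme inventory (the inventory below is a toy subset, and the paper's C implementation and matching strategy are not reproduced):

```python
# Greedy linear segmentation of a word into graphemes: scan left to
# right, always taking the longest multi-letter grapheme from a fixed
# inventory, falling back to single letters.

GRAPHEMES = {"th", "sh", "ch", "ough", "ea", "ng"}  # toy inventory
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word):
    out, i = [], 0
    while i < len(word):
        for k in range(min(MAX_LEN, len(word) - i), 1, -1):
            if word[i:i + k] in GRAPHEMES:      # longest match wins
                out.append(word[i:i + k])
                i += k
                break
        else:                                    # no multi-letter match
            out.append(word[i])
            i += 1
    return out

print(segment("thought"))  # ['th', 'ough', 't']
```

Each character is consumed exactly once and the inner loop is bounded by the constant maximum grapheme length, which is what makes such a scan linear in the word length.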