Abstract: Many approaches have been proposed for solving Sudoku puzzles. One of them models the puzzles as block-world problems. Three models of Sudoku solvers based on this approach have been proposed, each expressing the solver as a parameterized multi-agent system. In this work, we propose a new model that improves on the existing ones. This paper presents the development of a Sudoku solver that implements all the proposed models, and reports experiments conducted to determine the performance of each model.
Abstract: Recently, services such as television and the Internet have come to be received through a variety of terminals. Greater convenience could be gained by receiving these services through a cellular phone while away from home and then continuing to receive the same services through a large-screen digital television after returning home. At present, however, the user must repeat the same authentication process when switching to the television. In this study, we have developed an authentication method that enables users to switch terminals in environments in which a user receives a service from a server through a terminal. Specifically, the method simplifies server-side authentication when switching from one terminal to another by reusing previously authenticated information.
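The abstract does not specify the mechanism, but the idea of reusing previously authenticated information can be sketched as a server-issued handoff token: after full authentication on the first terminal, the server issues a MAC-protected token that the second terminal presents to skip re-authentication. All names here (SERVER_KEY, issue_handoff_token, ...) are hypothetical illustrations, not the authors' protocol.

```python
import hmac
import hashlib
import os

SERVER_KEY = os.urandom(32)  # hypothetical server-side secret

def issue_handoff_token(session_id: str) -> str:
    """After full authentication on terminal A, the server binds the
    session id with an HMAC tag; terminal B presents the token later."""
    tag = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}:{tag}"

def verify_handoff_token(token: str) -> bool:
    """Server-side check when the user switches terminals: recompute the
    tag and compare in constant time, avoiding a second full login."""
    session_id, tag = token.split(":")
    expected = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

token = issue_handoff_token("sess-42")
assert verify_handoff_token(token)                     # terminal switch accepted
assert not verify_handoff_token("sess-42:" + "0" * 64)  # forged tag rejected
```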
Abstract: Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input pattern clusters. Competitive learning is the basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM). SOFM can generate mappings from high-dimensional signal spaces to lower-dimensional topological structures. The main features of such mappings are topology preservation, feature mapping, and approximation of the probability distribution of the input patterns. To overcome some limitations of SOFM, e.g., a fixed number of neural units and a topology of fixed dimensionality, a Growing Self-Organizing Neural Network (GSONN) can be used. GSONN can change its topological structure during learning: it grows by learning and shrinks by forgetting. To speed up training and convergence, a new variant of GSONN, twin growing cell structures (TGCS), is presented here. This paper first gives an introduction to competitive learning, SOFM, and its variants. Then we discuss some GSONNs with fixed dimensionality, including growing cell structures, its variants, and the author's model, TGCS. The paper ends with a comparison of test results and conclusions.
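As a minimal sketch of the competitive-learning process the abstract builds on (winner-take-all only, without the SOFM neighborhood function or the growing/shrinking machinery of GSONN), each input moves only its closest unit toward it, so units drift toward cluster centres:

```python
import numpy as np

def competitive_learning(data, n_units=4, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning: for each input, only the
    nearest unit is updated, pulling units toward input clusters."""
    rng = np.random.default_rng(seed)
    # initialise units on randomly chosen inputs
    units = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.linalg.norm(units - x, axis=1))
            units[winner] += lr * (x - units[winner])  # move winner toward input
    return units

# two well-separated 2-D clusters; the units should settle near their centres
data = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
                  np.random.default_rng(2).normal(5, 0.1, (50, 2))])
units = competitive_learning(data, n_units=2)
```

A SOFM additionally updates a topological neighborhood of the winner, and GSONN inserts or removes units during this loop.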
Abstract: Face recognition is a technique for automatically identifying or verifying individuals. It has received great attention in identification, authentication, security, and many other applications. Diverse methods have been proposed for this purpose, and many comparative studies have been performed; however, researchers have not reached a unified conclusion. In this paper, we report an extensive quantitative accuracy analysis of four of the most widely used face recognition algorithms: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and Support Vector Machine (SVM), using the AT&T, Sheffield, and Bangladeshi face databases under diverse conditions such as illumination, alignment, and pose variations.
Abstract: Testable software has two inherent properties – observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, for creating testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions and the observation of internal details of execution at the unit, integration, and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test coverage reports.
Abstract: In this paper, we improve the quasilinearization method with barycentric Lagrange interpolation, chosen for its numerical stability and computation speed, to achieve a stable semi-analytical solution. We then apply the improved method to the fin problem, a nonlinear equation that occurs in heat transfer. In the quasilinearization approach, the nonlinear differential equation is treated by approximating the nonlinear terms by a sequence of linear expressions. The modified QLM is iterative but not perturbative and gives stable semi-analytical solutions to nonlinear problems without depending on the existence of a smallness parameter. Comparison with several numerical solutions shows that the present method is applicable.
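As a hedged illustration (the specific fin equation and boundary conditions are not given in the abstract), the quasilinearization iteration for a generic nonlinear two-point problem $y'' = f(y)$ replaces the nonlinearity by its first-order Taylor expansion about the previous iterate:

```latex
% Quasilinearization (Newton--Kantorovich) iteration for y'' = f(y):
y_{k+1}'' = f(y_k) + f'(y_k)\,\bigl(y_{k+1} - y_k\bigr), \qquad k = 0, 1, 2, \dots
```

Each step is a linear ODE in $y_{k+1}$, and under suitable smoothness conditions the iterates converge quadratically; barycentric Lagrange interpolation then provides a stable way to represent and differentiate each iterate.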
Abstract: PARIS (Personal Archiving and Retrieving Image System) is an experimental personal photograph library, which includes more than 80,000 consumer photographs accumulated over approximately five years, metadata based on our proposed MPEG-7 annotation architecture, Dozen Dimensional Digital Content (DDDC), and a relational database structure. The DDDC architecture is specially designed to facilitate the managing, browsing, and retrieving of personal digital photograph collections. In the annotation process, we also utilize a proposed Spatial and Temporal Ontology (STO) designed on the basis of the general characteristics of personal photograph collections. This paper describes the PARIS system.
Abstract: In this paper, we describe a hybrid technique combining Minimax search and aggregate Mahalanobis distance function synthesis to evolve an Awale game player. The hybrid technique helps suggest a move in a short amount of time without consulting an endgame database. However, the effectiveness of the technique depends heavily on the training dataset of Awale strategies utilized. The evolved player was tested against an Awale shareware program, and the results are promising.
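The Mahalanobis distance underlying the evaluation is a standard formula; the sketch below shows it for a toy set of position-feature vectors. How the paper aggregates such distances inside the Minimax evaluation is not specified in the abstract, so the feature vectors and training set here are purely illustrative.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance sqrt((x - mu)^T Sigma^{-1} (x - mu)) of a
    feature vector x from the mean of a set of training positions."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# toy training set of 2-D board-feature vectors (illustrative only)
train = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], [4.0, 6.0]])
mu, cov = train.mean(axis=0), np.cov(train.T)
d = mahalanobis(np.array([2.5, 3.5]), mu, cov)
```

Unlike Euclidean distance, this accounts for the correlation between features, so positions are judged relative to the spread of the training strategies.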
Abstract: This paper presents a protocol aiming at proving that an encryption system contains structural weaknesses without disclosing any information on those weaknesses. A verifier can check in polynomial time that a given property of the cipher system's output has been effectively realized. This property has been chosen by the prover in such a way that it cannot be achieved by known attacks or exhaustive search, but only if the prover indeed knows some undisclosed weaknesses that may effectively endanger the cryptosystem's security. This protocol has been denoted a zero-knowledge-like proof of cryptanalysis. In this paper, we apply the protocol to the Bluetooth core encryption algorithm E0, used in many mobile environments, and thus suggest that its security can seriously be put into question.
Abstract: In this paper, we investigate a number of the Internet congestion control algorithms developed in the last few years. We found that many of these algorithms were designed to deal with Internet traffic merely as a train of consecutive packets. A few other algorithms were specifically tailored to handle the Internet congestion caused by running media traffic that represents audiovisual content; this latter set of algorithms is considered aware of the nature of such media content. In this context, we briefly explain a number of congestion control algorithms and categorize them into two categories: i) media congestion control algorithms and ii) common congestion control algorithms. We recommend the use of media congestion control algorithms because they are media content-aware, unlike the common algorithms that manipulate such traffic blindly. We show that the spread of such media content-aware algorithms over the Internet will lead to better congestion control in the coming years, owing to the observed emergence of the era of digital convergence, in which media traffic will form the majority of Internet traffic.
Abstract: IMCS is an Integrated Monitoring and Control System for thermal power plants. The system consists mainly of two parts, controllers and the OIS (Operator Interface System), connected by Ethernet-based communication. The controller side of the communication is managed by the CNet module, and the OIS side is managed by the data server of the OIS. The CNet module sends controller data to the data server and receives command data from it. To minimize or balance the load on the data server, the module buffers the data created by the controller at every cycle and sends the buffered data to the data server on request. For multiple data servers, the module manages the connection to each data server and responds to each server's requests. The CNet module is included in each controller of a redundant system; when controller fail-over happens, the module can provide controller data to the data server without loss. This paper presents the three main features of the CNet module that carry out these functions – separation of the get task, use of a ring buffer, and monitoring of the communication status.
Abstract: This paper presents the design and prototype implementation of an intelligent data processing framework for ubiquitous sensor networks. Much focus is put on how to handle the sensor data stream as well as the interoperability between low-level sensor data and application clients. Our framework first provides systematic middleware that mediates the interaction between the application layer and low-level sensors, analyzing a great volume of sensor data by filtering and integration to create value-added context information. Then, an agent-based architecture is proposed for real-time data distribution, efficiently forwarding a specific event to the appropriate application registered in the directory service via an open interface. The prototype implementation demonstrates that our framework can host a sophisticated application on a ubiquitous sensor network and can autonomously evolve into new middleware, taking advantage of promising technologies such as software agents, XML, cloud computing, and the like.
Abstract: Cognitive models allow predicting some aspects of the utility and usability of human-machine interfaces (HMI) and simulating the interaction with these interfaces. Prediction is based on a task analysis, which investigates what a user is required to do, in terms of actions and cognitive processes, to achieve a task. Task analysis facilitates the understanding of the system's functionalities. Cognitive models belong to the analytical approaches, which do not involve users during the development process of the interface. This article presents a study on the evaluation of human-machine interaction with a contextual assistant's interface using the ACT-R and GOMS cognitive models. The present work shows how these techniques may be applied in the evaluation of HMI, design, and research by emphasizing, first, the task analysis and, second, the execution time of the task. To validate and support our results, an experimental study of user performance was conducted at the DOMUS laboratory during interaction with the contextual assistant's interface. The results of our models show that the GOMS and ACT-R models give good and excellent predictions, respectively, of users' performance at the task level as well as the object level. The simulated results are therefore very close to those obtained in the experimental study.
Abstract: This paper gives an overview of how an OWL ontology has been created to represent the template knowledge models defined in CML that are provided by CommonKADS. CommonKADS is a mature knowledge engineering methodology that proposes the use of template knowledge models for knowledge modelling. The aim of developing this ontology is to present the template knowledge models in a knowledge representation language that can be easily understood and shared in the knowledge engineering community. Hence, OWL is used, as it has become a standard for ontologies and already has user-friendly tools for viewing and editing.
Abstract: Clustering is a well-known technique in data mining, and the k-means algorithm is one of the most widely used clustering techniques. Solutions obtained with this technique depend on the initialization of the cluster centers. In this article, we propose a new algorithm to initialize the clusters. The proposed algorithm is based on finding a set of medians extracted from the dimension with maximum variance. The algorithm has been applied to different data sets, and good results are obtained.
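The initialization idea can be sketched as follows: sort the points along the dimension of maximum variance, split them into k contiguous blocks, and take each block's median as an initial centre. This is an assumption-laden reading of the abstract (the paper's exact median-extraction rule is not given), so the helper below is illustrative, not the authors' algorithm.

```python
import numpy as np

def median_init(X, k):
    """Hypothetical sketch: pick initial k-means centres as medians of k
    blocks of points, split along the dimension of maximum variance."""
    d = np.argmax(X.var(axis=0))       # dimension with maximum variance
    order = np.argsort(X[:, d])        # sort points along that dimension
    blocks = np.array_split(order, k)  # k contiguous blocks of indices
    return np.vstack([np.median(X[b], axis=0) for b in blocks])

# two obvious clusters: the centres land near (0.1, 0.05) and (5.05, 4.95)
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
centres = median_init(X, 2)
```

Unlike random initialization, the result is deterministic and spreads the centres along the most dispersed direction of the data.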
Abstract: Querying a data source and routing data towards the sink becomes a serious challenge in static wireless sensor networks if the sink and/or the data source are mobile. Often the event to be observed either moves or spreads across a wide area, making the maintenance of a continuous path between source and sink a challenge. Moreover, the sink can move while a query is being issued or data is on its way towards it. In this paper, we extend our previously proposed Grid Based Data Dissemination (GBDD) scheme, a virtual-grid-based topology management scheme that restricts the impact of the movement of sink(s) and event(s) to specific cells of the grid. This obviates the need for frequent path modifications and hence maintains a continuous flow of data while minimizing network energy consumption. Simulation experiments show significant improvements in network energy savings and in the average delay for a packet to reach the sink.
Abstract: In today's technological era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) for computation. Although parallelization speeds up computation, the time required by many applications can still be considerable. Thus, the reliability of the cluster becomes a very important issue, and implementation of a fault-tolerance mechanism becomes essential. The difficulty of designing a fault-tolerant cluster system increases with the variety of possible failures. The essential requirement is that an algorithm which handles a simple failure in a system must also tolerate more severe failures. In this paper, we implement the watchdog-timer concept in a parallel environment to take care of failures. Implementing this simple algorithm in our project helps us take care of different types of failures; consequently, we found that the reliability of the cluster improves.
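The watchdog-timer concept can be sketched minimally: a monitored worker must send a heartbeat within a timeout, otherwise a failure handler fires (in a cluster, the handler would typically restart the task on another node). This single-process sketch uses threads in place of cluster nodes and is not the paper's implementation.

```python
import threading
import time

class Watchdog:
    """Minimal watchdog timer: the monitored worker must call kick()
    within `timeout` seconds, or on_failure() is invoked."""
    def __init__(self, timeout, on_failure):
        self.timeout, self.on_failure = timeout, on_failure
        self._timer = None

    def kick(self):
        """Heartbeat: cancel the pending timer and arm a fresh one."""
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_failure)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()

failures = []
wd = Watchdog(0.2, lambda: failures.append("worker lost"))
wd.kick()
time.sleep(0.1); wd.kick()  # heartbeat arrives in time: no failure
time.sleep(0.5)             # heartbeat missed: handler fires once
wd.stop()
```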
Abstract: With the fast evolution of digital data exchange, information security becomes much more important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. In this paper, we analyze the Advanced Encryption Standard (AES), and we add a key stream generator (A5/1, W7) to AES to improve the encryption performance, mainly for images characterised by reduced entropy. Both techniques have been implemented for experimental purposes. Detailed results in terms of security analysis and implementation are given. A comparative study with traditional encryption algorithms shows the superiority of the modified algorithm.
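The keystream-layer idea can be illustrated with a toy 16-bit Fibonacci LFSR standing in for the A5/1 or W7 generators (both are LFSR-based stream ciphers); the AES stage itself is omitted and its ciphertext represented by plain bytes. The tap positions and seed below are the textbook 16-bit example, not the paper's parameters.

```python
def lfsr_bytes(seed, n):
    """Toy 16-bit Fibonacci LFSR (taps at bits 0, 2, 3, 5), emitting one
    keystream byte per 8 shifts. A stand-in for A5/1 / W7."""
    state = seed & 0xFFFF
    out = bytearray()
    for _ in range(n):
        byte = 0
        for _ in range(8):
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            byte = (byte << 1) | (state & 1)
        out.append(byte)
    return bytes(out)

def keystream_layer(data, seed=0xACE1):
    """XOR the data with the LFSR keystream; applying it twice is the
    identity, so the same function encrypts and decrypts this layer."""
    ks = lfsr_bytes(seed, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ct = b"block-cipher output stands in for AES ciphertext"
masked = keystream_layer(ct)
assert keystream_layer(masked) == ct  # the XOR layer is an involution
```

The extra XOR layer whitens low-entropy inputs such as images before or after the block-cipher stage, which is the effect the paper targets.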
Abstract: In this paper, an improved technique for contingency ranking using an artificial neural network (ANN) is presented. The proposed approach applies multi-layer perceptrons trained by backpropagation to contingency analysis. Severity indices for dynamic stability assessment are presented; these indices are based on the concept of coherency and on three dot products of the system variables. It is well known that some indices work better than others for a particular power system. This paper, along with test results on several different systems, demonstrates that a combination of indices with an ANN provides better ranking than a single index. The presented results are obtained through the use of the power system simulator (PSS/E) and MATLAB 6.5 software.
Abstract: In Grid computing, a data transfer protocol called GridFTP has been widely used for efficiently transferring large volumes of data. Currently, two versions of the GridFTP protocol, GridFTP version 1 (GridFTP v1) and GridFTP version 2 (GridFTP v2), have been proposed in the GGF. GridFTP v2 supports several advanced features, such as data streaming, dynamic resource allocation, and checksum transfer, by defining a transfer mode called X-block mode. However, the effectiveness of GridFTP v2 has not been fully investigated in the literature. In this paper, we therefore quantitatively evaluate the performance of GridFTP v1 and GridFTP v2 using mathematical analysis and simulation experiments. We reveal the performance limitations of GridFTP v1 and quantitatively show the effectiveness of GridFTP v2. Through several numerical examples, we show that by utilizing the data streaming feature, the average file transfer time of GridFTP v2 is significantly smaller than that of GridFTP v1.