Abstract: A serious problem on the WWW is finding reliable
information. Not everything found on the Web is true, and the
Semantic Web does not change that in any way. The problem is
even more acute for the Semantic Web, where agents will be
integrating and using information from multiple sources, so a
single faulty source can inject an incorrect premise that
invalidates any conclusions drawn. Statements published on the
Semantic Web therefore have to be seen as claims rather than as
facts, and there should be a way to decide which among many
possibly inconsistent sources is most reliable. In this work, we
propose a trust model for the Semantic Web. The proposed model is
inspired by the use of trust in human society. Trust is a type of
social knowledge and encodes evaluations about which agents can
be taken as reliable sources of information or services. Our
proposed model allows agents to decide which among different
sources of information to trust and thus act rationally on the
Semantic Web.
Abstract: The drastic increase in the usage of SMS technology
has led service providers to seek a solution that enables users
of mobile devices to access services through SMSs. This has
resulted in proposals for SMS-based service invocation in
service-oriented environments. However, the dynamic nature of
service-oriented environments, coupled with sudden load peaks
generated by service requests, poses performance challenges to
infrastructures supporting SMS-based service invocation. To
address this problem we adopt load balancing techniques. A load
balancing model with adaptive load balancing and load monitoring
mechanisms as its key constructs is proposed. The model led to
the realization of the Least Loaded Load Balancing Framework
(LLLBF). Evaluation of LLLBF benchmarked against the round robin
(RR) scheme using a queuing approach showed that LLLBF
outperformed RR in terms of response time and throughput.
However, LLLBF achieved this better result at the cost of higher
processing power.
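The two dispatch policies compared in the abstract can be sketched as follows. The class names and the queue-length load model are illustrative assumptions, not the paper's actual LLLBF implementation:

```python
class RoundRobinBalancer:
    """Cycle through servers in fixed order, ignoring load."""
    def __init__(self, servers):
        self.servers = servers
        self.next = 0

    def pick(self, loads):
        server = self.servers[self.next % len(self.servers)]
        self.next += 1
        return server

class LeastLoadedBalancer:
    """Dispatch to the server whose monitored queue is smallest."""
    def pick(self, loads):
        return min(loads, key=loads.get)

loads = {"s1": 7, "s2": 2, "s3": 5}
rr = RoundRobinBalancer(list(loads))
ll = LeastLoadedBalancer()
print(rr.pick(loads))  # s1 (ignores load)
print(ll.pick(loads))  # s2 (smallest queue)
```

The least-loaded policy needs the extra load-monitoring step on every request, which is consistent with the abstract's observation that LLLBF pays for its better response time with higher processing power.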
Abstract: In this paper, we propose the use of convolutional
codes for file dispersal. The proposed method is comparable in
complexity to the Information Dispersal Algorithm proposed by M. Rabin and, for
particular choices of (non-binary) convolutional codes, is almost as
efficient as that algorithm in terms of controlling expansion in the
total storage. Further, our proposed dispersal method allows string
search.
Abstract: Deoxyribonucleic acid (DNA) computing has emerged as
an interdisciplinary field that draws together chemistry,
molecular biology, computer science and mathematics. In this
paper, the possibility of using DNA-based computing to solve an
absolute 1-center problem by molecular manipulations is
presented. This is, to our knowledge, the first attempt to solve
such a problem by a DNA-based computing approach. Since part of
the procedure involves shortest-path computation, research works
on DNA computing for the shortest path problem and the Traveling
Salesman Problem (TSP) are reviewed. These approaches are studied
and the most appropriate one is adapted in designing the
computation procedures. The DNA-based computation is designed in
such a way that every path is encoded by oligonucleotides and the
path's length is directly proportional to the length of the
oligonucleotides. Using these properties, gel electrophoresis is
performed in order to separate the respective DNA molecules
according to their length. One expectation arising from this
paper is that it is possible to verify an instance of the
absolute 1-center problem by DNA computing in laboratory
experiments.
Abstract: In any trust model, the two information sources that a peer relies on to predict the trustworthiness of another peer are direct experience and reputation. These two vital components evolve over time. Trust evolution is an important issue, where the objective is to observe a sequence of past values of a trust parameter and determine the future estimates. Unfortunately, trust evolution algorithms have received little attention, and the algorithms proposed in the literature do not comply with the conditions and the nature of trust. This paper contributes to this important problem in the following ways: (a) it presents an algorithm that manages and models trust evolution in a P2P environment, (b) it devises new mechanisms for effectively maintaining trust values based on the conditions that influence trust evolution, and (c) it introduces a new methodology for incorporating trust-nurture incentives into the trust evolution algorithm. Simulation experiments are carried out to evaluate our trust evolution algorithm.
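A minimal sketch of one trust-evolution step that combines the two information sources named above. The exponential-moving-average form and the weights `alpha` and `beta` are illustrative assumptions, not the paper's algorithm:

```python
def update_trust(old_trust, outcome, reputation,
                 alpha=0.3, beta=0.2):
    """One hypothetical trust-evolution step.

    old_trust  -- previous trust estimate in [0, 1]
    outcome    -- 1.0 for a satisfactory interaction, 0.0 otherwise
    reputation -- aggregated third-party opinion in [0, 1]
    alpha/beta -- learning weights (illustrative assumptions)
    """
    # Direct experience evolves as an exponential moving average,
    # so recent interactions outweigh old ones.
    direct = (1 - alpha) * old_trust + alpha * outcome
    # Blend in the reputation component with weight beta.
    return (1 - beta) * direct + beta * reputation

print(round(update_trust(0.5, 1.0, 0.8), 2))  # 0.68
```

The moving-average form keeps the estimate in [0, 1] and lets trust build slowly while dropping quickly after bad outcomes when `alpha` is tuned asymmetrically, one of the conditions on trust evolution the abstract alludes to.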
Abstract: Stipples are desired for pattern fillings and
transparency effects. However, some graphics standards, including
OpenGL ES 1.1 and 2.0, omit this feature. We present details of
providing line stipples and polygon stipples by combining
texture mapping and alpha blending functions. We start from the
OpenGL-specified stipple-related API functions. The details of
the mathematical transformations needed to obtain the correct
texture coordinates are explained. Then, the overall algorithm is
presented, followed by its implementation results. We
accomplished both line and polygon stipples, and verified the
results with conformance test routines.
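The core idea can be sketched by expanding the 16-bit line-stipple pattern (as passed to OpenGL's `glLineStipple`) into a one-dimensional alpha texture, so that texture mapping plus alpha blending reproduces stippling on OpenGL ES, which lacks the fixed-function call. The helper below is an illustrative assumption, not the paper's implementation:

```python
def stipple_to_alpha_texture(pattern, factor=1):
    """Expand a 16-bit stipple pattern into a 1D alpha texture.

    pattern -- 16-bit int; bit 0 is applied to the first fragment,
               as in glLineStipple.
    factor  -- stretch each bit over `factor` texels.
    """
    texels = []
    for bit in range(16):
        alpha = 255 if (pattern >> bit) & 1 else 0
        texels.extend([alpha] * factor)  # repeat each bit `factor` times
    return texels

# 0x00FF: first 8 fragments opaque, next 8 fully transparent.
tex = stipple_to_alpha_texture(0x00FF)
print(tex[:4], tex[12:16])  # [255, 255, 255, 255] [0, 0, 0, 0]
```

Rendering would then map this texture along the line with a texture coordinate proportional to the accumulated fragment count, and discard or blend fragments whose sampled alpha is zero.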
Abstract: In recent years, new product development has become increasingly competitive and globalized, and the design phase is critical for product success. The concept of modularity can provide the necessary foundation for organizations to design products that can respond rapidly to market needs. The paper describes the data structures and algorithms of an intelligent Web-based system for modular design that takes into account module compatibility relationships and given design requirements. The system's intelligence is realized by algorithms developed for the choice of modules reflecting all system restrictions and requirements. The proposed data structures and algorithms are illustrated by a case study of personal computer configuration. The applicability of the proposed approach is tested through a prototype of the Web-based system.
Abstract: Program slicing is the task of finding all statements in
a program that directly or indirectly influence the value of a variable
occurrence. The set of statements that can affect the value of a
variable at some point in a program is called a program backward
slice. In several software engineering applications, such as program
debugging and measuring program cohesion and parallelism, several
slices are computed at different program points. Existing
algorithms for computing program slices are designed to compute a
slice at a single program point. In these algorithms, the
program, or the model that represents the program, is traversed
completely or partially once. To compute more than one slice, the
same algorithm is applied for every point of interest in the
program, so the same program, or program representation, is
traversed several times.
In this paper, an algorithm is introduced to compute all forward
static slices of a computer program by traversing the program
representation graph once. Therefore, the introduced algorithm is
useful for software engineering applications that require computing
program slices at different points of a program. The program
representation graph used in this paper is called Program Dependence
Graph (PDG).
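A single slice on a PDG reduces to graph reachability: starting from the slicing point, follow dependence edges transitively. The toy graph and edge direction below (each statement points to the statements it depends on) are illustrative assumptions, not the paper's algorithm for computing all slices in one traversal:

```python
from collections import deque

def backward_slice(pdg, point):
    """Return every statement `point` transitively depends on.

    pdg -- dict mapping a statement to the statements it depends on
           (data or control dependence edges of the PDG).
    """
    seen, work = {point}, deque([point])
    while work:
        node = work.popleft()
        for dep in pdg.get(node, ()):   # follow dependence edges
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

pdg = {               # made-up example: s4 uses s2 and s3, etc.
    "s4": ["s2", "s3"],
    "s3": ["s1"],
    "s2": ["s1"],
}
print(sorted(backward_slice(pdg, "s4")))  # ['s1', 's2', 's3', 's4']
```

Calling this once per point of interest re-traverses the graph each time, which is exactly the repeated-traversal cost the paper's single-pass algorithm is designed to avoid.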
Abstract: The paper explores the development of an optimized method and apparatus for retrieving an extended high dynamic range from a digital negative image. Architectural photo imaging can benefit from the high dynamic range imaging (HDRI) technique for preserving and presenting sufficient luminance in shadow and highlight-clipped image areas. The HDRI technique that requires multiple exposure images as the source of HDRI rendering may not be effective in terms of time efficiency during the acquisition process and the post-processing stage, considering its numerous potential imaging variables and technical limitations during the multiple exposure process. This paper explores an experimental method and apparatus that aims to expand the dynamic range of a digital negative image in an HDRI environment. The method and apparatus explored are based on a single RAW image acquisition for use in HDRI post-processing. They cater for optimization in order to avoid or minimize the conventional HDRI photographic errors caused by differing physical conditions during the photographing process and by the misalignment of multiple exposed image sequences. The study observes the characteristics and capabilities of the RAW image format as a digital negative used for retrieving an extended high dynamic range in an HDRI environment.
Abstract: The image segmentation method described in this
paper has been developed as a pre-processing stage to be used in
methodologies and tools for video/image indexing and retrieval by
content. This method solves the problem of extracting whole
objects from the background and produces images of single
complete objects from videos or photos. The extracted images are
used for calculating
the object visual features necessary for both indexing and retrieval
processes.
The segmentation algorithm is based on the cooperation among an
optical flow evaluation method, edge detection and region growing
procedures. The optical flow estimator belongs to the class of
differential methods. It can detect motions ranging from a
fraction of a pixel to a few pixels per frame, achieving good
results in the presence of noise without the need for a filtering
pre-processing stage, and it includes a specialised model for
moving object detection.
The first task of the presented method exploits the cues from
motion analysis for moving areas detection. Objects and background
are then refined using respectively edge detection and seeded region
growing procedures. All the tasks are iteratively performed until
objects and background are completely resolved.
The method has been applied to a variety of indoor and outdoor
scenes where objects of different types and shapes are
represented on variously textured backgrounds.
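The seeded region growing step used to refine objects and background can be sketched on a tiny grayscale grid. The 4-connectivity, the running-mean homogeneity test, and the threshold are illustrative assumptions, not the paper's procedure:

```python
from collections import deque

def region_grow(img, seed, thresh=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours
    whose intensity stays within `thresh` of the region mean."""
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]     # running sum for the mean
    work = deque([seed])
    while work:
        r, c = work.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                if abs(img[nr][nc] - total / len(region)) <= thresh:
                    region.add((nr, nc))
                    total += img[nr][nc]
                    work.append((nr, nc))
    return region

img = [[10, 12, 90],    # left two columns: dark background
       [11, 13, 95],    # right column: bright object
       [10, 14, 92]]
print(sorted(region_grow(img, (0, 0))))  # left two columns only
```

In the full method such growing runs iteratively, with seeds placed from the motion-analysis and edge-detection cues, until objects and background are completely resolved.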
Abstract: The various types of frequent pattern discovery
problems, namely the frequent itemset, sequence and graph mining
problems, are solved in different ways which are, however,
similar in certain aspects. The main approaches to discovering
such patterns can be classified into two classes: level-wise
methods and database projection-based methods. Level-wise
algorithms generally use clever indexing structures for
discovering the patterns. In this paper a new level-wise approach
is proposed for efficiently discovering frequent sequences and
tree-like patterns. Because level-wise algorithms spend a lot of
time on subpattern testing, the new approach introduces the idea
of using automaton theory to solve this problem.
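The automaton idea for subpattern testing can be sketched for the simplest case, plain (non-gapped) subsequence patterns: the candidate pattern is compiled into a chain automaton whose state is the length of the matched prefix, and feeding a database sequence through it tests containment in one pass. This is a simplification of the paper's approach, for illustration only:

```python
def contains(sequence, pattern):
    """Test whether `pattern` occurs as a subsequence of `sequence`
    by running a chain automaton over the input once."""
    state = 0  # automaton state = length of pattern prefix matched
    for item in sequence:
        if state < len(pattern) and item == pattern[state]:
            state += 1           # transition on a matching item
    return state == len(pattern)  # accepting state reached?

print(contains("abcacb", "acb"))  # True
print(contains("abcacb", "cba"))  # False
```

A level-wise miner would run such an automaton per candidate over each database sequence when counting supports, replacing repeated scan-and-compare subpattern tests.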
Abstract: This paper presents a sensor-based motion planning algorithm for 3-DOF car-like robots with a nonholonomic constraint. Similar to the classic Bug family of algorithms, the proposed algorithm enables the car-like robot to navigate in a completely unknown environment using only range sensor information. The car-like robot uses the local range sensor view to determine the local path so that it moves towards the goal. To guarantee that the robot can approach the goal, two modes of motion, termed motion-to-goal and wall-following, are repeated. The motion-to-goal behavior lets the robot move directly toward the goal, and the wall-following behavior makes the robot circumnavigate the obstacle boundary until it meets the leaving condition. For each behavior, the nonholonomic motion of the car-like robot is planned in terms of the instantaneous turning radius. The proposed algorithm is implemented on a real robot, and the experimental results show the performance of the proposed algorithm.
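The two-mode switching logic can be sketched as a small state machine: motion-to-goal until the range sensor reports the heading blocked, then wall-following until a Bug-style leaving condition holds (here, being closer to the goal than at the point where the boundary was hit). The geometry and the exact leaving condition are illustrative assumptions; the real planner additionally plans each motion through the car's instantaneous turning radius:

```python
from math import hypot

def step_mode(mode, pos, goal, blocked, hit_dist):
    """One decision step of the two-behavior state machine.

    mode     -- "motion-to-goal" or "wall-following"
    pos/goal -- (x, y) tuples
    blocked  -- True if the range sensor sees an obstacle ahead
    hit_dist -- goal distance recorded when the boundary was hit
    Returns the next (mode, hit_dist).
    """
    d_goal = hypot(goal[0] - pos[0], goal[1] - pos[1])
    if mode == "motion-to-goal":
        if blocked:
            # Switch behaviors and remember the hit-point distance.
            return "wall-following", d_goal
        return "motion-to-goal", hit_dist
    # Wall-following: leave once we are closer to the goal than at
    # the hit point and the way ahead is clear.
    if not blocked and d_goal < hit_dist:
        return "motion-to-goal", hit_dist
    return "wall-following", hit_dist

print(step_mode("motion-to-goal", (0.0, 0.0), (10.0, 0.0),
                True, float("inf")))  # ('wall-following', 10.0)
```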
Abstract: In this paper, we introduce a mobile agent framework
with proactive load balancing for ambient intelligence (AmI) environments.
One of the main obstacles of AmI is scalability: the
openness of an AmI environment introduces dynamic resource
requirements on agencies. To mediate this scalability problem, our
framework proposes a load balancing module to proactively analyze
the resource consumption of network bandwidth and preferred agencies
to suggest the optimal communication method to its user. The
framework generally formulates an AmI environment that consists
of three main components: (1) mobile devices, (2) hosts or agencies,
and (3) directory service center (DSC). A preliminary implementation
was conducted with NetLogo and the experimental results show that
the proposed approach provides enhanced system performance by
minimizing the network utilization to provide users with responsive
services.
Abstract: Chinese idioms are a type of traditional Chinese
idiomatic expression with specific meanings and a stereotyped
structure, widely used in classical Chinese and still common in
vernacular written and spoken Chinese today. Currently, Chinese
idioms are retrieved in glossaries by key character or key word
through morphology or pronunciation indexes, which cannot meet
the need for searching semantically. OCIRS is proposed to search
for a desired idiom when users know only its meaning, without any
key character or key word. The user's request, given as a
sentence or phrase, is first analyzed grammatically by word
segmentation, key word extraction and semantic similarity
computation, and can thus be mapped to the idiom domain ontology,
which is constructed to provide ample semantic relations and to
facilitate description logics-based reasoning for idiom
retrieval. The experimental evaluation shows that OCIRS realizes
the function of searching idioms via semantics, achieving
preliminary results as requested by the users.
Abstract: At present, the dictionary attack has been the basic
tool for recovering key passwords. In order to avoid dictionary
attacks, users purposely choose other character strings as
passwords. According to statistics, about 14% of users choose
keys on a keyboard (Kkeys, for short) as passwords. This paper
develops a framework system to attack passwords chosen from
Kkeys and analyzes its efficiency. Within this system, we build
up keyboard rules using the adjacent and parallel relationships
among Kkeys and then use these Kkey rules to generate password
databases by a depth-first search method. According to the
experimental results, we find that the key space of databases
derived from these Kkey rules can be far smaller than the
password databases generated by a brute-force attack, thus
effectively narrowing down the scope of the attack search. Taking
one general Kkey rule, the combinations of all printable
characters (94 types) with Kkey adjacent and parallel
relationships, as an example, the derived key space is about
2^40 times smaller than that of a brute-force attack. In
addition, we demonstrate the method's practicality and value by
successfully cracking access passwords to UNIX and PC systems
using the password databases created.
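Generating candidates from adjacency rules by depth-first search can be sketched as below. The adjacency table covers only a corner of the keyboard and is an illustrative assumption, not the paper's full set of Kkey rules (which also include parallel relationships over all 94 printable characters):

```python
# Partial QWERTY adjacency graph (illustrative subset only).
ADJACENT = {
    "q": ["w", "a"],
    "w": ["q", "e", "s"],
    "e": ["w", "d"],
    "a": ["q", "s", "z"],
    "s": ["w", "a", "d", "z"],
    "d": ["e", "s"],
    "z": ["a", "s"],
}

def kkey_passwords(length):
    """Enumerate all walks of `length` keys along adjacent Kkeys
    by depth-first search, one candidate password per walk."""
    results = []
    def dfs(prefix):
        if len(prefix) == length:
            results.append(prefix)
            return
        for nxt in ADJACENT.get(prefix[-1], []):
            dfs(prefix + nxt)   # extend only along adjacent keys
    for start in ADJACENT:
        dfs(start)
    return results

pw3 = kkey_passwords(3)
print("qaz" in pw3, "qez" in pw3)  # True False
```

Because each position is restricted to the few neighbours of the previous key instead of all 94 printable characters, the candidate space shrinks drastically, which is the source of the key-space reduction reported in the abstract.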
Abstract: Mobile learning (M-learning) integrates mobile
devices and wireless computing technology to enhance the current
conventional learning system. However, there are constraints
affecting the implementation of platform- and device-independent
M-learning. The main aim of this research is to develop a
platform-independent mobile learning tool (M-LT) for a structured
programming course, and to evaluate its effectiveness and
usability using the ADDIE instructional design (ISD) model as the
M-LT life cycle. J2ME (Java 2 Micro Edition) and XML (Extensible
Markup Language) were used to develop the platform-independent
M-LT. It has two modules: lecture materials and quizzes. This
study used a quasi-experimental design to measure the
effectiveness of the tool, while a questionnaire was used to
evaluate its usability. The results show that the tool was
effective, and the usability evaluation was positive.
Abstract: Intrusion detection systems (IDS) are crucial
components of the security mechanisms of today's computer systems.
Existing research on intrusion detection has focused on sequential
intrusions. However, intrusions can also be formed by concurrent
interactions of multiple processes. Some of the intrusions caused
by these interactions cannot be detected using sequential intrusion
detection methods. Therefore, there is a need for a mechanism that
views the distributed system as a whole. L-BIDS (Lattice-Based
Intrusion Detection System) is proposed to address this problem. In
the L-BIDS framework, a library of intrusions and distributed traces
are represented as lattices. Then these lattices are compared in order
to detect intrusions in the distributed traces.
Abstract: Pseudorandom number generators based on linear
feedback shift registers (LFSRs) are fast, simple and efficient
to implement in hardware and software, and are therefore very
popular and widely used. But LFSRs admit fairly easy
cryptanalysis due to their completely linear structure. In this
paper, we propose a stochastic generator, called the Random
Feedback Shift Register (RFSR), which uses a stochastic
transformation (Random block) with one-way and non-linearity
properties.
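For reference, a minimal Fibonacci LFSR, the linear generator the paper starts from. The tap positions below correspond to the standard maximal-length polynomial x^16 + x^14 + x^13 + x^11 + 1, a textbook choice and not necessarily the one used by the authors:

```python
def lfsr16(state, taps=(16, 14, 13, 11), nbits=8):
    """Run a 16-bit Fibonacci LFSR; return (output bits, new state)."""
    out = []
    for _ in range(nbits):
        # Feedback bit = XOR of the tapped state bits. This pure
        # linearity is exactly what makes plain LFSRs weak against
        # algebraic cryptanalysis.
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        out.append(state & 1)              # emit the low bit
        state = (state >> 1) | (fb << 15)  # shift, insert feedback
    return out, state

bits, state = lfsr16(0xACE1)
print(bits)
```

The RFSR's idea, as the abstract describes it, is to replace this linear feedback with a stochastic one-way transformation so that the output no longer satisfies a linear recurrence.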
Abstract: Measurement is the process by which numbers or
symbols are assigned to attributes of entities in the real world
in such a way as to describe them according to clearly defined
rules. Software metrics are instruments for measuring aspects of
a software product. These metrics are used throughout a software
project to assist in estimation, quality control, productivity
assessment, and project control. Object-oriented software metrics
focus on measurements that are applied to classes and other
characteristics, and give the software engineer insight into the
behavior of the software and into how changes can be made that
will reduce complexity and improve the continuing capability of
the software. Object-oriented software metrics can be classified
into two types, static and dynamic. Static metrics are concerned
with measuring by static analysis of the software, whereas
dynamic metrics are concerned with measuring aspects of the
software at run time. Most earlier work focused on static
metrics; some work has also been done on the dynamic nature of
software measurement, but this area demands more research. In
this paper we give a set of dynamic metrics specifically for
polymorphism in object-oriented systems.
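One possible dynamic polymorphism metric can be sketched as follows: instrument a call site and count how many distinct concrete receiver classes are actually dispatched to at run time. The class hierarchy, metric, and instrumentation style are illustrative assumptions, not the paper's definitions:

```python
from collections import defaultdict

class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, s): self.s = s
    def area(self): return self.s * self.s

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r

dispatch_counts = defaultdict(int)

def traced_area(shape):
    # Record which concrete class the dynamic dispatch resolved to.
    dispatch_counts[type(shape).__name__] += 1
    return shape.area()

for s in [Square(2), Circle(1), Square(3)]:
    traced_area(s)

# Degree of dynamic polymorphism observed at this call site:
print(len(dispatch_counts), dict(dispatch_counts))
```

Unlike a static count of the subclasses that *could* receive the call, this run-time count reflects the polymorphism the program actually exercises, which is the distinction between static and dynamic metrics drawn in the abstract.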
Abstract: Computational techniques derived from digital image processing play a significant role in the security and digital copyright protection of multimedia and the visual arts. This research presents a watermarking algorithm based on the discrete M-band wavelet transform (MWT) and the discrete cosine transform (DCT), incorporating principal component analysis (PCA). The proposed algorithm is expected to achieve higher perceptual transparency. Specifically, the developed watermarking scheme can successfully resist common signal processing attacks, such as geometric distortions and Gaussian noise. In addition, the proposed algorithm can be parameterized, resulting in greater security. To meet these requirements, the image is transformed by a combination of MWT and DCT. In order to improve the security further, we randomize the watermark image to create three code books. During watermark embedding, PCA is applied to the coefficients in the approximation sub-band. Finally, the first few component bands represent an excellent domain for inserting the watermark.