Abstract: The wavelet transform provides several important characteristics that can be used in texture analysis and classification. In this work, an efficient texture classification method that combines concepts from wavelets and co-occurrence matrices is presented. A Euclidean distance classifier is used to evaluate the various classification methods. A comparative study is essential to determine the ideal method. Building on this comparison, we developed a novel feature set for texture classification and demonstrate its effectiveness.
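As a minimal illustration of this kind of pipeline, the sketch below derives co-occurrence features from wavelet detail subbands and classifies with a nearest-centroid Euclidean rule. It assumes the PyWavelets and scikit-image libraries, and the specific wavelet, quantization, and GLCM properties are illustrative choices, not the authors' exact configuration.

import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def wavelet_cooccurrence_features(img):
    """One-level DWT; GLCM contrast/energy from each detail subband."""
    _, (cH, cV, cD) = pywt.dwt2(img.astype(float), "db2")
    feats = []
    for band in (cH, cV, cD):
        # quantize the subband to 8 bits before building the co-occurrence matrix
        q = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-9))
        glcm = graycomatrix(q, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        feats += [graycoprops(glcm, "contrast")[0, 0],
                  graycoprops(glcm, "energy")[0, 0]]
    return np.array(feats)

def nearest_centroid(x, centroids):
    """Euclidean distance classifier: pick the class with the closest mean."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))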
Abstract: The Random Oracle Model (ROM) is an effective method for assessing the practical security of cryptographic schemes. In this paper, we apply it to information hiding systems (IHS). Because an IHS has properties of its own, the ROM must be modified before it can be applied. First, we discuss in detail why and how each part of the ROM must be modified. The main changes are: 1) dividing the attacks an IHS may suffer into two phases, and the attacks of each phase into several kinds; 2) clearly distinguishing oracles from black-boxes; 3) defining the oracle and the four black-boxes used by an IHS; 4) proposing a formalized adversary model; and 5) giving the definition of a judge. Second, based on the ROM for IHS, security against the known original cover attack (KOCA-security) is defined. We then give a concrete information hiding scheme and prove that it is KOCA-secure. Finally, we conclude the paper and propose open problems for further research.
Abstract: The main mission of Ezilla is to provide a friendly interface for accessing virtual machines and quickly deploying a high-performance computing environment. Ezilla was developed by the Pervasive Computing Team at the National Center for High-performance Computing (NCHC). It integrates Cloud middleware, virtualization technology, and a Web-based Operating System (WebOS) to form a virtual computer in a distributed computing environment. To scale up dataset handling and improve speed, we propose a sensor observation system that copes with huge amounts of data in a Cassandra database. The sensor observation system builds on Ezilla to store raw sensor data in a distributed database. We use the Ezilla Cloud service to create virtual machines and log in to them to deploy the sensor observation system. Integrating the sensor observation system with Ezilla makes it possible to deploy an experimental environment quickly and to access huge amounts of data through a distributed database whose replication mechanism protects the data.
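As a hedged sketch of the storage step, the snippet below shows how raw sensor readings might be written to Cassandra with the DataStax Python driver. The keyspace, table layout, seed node address, and replication factor are illustrative assumptions, not details given in the paper.

from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])          # assumed seed node inside the VM cluster
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS sensors
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("sensors")
session.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        sensor_id text, ts timestamp, value double,
        PRIMARY KEY (sensor_id, ts)
    )
""")
insert = session.prepare(
    "INSERT INTO readings (sensor_id, ts, value) VALUES (?, ?, ?)")

def store(sensor_id, ts, value):
    """Write one raw sensor reading; replication is handled by Cassandra."""
    session.execute(insert, (sensor_id, ts, value))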
Abstract: Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow each data element to be revisited. Furthermore, data processing must be fast enough to produce timely analysis results. These requirements impose constraints on the design of the algorithms, which must balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the stream is both statistically transformed to a smaller size and its characteristics computationally approximated. We adopt a Monte Carlo method in the approximation step. Data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is evaluated through a series of experiments. We apply our algorithm to clustering and classification tasks to assess the utility of our approach.
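For illustration, the sketch below combines the two generic ingredients named above: sampling-based horizontal reduction (classical one-pass reservoir sampling here, not the authors' EMR method) and a Monte Carlo estimate computed from the retained sample.

import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of size k from a single-pass stream."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = random.randint(0, i)   # keep x with probability k/(i+1)
            if j < k:
                sample[j] = x
    return sample

sample = reservoir_sample(range(10**6), 1000)
estimate = sum(sample) / len(sample)   # Monte Carlo estimate of the stream mean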
Abstract: Business rules and data warehouses are concepts and technologies that impact a wide variety of organizational tasks. In general, each area has evolved independently, affecting application development and decision-making. Generating knowledge from a data warehouse is a complex process. This paper outlines an approach to ease the import of information and knowledge from a data warehouse star schema through an inference class of business rules. The paper uses the Oracle database to illustrate how these concepts work. The star schema structure and the business rules are stored within a relational database. The approach is explained through a prototype in Oracle's PL/SQL Server Pages.
Abstract: This paper proposes a novel solution for optimizing the size and communication overhead of a distributed multiagent system without compromising performance. The proposed approach addresses the challenge of scalability, especially when the multiagent system is large. A modified spectral clustering technique is used to partition a large network into logically related clusters, and agents are assigned to monitor dedicated clusters rather than each device or node. The proposed scalable multiagent system is implemented using JADE (Java Agent DEvelopment Framework) for a large power system. The performance of the proposed topology-independent decentralized multiagent system and the scalable multiagent system is compared by comprehensively simulating different fault scenarios. The time taken for reconfiguration, the overall computational complexity, and the communication overhead incurred are computed. The results of these simulations show that the proposed scalable multiagent system uses fewer agents efficiently, makes faster decisions to reconfigure when a fault occurs, and incurs significantly less communication overhead.
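Since the paper's modification to spectral clustering is not detailed in the abstract, the sketch below shows only the standard normalized-Laplacian variant that the partitioning step builds on, using NumPy and SciPy.

import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_partition(A, k):
    """Partition a graph with symmetric adjacency matrix A into k clusters."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv           # normalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                    # eigenvalues in ascending order
    U = vecs[:, :k]                                # k smallest eigenvectors
    rows = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    _, labels = kmeans2(rows, k, minit="++", seed=0)
    return labels                                  # one monitoring agent per cluster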
Abstract: In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the circular Hough transform (CHT), and Bresenham's raster scan algorithm. The approach uses the fact that the large and small eigenvalues of a covariance matrix are associated with the major and minor axial lengths of an ellipse. The centre location of the ellipse is identified using the CHT, which is implemented with a sparse matrix technique: since sparse matrices squeeze out zero elements and store only a small number of nonzero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of circumference pixels are identified using a raster scan algorithm that exploits the geometrical symmetry property. This method does not require evaluating tangents or the curvature of edge contours, which are generally very sensitive to noise. The proposed method has the advantages of small storage, high speed, and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, along with comparisons with the Hough transform, its variants, and other tangent-based methods, are reported.
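As a small illustration of the covariance step, the sketch below estimates the orientation and axial lengths of an ellipse from the eigendecomposition of the covariance of its pixel coordinates. The scale factor 2*sqrt(lambda) is exact for a uniformly filled ellipse and is an assumption here; edge-only pixels would need a different factor.

import numpy as np

def ellipse_axes(points):
    """points: (N, 2) array of (x, y) pixel coordinates of the ellipse region."""
    cov = np.cov(points.T)
    eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
    minor, major = 2 * np.sqrt(eigvals)                # semi-axis estimates
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])   # major-axis direction
    return major, minor, angle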
Abstract: We propose a formal framework for specifying the behavior of a system of agents, as well as the behaviors of the constituent agents. This framework allows us to model each agent's effectoric capability, including its interactions with the other agents. We also provide an algorithm based on Milner's "observation equivalence" to derive an agent's perception of its task domain situations from its effectoric capability, and we use "system computations" to model the coordinated efforts of the agents in the system. Formal definitions of the "behavior equivalence" of two agents and of system computation equivalence for an agent are also provided.
Abstract: In this study, an OCR system for the segmentation, feature extraction, and recognition of Ottoman scripts has been developed using handwritten characters. Recognizing handwritten characters is a difficult process. The segmentation and feature extraction stages are based on geometrical feature analysis, followed by a chain code transformation of the main strokes of each character. The output of segmentation is a set of well-defined segments that can be fed into any classification approach. The classes of main strokes are identified through a left-right Hidden Markov Model (HMM).
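As a minimal sketch of the chain-code step, the snippet below converts an ordered stroke pixel path into 8-directional Freeman codes, which could then serve as HMM observation symbols. The direction numbering is the usual convention and an assumption here.

# 8-neighbourhood offsets indexed by Freeman code 0..7 (0 = east, counterclockwise)
OFFSETS = [(1, 0), (1, -1), (0, -1), (-1, -1),
           (-1, 0), (-1, 1), (0, 1), (1, 1)]
CODE = {off: c for c, off in enumerate(OFFSETS)}

def chain_code(path):
    """path: ordered list of (x, y) pixels along one 8-connected stroke."""
    return [CODE[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(path, path[1:])]

print(chain_code([(0, 0), (1, 0), (2, 1), (2, 2)]))  # [0, 7, 6]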
Abstract: Over the past decade, mobile technology has experienced a revolution that will ultimately change the way we communicate. All these technologies share a common denominator: the exploitation of computer information systems. Their operation can, however, be tedious because of problems with heterogeneous data sources. To overcome these problems, we propose adding an extra layer that interfaces management and supervision applications with the different data sources. This layer is materialized by implementing a mediator between the various host applications and the information systems, which frequently organize data in hierarchical and relational fashion, so that the heterogeneity is completely transparent to the VoIP platform.
Abstract: The quest to provide a more secure identification system has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric that has lately attracted the attention of many researchers. Different approaches have been used to extract vein patterns and match them. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
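A minimal eigenfaces-style sketch of the eigenvein idea is given below, assuming flattened, preprocessed vein images. The projection dimension and the use of Euclidean distance with a rejection threshold are illustrative assumptions; the abstract's 0.9 threshold presumably applies to a normalized matching score.

import numpy as np

def train_pca(images, n_components):
    """images: (N, H*W) matrix of flattened, preprocessed vein patterns."""
    mean = images.mean(axis=0)
    X = images - mean
    # SVD of the centered data; rows of Vt are the eigenvein directions
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    eigenveins = Vt[:n_components]
    return mean, eigenveins, X @ eigenveins.T          # gallery projections

def match(probe, mean, eigenveins, gallery_proj, threshold):
    """Nearest gallery projection; reject if the distance exceeds the threshold."""
    p = (probe - mean) @ eigenveins.T
    dists = np.linalg.norm(gallery_proj - p, axis=1)
    best = int(dists.argmin())
    return best if dists[best] < threshold else None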
Abstract: Finding minimal forms of logical functions has important applications in the design of logical circuits. This task is solved by many different methods, but, frequently, they are not suitable for computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a deterministic procedure of computation and thus can be simply implemented, but, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable method for finding an optimum for logical functions with a high number of variables, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem, which, unfortunately, is an NP-hard combinatorial problem. Therefore it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
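To make the covering step concrete, the sketch below solves the set covering instance left after Quine-McCluskey with the classical greedy approximation, one of the heuristic alternatives mentioned above and simpler than the genetic algorithm the paper develops. The example is the standard cyclic instance f(a,b,c) = sum of minterms (0,1,2,5,6,7).

def greedy_cover(minterms, implicants):
    """implicants: dict mapping implicant name -> set of covered minterms."""
    uncovered, chosen = set(minterms), []
    while uncovered:
        # pick the implicant covering the most still-uncovered minterms
        best = max(implicants, key=lambda i: len(implicants[i] & uncovered))
        if not implicants[best] & uncovered:
            raise ValueError("minterms not coverable by the given implicants")
        chosen.append(best)
        uncovered -= implicants[best]
    return chosen

primes = {"a'b'": {0, 1}, "b'c": {1, 5}, "a'c'": {0, 2},
          "bc'": {2, 6}, "ac": {5, 7}, "ab": {6, 7}}
print(greedy_cover({0, 1, 2, 5, 6, 7}, primes))   # e.g. ["a'b'", "bc'", "ac"]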
Abstract: Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods. It adopts the strategy of using balls defined in the feature space to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach for selecting the kernel parameter based on maximizing the distance between the gravity centers of the normal and abnormal classes while at the same time minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks, and the experimental results demonstrate the feasibility and effectiveness of the presented method.
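The criterion can be evaluated from kernel values alone; the sketch below scores a grid of RBF widths by the squared distance between the class centers in feature space minus the within-class variances. The plain difference used here, and the RBF kernel itself, are assumptions; the paper's exact weighting may differ.

import numpy as np

def rbf(X, Y, sigma):
    """Gaussian kernel matrix between row-sample matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def score(Xpos, Xneg, sigma):
    Kpp = rbf(Xpos, Xpos, sigma)
    Knn = rbf(Xneg, Xneg, sigma)
    Kpn = rbf(Xpos, Xneg, sigma)
    # ||m+ - m-||^2 in feature space, from kernel means only
    center_dist2 = Kpp.mean() + Knn.mean() - 2 * Kpn.mean()
    # within-class variance; k(x, x) = 1 for the RBF kernel
    var_within = (1 - Kpp.mean()) + (1 - Knn.mean())
    return center_dist2 - var_within

def select_sigma(Xpos, Xneg, grid):
    return max(grid, key=lambda s: score(Xpos, Xneg, s))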
Abstract: In this paper, a simple active contour based visual tracking algorithm is presented for an outdoor AGV application currently under development at the USM Robotic Research Group (URRG) lab. The presented algorithm is computationally inexpensive, is able to track road boundaries in an image sequence, and can easily be implemented on available low-cost hardware. The proposed algorithm uses active shape modeling with a B-spline deformable template and a recursive curve fitting method to track the current orientation of the road.
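As a sketch of the curve-fitting ingredient, the snippet below fits a smoothing B-spline through assumed road-boundary points with SciPy; the recursive template update of the actual tracker is not reproduced, and the point values and smoothing factor are illustrative.

import numpy as np
from scipy.interpolate import splprep, splev

# assumed boundary points (x, y) detected in one frame
pts = np.array([[10, 230], [40, 180], [80, 140], [130, 110], [190, 95]], float)
tck, _ = splprep([pts[:, 0], pts[:, 1]], s=5.0)   # smoothing cubic B-spline
u = np.linspace(0, 1, 50)
xs, ys = splev(u, tck)   # resampled boundary used as the tracked contour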
Abstract: Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques and algorithms for distributing the processes/load of a parallel program across multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization, and maximizing throughput. Substantive research using queuing analysis, assuming job arrivals follow a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing the load evenly/fairly among the hosts. Algorithms known as load balancing algorithms help achieve these goals. They fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms make task-to-processor assignment decisions at compile time, based on average estimated process execution times and communication delays, dynamic load balancing (DLB) algorithms are adaptive to changing situations and make decisions at run time.
The objective of this paper is to identify qualitative parameters for comparing these algorithms. In the future, this work can be extended into an experimental environment for studying load balancing algorithms quantitatively on the basis of those comparative parameters.
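As a toy illustration of the static/dynamic distinction, the sketch below compares a round-robin assignment fixed in advance with a run-time least-loaded policy on randomly sized jobs, measured by makespan. All job sizes and host counts are arbitrary illustrative values.

import random

random.seed(0)
jobs = [random.expovariate(1.0) for _ in range(1000)]   # exponential job sizes
HOSTS = 4

def makespan(assign):
    """Total load on the busiest host under a given assignment policy."""
    load = [0.0] * HOSTS
    for i, job in enumerate(jobs):
        load[assign(i, load)] += job
    return max(load)

static_rr = lambda i, load: i % HOSTS              # decided before execution
dynamic_ll = lambda i, load: load.index(min(load)) # decided at run time

print("static round-robin  :", makespan(static_rr))
print("dynamic least-loaded:", makespan(dynamic_ll))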
Abstract: In the present work, we propose a new technique to enhance the learning capabilities and reduce the computational cost of a competitive learning multi-layered neural network using the K-means clustering algorithm. The proposed model uses a multi-layered network architecture with a back-propagation learning mechanism. The K-means algorithm is first applied to the training dataset to reduce the number of samples presented to the neural network by automatically selecting an optimal set of samples. The results obtained demonstrate that, when applied to the KDD99 dataset, the proposed technique performs exceptionally well in terms of both accuracy and computation time compared with a standard learning scheme that uses the full dataset.
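A minimal sketch of the reduction step is given below: K-means is run on the training set and one representative per cluster is kept as the reduced set fed to the network. The cluster count, and the choice of keeping the real sample nearest each centroid rather than the raw centroid, are assumptions.

import numpy as np
from scipy.cluster.vq import kmeans2

def reduce_training_set(X, n_clusters):
    """X: (N, d) training samples; returns one representative per cluster."""
    centroids, labels = kmeans2(X, n_clusters, minit="++", seed=0)
    reps = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        if len(members):
            d = np.linalg.norm(X[members] - centroids[c], axis=1)
            reps.append(members[d.argmin()])   # nearest real sample to centroid
    return X[np.array(reps)]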
Abstract: The Least Significant Bit (LSB) technique is the earliest watermarking technique, and it is also the simplest, most direct, and most common. It essentially embeds the watermark by replacing the least significant bit of the image data with a bit of the watermark data. The disadvantage of LSB is that it is not robust against attacks. In this study, the intermediate significant bit (ISB) is used to improve the robustness of the watermarking system. The aim of this model is to replace the watermarked image pixels with new pixels that protect the watermark data against attacks while keeping the new pixels very close to the original pixels, in order to preserve the quality of the watermarked image. The technique is based on testing the value of the watermark pixel according to the range of each bit-plane.
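The LSB baseline that the ISB variant improves upon is easy to state in code; the sketch below overwrites bit-plane 0 of a uint8 cover image with watermark bits. The ISB scheme would instead target a middle bit-plane and adjust pixels toward their original values, which is not reproduced here.

import numpy as np

def embed_lsb(cover, bits):
    """cover: uint8 image; bits: flat array of 0/1 watermark bits."""
    stego = cover.copy().ravel()
    b = np.asarray(bits, dtype=np.uint8)
    stego[:b.size] = (stego[:b.size] & 0xFE) | b   # clear bit 0, then set it
    return stego.reshape(cover.shape)

def extract_lsb(stego, n):
    """Recover the first n embedded watermark bits."""
    return stego.ravel()[:n] & 1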
Abstract: Scheduling diversified service requests in distributed computing is a critical design issue. A Cloud is a type of parallel and distributed system consisting of a collection of interconnected virtual computers; it comprises not only clusters and grids but also next-generation data centers. This paper proposes an initial heuristic algorithm that applies a modified ant colony optimization approach to the diversified service allocation and scheduling mechanism in the cloud paradigm. The proposed optimization method aims to minimize the scheduling time needed to service all the diversified requests across the different resource allocators available in the cloud computing environment.
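A minimal ant-colony sketch for mapping requests to resources is shown below; the pheromone update rule, the load-based heuristic, and all parameters are textbook choices standing in for the paper's modified variant.

import random

def aco_schedule(times, n_vms, ants=20, iters=50, rho=0.1):
    """times[i]: execution time of request i; returns best found assignment."""
    n = len(times)
    tau = [[1.0] * n_vms for _ in range(n)]           # pheromone trails
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load, assign = [0.0] * n_vms, []
            for i in range(n):
                # choice probability ~ pheromone * heuristic (prefer light load)
                w = [tau[i][v] / (1.0 + load[v]) for v in range(n_vms)]
                v = random.choices(range(n_vms), weights=w)[0]
                assign.append(v)
                load[v] += times[i]
            span = max(load)                          # makespan of this ant
            if span < best_span:
                best, best_span = assign, span
        for i in range(n):                            # evaporate, then reinforce
            for v in range(n_vms):
                tau[i][v] *= (1 - rho)
            tau[i][best[i]] += 1.0 / best_span
    return best, best_span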
Abstract: This paper explores the university course timetabling problem. Several characteristics make scheduling and timetabling problems particularly difficult to solve: they have huge search spaces, they are often highly constrained, they require sophisticated solution representation schemes, and they usually require very time-consuming fitness evaluation routines. Standard evolutionary algorithms therefore lack the efficiency to deal with them. In this paper we propose a memetic algorithm that incorporates problem-specific knowledge so that most of the chromosomes generated are decoded into feasible solutions. Generating a vast number of feasible chromosomes allows the search process to progress in a time-efficient manner. Experimental results exhibit the advantages of the developed hybrid genetic algorithm over the standard genetic algorithm.
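The generic memetic loop behind such a hybrid is sketched below: a genetic algorithm whose offspring are refined by local search before reinsertion. The operators and the feasibility-preserving decoder are problem-specific and only passed in as stubs here; fitness is assumed to be minimized.

import random

def memetic(pop, fitness, crossover, mutate, local_search,
            generations=100, elite=2):
    """Generic memetic loop; all operators are problem-specific callables."""
    for _ in range(generations):
        pop.sort(key=fitness)                   # ascending: best first
        nxt = pop[:elite]                       # keep the elite unchanged
        while len(nxt) < len(pop):
            a, b = random.sample(pop[:len(pop) // 2], 2)  # fitter-half parents
            child = mutate(crossover(a, b))
            nxt.append(local_search(child))     # the "memetic" refinement step
        pop = nxt
    return min(pop, key=fitness)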
Abstract: This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without error all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it uses graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for segmenting a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
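The paper's linear segmentation algorithm is not specified in the abstract; below is a plausible stand-in, a greedy longest-match split against a known grapheme inventory. The inventory entries are illustrative only.

GRAPHEMES = {"sh", "ch", "th", "ough", "ph"}   # assumed multi-letter units
MAX_LEN = 4

def segment(word):
    """Split a word into graphemes, preferring the longest inventory match."""
    out, i = [], 0
    while i < len(word):
        for L in range(min(MAX_LEN, len(word) - i), 1, -1):
            if word[i:i + L] in GRAPHEMES:
                out.append(word[i:i + L])
                i += L
                break
        else:                     # no multi-letter grapheme: emit single letter
            out.append(word[i])
            i += 1
    return out

print(segment("through"))   # ['th', 'r', 'ough']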