Abstract: This paper presents two novel techniques for skew estimation of binary document images. These algorithms are based on connected component analysis and the Hough transform. Both methods focus on reducing the amount of input data provided to the Hough transform. In the first method, referred to as the word centroid approach, the centroids of selected words are used for skew detection. In the second method, referred to as the dilate & thin approach, the selected characters are blocked and dilated to obtain word blocks, and thinning is then applied. The final image fed to the Hough transform contains the thinned coordinates of the word blocks in the image. The methods successfully reduce the computational complexity of Hough-transform-based skew estimation algorithms. Promising experimental results are provided to demonstrate the effectiveness of the proposed methods.
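
As an illustration of the word-centroid idea, here is a minimal sketch (not the authors' implementation; the component-size thresholds and Hough parameters are assumptions):

```python
import cv2
import numpy as np

def estimate_skew(binary):
    """Skew from component centroids fed to the Hough transform (sketch)."""
    # Connected component analysis on the binary document image.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    sparse = np.zeros_like(binary)
    for i in range(1, n):  # label 0 is the background
        # Hypothetical size filter: keep roughly word-sized components.
        if 50 < stats[i, cv2.CC_STAT_AREA] < 5000:
            cx, cy = centroids[i]
            sparse[int(cy), int(cx)] = 255
    # The Hough transform now sees only a handful of centroid pixels.
    lines = cv2.HoughLines(sparse, 1, np.pi / 180, 20)
    if lines is None:
        return 0.0
    theta = lines[0][0][1]           # angle of the dominant line
    return np.degrees(theta) - 90.0  # skew relative to horizontal
```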
Abstract: In this paper we use exponential particle swarm optimization (EPSO) to cluster data. We then compare the EPSO clustering algorithm, which uses an exponentially varying inertia weight, with the particle swarm optimization (PSO) clustering algorithm, which uses a linearly varying inertia weight. The comparison is evaluated on five data sets. The experimental results show that the EPSO clustering algorithm increases the likelihood of finding the optimal positions, as it decreases the number of failures. They also show that the EPSO clustering algorithm has a smaller quantization error than the PSO clustering algorithm, i.e., the EPSO clustering algorithm is more accurate than the PSO clustering algorithm.
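
The core difference described here is the inertia-weight schedule. A minimal sketch of the two schedules, with hypothetical bounds w_max = 0.9 and w_min = 0.4, might look like:

```python
import math

def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Standard PSO: inertia decreases linearly over iterations."""
    return w_max - (w_max - w_min) * t / t_max

def exponential_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """EPSO: inertia decays exponentially toward w_min (one common form)."""
    return w_min + (w_max - w_min) * math.exp(-4.0 * t / t_max)

# In either variant the velocity update is the usual PSO rule:
# v = w(t) * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```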
Abstract: Testing is an activity required in both the development and maintenance phases of the software development life cycle, and integration testing (IT) is an important part of it. Integration testing is based on the specification and functionality of the software and can thus be regarded as a black-box testing technique. Its purpose is to test the interactions between software components. In function or system testing, the concern is with overall behavior: whether the software meets its functional specifications or performance characteristics, or how well the software and hardware work together. This explains the importance and necessity of IT, whose emphasis is on the interactions between modules and their interfaces. Software errors should be discovered early, during IT, to reduce the cost of correction. This paper introduces a new type of integration error, presents an overview of integration testing techniques with a comparison of each technique, and identifies which technique detects which type of error.
Abstract: This paper proposes a method that discovers time series event patterns from textual data with time information. The patterns are composed of sequences of events, where each event is extracted from the textual data and represents characteristic content such as a company name, an action, or a customer's impression. The method introduces seven types of time constraints based on an analysis of the textual data, and evaluates these constraints when the frequency of a time series event pattern is calculated. Time constraints can be defined flexibly for interesting combinations of events, so that valid time series event patterns satisfying these conditions can be discovered. The paper applies the method to daily business reports collected by a sales force automation system and verifies its effectiveness through numerical experiments.
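
As a hedged illustration of how one such time constraint might be checked, the sketch below tests whether one event follows another within a given window; the constraint form and the seven-day window are assumptions, not the paper's definitions:

```python
from datetime import datetime, timedelta

def within_window(e1_time, e2_time, max_gap_days):
    """One hypothetical time constraint: e2 follows e1 within max_gap_days."""
    gap = e2_time - e1_time
    return timedelta(0) <= gap <= timedelta(days=max_gap_days)

# Example: an "order" event must follow a "visit" event within 7 days
# for the pair to count toward a pattern's frequency.
visit = datetime(2024, 3, 1)
order = datetime(2024, 3, 5)
print(within_window(visit, order, max_gap_days=7))  # True
```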
Abstract: The traditional software product and process metrics
are neither suitable nor sufficient in measuring the complexity of
software components, which ultimately is necessary for quality and
productivity improvement within organizations adopting CBSE.
Researchers have proposed a wide range of complexity metrics for software systems. However, these metrics are not sufficient for components and component-based systems, being restricted to module-oriented and object-oriented systems. This study proposes to measure the complexity of JavaBean software components as a reflection of their quality, so that a component can be adapted accordingly to make it more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps enhance quality and productivity.
Abstract: This paper proposes a hybrid method for eye localization in facial images. The novelty is in combining techniques that utilise colour, edge and illumination cues to improve accuracy. The method is based on the observation that eye regions have dark colour, a high density of edges and low illumination compared to other parts of the face. The first step in the method is to extract
connected regions from facial images using colour, edge density and
illumination cues separately. Some of the regions are then removed
by applying rules that are based on the general geometry and shape
of eyes. The remaining connected regions obtained through these
three cues are then combined in a systematic way to enhance the
identification of the candidate regions for the eyes. The geometry
and shape based rules are then applied again to further remove the
false eye regions. The proposed method was tested using images from the PICS facial image database and achieves accuracies of 93.7% and 87% for initial blob extraction and final eye detection, respectively.
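
A minimal sketch of the cue-combination step, assuming each cue has already produced a binary mask; the 2-of-3 vote threshold is an assumption:

```python
import numpy as np

def combine_cues(color_mask, edge_mask, illum_mask, min_votes=2):
    """Combine three binary cue masks by per-pixel voting (illustrative)."""
    votes = (color_mask > 0).astype(np.uint8) \
          + (edge_mask > 0).astype(np.uint8) \
          + (illum_mask > 0).astype(np.uint8)
    # A pixel is a candidate eye pixel if at least min_votes cues agree.
    return (votes >= min_votes).astype(np.uint8) * 255
```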
Abstract: Graph-based image segmentation techniques are considered among the most efficient segmentation techniques and are mainly used as time- and space-efficient methods for real-time applications. However, there is a need to focus on improving the quality of the segmented images obtained from earlier graph-based methods. This paper proposes an improvement to the graph-based image segmentation methods already described in the literature. We contribute to the existing method by proposing the use of a weighted Euclidean distance to calculate the edge weight, which is the key element in building the graph. We also propose a slight modification of the segmentation method already described in the literature, which results in the selection of more prominent edges in the graph. The experimental results show an improvement in segmentation quality compared to existing methods, with a slight compromise in efficiency.
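
To make the edge-weight proposal concrete, here is a sketch of a weighted Euclidean distance between neighbouring pixels; the per-channel weights are hypothetical, not the exact formula from the paper:

```python
import numpy as np

def edge_weight(p, q, channel_weights=(0.5, 0.3, 0.2)):
    """Weighted Euclidean distance between two RGB pixels (illustrative).

    p, q: length-3 arrays (R, G, B); channel_weights are hypothetical.
    """
    w = np.asarray(channel_weights, dtype=float)
    d = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum(w * d * d)))

# Each edge of the pixel-grid graph gets this weight; the segmentation
# then merges components whose connecting edges have small weight.
```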
Abstract: Recently, in some places, optical-fibre access networks based on GPON technology have been deployed by organizations (in most cases public bodies) that act as neutral operators. These operators simultaneously provide network services
to various telecommunications operators that offer integrated voice,
data and television services. This situation creates new problems
related to quality of service, since the interests of the users are
intermingled with the interests of the operators. In this paper, we
analyse this problem and consider solutions that make it possible to
provide guaranteed quality of service for voice over IP, data services
and interactive digital television.
Abstract: An end-member selection method for spectral unmixing based on Particle Swarm Optimization (PSO) is developed in this paper. The algorithm uses the K-means clustering algorithm and a method of dynamic selection of end-member subsets to find the appropriate set of end-members for a given set of multispectral images. The proposed algorithm has been successfully applied to test image sets from various platforms such as LANDSAT 5 MSS and NOAA's AVHRR. The experimental results of the proposed algorithm are encouraging. The influence of different values of the algorithm's control parameters on performance is studied. Furthermore, the performance of different versions of PSO is also investigated.
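
For reference, the canonical PSO velocity/position update underlying such algorithms is sketched below; the coefficients are typical textbook values, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO velocity/position update (illustrative)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```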
Abstract: “Web of Trust” is one of the recognized goals of Web 2.0. It aims to make it possible for people, including organizations, businesses and individual users, to take responsibility for what they publish on the web. These objectives, among others, drive most of the technologies and protocols recently standardized by the governing bodies. One of the great advantages of the Web infrastructure is decentralization of publication. The primary motivation behind Web 2.0 is to help people add content for Collective Intelligence (CI) while providing mechanisms to link content with people for evaluation and accountability of information. Such a content structure interconnects users and content so that users can use content to find participants and vice versa. This paper proposes a conceptual information storage and linking model, based on a decentralized information structure, that links content and people together. The model uses FOAF, Atom, RDF and RDFS and can be used as a blueprint to develop Web 2.0 applications for any e-domain. However, the primary target of this paper is the online trust evaluation domain. The proposed model aims to assist individuals in establishing a “Web of Trust” in the online trust domain.
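
A minimal sketch of how FOAF and RDF can link a person to the content they publish, using the rdflib library (the URIs and names are placeholders):

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
person = URIRef("http://example.org/people/alice")  # placeholder URI
post = URIRef("http://example.org/posts/1")         # placeholder URI

g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Alice")))
# Link the person to the content she published and is accountable for.
g.add((person, FOAF.made, post))

print(g.serialize(format="turtle"))
```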
Abstract: This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extracting suitable features, we apply matching constraints such as the epipolar line, disparity limits, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to decrease the search space and thereby increase accuracy and processing speed. The proposed method links the detected feature points into chains and compares some of the feature points from different chains to increase matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, we use dynamic programming to obtain the dense disparity map. The approach differs from classical DP methods in stereo vision in that it employs the sparse disparity map obtained from the feature-based matching stage, and the DP is performed on each scan line only between pairs of matched feature points on that line. Thus our algorithm is truly an optimization method, offering a good trade-off between accuracy and computational efficiency. In our experiments, the proposed algorithm improves accuracy by 20% to 70% and reduces running time by almost 70%.
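
As a rough sketch of dynamic programming on a scan-line segment between two matched feature points, the code below minimises a matching cost plus a disparity-smoothness penalty; the cost terms are simplified stand-ins for those in the paper:

```python
import numpy as np

def dp_scanline(costs, smooth=1.0):
    """Pick one disparity per pixel minimising cost + smoothness (sketch).

    costs: array (n_pixels, n_disparities) of matching costs for a
    scan-line segment between two matched feature points.
    """
    n, d = costs.shape
    acc = costs.copy()
    back = np.zeros((n, d), dtype=int)
    for i in range(1, n):
        for j in range(d):
            # Transition penalty grows with the disparity jump.
            prev = acc[i - 1] + smooth * np.abs(np.arange(d) - j)
            back[i, j] = int(np.argmin(prev))
            acc[i, j] = costs[i, j] + prev[back[i, j]]
    # Backtrack the optimal disparity path.
    path = [int(np.argmin(acc[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(back[i, path[-1]])
    return path[::-1]
```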
Abstract: DNA microarrays allow the measurement of expression levels for a large number of genes, perhaps all genes of an organism, within a number of different experimental samples. It is important to extract biologically meaningful information from this huge amount of expression data in order to know the current state of the cell, because most cellular processes are regulated by changes in gene expression. Association rule mining techniques help find association relationships between genes, and numerous association rule mining algorithms have been developed to analyze this huge amount of gene expression data. This paper focuses on some of the popular association rule mining algorithms developed to analyze gene expression data.
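
As a small illustration of the kind of analysis these algorithms perform, the sketch below counts frequently co-occurring gene states in discretised expression data (an Apriori-style first pass; the support threshold is arbitrary):

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(samples, min_support=2):
    """Count co-occurring gene states across samples (Apriori-style sketch).

    samples: list of sets, e.g. {"geneA_up", "geneB_down", ...} per sample.
    """
    counts = Counter()
    for items in samples:
        for pair in combinations(sorted(items), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

samples = [{"geneA_up", "geneB_up"},
           {"geneA_up", "geneB_up", "geneC_down"},
           {"geneA_up", "geneC_down"}]
print(frequent_pairs(samples))  # ('geneA_up', 'geneB_up') occurs twice
```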
Abstract: The colors of the human skin represent a special
category of colors, because they are distinctive from the colors of
other natural objects. This category is found as a cluster in color
spaces, and the skin color variations between people are mostly due
to differences in intensity. Moreover, face detection based on skin color is faster than other techniques. In this work, we present a system to track faces by carrying out skin color detection in four different color spaces: HSI, YCbCr, YES and RGB. Once skin color regions have been detected in each color space, we label each region and extract characteristics such as size and position, under the assumption that a face is located in one of the detected regions. Next, we compare the labeled regions and employ a polling strategy among them to determine the final region where the face has effectively been detected and located.
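
As an example of one of the four per-colour-space detectors, a commonly used YCbCr skin threshold is sketched below; the bounds are typical literature values, not necessarily the authors' settings:

```python
import cv2
import numpy as np

def skin_mask_ycbcr(bgr):
    """Binary skin mask via YCbCr thresholds (illustrative bounds)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly cited skin ranges: 133 <= Cr <= 173, 77 <= Cb <= 127.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)
```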
Abstract: To increase the reliability of a face recognition system, the system must be able to distinguish a real face from a copy of a face, such as a photograph. In this paper, we propose a fast and memory-efficient method of live face detection for embedded face recognition systems, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate the variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection.
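
A minimal sketch of the variation measure, assuming the eye regions have already been located in consecutive frames; the mean-absolute-difference measure and the threshold are assumptions:

```python
import numpy as np

def eye_region_variation(prev_eye, curr_eye):
    """Mean absolute intensity change of the same eye region across
    consecutive frames (illustrative liveness cue)."""
    return float(np.mean(np.abs(curr_eye.astype(float)
                                - prev_eye.astype(float))))

def is_live(variations, threshold=5.0):
    """Hypothetical rule: a real face blinks and moves its eyes, so the
    variation exceeds the threshold for at least one frame pair."""
    return any(v > threshold for v in variations)
```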
Abstract: Dealing with hundreds of features in character recognition systems is not unusual. This large number of features increases the computational workload of the recognition process. Many methods have been proposed to remove unnecessary or redundant features and reduce feature dimensionality. Moreover, because of the characteristics of the Farsi script, it is not possible to apply algorithms designed for other languages to Farsi directly. In this paper, several methods for feature subset selection using genetic algorithms are applied to a Farsi optical character recognition (OCR) system. Experimental results show that applying genetic algorithms (GA) to feature subset selection in a Farsi OCR system yields lower computational complexity and an enhanced recognition rate.
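
A compact sketch of GA-based feature subset selection with a bit-mask chromosome; the fitness function is a placeholder for the OCR recognition rate, and all parameters are hypothetical:

```python
import random

random.seed(0)
N_FEATURES = 100  # hypothetical feature count
# Placeholder per-feature usefulness scores; the real system would use
# the OCR recognition rate obtained with the selected feature subset.
WEIGHTS = [random.random() for _ in range(N_FEATURES)]

def fitness(mask):
    """Reward useful features, penalise large subsets (placeholder)."""
    return sum(w for w, g in zip(WEIGHTS, mask) if g) - 0.2 * sum(mask)

def ga_select(pop_size=20, generations=50, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)   # one-point crossover
            child = [g ^ (random.random() < p_mut)  # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```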
Abstract: In this work, we develop an object extraction method and propose efficient algorithms for object motion characterization. The set of proposed tools serves as a basis for the development of object-based functionalities for the manipulation of video content. The estimates produced by the different algorithms are compared in terms of quality and performance and are tested on real video sequences. The proposed method will be useful for the latest standards for encoding and describing multimedia content, MPEG-4 and MPEG-7.
Abstract: In a particular case of behavioural model reduction by ANNs, a shortening of the validity domain has been found. In mechanics, as in other domains, the notion of a validity domain allows the engineer to choose a valid model for a particular analysis or simulation. In the study of the mechanical behaviour of a cantilever beam (using linear and non-linear models), Multi-Layer Perceptron (MLP) Backpropagation (BP) networks have been applied as a model reduction technique. The reduced model is constructed to be more efficient than the non-reduced model: within a less extended domain, the ANN reduced model correctly estimates the non-linear response at a lower computational cost. It has been found that the neural network model is not able to approximate the linear behaviour, while it approximates the non-linear behaviour very well. The details of the case are provided with an example of modelling the cantilever beam behaviour.
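
A minimal sketch of using an MLP as a reduced model, with scikit-learn and a synthetic non-linear curve standing in for the beam simulations (an assumption-laden stand-in, not the paper's setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for the full mechanical model: tip deflection vs. load
# (a simple non-linear curve; the real data would come from beam
# simulations).
loads = rng.uniform(0.0, 10.0, size=(500, 1))
deflection = np.tanh(0.3 * loads).ravel()

# Reduced model: a small MLP trained with backpropagation.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(loads, deflection)

# The reduced model is only trusted inside its training (validity) domain.
print(net.predict([[5.0]]))  # query inside the domain
```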
Abstract: We propose a formal framework for specifying the behavior of a system of agents, as well as the behaviors of the constituent agents. This framework allows us to model each agent's effectoric capability, including its interactions with the other agents. We also provide an algorithm based on Milner's "observation equivalence" to derive an agent's perception of its task domain situations from its effectoric capability, and use "system computations" to model the coordinated efforts of the agents in the system. Formal definitions of the concept of "behavior equivalence" of two agents and that of system computations equivalence for an agent are also provided.
Abstract: Model-checking tools such as Symbolic Model Verifier
(SMV) and NuSMV are available for checking hardware designs.
These tools can automatically check the formal correctness of a design. However, NuSMV is too low-level for describing a complete hardware design; it is therefore necessary to translate the system definition, designed in a language such as Verilog or VHDL, into a language such as NuSMV for validation. In this paper, we present a meta hardware description language, Melasy, that solves this problem by providing code generators for existing hardware description languages (HDLs) and for model-checking languages.
Abstract: Although Model Driven Architecture (MDA) has taken successful steps toward model-based software development, this approach still faces complex situations and ambiguous questions when applied to real-world software systems. One of these questions, which has attracted the most interest and focus, is how models are transformed between the different abstraction levels that MDA proposes. In this paper, we propose an approach based on Story Driven Modeling and Aspect Oriented Programming to ease these transformations. Service Oriented Architecture (SOA) is taken as the target model to test the proposed mechanism in a functional system. Service Oriented Architecture and Model Driven Architecture [1] are both considered frontiers of their own domains in the software world. Following components, which were the greatest step after object orientation, SOA was introduced, focusing on more integrated and automated software solutions. On the other hand, from the designers' point of view, MDA is just initiating another evolution; it is considered the next big step after UML in the design domain.