Abstract: In this paper, we present two new ranking and unranking
algorithms for k-ary trees represented by x-sequences in Gray
code order. These algorithms are based on a Gray code generation
algorithm developed by Ahrabian et al. In that paper, a recursive
backtracking algorithm for generating the x-sequences corresponding
to k-ary trees in Gray code order was presented. That generation
algorithm is based on Vajnovszki's algorithm for generating binary
trees in Gray code order. To the best of our knowledge, no ranking
and unranking algorithms have been given for x-sequences in this
ordering. We present ranking and unranking algorithms with O(kn^2)
time complexity for x-sequences in this Gray code ordering.
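The x-sequence algorithms themselves are not given in the abstract; as a much simpler illustration of ranking and unranking in a Gray code order, the standard binary reflected Gray code admits the following well-known conversions (a sketch unrelated to the k-ary tree encoding):

```python
def unrank_gray(r):
    # r-th codeword of the binary reflected Gray code.
    return r ^ (r >> 1)

def rank_gray(g):
    # Position of codeword g: invert unrank by XOR-ing all right shifts.
    r = 0
    while g:
        r ^= g
        g >>= 1
    return r
```

Successive codewords differ in exactly one bit, which is the defining property the tree sequences above generalize.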
Abstract: The vertex connectivity of a graph is the smallest number of vertices whose deletion separates the graph or makes it trivial. This work is devoted to the problem of testing the vertex connectivity of graphs in a distributed environment, based on a general and constructive approach. The contribution of this paper is threefold. First, using a pre-constructed spanning tree of the considered graph, we present a protocol to test whether a given graph is 2-connected using only local knowledge. Second, we present an encoding of this protocol using graph relabeling systems. The last contribution is the implementation of this protocol in the message-passing model. For a given graph G, where M is the number of its edges, N the number of its nodes and Δ its maximum degree, our algorithms have the following requirements: the first one uses O(Δ×N^2) steps and O(Δ×log Δ) bits per node; the second one uses O(Δ×N^2) messages, O(N^2) time and O(Δ×log Δ) bits per node. Furthermore, the studied network is semi-anonymous: only the root of the pre-constructed spanning tree needs to be identified.
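The distributed protocol is not detailed in the abstract; as a centralized point of comparison, 2-connectivity can be tested with the classical Hopcroft-Tarjan articulation-vertex DFS (a sketch for simple graphs, not the paper's protocol):

```python
def is_biconnected(n, edges):
    # A graph is 2-connected iff it is connected and has no articulation
    # vertex; detect articulation vertices with one DFS (Hopcroft-Tarjan).
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc, low = [0] * n, [0] * n
    timer = [1]
    art = [False]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v]:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent != -1 and low[v] >= disc[u]:
                    art[0] = True            # non-root articulation vertex
        if parent == -1 and children > 1:
            art[0] = True                    # root with >1 DFS child

    dfs(0, -1)
    return all(disc) and not art[0]          # all(disc): graph is connected
```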
Abstract: This paper introduces a measure of similarity between
two clusterings of the same dataset produced by two different
algorithms, or even by the same algorithm (K-means, for instance,
with different initializations usually produces different
clusterings of the same dataset). We then apply the measure to
calculate the similarity between pairs of clusterings, with special
interest directed at comparing the similarity between various
machine clusterings and human clusterings of datasets. The
similarity measure can thus be used to identify the clustering
algorithm that is best (in the sense of being most similar to the
human clustering) for a specific problem at hand. Experimental
results pertaining to the text categorization of a Portuguese corpus
(wherein a translation-into-English approach is used) are presented, as well as results on the well-known benchmark IRIS dataset. The
significance and other potential applications of the proposed measure
are discussed.
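The abstract does not define the proposed measure; a standard pair-counting alternative, the Rand index, illustrates what comparing two clusterings of the same dataset involves (illustrative sketch only):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    # Fraction of point pairs on which the two clusterings agree:
    # both place the pair together, or both place it apart.
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
        total += 1
    return agree / total
```

The index is 1 for identical partitions regardless of how cluster labels are numbered, which is exactly the property such a measure needs.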
Abstract: Traditional parallel single string matching algorithms
are typically based on the PRAM computation model. Those algorithms
concentrate on cost-optimal design and theoretical speedup.
Based on the distributed string matching algorithm proposed by
CHEN, a practical distributed string matching architecture is
proposed in this paper. An improved single string matching
algorithm based on a variant of the Boyer-Moore algorithm is also
presented. We implement our algorithm on the above architecture, and
the experiments show that it is practical and efficient on
distributed-memory machines. Its computation complexity is
O(n/p + m), where n is the length of the text, m is the length of
the pattern, and p is the number of processors.
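The paper's Boyer-Moore variant is not specified here; a minimal sketch of a well-known simplification, the Boyer-Moore-Horspool algorithm with only the bad-character shift, conveys the flavor of such sequential building blocks:

```python
def horspool(text, pattern):
    # Boyer-Moore-Horspool: on a mismatch, shift by the bad-character
    # rule computed from the last text character under the window.
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m else 0
    # Distance from each pattern character (except the last) to the end.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1
```

In a distributed setting each processor would run such a matcher on its text chunk of length about n/p, overlapped by m - 1 characters, which is where the O(n/p + m) bound comes from.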
Abstract: Electronic systems are at the core of everyday life.
They form an integral part of financial networks, mass transit,
telephone systems, power plants and personal computers. Electronic
systems are increasingly based on complex VLSI (Very Large Scale
Integration) circuits. Initial electronic design automation is
concerned with the design and production of VLSI systems. The next
important step in creating a VLSI circuit is physical design. The
input to physical design is a logical representation of the system
under design; the output is the layout of a physical package that
optimally or near-optimally realizes that logical representation.
Physical design problems are combinatorial in nature and of large
problem size. Darwin observed that, as variations are introduced
into a population with each new generation, the less-fit individuals
tend to die out in the competition for basic necessities. This
survival-of-the-fittest principle leads to the evolution of species.
The objective of a Genetic Algorithm (GA) is to find an optimal
solution to a problem. Since GAs are heuristic procedures that can
function as optimizers, they are not guaranteed to find the optimum,
but they are able to find acceptable solutions for a wide range of
problems. This survey paper studies efficient algorithms for VLSI
physical design and observes the common traits of the superior
contributions.
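As an illustration of the survival-of-the-fittest principle described above, a minimal GA with tournament selection, one-point crossover and bit-flip mutation might look as follows (a generic sketch; the encodings and fitness functions used for VLSI physical design are problem-specific):

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=60,
                      p_mut=0.05, seed=1):
    # Minimal GA over bit strings: less-fit individuals lose tournaments
    # and die out; variation enters through crossover and mutation.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():                      # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]  # one-point crossover
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)   # keep the best seen so far
    return best
```

On the toy "onemax" objective (maximize the number of ones) the population converges quickly, but as the survey notes, nothing guarantees the global optimum.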
Abstract: In this paper, we present a methodology for finding
authoritative researchers by analyzing academic Web sites. We show
a case study in which we concentrate on a set of Czech computer
science departments' Web sites. We analyze the relations between
them via hyperlinks and find the most important ones using several
common ranking algorithms. We then examine the contents of the
research papers present on these sites and determine the most
authoritative Czech authors.
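The abstract mentions "several common ranking algorithms" without listing them; one canonical choice for ranking sites by their hyperlinks is PageRank, sketched here by power iteration (an assumption for illustration, not necessarily the algorithms the paper used):

```python
def pagerank(links, d=0.85, iters=100):
    # Power iteration on the PageRank recurrence; `links` maps each
    # page to the list of pages it links to.
    pages = sorted(set(links) | {q for outs in links.values() for q in outs})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        nxt = {p: (1 - d) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    nxt[q] += d * share
            else:                        # dangling page: spread uniformly
                for q in pages:
                    nxt[q] += d * rank[p] / n
        rank = nxt
    return rank
```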
Abstract: Employing a recently introduced unified adaptive filter
theory, we show how the performance of a large number of important
adaptive filter algorithms can be predicted within a general framework
in a nonstationary environment. This approach is based on energy conservation
arguments and does not need to assume a Gaussian or white
distribution for the regressors. This general performance analysis can
be used to evaluate the mean square performance of the Least Mean
Square (LMS) algorithm, its normalized version (NLMS), the family
of Affine Projection Algorithms (APA), the Recursive Least Squares
(RLS), the Data-Reusing LMS (DR-LMS), its normalized version
(NDR-LMS), the Block Least Mean Squares (BLMS), the Block
Normalized LMS (BNLMS), the Transform Domain Adaptive Filters
(TDAF) and the Subband Adaptive Filters (SAF) in a nonstationary
environment. We also establish general expressions for the
steady-state excess mean square error in this environment for all these
adaptive algorithms. Finally, we demonstrate through simulations that
these results are useful in predicting the adaptive filter performance.
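As a concrete instance of the algorithm family analyzed, the LMS update w ← w + μ e x can be sketched for system identification (a minimal illustration, not the paper's unified analysis):

```python
def lms_identify(x, d, n_taps, mu):
    # LMS adaptive filter: at each step form the regressor of the
    # n_taps most recent inputs, compute the error e = d - w.x,
    # and update the weights by w <- w + mu * e * x.
    w = [0.0] * n_taps
    for k in range(n_taps - 1, len(x)):
        x_vec = x[k - n_taps + 1:k + 1][::-1]   # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x_vec))
        e = d[k] - y
        w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)]
    return w
```

Driven by a noiseless unknown FIR system, the weights converge to the true taps, the kind of behavior the general analysis predicts in mean square.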
Abstract: Wavelet neural networks (WNNs) have emerged as a vital alternative to the vastly studied multilayer perceptrons (MLPs) since their first implementation. In this paper, we apply various clustering algorithms, namely K-means (KM), Fuzzy C-means (FCM), symmetry-based K-means (SBKM), symmetry-based Fuzzy C-means (SBFCM) and modified point symmetry-based K-means (MPKM), to choose the translation parameters of a WNN. These modified WNNs are then applied to heterogeneous cancer classification using benchmark microarray data and are compared against a conventional WNN with random initialization. Experimental results show that the WNN classifier with the MPKM algorithm is more precise than the conventional WNN as well as the WNNs with the other clustering algorithms.
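As a sketch of the clustering-based initialization idea, plain one-dimensional K-means can supply candidate translation parameters from the training data (illustrative only; the symmetry-based variants above differ in their distance measure):

```python
def kmeans_1d(data, k, iters=50):
    # Plain K-means in one dimension; the final centres could serve as
    # wavelet translation parameters instead of random initial values.
    centres = sorted(data)[::max(1, len(data) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centres]
        for x in data:
            i = min(range(len(centres)), key=lambda j: abs(x - centres[j]))
            groups[i].append(x)          # assign to the nearest centre
        centres = [sum(g) / len(g) if g else c   # recompute centre means
                   for g, c in zip(groups, centres)]
    return sorted(centres)
```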
Abstract: Smart Grids employ wireless sensor networks for
their control and monitoring. Sensors are characterized by limited
processing power, energy supply and memory, which requires
particular attention in the design of routing and data management
algorithms.
Since most routing algorithms for sensor networks focus on
finding energy-efficient paths to prolong the lifetime of the
network, the power of the sensors on those efficient paths depletes
quickly, and the network consequently becomes incapable of
monitoring events in some parts of its target area. The design of
routing protocols should therefore consider not only
energy-efficient paths, but also energy-efficient algorithms in general.
In this paper we propose an energy-efficient routing protocol for
wireless sensor networks that does not rely on any location
information system. The reliability and efficiency of this protocol
have been demonstrated by simulation studies in which we compare it
to legacy protocols. Our simulation results show that these
algorithms scale well with network size and density.
Abstract: The Model for Knowledge Base of Computational Objects
(KBCO model) has been successfully applied to represent human
knowledge in domains such as plane geometry, physics and calculus.
However, the original model cannot easily be applied in the field of
inorganic chemistry because of domain-specific characteristics of
the knowledge. The aim of this article is therefore to describe how
we extend the Computational Object (Com-Object) in the KBCO model,
the kinds of facts, the problem model, and the inference algorithms,
to develop a program for solving problems in inorganic chemistry.
Our purpose is to develop an application that can help students
study inorganic chemistry at school. This application was built
successfully using Maple, C# and WPF technology. It can solve
problems automatically and produce human-readable solutions that
agree with those written by students and teachers.
Abstract: The Artificial Bee Colony (ABC) algorithm is a relatively new swarm intelligence technique for clustering. It produces
higher quality clusters than other population-based algorithms, but suffers from poor energy efficiency, inconsistent cluster quality and typically slower convergence. Inspired by the energy-saving foraging behavior of natural honey bees, this paper presents a Quality and Quantity Aware Artificial Bee Colony (Q2ABC) algorithm that improves the cluster quality, energy efficiency and convergence speed of the original ABC. To evaluate the performance of the Q2ABC algorithm, experiments were conducted on a suite of ten benchmark UCI datasets. The results demonstrate that Q2ABC outperforms ABC and the K-means algorithm in the quality of the clusters delivered.
Abstract: Robustness is one of the primary performance criteria for an Intelligent Video Surveillance (IVS) system. One of the key factors in enhancing the robustness of dynamic video analysis is providing accurate and reliable means for shadow detection. If left undetected, shadow pixels may result in incorrect object tracking and classification, as they tend to distort localization and measurement information. Most of the algorithms proposed in the literature are computationally expensive, some to the extent of equaling the computational requirements of motion detection. In this paper, the homogeneity property of shadows is exploited in a novel way for shadow detection. An adaptive division-image analysis (which highlights the homogeneity property of shadows) followed by a relatively simple projection-histogram analysis for penumbra suppression is the key novelty of our approach.
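A minimal sketch of the division-image idea: a cast shadow darkens the background roughly uniformly, so the pixelwise ratio of the current frame to a background model falls in a narrow band below 1 (the fixed thresholds here are illustrative assumptions, not the paper's adaptive scheme):

```python
def shadow_mask(frame, background, lo=0.4, hi=0.9):
    # Division image: pixelwise ratio of the current frame to the
    # background model. Shadow pixels keep a roughly constant ratio
    # in a band below 1 (homogeneity); darker foreground objects and
    # unchanged background fall outside the band.
    return [[lo <= f / max(b, 1) <= hi for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]
```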
Abstract: QoS routing aims to find paths between senders and
receivers that satisfy the QoS requirements of the application while
using network resources efficiently; the underlying routing
algorithm must be able to find low-cost paths that satisfy the given
QoS constraints. The problem of finding least-cost constrained
routes is known to be NP-hard, and several algorithms have been
proposed to find near-optimal solutions. However, these heuristics
either impose relationships among the link metrics to reduce the
complexity of the problem, which may limit their general
applicability, or are too costly in execution time to be applicable
to large networks. In this paper, we analyze two algorithms, namely
Characterized Delay Constrained Routing (CDCR) and Optimized
Delay Constrained Routing (ODCR). The CDCR algorithm adopts an
approach to delay-constrained routing that captures the trade-off
between cost minimization and the risk level regarding the delay
constraint. The ODCR algorithm uses an adaptive path weight function
together with an additional constraint imposed on the path cost to
restrict the search space, and hence finds a near-optimal solution
much more quickly.
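For scale, the exact problem such heuristics approximate can be solved pseudo-polynomially by dynamic programming over (node, delay budget) pairs; a sketch of that exact baseline (not CDCR or ODCR), assuming integer link delays:

```python
def dc_least_cost(edges, src, dst, max_delay):
    # Exact delay-constrained least-cost routing: best[(v, d)] is the
    # cheapest cost to reach v using total delay at most d.
    # edges: list of (u, v, cost, delay); pseudo-polynomial in max_delay.
    INF = float('inf')
    nodes = {u for u, v, _, _ in edges} | {v for _, v, _, _ in edges}
    best = {(v, d): INF for v in nodes for d in range(max_delay + 1)}
    for d in range(max_delay + 1):
        best[(src, d)] = 0.0
    for _ in range(len(nodes) - 1):          # Bellman-Ford style passes
        for u, v, c, dly in edges:
            for d in range(dly, max_delay + 1):
                if best[(u, d - dly)] + c < best[(v, d)]:
                    best[(v, d)] = best[(u, d - dly)] + c
    return best[(dst, max_delay)]
```

Tightening the delay budget forces the router onto the expensive fast link, which is exactly the cost/delay trade-off the analyzed heuristics navigate.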
Abstract: Ground-level tropospheric ozone is one of the air
pollutants of most concern. It is mainly produced by photochemical
processes involving nitrogen oxides and volatile organic compounds
in the lower parts of the atmosphere. Ozone levels become
particularly high in regions close to high ozone precursor emissions
and during summer, when stagnant meteorological conditions with
high insolation and high temperatures are common.
In this work, we present some results of a study of urban ozone
distribution patterns in the city of Badajoz, the largest and most
industrialized city in the Extremadura region (southwest Spain).
Fourteen sampling campaigns, at least one per month, were
carried out to measure ambient air ozone concentrations, during
periods that were selected according to favourable conditions to
ozone production, using an automatic portable analyzer.
Later, to evaluate the ozone distribution across the city, the
measured ozone data were analyzed using geostatistical techniques.
First, the exploratory analysis revealed that the data were normally
distributed, which is a desirable property for the subsequent
stages of the geostatistical study. Secondly, during the structural
analysis, theoretical spherical models provided the best fit for
all monthly experimental variograms. The parameters of these
variograms (sill, range and nugget) revealed that the maximum
distance of spatial dependence is between 302 and 790 m and that the
variable, air ozone concentration, is not evenly distributed even
over short distances. Finally, predictive ozone maps were derived for all points
of the experimental study area, by use of geostatistical algorithms
(kriging). High prediction accuracy was obtained in all cases as
cross-validation showed. Useful information for hazard assessment
was also provided when probability maps, based on kriging
interpolation and kriging standard deviation, were produced.
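The spherical variogram model fitted above has a standard closed form; a small sketch (parameter names are the usual nugget, partial sill and range):

```python
def spherical_variogram(h, nugget, psill, rng):
    # Spherical model: gamma rises as 1.5(h/a) - 0.5(h/a)^3 up to the
    # range a, then levels off at the sill (nugget + partial sill).
    if h == 0:
        return 0.0                      # no variability at zero lag
    if h >= rng:
        return nugget + psill           # beyond the range: the sill
    r = h / rng
    return nugget + psill * (1.5 * r - 0.5 * r ** 3)
```

The range parameter is what bounds the 302-790 m spatial dependence reported above; beyond it, pairs of ozone measurements are modeled as uncorrelated.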
Abstract: In this paper, optimization of routing in ad-hoc
networks is surveyed and a new method for reducing the complexity
of routing algorithms is suggested. Maintaining a binary matrix at
each node in the network, and updating it once a route has been
established, allows nodes to avoid re-running the routing protocol
for every data transfer. The suggested algorithm can reduce the
complexity of routing to the least amount possible.
Abstract: This paper proposes an architectural and graphical
user interface (GUI) design of a traditional Thai musical instrument
application for tablet computers for practicing the "Ranaad Ek",
a trough-resonated keyboard percussion instrument. The application
provides percussion methods that feel as real to the player as a
physical instrument. The application consists of two playing modes.
The first mode is free playing: the player can freely multi-touch
the wooden bars to produce instrument sounds. The second mode is a
practice mode that guides the player through the percussion patterns
and rhythms of practice songs. The application has met its
requirements and specifications.
Abstract: This paper presents two novel techniques for skew
estimation of binary document images. These algorithms are based on
connected component analysis and Hough transform. Both these
methods focus on reducing the amount of input data provided to
Hough transform. In the first method, referred to as the word centroid
approach, the centroids of selected words are used for skew detection.
In the second method, referred to as the dilate & thin approach, the selected
characters are blocked and dilated to get word blocks and later
thinning is applied. The final image fed to Hough transform has the
thinned coordinates of word blocks in the image. The methods have
been successful in reducing the computational complexity of Hough
transform based skew estimation algorithms. Promising experimental
results are also provided to prove the effectiveness of the proposed
methods.
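The projection idea behind both methods can be sketched on a cloud of word centroids: at the true skew angle, projecting the centroids perpendicular to the text lines packs them into the fewest distinct bins (an illustrative sketch; the paper feeds the reduced point set to the Hough transform itself):

```python
import math

def estimate_skew(points, angles):
    # A point on a text line y = tan(theta)*x + c projects to the
    # constant c*cos(theta) under (-x sin t + y cos t), so the true
    # skew angle minimizes the number of occupied projection bins.
    def occupied_bins(theta):
        t = math.radians(theta)
        proj = [round(-x * math.sin(t) + y * math.cos(t)) for x, y in points]
        return len(set(proj))
    return min(angles, key=occupied_bins)
```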
Abstract: In this paper, we propose a hybrid machine learning
system based on Genetic Algorithm (GA) and Support Vector
Machines (SVM) for stock market prediction. A variety of indicators
from the technical analysis field of study are used as input features.
We also make use of the correlation between stock prices of different
companies to forecast the price of a stock, making use of technical
indicators of highly correlated stocks, not only the stock to be
predicted. The genetic algorithm is used to select the set of most
informative input features from among all the technical indicators.
The results show that the hybrid GA-SVM system outperforms the
stand-alone SVM system.
Abstract: This paper aims to present a survey of object
recognition/classification methods based on image moments. We
review various types of moments (geometric moments, complex
moments) and moment-based invariants with respect to various
image degradations and distortions (rotation, scaling, affine
transform, image blurring, etc.) which can be used as shape
descriptors for classification. We explain a general theory of how
to construct these invariants and also show a few of them in
explicit form. We review efficient numerical algorithms that can be used
for moment computation and demonstrate practical examples of
using moment invariants in real applications.
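As a worked example of such a moment-based invariant, the first Hu invariant phi1 = eta20 + eta02, built from normalized central moments of an intensity grid, is unchanged by rotation (a self-contained sketch):

```python
def hu_phi1(img):
    # First Hu moment invariant of a 2-D intensity grid img[y][x]:
    # phi1 = eta20 + eta02, where eta_pq are normalized central moments.
    def m(p, q):                         # raw geometric moments
        return sum(x**p * y**q * img[y][x]
                   for y in range(len(img)) for x in range(len(img[0])))
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00   # centroid
    def mu(p, q):                        # central moments (translation inv.)
        return sum((x - xc)**p * (y - yc)**q * img[y][x]
                   for y in range(len(img)) for x in range(len(img[0])))
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)  # scale norm.
    return eta(2, 0) + eta(0, 2)
```

Rotating the image leaves phi1 unchanged, which is precisely the sense in which such descriptors are invariant shape features.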
Abstract: Case-Based Reasoning (CBR) is a machine learning
approach to problem solving and learning that has attracted
considerable attention over the last few years. In general, CBR is
composed of four main phases: retrieve the most similar case or
cases, reuse the case to solve the problem, revise or adapt the
proposed solution, and retain the learned case in the case base for
future use. Unfortunately, in many cases this retain process causes
uncontrolled case-base growth, which affects the competence and
performance of CBR systems. This paper proposes a competence-based
maintenance method for CBR based on a deletion policy. There are
three main steps in this method: Step 1, formulate the problem.
Step 2, determine the coverage and reachability sets based on
coverage values. Step 3, reduce the case-base size. The results
obtained show that the proposed method performs better than
existing methods discussed in the literature.
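A toy sketch of coverage-based deletion (Steps 2 and 3 above): compute each case's coverage set and drop cases whose coverage is contained in that of a retained case (illustrative only; `solves` is a hypothetical adaptability predicate, not the paper's method):

```python
def reduce_case_base(cases, solves):
    # Competence-based deletion sketch: coverage(c) is the set of target
    # problems case c can be adapted to solve. A case whose coverage is
    # contained in a retained case's coverage is redundant and deleted.
    coverage = {c: {p for p in cases if solves(c, p)} for c in cases}
    kept = []
    for c in sorted(cases, key=lambda c: -len(coverage[c])):
        if not any(coverage[c] <= coverage[k] for k in kept):
            kept.append(c)
    return kept
```

With cases on a line and `solves` defined as "within distance 1", the middle case covers everything, so both neighbors are deleted without losing competence.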