Abstract: The main objective of this paper is to compare the newly
introduced Wolf Pack Search (WPS) intelligent algorithm with several
other well-known algorithms, including Particle Swarm Optimization
(PSO), Shuffled Frog Leaping (SFL), and the binary and continuous
Genetic Algorithms. All algorithms are applied to two benchmark cost
functions. The aim is to identify the algorithm that is fastest and most
accurate in finding the solution, where speed is measured by the number
of function evaluations. The simulation results show that the SFL
algorithm, requiring the fewest function evaluations, ranks first when
simulation time is important, while WPS and PSO perform better when
accuracy is the significant issue.
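For illustration, below is a minimal PSO sketch that counts cost-function evaluations, the speed metric used above; the sphere function and all parameter values are illustrative assumptions, not the paper's benchmarks or settings.

```python
import random

def sphere(x):
    """Illustrative benchmark cost function (not necessarily the paper's)."""
    return sum(xi * xi for xi in x)

def pso(cost, dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal PSO that counts cost-function evaluations."""
    evals = 0
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = []
    for p in pos:
        pbest_val.append(cost(p)); evals += 1
    gbest = min(range(swarm), key=lambda i: pbest_val[i])
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (pbest[gbest][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i]); evals += 1
            if v < pbest_val[i]:
                pbest_val[i], pbest[i] = v, pos[i][:]
                if v < pbest_val[gbest]:
                    gbest = i
    return pbest[gbest], pbest_val[gbest], evals

best, best_val, n_evals = pso(sphere)
print(best_val, n_evals)  # accuracy vs. number of function evaluations
```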
Abstract: A distributed wireless sensor network consists of several
nodes scattered over an area of interest. The only power supply of
each sensor is a pair of batteries, which must allow it to operate for
up to five years without replacement. That is why it is necessary to
develop power-aware algorithms that extend battery lifetime as much
as possible. In this document, a review of power-aware design for
sensor nodes is presented. As examples of implementations, some
resource and task management, communication, topology control and
routing protocols are described.
Abstract: The transformation of vocal characteristics aims to modify
a voice so that the intelligibility of aphonic voice is increased, or so
that the voice of one speaker (the source speaker) is perceived as if
another speaker (the target speaker) had uttered it. In this paper, the
current state-of-the-art voice characteristics transformation
methodology is reviewed. Special emphasis is placed on voice
transformation methodology, and issues for improving the quality of
the transformed speech in terms of intelligibility and naturalness are
discussed. In particular, it is suggested to use the modulation theory
of speech as a basis for research on high-quality voice transformation.
This approach allows one to separate the linguistic, expressive, organic
and perspectival information of speech, based on an analysis of how
they are fused when speech is produced. This theory therefore
provides the fundamentals not only for manipulating non-linguistic,
extra-/paralinguistic and intra-linguistic variables for voice
transformation, but also for paving the way to easily transposing
existing voice transformation methods to emotion-related voice
quality transformation and speaking style transformation. From the
perspectives of human speech production and perception, the popular
voice transformation techniques are described and classified
according to their underlying principles, drawn from the speech
production mechanism, the perception mechanism, or both. In
addition, the advantages and limitations of voice transformation
techniques and the experimental manipulation of vocal cues are
discussed through examples from past and present research. Finally,
a conclusion and a road map toward more natural voice
transformation algorithms are given.
Abstract: Prime factorization based on a quantum approach is
performed in two phases. The first phase is carried out on a quantum
computer and the second phase on a classical computer (post-processing).
In the second phase the goal is to estimate the period r of the equation
x^r ≡ 1 (mod N) and to find the prime factors of the composite integer
N on the classical computer. In this paper we present a method based
on a randomized approach for estimating the period r with a
satisfactory probability, so that the composite integer N can then be
factorized; with the randomized approach, even if the estimate of the
period is not exactly the real period, at least one of the prime factors
of N can be found. Finally, we present some important points for
designing an emulator for quantum computer simulation.
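As a sketch of the classical post-processing phase, the standard gcd-based step that recovers a factor of N from a candidate period r of x^r ≡ 1 (mod N) is shown below; the brute-force period search merely stands in for the quantum phase, and the retry loop is a plain illustration rather than the authors' specific randomized estimator.

```python
import math
import random

def factor_from_period(N, x, r):
    """Classical post-processing: try to extract a factor of N from a
    candidate period r of x^r ≡ 1 (mod N)."""
    if r % 2 != 0:
        return None                      # need an even period
    y = pow(x, r // 2, N)                # x^(r/2) mod N
    if y == N - 1:                       # trivial case x^(r/2) ≡ -1 (mod N)
        return None
    for cand in (math.gcd(y - 1, N), math.gcd(y + 1, N)):
        if 1 < cand < N:
            return cand
    return None

def randomized_factor(N, trials=50):
    """Illustrative randomized loop: pick random bases x; here the true
    period is found by brute force, standing in for the quantum phase."""
    for _ in range(trials):
        x = random.randrange(2, N)
        g = math.gcd(x, N)
        if g > 1:
            return g                     # lucky: x shares a factor with N
        r, y = 1, x % N
        while y != 1:                    # brute-force period (quantum stand-in)
            y = (y * x) % N
            r += 1
        f = factor_from_period(N, x, r)
        if f:
            return f
    return None

print(randomized_factor(15))  # e.g. 3 or 5
```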
Abstract: Ontology matching is a task needed in various applications, for example for comparison or merging purposes. In the literature, many algorithms solving the matching problem can be found, but most of them do not consider instances at all. Mappings are determined by calculating the string similarity of labels, by recognizing linguistic word relations (synonyms, subsumptions, etc.) or by analyzing the (graph) structure. Given that instances are often modeled within the ontology and that the set of instances describes the meaning of the concepts better than their meta-information, instances should definitely be incorporated into the matching process. In this paper several novel instance-based matching algorithms are presented which enhance the quality of matching results obtained with common concept-based methods. Different kinds of formalisms are used to classify concepts on account of their instances and finally to compare the concepts directly.
Keywords: Instances, Ontology Matching, Semantic Web
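As an illustration of instance-based matching, one simple formalism is to compare two concepts by the overlap of their instance sets; the sketch below uses the standard Jaccard coefficient and a made-up threshold, not necessarily one of the algorithms proposed in the paper.

```python
def jaccard(instances_a, instances_b):
    """Instance-based similarity of two concepts: |A ∩ B| / |A ∪ B|."""
    a, b = set(instances_a), set(instances_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical concepts from two ontologies, described by their instances.
concept1 = {"Berlin", "Paris", "Rome"}     # e.g. "City" in ontology A
concept2 = {"Paris", "Rome", "Madrid"}     # e.g. "Town" in ontology B
if jaccard(concept1, concept2) > 0.4:      # illustrative mapping threshold
    print("map concept1 <-> concept2")
```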
Abstract: Trihalomethanes (THMs) were among the first
disinfection byproducts to be discovered in chlorinated water. The
substances form during a reaction between chlorine and organic
matter in the water. Trihalomethanes are suspected to have negative
effects on birth outcomes, such as low birth weight, intrauterine
growth retardation in term births, reduced gestational age, and
preterm delivery. There is also some evidence that these by-products
are mutagenic and carcinogenic, with the strongest evidence relating
to bladder cancer. However, findings on these effects of THMs are
inconsistent, as different studies have reported different results. The
aim of the present study is to provide a review of the research on the
above-mentioned health effects of THMs.
Abstract: This paper explores the university course timetabling
problem. Several characteristics make scheduling and timetabling
problems particularly difficult to solve: they have huge search spaces,
they are often highly constrained, they require sophisticated solution
representation schemes, and they usually require very time-consuming
fitness evaluation routines. Thus standard evolutionary algorithms
lack the efficiency to deal with them. In this paper we propose a
memetic algorithm that incorporates problem-specific knowledge so
that most of the chromosomes generated are decoded into feasible
solutions. Generating a vast number of feasible chromosomes allows
the search process to progress in a time-efficient manner.
Experimental results exhibit the advantages of the developed Hybrid
Genetic Algorithm over the standard Genetic Algorithm.
Abstract: Bagging and boosting are among the most popular resampling ensemble methods that generate and combine a diversity of classifiers using the same learning algorithm for the base-classifiers. Boosting algorithms are considered stronger than bagging on noise-free data. However, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble using a voting methodology over a bagging and a boosting ensemble, with 10 sub-classifiers in each. We performed a comparison with simple bagging and boosting ensembles with 25 sub-classifiers, as well as other well-known combining methods, on standard benchmark datasets, and the proposed technique was the most accurate.
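A minimal sketch of such a voting combination using scikit-learn; the 10 sub-classifiers per ensemble follow the abstract, while the dataset, the base learners (scikit-learn's defaults) and the hard-voting rule are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)      # stand-in benchmark dataset

bagging = BaggingClassifier(n_estimators=10)    # 10 sub-classifiers (default trees)
boosting = AdaBoostClassifier(n_estimators=10)  # 10 sub-classifiers (default stumps)

# Combine the two 10-member ensembles by majority voting.
combined = VotingClassifier(estimators=[("bag", bagging), ("boost", boosting)],
                            voting="hard")
print(cross_val_score(combined, X, y, cv=5).mean())
```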
Abstract: Decrease in hardware costs and advances in computer
networking technologies have led to increased interest in the use of
large-scale parallel and distributed computing systems. One of the
biggest issues in such systems is the development of effective
techniques/algorithms for the distribution of the processes/load of a
parallel program on multiple hosts to achieve goal(s) such as
minimizing execution time, minimizing communication delays,
maximizing resource utilization and maximizing throughput.
Substantive research using queuing analysis, assuming job arrivals
follow a Poisson pattern, has shown that in a multi-host system the
probability of one host being idle while another host has multiple
jobs queued up can be very high. Such imbalances in system load
suggest that performance can be improved either by transferring jobs
from the currently heavily loaded hosts to the lightly loaded ones or
by distributing the load evenly/fairly among the hosts. The algorithms
known as load balancing algorithms help to achieve these goal(s).
These algorithms fall into two basic categories: static and dynamic.
Whereas static load balancing (SLB) algorithms make task-to-processor
assignment decisions at compile time, based on average estimated
process execution times and communication delays, dynamic load
balancing (DLB) algorithms adapt to changing situations and make
decisions at run time.
The objective of this paper is to identify qualitative parameters for
the comparison of these algorithms. In the future, this work can be
extended to develop an experimental environment in which to study
load balancing algorithms quantitatively, based on these comparative
parameters.
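As a minimal illustration of the static/dynamic distinction, the sketch below contrasts a fixed round-robin assignment with a policy that sends each job to the currently least-loaded host; the host count and job costs are made up for the example.

```python
import heapq
import random

def static_round_robin(jobs, n_hosts):
    """Static load balancing: assignment fixed in advance ('compile time'),
    ignoring actual job costs."""
    return [i % n_hosts for i in range(len(jobs))]

def dynamic_least_loaded(jobs, n_hosts):
    """Dynamic load balancing: each job goes to the host with the smallest
    current load, decided job by job at 'run time'."""
    heap = [(0.0, h) for h in range(n_hosts)]   # (current load, host id)
    heapq.heapify(heap)
    assignment = []
    for cost in jobs:
        load, host = heapq.heappop(heap)
        assignment.append(host)
        heapq.heappush(heap, (load + cost, host))
    return assignment

jobs = [random.expovariate(1.0) for _ in range(20)]  # made-up job costs
print(static_round_robin(jobs, 4))
print(dynamic_least_loaded(jobs, 4))
```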
Abstract: Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods, which uses balls defined in the feature space to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach to selecting the kernel parameter based on maximizing the distance between the gravity centers of the normal and abnormal classes while at the same time minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks. The experimental results demonstrate the feasibility and effectiveness of the presented method.
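A sketch of the selection criterion as described: both the between-class center distance and the within-class variances can be computed purely from kernel evaluations. The RBF kernel, the ratio form combining the two terms, and the search grid are assumptions; the paper may combine them differently.

```python
import numpy as np

def rbf(X, Y, sigma):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def criterion(Xn, Xa, sigma):
    """Between-centers distance over within-class variances, computed in
    feature space via the kernel trick (no explicit feature map)."""
    Knn, Kaa, Kna = rbf(Xn, Xn, sigma), rbf(Xa, Xa, sigma), rbf(Xn, Xa, sigma)
    dist2 = Knn.mean() - 2.0 * Kna.mean() + Kaa.mean()   # ||m_n - m_a||^2
    var_n = np.diag(Knn).mean() - Knn.mean()             # within normal class
    var_a = np.diag(Kaa).mean() - Kaa.mean()             # within abnormal class
    return dist2 / (var_n + var_a + 1e-12)

# Grid search over candidate kernel widths (data and grid are stand-ins).
rng = np.random.default_rng(0)
Xn = rng.normal(0.0, 1.0, (50, 2))        # stand-in "normal" data
Xa = rng.normal(3.0, 1.0, (20, 2))        # stand-in "abnormal" data
grid = [0.1, 0.5, 1.0, 2.0, 5.0]
best = max(grid, key=lambda s: criterion(Xn, Xa, s))
print("selected sigma:", best)
```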
Abstract: Finding minimal logical functions has important applications in the design of logical circuits. This task is solved by many different methods, but, frequently, they are not suitable for a computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a unique computing procedure and thus can be simply implemented but, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable method for finding an optimum for logical functions with a high number of values, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem which, unfortunately, is an NP-hard combinatorial problem. Therefore, it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
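A minimal sketch of the proposed final step: treating the choice among prime implicants as a set covering problem and attacking it with a small genetic algorithm. The one-bit-per-implicant encoding, the greedy repair step and all parameter settings are illustrative assumptions.

```python
import random

# Hypothetical cover matrix: covers[i] = set of minterms covered by implicant i.
covers = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]
all_minterms = set().union(*covers)

def repair(bits):
    """Greedily add implicants until every minterm is covered, keeping
    chromosomes feasible (a common device in set covering GAs)."""
    covered = set().union(*(covers[i] for i, b in enumerate(bits) if b))
    while covered != all_minterms:
        i = max(range(len(bits)), key=lambda j: len(covers[j] - covered))
        bits[i] = 1
        covered |= covers[i]
    return bits

def fitness(bits):
    return sum(bits)            # minimize the number of selected implicants

def ga(pop_size=30, gens=50, pm=0.1):
    pop = [repair([random.randint(0, 1) for _ in covers]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(covers))
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [1 - g if random.random() < pm else g for g in child]
            children.append(repair(child))        # bit-flip mutation + repair
        pop = parents + children
    return min(pop, key=fitness)

print(ga())  # e.g. [1, 0, 1, 0, 0]: two implicants covering all minterms
```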
Abstract: This work deals with unsupervised image deblurring.
We present a new deblurring procedure for images provided by
low-resolution synthetic aperture radar (SAR) or simply by
multimedia, in the presence of multiplicative (speckle) or additive
noise, respectively. The method we propose is defined as a two-step
process. First, we use an original technique for noise reduction in the
wavelet domain. Then, the learning of a Kohonen self-organizing map
(SOM) is performed directly on the denoised image to remove the
blur from it. This technique has been successfully applied to real SAR
images, and simulation results are presented to demonstrate the
effectiveness of the proposed algorithm.
Abstract: In this paper, we introduce a new method for elliptical
object identification. The proposed method adopts a hybrid scheme
which consists of the eigenvalues of covariance matrices, the circular
Hough transform and Bresenham's raster scan algorithm. The
approach uses the fact that the large and small eigenvalues of the
covariance matrix are associated with the major and minor axial
lengths of the ellipse. The centre location of the ellipse is identified
using the circular Hough transform (CHT). A sparse matrix technique
is used to perform the CHT. Since sparse matrices squeeze out zero
elements and contain only a small number of nonzero elements, they
save matrix storage space and computational time. A neighborhood
suppression scheme is used to find the valid Hough peaks. The
accurate positions of the circumference pixels are identified using a
raster scan algorithm that exploits the geometrical symmetry property.
The method does not require the evaluation of tangents or curvature
of edge contours, which are generally very sensitive to noise. The
proposed method has the advantages of small storage, high speed and
accuracy in identifying the feature. The new method has been tested
on both synthetic and real images. Several experiments have been
conducted on various images with considerable background noise to
reveal its efficacy and robustness. Experimental results on the
accuracy of the proposed method, along with comparisons with the
Hough transform, its variants and other tangent-based methods, are
reported.
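A small sketch of the eigenvalue relationship stated above: for edge points sampled uniformly in angle on an ellipse, the eigenvalues of the 2x2 covariance matrix of their coordinates equal half the squared semi-axes, so the axial lengths are recovered up to a known factor. The synthetic ellipse and the factor-of-2 scaling are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 2000)
a, b, theta = 5.0, 2.0, 0.6                    # synthetic ellipse parameters
x, y = a * np.cos(t), b * np.sin(t)
c, s = np.cos(theta), np.sin(theta)
pts = np.stack([c * x - s * y, s * x + c * y], axis=1)  # rotated "edge pixels"

# Eigenvalues of the covariance matrix of the edge coordinates: for a
# uniform angular sample, the variance along each principal axis is
# (semi-axis)^2 / 2, so semi-axes = sqrt(2 * eigenvalue).
lam = np.linalg.eigvalsh(np.cov(pts.T))        # ascending order
semi_minor, semi_major = np.sqrt(2.0 * lam)
print(semi_major, semi_minor)  # ≈ 5.0 and 2.0: large/small eigenvalue ↔ major/minor axis
```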
Abstract: Data stream analysis is the process of computing
various summaries and derived values from large amounts of data
which are continuously generated at a rapid rate. The nature of a
stream does not allow a revisit on each data element. Furthermore,
data processing must be fast to produce timely analysis results. These
requirements impose constraints on the design of the algorithms to
balance correctness against timely responses. Several techniques
have been proposed over the past few years to address these
challenges. These techniques can be categorized as either data-oriented
or task-oriented. The data-oriented approach analyzes a
subset of the data or a smaller transformed representation, whereas
the task-oriented scheme solves the problem directly via approximation
techniques. We propose a hybrid approach to tackle the data stream
analysis problem: the data stream is both statistically transformed to
a smaller size and its characteristics are computationally
approximated. We adopt a Monte Carlo method in the approximation
step. The data reduction has been performed horizontally and
vertically through our EMR sampling method. The proposed method
is analyzed by a series of experiments. We apply our algorithm on
clustering and classification tasks to evaluate the utility of our
approach.
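The EMR sampling method is the authors' own and is not reproduced here; as a generic illustration of one-pass stream reduction under the no-revisit constraint above, the sketch below uses standard reservoir sampling, which maintains a uniform random subset of an unbounded stream.

```python
import random

def reservoir_sample(stream, k):
    """One-pass uniform sampling of k items from a stream of unknown
    length; each element is seen exactly once, as streams require."""
    reservoir = []
    for n, item in enumerate(stream):
        if n < k:
            reservoir.append(item)
        else:
            j = random.randrange(n + 1)   # keep item with probability k/(n+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), 10))
```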
Abstract: Sorting has received the most attention among all computational tasks over the past years because sorted data is at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time. The algorithms implemented are odd-even transposition sort, parallel merge sort and parallel rank sort. A cluster of workstations (Windows Compute Cluster) has been used to compare the implemented algorithms. The C# programming language is used to develop the sorting algorithms, and the MPI (Message Passing Interface) library has been selected to establish communication and synchronization between processors. The time complexity of each parallel sorting algorithm is also stated and analyzed.
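A sequential sketch of the first of the three algorithms, odd-even transposition sort; the paper's implementations run in C# over MPI, whereas this single-process version only shows the alternating compare-exchange phases that the parallel version distributes across processors.

```python
def odd_even_transposition_sort(a):
    """n phases alternating between even and odd index pairs; in the
    parallel version, each phase's compare-exchanges run concurrently."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2   # even phase: pairs (0,1),(2,3)...; odd: (1,2),(3,4)...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))  # [0, 1, 2, 3, 4, 5, 8]
```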
Abstract: This paper proposes new algorithms for the computer-aided
design and manufacture (CAD/CAM) of 3D woven multi-layer
textile structures. Existing commercial CAD/CAM systems are often
restricted to the design and manufacture of 2D weaves. Those
CAD/CAM systems that do support the design and manufacture of
3D multi-layer weaves are often limited to manual editing of design
paper grids on the computer display and weave retrieval from stored
archives. This complex design activity is time-consuming, tedious
and error-prone and requires considerable experience and skill of a
technical weaver. Recent research reported in the literature has
addressed some of the shortcomings of commercial 3D multi-layer
weave CAD/CAM systems. However, earlier research results have
shown the need for further work on weave specification, weave
generation, yarn path editing and layer binding. Analysis of 3D
multi-layer weaves in this research has led to the design and
development of efficient and robust algorithms for the CAD/CAM of
3D woven multi-layer textile structures. The resulting algorithmically
generated weave designs can be used as a basis for lifting plans that
can be loaded onto looms equipped with electronic shedding
mechanisms for the CAM of 3D woven multi-layer textile structures.
Abstract: Real-time 3D applications have to guarantee interactive
rendering speed, but the number of polygons that can be rendered is
restricted by the performance of the graphics hardware and graphics
algorithms. Generally, rendering performance increases drastically
when only the dynamic 3D models, which are far fewer than the static
ones, are handled. Since the shapes and colors of static objects do not
change while the viewing direction is fixed, their information can be
reused. We render huge numbers of polygons that cannot be handled
by conventional rendering techniques in real time by using a
static-object image and merging it with the rendering result of the
dynamic objects. Performance necessarily drops whenever the
static-object image must be updated, which involves removing a static
object that starts to move and re-rendering the other static objects
overlapped by the moving one. Based on the visibility of the object
beginning to move, we can skip this updating process. As a result, we
enhance rendering performance and reduce differences in rendering
speed between frames. The proposed method renders a total of
200,000,000 polygons, of which 500,000 are dynamic and the rest
static, at about 100 frames per second.
Abstract: To successfully provide a fast FIR filter with FFT algorithms, overlapped-save algorithms can be used to lower the computational complexity and achieve the desired real-time processing. As the length of the input block increases in order to improve efficiency, a larger volume of zero padding greatly increases the computation length of the FFT. In this paper, we use overlapped block digital filtering to construct a parallel structure. As long as the down-sampling (or up-sampling) factor is an exact multiple of the length of the impulse response of the FIR filter, we can process the input block with a parallel structure and thus achieve a low-complexity fast FIR filter with overlapped-save algorithms. With a long filter length, the performance and throughput of the digital filtering system are also greatly enhanced.
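A sketch of the overlapped-save idea in NumPy: the input is processed in overlapping blocks, each filtered by pointwise multiplication in the FFT domain, and the first M-1 circularly wrapped outputs of each block are discarded. The block length and test signal are arbitrary choices, and the paper's parallel decimated structure is not modeled here.

```python
import numpy as np

def overlap_save(x, h, block=256):
    """FFT-based FIR filtering of x with impulse response h using the
    overlapped-save method; equals linear convolution, truncated to len(x)."""
    M = len(h)
    L = block - M + 1                      # new samples consumed per block
    H = np.fft.rfft(h, block)
    x_pad = np.concatenate([np.zeros(M - 1), x, np.zeros(L)])
    y = []
    for start in range(0, len(x), L):
        seg = x_pad[start:start + block]
        if len(seg) < block:
            seg = np.concatenate([seg, np.zeros(block - len(seg))])
        out = np.fft.irfft(np.fft.rfft(seg) * H, block)
        y.append(out[M - 1:])              # drop the M-1 circularly wrapped samples
    return np.concatenate(y)[:len(x)]

x = np.random.randn(1000)
h = np.ones(31) / 31.0                     # simple moving-average FIR filter
assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])
```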
Abstract: Ant colony based routing algorithms are known to
guarantee packet delivery, but they suffer from the huge overhead
of the control messages needed to discover routes. In this paper we
utilize the positions of the network nodes to group the nodes into
connected clusters, and we use only the cluster heads for forwarding
the route discovery control messages. Our simulations show that the
new algorithm decreases the overhead dramatically without affecting
the delivery rate.
Abstract: In this paper we present a new method for coin
identification. The proposed method adopts a hybrid scheme using
the eigenvalues of the covariance matrix, the Circular Hough
Transform (CHT) and Bresenham's circle algorithm. The statistical
and geometrical properties of the small and large eigenvalues of the
covariance matrix of a set of edge pixels over a connected region of
support are explored for the purpose of circular object detection. A
sparse matrix technique is used to perform the CHT. Since sparse
matrices squeeze out zero elements and contain only a small number
of non-zero elements, they save matrix storage space and
computational time. A neighborhood suppression scheme is used to
find the valid Hough peaks. The accurate positions of the
circumference pixels are identified using a raster scan algorithm
which exploits the geometrical symmetry property. After finding the
circular objects, the proposed method uses the texture on the surface
of the coins, described by textons; a texton refers to a fundamental
micro-structure in generic natural images and is a unique property of
a coin. The method has been tested on several real-world images,
including coin and non-coin images. Its performance is also evaluated
based on its noise-withstanding capability.
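A minimal sketch of the CHT voting step underlying both this and the preceding ellipse method, for a single known radius: every edge pixel votes for all candidate centers at distance r, and accumulator peaks mark circle centers. A dense NumPy accumulator is used here for brevity where the paper stores it as a sparse matrix.

```python
import numpy as np

def circular_hough(edge_points, shape, r, n_angles=360):
    """Vote in an accumulator: each edge pixel (x, y) supports every
    candidate center (x - r cos t, y - r sin t) on its radius-r circle."""
    acc = np.zeros(shape, dtype=np.int32)
    t = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dx = (r * np.cos(t)).round().astype(int)
    dy = (r * np.sin(t)).round().astype(int)
    for x, y in edge_points:
        cx, cy = x - dx, y - dy
        ok = (0 <= cx) & (cx < shape[0]) & (0 <= cy) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)     # accumulate the votes
    return acc

# Synthetic test: edge pixels of a circle centered at (50, 40) with r = 20.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
edges = np.stack([50 + 20 * np.cos(t), 40 + 20 * np.sin(t)], 1).round().astype(int)
acc = circular_hough(edges, (100, 100), r=20)
print(np.unravel_index(acc.argmax(), acc.shape))  # peak ≈ (50, 40)
```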