Abstract: Many classical algorithms exist for FPGA routing, but DNA computing can solve the routing problem efficiently and quickly. The run-time complexity of DNA algorithms is much lower than that of the classical algorithms used for FPGA routing. Research in DNA computing is still at an early stage; however, the high information density of DNA molecules and the massive parallelism of DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach to the FPGA routing problem. First, the geometric FPGA detailed-routing task is transformed into a Boolean satisfiability equation with the property that any assignment of input variables satisfying the equation specifies a valid routing; the absence of a satisfying assignment implies that the layout is un-routable. Second, a DNA search algorithm is applied to this Boolean equation to find routing alternatives, exploiting the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing to the FPGA routing problem.
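As an illustrative aside, the following minimal Python sketch (a hypothetical two-net, two-channel instance, not the authors' encoder) shows the first tier: routing constraints expressed as a Boolean formula whose satisfying assignments are valid routings, found here by the exhaustive enumeration that a DNA computer would carry out in parallel.

```python
# Minimal sketch (not the authors' encoder): a toy detailed-routing
# instance with two nets and two channels, encoded as a Boolean formula.
# Variable (n, c) is True if net n is routed through channel c.
from itertools import product

NETS, CHANNELS = 2, 2

def valid(assign):
    """assign[(n, c)] -> bool. Returns True iff the routing constraints hold."""
    for n in range(NETS):
        # Each net uses exactly one channel.
        if sum(assign[(n, c)] for c in range(CHANNELS)) != 1:
            return False
    for c in range(CHANNELS):
        # No two nets share a channel (exclusivity constraint).
        if sum(assign[(n, c)] for n in range(NETS)) > 1:
            return False
    return True

# Exhaustive search over all assignments -- the step a DNA computer
# would perform massively in parallel over candidate strands.
variables = [(n, c) for n in range(NETS) for c in range(CHANNELS)]
solutions = [dict(zip(variables, bits))
             for bits in product([False, True], repeat=len(variables))
             if valid(dict(zip(variables, bits)))]
print("routable" if solutions else "un-routable", len(solutions), "assignments")
```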
Abstract: Signature amortization schemes have been introduced for authenticating multicast streams; a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each consisting of a number of packets; the signature packet is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main factor affecting the effectiveness of an amortization scheme is its hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks, while other studies show that signing the last packet reduces the sender's delay. To our knowledge, no study has shown which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss.
In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of the receiver's delay. We discuss and evaluate the performance of our proposed scheme against schemes that sign the last packet of the block.
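As a simplified illustration of signature amortization (a single chain with an abstracted signature function, not the exact MCF layout), each packet below embeds the hash of its successor, and the signature packet, sent first, covers the head of the chain:

```python
# Simplified sketch of signature amortization over one block (not the
# exact MCF chain construction): each packet carries the hash of the
# next one, and the signature packet, sent first, covers the chain head.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_block(payloads, sign):
    """payloads: list of bytes. sign: placeholder standing in for a real
    digital signature (e.g., RSA); assumed here for illustration."""
    packets = []
    next_hash = b""
    # Walk backwards so each packet can embed the hash of its successor.
    for payload in reversed(payloads):
        packet = payload + next_hash
        next_hash = sha256(packet)
        packets.append(packet)
    packets.reverse()
    signature_packet = sign(next_hash)  # signed first packet of the block
    return [signature_packet] + packets

block = build_block([b"pkt1", b"pkt2", b"pkt3"], sign=lambda h: b"SIG:" + h)
print(len(block), "packets in block")
```

The receiver verifies the signature on the first packet, then authenticates each subsequent packet by recomputing the hash embedded in its predecessor.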
Abstract: In this paper, an efficient method for personal identification based on the pattern of the human iris is proposed. It consists of image acquisition, image preprocessing to unwrap the iris into a flat representation, conversion into eigenirises, and a decision step carried out using only a one-dimensional reduction of the iris. By comparing the eigenirises it is determined whether two irises are similar. The results show that the proposed method is quite effective.
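A minimal sketch of the eigeniris idea, assuming a standard PCA formulation and synthetic data in place of real unwrapped iris images:

```python
# Minimal eigeniris sketch (assumed PCA formulation, synthetic data):
# flattened "unwrapped" iris images are projected onto the top principal
# components and compared by Euclidean distance in that subspace.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.random((20, 64 * 256))     # 20 flat iris images (hypothetical size)
mean = gallery.mean(axis=0)

# Eigenirises = leading right singular vectors of the centered gallery.
_, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
eigenirises = vt[:10]                    # keep 10 components

def project(iris):
    return eigenirises @ (iris - mean)

probe = gallery[3] + 0.01 * rng.random(64 * 256)   # noisy copy of image 3
dists = [np.linalg.norm(project(probe) - project(g)) for g in gallery]
print("best match:", int(np.argmin(dists)))        # expected: 3
```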
Abstract: This paper presents an evolutionary method for designing electronic circuits and numerical methods associated with monitoring systems. The instruments described here have been used in studies of weather and climate changes due to global warming, and also in medical patient supervision. Genetic Programming systems have been used both for designing circuits and sensors and for determining sensor parameters. The authors advance the thesis that the software side of such a system should be written in computer languages with a strong mathematical and logical foundation, in order to prevent software obsolescence and achieve program correctness.
Abstract: Code mobility technologies attract more and more developers and consumers. Numerous domains are concerned, many platforms have been developed, and interesting applications have been realized. However, developing good software products requires modeling, analysis, and proof steps, and the choice of models and modeling languages is critical in these steps. Formal tools are powerful in the analysis and proof steps. However, the inadequacy of classical modeling languages for modeling mobility calls for new models. The objective of this paper is to provide a specific formalism, “Coloured Reconfigurable Nets”, and to show how it is adequate for modeling different kinds of code mobility.
Abstract: Short-Term Load Forecasting (STLF) plays an important role in the economic and secure operation of power systems. In this paper, a Continuous Genetic Algorithm (CGA) is employed to evolve the optimum structure and connection weights of large neural networks for the one-day-ahead electric load forecasting problem. This study describes the process of developing three-layer feed-forward large neural networks for load forecasting and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. The proposed method is applied to STLF for a local utility. Data are clustered according to differences in their characteristics. Special days are extracted from the normal training sets and handled separately; in this way, a solution is provided for all load types, including working days, weekends, and special days. We find good performance for the large neural networks: the proposed methodology consistently yields lower percentage errors. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
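A toy sketch of the underlying idea, assuming a fixed three-layer topology and synthetic data (the paper's combined structure-and-weight search is not reproduced): a continuous GA evolves real-valued weight chromosomes of a small feed-forward forecaster.

```python
# Toy continuous-GA sketch (not the authors' full structure search):
# real-valued chromosomes encode the weights of a small feed-forward
# network, evolved to fit a synthetic one-day-ahead load curve.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 4))                       # hypothetical load features
y = np.sin(X.sum(axis=1))                      # stand-in target

H = 6                                          # hidden units
n_genes = 4 * H + H                            # input->hidden + hidden->output weights

def forecast(w, X):
    w1 = w[:4 * H].reshape(4, H)
    w2 = w[4 * H:]
    return np.tanh(X @ w1) @ w2

def fitness(w):
    return -np.mean((forecast(w, X) - y) ** 2)  # negative MSE

pop = rng.normal(size=(40, n_genes))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]     # truncation selection
    # Blend crossover + Gaussian mutation (typical CGA operators).
    a = parents[rng.integers(20, size=40)]
    b = parents[rng.integers(20, size=40)]
    alpha = rng.random((40, 1))
    pop = alpha * a + (1 - alpha) * b + 0.05 * rng.normal(size=(40, n_genes))
print("best MSE:", -max(fitness(w) for w in pop))
```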
Abstract: This paper presents a predictive model of sensor readings for a mobile robot. The model predicts sensor readings over a given time horizon based on current sensor readings and the wheel velocities assumed for this horizon. Similar models for such anticipation have been proposed in the literature. The novelty of the model presented here is that its structure takes physical phenomena into account and is not just a black box such as a neural network; from this point of view it may be regarded as a semi-phenomenological model. The model is developed for the Khepera robot, but after certain modifications it may be applied to any robot with distance sensors such as infrared or ultrasonic sensors.
Abstract: By exploiting the echoic intensity and distribution of different organs and local details of the human body, ultrasonic imaging can capture important medical pathological changes, which unfortunately may be obscured by ultrasonic speckle noise. A feature-preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by a local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is performed by the hyperbolic tangent function. Experiments on real ultrasonic images indicate that our scheme effectively preserves edges, local details, and ultrasonic echoic bright strips during denoising.
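For illustration only, the sketch below substitutes the classic Perona-Malik diffusion for the paper's local-coordinate normal-derivative formulation and adds a hyperbolic-tangent edge boost; it is a stand-in, not the proposed scheme:

```python
# Standard Perona-Malik diffusion plus a tanh edge boost, as a stand-in
# sketch (the paper's local-coordinate formulation is not reproduced).
import numpy as np

def denoise(img, steps=20, k=0.1, dt=0.2):
    u = img.astype(float)
    for _ in range(steps):
        # Finite-difference gradients toward the four neighbours.
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        g = lambda d: np.exp(-(d / k) ** 2)   # edge-stopping function
        u = u + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u

def enhance_edges(u, gain=5.0):
    # Hyperbolic-tangent sharpening of the residual around the local mean.
    local_mean = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
    return local_mean + np.tanh(gain * (u - local_mean)) / gain

noisy = np.clip(np.random.default_rng(2).normal(0.5, 0.1, (64, 64)), 0, 1)
out = enhance_edges(denoise(noisy))
```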
Abstract: Controlling software evolution requires a deep understanding of the changes and their impact on the different heterogeneous artifacts of the system, and an understanding of the descriptive knowledge of the developed software artifacts is a prerequisite for the success of the evolutionary process.
Implementing an evolutionary process means making changes, more or less important, to many heterogeneous software artifacts such as source code, analysis and design models, unit tests, XML deployment descriptors, user guides, and others. These changes can be a source of degradation of the modified software in functional, qualitative, or behavioral terms. Hence the need for a unified approach to the extraction and representation of the different heterogeneous artifacts, in order to ensure a unified and detailed description of them, exploitable by several software tools and allowing those responsible for the evolution to carry out reasoning about the change concerned.
Abstract: Problems of high complexity have long been a challenge in combinatorial optimization. Owing to their non-deterministic polynomial (NP) characteristics, these problems usually demand an unreasonable search budget, and combinatorial optimization has therefore attracted numerous researchers seeking better algorithms. Recent academic research has mostly focused on enhancing conventional evolutionary algorithms and incorporating local heuristics such as VNS, 2-opt, and 3-opt. Although the performance gains from introducing such local strategies are significant, these improvements do not carry over when different problems are solved. Therefore, this research proposes a meta-heuristic evolutionary algorithm, BBEA, that can be applied to several types of problems. The experimental results validate that BBEA can solve these problems even without the design of local strategies.
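For reference, a minimal implementation of one of the local heuristics mentioned above, 2-opt, on a small random TSP instance (illustrative; BBEA itself is not shown):

```python
# Illustrative 2-opt local search on a small TSP (one of the local
# heuristics the abstract mentions; not the proposed BBEA itself).
import itertools, random

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(12)]

def length(tour):
    # Closed-tour length; tour[i - 1] wraps around at i == 0.
    return sum(((pts[tour[i]][0] - pts[tour[i - 1]][0]) ** 2 +
                (pts[tour[i]][1] - pts[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

tour = list(range(len(pts)))
improved = True
while improved:
    improved = False
    for i, j in itertools.combinations(range(1, len(tour)), 2):
        candidate = tour[:i] + tour[i:j][::-1] + tour[j:]   # reverse a segment
        if length(candidate) < length(tour):
            tour, improved = candidate, True
print("tour length after 2-opt:", round(length(tour), 3))
```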
Abstract: Nowadays, more and more engineering systems use some kind of Artificial Intelligence (AI) in the development of their processes. Well-known AI techniques include artificial neural nets, fuzzy inference systems, and neuro-fuzzy inference systems, among others. Furthermore, many decision-making applications base their intelligent processes on Fuzzy Logic, due to the capability of Fuzzy Inference Systems (FIS) to deal with problems that are based on user knowledge and experience. Also, knowing that users have a wide variety of characteristics and generally provide uncertain data, this information can be used and properly processed by a FIS. To properly handle uncertainty and inexact system input values, a FIS normally uses Membership Functions (MF) that represent a degree of user satisfaction with certain conditions and/or constraints. In order to define the parameters of the MFs, the knowledge of experts in the field is very important: this knowledge defines the MF shape used to process the user inputs, and through fuzzy reasoning and inference mechanisms the FIS can provide an “appropriate” output. However, an important issue immediately arises: how can it be assured that the obtained output is the optimum solution? How can it be guaranteed that each MF has an optimum shape? A viable answer to these questions is MF parameter optimization. In this paper a novel parameter optimization process is presented. The process for FIS parameter optimization consists of five simple steps that can easily be carried out off-line. The proposed process is demonstrated by its implementation on an Intelligent Interface section dealing with the on-line customization/personalization of internet portals applied to e-commerce.
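A hedged sketch of what such off-line MF tuning can look like, using a triangular MF and hypothetical user-satisfaction samples (the paper's five-step process is not reproduced):

```python
# Hedged sketch of MF parameter tuning: adjust the peak of a triangular
# membership function to best match recorded user-satisfaction samples.
import numpy as np

def triangular(x, a, b, c):
    """Triangular MF with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical (input, satisfaction) pairs gathered from portal users.
xs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
sat = np.array([0.1, 0.6, 1.0, 0.5, 0.1])

def error(b):
    return np.mean((triangular(xs, 0.0, b, 1.0) - sat) ** 2)

candidates = np.linspace(0.05, 0.95, 91)        # simple off-line search
best = min(candidates, key=error)
print("optimized peak:", round(best, 2), "error:", round(error(best), 4))
```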
Abstract: Robust stability and performance are the two most basic features of feedback control systems. The harmonic balance analysis technique makes it possible to analyze the stability of limit cycles arising from a neural-network-based control system operating over nonlinear plants. In this work a robust stability analysis based on harmonic balance is presented and applied to the neural control of a nonlinear binary distillation column with unstructured uncertainty. We develop ways to describe uncertainty in the form of neglected nonlinear dynamics and high harmonics for the plant and the controller, respectively. Finally, conclusions about the performance of the neural control system are drawn using the Nyquist stability margin together with the structured singular values of the uncertainty as a robustness measure.
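As background, the core harmonic balance computation can be illustrated numerically; the sketch below evaluates the describing function of a saturation nonlinearity (illustrative only; the neural controller and distillation plant are not modeled here):

```python
# Numerical describing function of a saturation nonlinearity, the basic
# computation behind harmonic balance analysis.
import numpy as np

def describing_function(f, amplitude, n=10_000):
    """Gain of the first harmonic of f(A sin(theta)) relative to A."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    out = f(amplitude * np.sin(theta))
    b1 = (2.0 / n) * np.sum(out * np.sin(theta))   # fundamental coefficient
    return b1 / amplitude

saturation = lambda x: np.clip(x, -1.0, 1.0)
for A in (0.5, 1.0, 2.0, 5.0):
    print(f"A = {A}: N(A) = {describing_function(saturation, A):.3f}")
```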
Abstract: Most known methods for measuring the structural similarity of document structures are based on, e.g., tag measures, path metrics, and tree measures on their DOM-trees. Other methods measure the similarity within the framework of the well-known vector space model. In contrast to these, we present a new approach to measuring the structural similarity of web-based documents represented by so-called generalized trees, which are more general than DOM-trees, since the latter represent only directed rooted trees. We design a new similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as strings of integers whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. More precisely, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees; hence, we transform a graph similarity problem into a string similarity problem. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based documents.
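The alignment step can be illustrated with a standard Needleman-Wunsch global alignment over integer property strings, under assumed match/mismatch/gap scores (the paper's actual scoring and string derivation may differ):

```python
# Global alignment (Needleman-Wunsch) of two integer property strings
# derived from graphs; the optimal score serves as a similarity value.
def align(p, q, gap=-1, match=2, mismatch=-1):
    m, n = len(p), len(q)
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i-1][j-1] + (match if p[i-1] == q[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[m][n]

# Hypothetical property strings, e.g. out-degrees per level of two trees.
print(align([2, 3, 1, 0], [2, 3, 0]))
```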
Abstract: Devices in a pervasive computing system (PCS) are characterized by their context-awareness, which permits them to provide proactively adapted services to users and applications. To do so, context must be well understood and modeled in an appropriate form that enhances its sharing between devices and provides a high level of abstraction. The most interesting methods for modeling context are those based on ontologies; however, the majority of the proposed methods fail to propose a generic ontology for context, which limits their usability and keeps them specific to a particular domain. The adaptation task must be done automatically, without explicit intervention by the user. Devices in a PCS must acquire some intelligence that permits them to sense the current context and trigger the appropriate service, or provide a service in a more suitable form. In this paper we propose a generic service ontology for context modeling and a context-aware service adaptation based on a service-oriented definition of context.
Abstract: Biometrics, which refers to identifying an individual based on his or her physiological or behavioral characteristics, has the capability to reliably distinguish between an authorized person and an impostor. Signature verification systems can be categorized as offline (static) and online (dynamic). This paper presents a neural network based system for the recognition of offline handwritten signatures that is trained with low-resolution scanned signature images.
Abstract: In this paper a new robust digital image watermarking algorithm based on the Complex Wavelet Transform is proposed. This technique embeds different parts of a watermark into different blocks of an image in the complex wavelet domain. To increase the security of the method, two chaotic maps are employed: one map is used to determine the blocks of the host image for watermark embedding, and the other map is used to encrypt the watermark image. Simulation results are presented to demonstrate the effectiveness of the proposed algorithm.
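A simplified sketch of the two-map idea, with the logistic map standing in for the unspecified chaotic maps and the complex wavelet embedding step omitted:

```python
# Simplified sketch (the complex wavelet step is omitted; the logistic
# map stands in for the paper's unspecified chaotic maps): one map picks
# host blocks, the other generates a keystream to encrypt watermark bits.
import numpy as np

def logistic(x0, n, r=3.99):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return np.array(xs)

watermark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

# Map 1: choose which of 32 blocks receive the watermark bits.
blocks = np.argsort(logistic(0.31, 32))[:len(watermark)]

# Map 2: chaotic keystream XOR-encrypts the watermark before embedding.
keystream = (logistic(0.57, len(watermark)) > 0.5).astype(np.uint8)
encrypted = watermark ^ keystream

print("blocks:", blocks.tolist())
print("encrypted bits:", encrypted.tolist())
print("recovered:", (encrypted ^ keystream).tolist())   # decrypts correctly
```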
Abstract: An adaptive Chinese hand-talking system is presented in this paper. By analyzing three data collection strategies for new users, an adaptation framework including supervised and unsupervised adaptation methods is proposed. For supervised adaptation, affinity propagation (AP) is used to extract exemplar subsets, and enhanced maximum a posteriori / vector field smoothing (eMAP/VFS) is proposed to pool the adaptation data among different models. For unsupervised adaptation, polynomial segment models (PSMs) are used to help hidden Markov models (HMMs) accurately label the unlabeled data; the "labeled" data together with signer-independent models are then fed into a MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both achieve improvements in recognition rate.
Abstract: Email has become a fast and cheap means of online communication. The main threat to email is Unsolicited Bulk Email (UBE), commonly called spam. The current work aims at the identification of unigrams in more than 2700 UBE messages that advertise body-enhancement drugs. The identification is based on the requirement that the unigram is neither present in a dictionary nor a slang term. The motives of the paper are manifold: it is an attempt to analyze spamming behaviour and the use of the word-mutation technique, and, along the way, to better understand spam, slang, and their interplay. The problem is addressed by employing a tokenization technique and a unigram bag-of-words (BOW) model. We found that non-lexicon words constitute nearly 66% of the total lexis of the corpus, whereas non-slang words constitute nearly 2.4% of the non-lexicon words. Further, non-lexicon non-slang unigrams composed of two lexicon words form more than 71% of the total number of such unigrams. To the best of our knowledge, this is the first attempt to analyze the usage of non-lexicon non-slang unigrams in any kind of UBE.
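A minimal sketch of the tokenization and filtering pipeline, with placeholder dictionary and slang lists (the actual lexical resources used in the study are not specified here):

```python
# Minimal sketch of the filtering pipeline with placeholder word lists.
from collections import Counter
import re

DICTIONARY = {"buy", "cheap", "pills", "now", "enlarge"}   # placeholder lexicon
SLANG = {"viagra"}                                         # placeholder slang list

emails = ["Buy ch3ap V1AGRA pills now!!!", "EnlargeXL pills, buy now"]
tokens = [t for mail in emails for t in re.findall(r"[a-z0-9]+", mail.lower())]

counts = Counter(tokens)
non_lexicon = {t: c for t, c in counts.items() if t not in DICTIONARY}
non_slang = {t: c for t, c in non_lexicon.items() if t not in SLANG}
print("non-lexicon non-slang unigrams:", non_slang)
```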
Abstract: This paper proposes a stroke extraction method for use in off-line signature verification. After giving a brief overview of current ongoing research, an algorithm is introduced for detecting and following strokes in static images of signatures. Problems such as the handling of junctions and variations in line width and line intensity are discussed in detail. The results are validated both by using an existing on-line signature database and by employing image registration methods.
Abstract: Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory requirements can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, we present three feature selection methods: Information Gain, a Support Vector Machine based feature selection (SVM_FS), and a Genetic Algorithm with SVM (GA_SVM). We show that the best results were obtained with the GA_SVM method for a relatively small dimension of the feature vector.
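For the first of these methods, a self-contained Information Gain sketch over binary term-presence features and synthetic labels (SVM_FS and GA_SVM are not shown):

```python
# Information Gain sketch for term selection (assumed binary term
# presence and synthetic labels).
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(term_present, labels):
    """IG of a binary term-presence vector w.r.t. class labels."""
    classes = np.unique(labels)
    prior = entropy(np.array([np.mean(labels == c) for c in classes]))
    ig = prior
    for v in (True, False):
        mask = term_present == v
        if mask.any():
            cond = entropy(np.array([np.mean(labels[mask] == c) for c in classes]))
            ig -= mask.mean() * cond
    return ig

rng = np.random.default_rng(4)
X = rng.random((200, 50)) > 0.7            # 200 docs, 50 binary term features
y = X[:, 7].astype(int)                    # label correlated with term 7
scores = [information_gain(X[:, j], y) for j in range(50)]
print("top term by IG:", int(np.argmax(scores)))   # expected: 7
```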