Abstract: Ants are fascinating creatures that demonstrate the
ability to find food and bring it back to their nest. Their ability, as a
colony, to find paths to food sources has inspired the development of
algorithms known as Ant Colony Systems (ACS). The principle of
cooperation forms the backbone of such algorithms, which are commonly used
to find solutions to problems such as the Traveling Salesman
Problem (TSP). Ants communicate with each other through chemical
substances called pheromones. Modeling individual ants' ability to
manipulate this substance can help an ACS find the best solution.
This paper introduces a Dynamic Ant Colony System with three-level
updates (DACS3) that enhances an existing ACS. Experiments
were conducted to observe single-ant behavior in a colony of
Malaysian House Red Ants, and this behavior was incorporated into the
DACS3 algorithm. We benchmark the performance of DACS3 against
DACS on TSP instances ranging from 14 to 100 cities. The results
show that the DACS3 algorithm achieves shorter distances in
most cases and also performs considerably faster than DACS.
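As background for readers unfamiliar with ACS, the standard tour-construction and pheromone-update rules that DACS3 builds on can be sketched as follows. This is a minimal illustration of a conventional ACS on a toy 4-city instance, not the paper's three-level DACS3 update; the parameter values (β, q0, ρ) and the single global best-so-far update are assumptions, and ACS's local update is omitted.

```python
import random

random.seed(0)  # reproducible toy run

def acs_tour(dist, tau, beta=2.0, q0=0.9):
    """Build one ant's tour with the ACS pseudo-random proportional rule."""
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        i = tour[-1]
        # desirability = pheromone * heuristic (inverse distance)
        scores = {j: tau[i][j] * (1.0 / dist[i][j]) ** beta for j in unvisited}
        if random.random() < q0:                 # exploit the best edge
            j = max(scores, key=scores.get)
        else:                                    # explore proportionally
            r, acc = random.uniform(0, sum(scores.values())), 0.0
            for j, s in scores.items():
                acc += s
                if acc >= r:
                    break
        unvisited.remove(j)
        tour.append(j)
    return tour

def tour_length(dist, tour):
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

# Toy 4-city instance; pheromone starts uniform.
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
tau = [[1.0] * 4 for _ in range(4)]
rho = 0.1  # evaporation rate
best, best_len = None, float("inf")
for _ in range(50):
    t = acs_tour(dist, tau)
    length = tour_length(dist, t)
    if length < best_len:
        best, best_len = t, length
    # one global pheromone update, deposited along the best-so-far tour
    for k in range(len(best)):
        i, j = best[k], best[(k + 1) % len(best)]
        tau[i][j] = (1 - rho) * tau[i][j] + rho / best_len
        tau[j][i] = tau[i][j]
```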
Abstract: This paper uses a fuzzy Kohonen neural network for medical image segmentation. Image segmentation plays an important role in many medical imaging applications by automating or facilitating diagnosis. The paper analyses tumors by extracting the features of area, entropy, mean and standard deviation; these measurements give a description of a tumor.
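The four descriptors the abstract names (area, entropy, mean and standard deviation) can be computed from a segmented region as in this minimal sketch; the thresholded toy image and mask are illustrative stand-ins, not the paper's fuzzy Kohonen segmentation.

```python
import numpy as np

def region_features(image, mask):
    """Area, entropy, mean and standard deviation of a segmented region
    (the four descriptors mentioned in the abstract)."""
    pixels = image[mask]
    area = pixels.size
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    entropy = -np.sum(p * np.log2(p))
    return area, entropy, pixels.mean(), pixels.std()

# Toy 8-bit "scan" with a bright 8x8 square standing in for a tumor region.
img = np.zeros((32, 32), dtype=np.uint8)
img[8:16, 8:16] = 200
mask = img > 100
area, ent, mean, std = region_features(img, mask)
```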
Abstract: Motion capture devices have been utilized in
producing several kinds of content, such as movies and video games. However,
since motion capture devices are expensive and inconvenient to use,
motions segmented from captured data are recycled and synthesized
for use in other content, but the motions have generally been
segmented by content producers manually. Therefore, automatic
motion segmentation has recently attracted a lot of attention. Previous
approaches are divided into on-line and off-line, where on-line
approaches segment motions based on similarities between
neighboring frames and off-line approaches segment motions by
capturing the global characteristics in feature space. In this paper, we
propose a graph-based high-level motion segmentation method. Since
high-level motions consist of several frames repeated within a temporal
distance, we consider all similarities among all frames within that
distance. This is achieved by constructing a graph, where
each vertex represents a frame and the edges between frames are
weighted by their similarity. Then, the normalized cuts algorithm is used
to partition the constructed graph into several sub-graphs by globally
finding minimum cuts. In the experiments, the proposed method showed
better performance than a PCA-based method on-line and a GMM-based
method off-line, as the proposed method globally segments motions from
a graph constructed using similarities between neighboring frames as
well as similarities among all frames within the temporal distance.
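The graph construction and partitioning described above can be sketched as a single two-way normalized cut (the spectral relaxation of Shi and Malik): frames become vertices, Gaussian similarities within a temporal window become edge weights, and the sign of the Fiedler vector of the normalized Laplacian gives the bipartition. The toy frame features, window size and kernel width are illustrative assumptions.

```python
import numpy as np

def normalized_cut_bipartition(frames, window=10, sigma=1.0):
    """One 2-way normalized cut: build a frame-similarity graph limited to a
    temporal window, then threshold the Fiedler vector of the normalized
    Laplacian (the spectral relaxation of the normalized cuts criterion)."""
    n = len(frames)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, min(n, i + window + 1)):
            d2 = np.sum((frames[i] - frames[j]) ** 2)
            W[i, j] = W[j, i] = np.exp(-d2 / (2 * sigma ** 2))
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)
    fiedler = vecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0                 # boolean segment labels

# Toy sequence: 20 "walk" frames followed by 20 "run" frames.
frames = np.vstack([np.zeros((20, 3)), np.ones((20, 3)) * 2])
frames += 0.05 * np.random.default_rng(0).normal(size=frames.shape)
labels = normalized_cut_bipartition(frames)
```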
Abstract: Morphogenesis is the process that underpins the self-organised development and regeneration of biological systems. The ability to mimic morphogenesis in artificial systems has great potential for many engineering applications, including the production of biological tissue, the design of robust electronic systems and the co-ordination of parallel computing. Previous attempts to mimic these complex dynamics within artificial systems have relied upon the use of evolutionary algorithms, which has limited their size and complexity. This paper presents some insight into the underlying dynamics of morphogenesis, then shows how to design, without the assistance of evolutionary algorithms, cellular architectures that converge to complex patterns.
Abstract: In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, gray-level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit prior to the LSB represents the edged image after gray-level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, the hiding, identification and authentication of text embedded within digital images. Image embedding is considered a good compression method in terms of conserving memory space. Moreover, information hiding within a digital image can be used for secure information transfer. The creation and extraction of three embedded images, and the hiding of text information, are discussed and illustrated in the following sections.
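The first two steps (Sobel edge detection and LSB use) can be illustrated with a minimal sketch that embeds a binary edge map into the LSB plane of a host image. This is not the paper's full three-image scheme with fuzzy gray-level connectivity; the threshold and test images are assumptions.

```python
import numpy as np

def sobel_edges(img, thresh=100):
    """Binary edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    f = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = f[i:i + 3, j:j + 3]
            gx[i, j] = (kx * patch).sum()
            gy[i, j] = (ky * patch).sum()
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def embed_lsb(host, bits):
    """Write a binary image into the least significant bit plane of the host."""
    return (host & 0xFE) | bits

def extract_lsb(stego):
    return stego & 1

# Edge map of a toy square image, hidden inside a gradient host image.
square = np.zeros((32, 32), dtype=np.uint8)
square[8:24, 8:24] = 255
edges = sobel_edges(square)
host = np.tile(np.arange(32, dtype=np.uint8) * 8, (32, 1))
stego = embed_lsb(host, edges)
```

Because only the LSB changes, no host pixel moves by more than one gray level, which is why the carrier shows "very little difference in contrast".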
Abstract: XML is a markup language that is becoming the
standard format for information representation and data exchange. A
major purpose of XML is the explicit representation of the logical
structure of a document. Much research has been performed to
exploit the logical structure of documents in information retrieval in
order to precisely extract the information users need from large
collections of XML documents. In this paper, we describe an XML
information retrieval weighting scheme that tries to find the most
relevant elements in XML documents in response to a user query.
We present this weighting model for information retrieval systems
that utilize plausible inferences to infer the relevance of elements in
XML documents. We also add to this model the Dempster-Shafer
theory of evidence, to express the uncertainty in plausible inferences,
and the Dempster-Shafer rule of combination, to combine evidence
derived from different inferences.
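The Dempster-Shafer rule of combination mentioned above has a compact implementation. The sketch below combines two hypothetical mass functions over the frame {relevant, not relevant}; the mass values are illustrative, not drawn from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y           # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two sources of evidence about whether an XML element is relevant (R) or not (N).
R, N = frozenset({"R"}), frozenset({"N"})
theta = R | N                      # the frame of discernment (total uncertainty)
m1 = {R: 0.6, theta: 0.4}          # inference 1: fairly confident in relevance
m2 = {R: 0.5, N: 0.2, theta: 0.3}  # inference 2: mixed evidence
m = dempster_combine(m1, m2)
```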
Abstract: A new approach to improving the generalization ability
of neural networks is presented. It is based on the point of view of
fuzzy theory. The approach is implemented by shrinking or
magnifying the input vector, thereby reducing the difference between
the training set and the testing set. It is called the “shrinking-magnifying
approach” (SMA). At the same time, a new algorithm, the α-algorithm, is
presented to find the appropriate shrinking-magnifying factor
(SMF) α and obtain better generalization ability for neural networks.
A number of simulation experiments serve to study the effect of SMA
and the α-algorithm. The experimental results are discussed in detail, and
the functional principle of SMA is analyzed theoretically. The results of
the experiments and analyses show that the new approach is not only
simpler and easier to apply, but also very effective for many neural networks
and many classification problems. In our experiments, the improvement
in the generalization ability of neural networks reached as much as 90%.
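Conceptually, SMA simply rescales input vectors by the factor α. The sketch below shows the rescaling together with a naive grid search standing in for the paper's α-algorithm (whose actual procedure the abstract does not specify); the toy data and the spread-matching criterion are assumptions.

```python
import numpy as np

def shrink_magnify(x, alpha):
    """Scale an input vector by the shrinking-magnifying factor alpha
    (alpha < 1 shrinks, alpha > 1 magnifies)."""
    return alpha * np.asarray(x, dtype=float)

def spread(data):
    """Average per-feature standard deviation, a crude measure of spread."""
    return data.std(axis=0).mean()

# Toy data: the test set is more spread out than the training set.
rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(200, 4))
test = rng.normal(0.0, 1.5, size=(200, 4))

# Stand-in for the alpha-algorithm: grid-search the alpha that matches spreads.
alphas = np.linspace(0.1, 2.0, 191)
best_alpha = min(alphas,
                 key=lambda a: abs(spread(shrink_magnify(test, a)) - spread(train)))
```

With these spreads the chosen α lands near 1/1.5, shrinking the test inputs toward the training distribution before they reach the network.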
Abstract: In this paper a simple watermarking method for
color images is proposed. The proposed method is based on
embedding the watermark in the histograms of the HSV planes
using visual cryptography watermarking. The method has
proved to be robust against various image processing
operations such as filtering, compression and additive noise, and
against various geometrical attacks such as rotation, scaling, cropping,
flipping and shearing.
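The visual cryptography component can be illustrated with a classic (2,2) scheme: each watermark bit is expanded into two subpixel patterns such that stacking (OR-ing) the shares reveals the bit, while either share alone is random. How the paper ties the shares to the HSV histograms is not shown here; this is only the share-generation idea.

```python
import random

def make_shares(secret_bits, rng=random.Random(0)):
    """(2,2) visual cryptography: a white bit (0) gives identical subpixel
    pairs in both shares, a black bit (1) gives complementary pairs."""
    share1, share2 = [], []
    for bit in secret_bits:
        pattern = rng.choice([(0, 1), (1, 0)])
        share1.append(pattern)
        share2.append(pattern if bit == 0 else (1 - pattern[0], 1 - pattern[1]))
    return share1, share2

def stack(share1, share2):
    """Simulate overlaying transparencies: black wins (logical OR)."""
    return [(a0 | b0, a1 | b1) for (a0, a1), (b0, b1) in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
stacked = stack(s1, s2)
# A black secret bit stacks to (1, 1); a white bit keeps one white subpixel.
recovered = [1 if pair == (1, 1) else 0 for pair in stacked]
```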
Abstract: This paper studies an intelligent glass bottle inspector
based on machine vision, intended to replace manual inspection. The system
structure is illustrated in detail. The paper presents a
method based on the watershed transform to segment
possible defective regions and extract features of the bottle wall by rules.
Then the wavelet transform is used to extract features of the bottle finish
from the images. After feature extraction, a fuzzy support vector
machine ensemble is put forward as the classifier. To ensure that
the fuzzy support vector machines have good classification ability,
a GA-based ensemble method is used to combine the several
fuzzy support vector machines. The experiments demonstrate that,
using this inspector to inspect glass bottles, the accuracy rate can
reach above 97.5%.
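The wavelet feature-extraction step can be illustrated with a one-level 2-D Haar decomposition, using the detail-subband energies as simple texture features; the actual wavelet and feature rules used in the paper are not specified in the abstract, so this is a generic sketch.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation plus the
    horizontal/vertical/diagonal detail subbands."""
    f = img.astype(float)
    a = (f[0::2, :] + f[1::2, :]) / 2      # row averages
    d = (f[0::2, :] - f[1::2, :]) / 2      # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def subband_energies(img):
    """Energy of each detail subband - a simple texture feature vector."""
    _, LH, HL, HH = haar2d(img)
    return [float((b ** 2).mean()) for b in (LH, HL, HH)]

# Vertical stripes put all their energy into the column-difference band.
stripes = np.tile([0, 8], (8, 4))          # 8x8 image of alternating columns
e_lh, e_hl, e_hh = subband_energies(stripes)
```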
Abstract: Both image steganography and image encryption have
advantages and disadvantages. Steganography allows us to hide a
desired image containing confidential information in a cover or
host image, while image encryption transforms the desired image
into a non-readable, incomprehensible form. Encryption
methods are usually much more robust than steganographic ones.
However, they have high visibility and can easily provoke attackers,
since it is usually obvious from an encrypted image that
something is hidden. The combination of steganography and
encryption covers both of their weaknesses and therefore
increases security. In this paper an image encryption method
based on sinc-convolution, using an encryption key of 128-bit
length, is introduced. Then, the encrypted image is hidden in a
host image using a modified version of the JSteg steganography
algorithm. This method can be applied to almost all image formats,
including TIF, BMP, GIF and JPEG. The experimental results show
that our method is able to hide a desired image with high security and
low visibility.
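The steganographic half can be illustrated with the classic JSteg embedding rule: message bits replace the LSBs of quantized DCT coefficients, skipping values whose change would be too visible. The sketch below is a simplified variant that also skips -1 so extraction stays trivially reversible; the coefficient values stand in for a real JPEG block, and the sinc-convolution encryption is not shown.

```python
def jsteg_embed(coeffs, bits):
    """JSteg-style rule: write message bits into the LSBs of quantized DCT
    coefficients, skipping -1, 0 and 1 (simplified from classic JSteg,
    which skips only 0 and 1)."""
    out, it = [], iter(bits)
    for c in coeffs:
        if c in (-1, 0, 1):              # skipped: too visible / ambiguous
            out.append(c)
            continue
        try:
            b = next(it)
        except StopIteration:            # message exhausted; copy the rest
            out.append(c)
            continue
        sign = 1 if c > 0 else -1
        out.append(sign * ((abs(c) & ~1) | b))   # set the magnitude's LSB
    return out

def jsteg_extract(coeffs, n_bits):
    bits = []
    for c in coeffs:
        if c in (-1, 0, 1):
            continue
        bits.append(abs(c) & 1)
        if len(bits) == n_bits:
            break
    return bits

# Stand-in quantized DCT coefficients and a few message bits.
coeffs = [12, -3, 0, 1, 5, -7, 2, 0, 9]
bits = [1, 0, 1, 1, 0]
stego = jsteg_embed(coeffs, bits)
recovered = jsteg_extract(stego, len(bits))
```

Each used coefficient changes by at most one step of magnitude, which keeps the visual impact on the host low.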
Abstract: The paper shows how the CASMAS modeling language,
and its associated pervasive computing architecture, can be
used to facilitate continuity of care by providing members of patient-centered
communities of care with support for cooperation and
knowledge sharing through the use of electronic documents and
digital devices. We consider a scenario of clearly fragmented care to
show how proper mechanisms can be defined to facilitate better
integration of practices and information across heterogeneous care
networks. The scenario is described in terms of architectural components
and cooperation-oriented mechanisms that make the support
reactive to the evolution of the context in which these communities
operate.
Abstract: Cryptography provides a secure means of
information transmission over an insecure channel. It authenticates
messages based on a key, but not on the user, and it requires a lengthy
key to encrypt and decrypt the messages being sent and received.
But these keys can be guessed or cracked. Moreover,
maintaining and sharing lengthy, random keys during enciphering and
deciphering is a critical problem in cryptographic
systems. A new approach is described for generating a cryptographic key
acquired from a person's iris pattern. In the biometric field,
a template created by a biometric algorithm can only be
authenticated by the same person. Among biometric templates,
iris features can efficiently distinguish individuals and
produce fewer false positives in larger populations. This kind of iris
code distribution provides low intra-class variability, which allows
the cryptosystem to confidently decrypt messages given an exact
match of the iris pattern. In the proposed approach, the iris features
are extracted using multi-resolution wavelets, producing a 135-bit iris
code from each subject that is used for encrypting and decrypting the
messages. Autocorrelators are used to recall the original messages
from the partially corrupted data produced by the decryption process.
The approach aims to resolve the repudiation and key management problems.
Results were analyzed for both a conventional iris cryptography system
(CIC) and a non-repudiation iris cryptography system (NRIC), showing
that this new approach provides considerably strong
authentication in the enciphering and deciphering processes.
Abstract: In this paper we report a study aimed at determining
the most effective animation technique for representing ASL
(American Sign Language) finger-spelling. Specifically, in the study
we compare two commonly used 3D computer animation methods
(keyframe animation and motion capture) in order to ascertain which
technique produces the most 'accurate', 'readable', and 'close to
actual signing' (i.e. realistic) rendering of ASL finger-spelling. To
accomplish this goal we developed 20 animated clips of finger-spelled
words and designed an experiment consisting of a
web survey with rating questions. Seventy-one subjects aged 19-45 participated
in the study. Results showed that recognition of the words was
correlated with the method used to animate the signs. In particular,
the keyframe technique produced the most accurate representation of the
signs (i.e., participants were more likely to identify the words
correctly in keyframed sequences than in motion-captured
ones). Further, findings showed that the animation method had an
effect on the reported scores for readability and closeness to actual
signing: the estimated marginal mean readability and closeness were
greater for keyframed signs than for motion-captured signs. To our
knowledge, this is the first study aimed at measuring and comparing
the accuracy, readability and realism of ASL animations produced with
different techniques.
Abstract: To define or predict incipient motion in an alluvial
channel, most investigators use a standard or modified form of
the Shields diagram. The Shields diagram does give a procedure to determine
the incipient motion parameters, but an iterative one. To design
properly (without iteration), one needs an additional equation for
resistance, and the absence of a universal resistance equation further magnifies
the difficulty of defining the model. The neural network technique,
which is particularly useful for modeling complex processes, is
presented as a tool complementary to modeling incipient motion.
The present work develops a neural network model employing the RBF
network to predict the average velocity u and water depth y based on
experimental data on incipient conditions. Based on the model,
design curves are presented for field application.
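An RBF network of the kind mentioned above can be fitted by least squares over Gaussian basis functions, as in this sketch; the one-dimensional toy data stands in for the experimental incipient-motion measurements, and the center placement and kernel width are assumptions.

```python
import numpy as np

def rbf_design_matrix(X, centers, gamma):
    """Gaussian RBF activations of every input against every center."""
    return np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))

def rbf_fit(X, y, centers, gamma=1.0):
    """Fit the output weights of an RBF network by linear least squares."""
    Phi = rbf_design_matrix(X, centers, gamma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, w, gamma=1.0):
    return rbf_design_matrix(X, centers, gamma) @ w

# Toy stand-in data: one input feature, smooth target curve.
X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
centers = np.linspace(0, 1, 10).reshape(-1, 1)   # evenly spaced RBF centers
w = rbf_fit(X, y, centers, gamma=50.0)
pred = rbf_predict(X, centers, w, gamma=50.0)
```

Sweeping the fitted model over a grid of inputs is how design curves like those in the paper would be produced from such a network.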
Abstract: Neural processors have shown good results for
detecting a certain character in a given input matrix. In this paper, a
new idea to speed up the operation of neural processors for character
detection is presented. Such processors are designed based on cross-correlation
in the frequency domain between the input matrix and the
weights of the neural networks. This approach is developed to reduce the
computation steps required by these faster neural networks for the
searching process. The principle of the divide-and-conquer strategy is
applied through image decomposition: each image is divided into
small sub-images and each one is then tested separately by
using a single faster neural processor. Furthermore, faster character
detection is obtained by using parallel processing techniques to test the
resulting sub-images at the same time with the same number of faster
neural networks. In contrast to using only faster neural processors, the
speed-up ratio increases with the size of the input image when using
faster neural processors together with image decomposition. Moreover, the
problem of local sub-image normalization in the frequency domain is
solved, and the effect of image normalization on the speed-up ratio of
character detection is discussed. Simulation results show that local
sub-image normalization through weight normalization is faster than
sub-image normalization in the spatial domain. The overall speed-up
ratio of the detection process is further increased when the normalization of
weights is done offline.
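The core operation, cross-correlation in the frequency domain applied per sub-image, can be sketched with FFTs: corr = IFFT(FFT(sub-image) · conj(FFT(template))), with a peak where the template matches. The weights of a trained neural processor are replaced here by a plain template, and the block size and detection threshold are assumptions.

```python
import numpy as np

def xcorr_fft(sub, template):
    """Cross-correlation in the frequency domain:
    corr = IFFT(FFT(f) * conj(FFT(t))), peaking at the template's offset."""
    F = np.fft.fft2(sub)
    T = np.fft.fft2(template, s=sub.shape)   # zero-pad template to block size
    return np.real(np.fft.ifft2(F * np.conj(T)))

def detect_in_subimages(image, template, block=16, thresh=0.9):
    """Divide-and-conquer: decompose the image and test each sub-image
    separately (serially here; the sub-images are independent, so they
    could equally be tested in parallel)."""
    peak = float((template ** 2).sum())      # score of an exact match
    hits = []
    h, w = image.shape
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            corr = xcorr_fft(image[bi:bi + block, bj:bj + block], template)
            i, j = np.unravel_index(np.argmax(corr), corr.shape)
            if corr[i, j] >= thresh * peak:
                hits.append((bi + i, bj + j))
    return hits

# Toy 32x32 "page" with one 4x4 character pattern planted at row 18, col 5.
template = np.array([[1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]], float)
image = np.zeros((32, 32))
image[18:22, 5:9] = template
hits = detect_in_subimages(image, template)
```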
Abstract: Near-infrared (NIR) spectroscopy is a widely used
method for material identification in laboratory and industrial applications.
While standard spectrometers only allow measurements at
one sampling point at a time, NIR Spectral Imaging techniques can
measure, in real-time, both the size and shape of an object as well as
identify the material the object is made of. The online classification
and sorting of recovered paper with NIR Spectral Imaging (SI)
is used with success in the paper recycling industry throughout
Europe. Recently, the globalisation of recycling material streams
has caused water-based flexographic-printed newspapers, mainly from
the UK and Italy, to appear in central Europe as well. These flexo-printed
newspapers are not sufficiently de-inkable with the standard de-inking
process originally developed for offset-printed paper. This de-inking
process removes the ink from recovered paper and is the fundamental
processing step in producing high-quality paper from recovered paper.
Thus, flexo-printed newspapers are a growing problem for the
recycling industry, as they reduce the quality of the produced paper
if their amount exceeds a certain limit within the recovered paper
material.
This paper presents the results of a research project on the
development of an automated entry inspection system for recovered
paper that was jointly conducted by CTR AG (Austria) and PTS
Papiertechnische Stiftung (Germany). Within the project, an NIR
SI prototype for the identification of flexo-printed newspaper has
been developed. The prototype can identify and sort out flexo-printed
newspapers in real-time and achieves a detection accuracy
for flexo-printed newspaper of over 95%. NIR SI, the technology the
prototype is based on, will allow the development of inspection systems
for incoming goods in a paper production facility as well as industrial
sorting systems for recovered paper in the recycling industry in the
near future.
Abstract: Heuristic and meta-heuristic approaches are among the
most robust search techniques for large-scale search spaces. In
particular, when the combinatorial explosion of states in a large
search space prevents the solution of a problem via classical
computation methods, such problems are NP-complete. In this
research, the problem of winner determination in combinatorial
auctions is formulated and, after assessing older heuristic
functions, we solve the problem using a genetic algorithm and
show that this new method yields better performance
in comparison to other heuristic approaches such as simulated annealing
and the greedy approach.
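A genetic algorithm for winner determination can be sketched as follows: a chromosome is a bitmask over the submitted bids, fitness is the revenue of feasible allocations (allocations selling an item twice score zero), and one-point crossover plus bit-flip mutation evolve the population. The toy auction and all GA parameters are assumptions, not the paper's configuration.

```python
import random

bids = [  # (item set, offered price) - a toy combinatorial auction
    ({"a", "b"}, 9), ({"b", "c"}, 8), ({"c"}, 5),
    ({"a"}, 4), ({"b"}, 3), ({"d"}, 2),
]

def revenue(mask):
    """Total revenue of the accepted bids; 0 for infeasible allocations."""
    taken, total = set(), 0
    for bit, (items, price) in zip(mask, bids):
        if bit:
            if taken & items:
                return 0              # an item would be sold twice
            taken |= items
            total += price
    return total

def genetic_wdp(pop_size=30, gens=60, pmut=0.1, rng=random.Random(0)):
    pop = [[rng.randint(0, 1) for _ in bids] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=revenue, reverse=True)
        parents = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, len(bids))
            child = p1[:cut] + p2[cut:]       # one-point crossover
            for i in range(len(child)):       # bit-flip mutation
                if rng.random() < pmut:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=revenue)

best = genetic_wdp()
```

For this instance the optimal allocation accepts the bids for {a, b}, {c} and {d} for a revenue of 16.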
Abstract: This paper presents an adaptive differentiator
for sequential data based on adaptive control theory. The
algorithm is applied to detect moving objects by estimating the
temporal gradient of sequential data at a specified pixel. We
adopt two nonlinear intensity functions to reduce the influence
of noise. The derivatives of the nonlinear intensity functions
are estimated by an adaptive observer with a σ-modification
update law.
Abstract: This paper presents a review of vision-aided systems
and proposes an approach to visual rehabilitation using stereo vision
technology. The proposed system utilizes stereo vision, image
processing methodology and a sonification procedure to support
blind navigation. The developed system includes a wearable
computer, stereo cameras as the vision sensor and stereo earphones, all
moulded into a helmet. The image of the scene in front of the visually
handicapped user is captured by the vision sensors, and the captured images
are processed to enhance the important features of the scene
for navigation assistance. The image processing is designed as a model
of human vision, identifying obstacles and their depth
information. The processed image is mapped onto musical stereo
sound for the blind user's understanding of the scene in front. The
developed method has been tested in indoor and outdoor
environments, and the proposed image processing methodology is
found to be effective for object identification.
Abstract: The Petri net tool INA is well known in the
Petri net community. However, it lacks a graphical environment in which to
create and analyse INA models, and building a modelling tool for
design and analysis from scratch (for the INA tool, for example) is
generally a prohibitive task. The meta-modelling approach is useful for
dealing with such problems, since it allows the modelling of the
formalisms themselves. In this paper, we propose an approach based
on the combined use of meta-modelling and graph grammars to
automatically generate a visual modelling tool for INA for analysis
purposes. In our approach, the UML class diagram formalism is
used to define a meta-model of INA models, and the meta-modelling
tool ATOM3 is used to generate a visual modelling tool according to
the proposed INA meta-model. We have also proposed a graph
grammar to automatically generate the INA description of
graphically specified Petri net models. This allows the user to avoid
the errors that occur when this description is written manually. The INA tool
is then used to perform simulation and analysis of the resulting INA
description. Our environment is illustrated through an example.