Abstract: Graphs have become increasingly important in modeling
complicated structures and schemaless data such as proteins, chemical
compounds, and XML documents. Given a graph query, it is desirable
to retrieve graphs quickly from a large database via graph-based
indices. Unlike existing methods, our approach, called
VFM (Vertex to Frequent Feature Mapping), uses vertices
and decision features as the basic indexing features. VFM constructs
two mappings between vertices and frequent features to answer graph
queries. The VFM approach not only provides an elegant solution to
the graph indexing problem, but also demonstrates how database
indexing and query processing can benefit from data mining,
especially frequent pattern mining. The results show that the proposed
method not only avoids enumerating the subgraphs
of the query graph, but also effectively reduces the number of subgraph
isomorphism tests between the query graph and the graphs in the
candidate answer set during the verification stage.
Abstract: Many Wireless Sensor Network (WSN) applications require secure multicast services for broadcasting delay-sensitive data such as video files and live telecasts at fixed time slots. This work provides a novel method to deal with the end-to-end delay and packet drop rate. Opportunistic Routing chooses a link based on the maximum packet delivery ratio. Null Key Generation helps the receiver authenticate packets. A Markov Decision Process based Adaptive Scheduling algorithm determines the time slot for packet transmission. Both theoretical analysis and simulation results show that the proposed protocol ensures better performance in terms of packet delivery ratio, average end-to-end delay and normalized routing overhead.
Abstract: Internet usage increases every day, making a world of data accessible. Network providers do not want their services to be used in harmful or terrorist affairs, so they use a variety of methods to protect special regions from harmful data. One of the most important of these methods is the firewall. Firewalls stop the transfer of such packets in several ways, but in some cases they are not used because of their blind packet blocking, the high processing power they require, and their expensive prices. Here we propose a method to find a discriminant function that distinguishes between usual packets and harmful ones through statistical processing of network router logs, so that an administrator can alert the user. This method is very fast and can be deployed simply alongside Internet routers.
Abstract: Virtual environments induce simulator sickness in
some users. The purpose of this research is to compare
simulator sickness related to the parallax effect in one-screen and
three-screen HoloStageTM systems, measured by the Simulator
Sickness Questionnaire (SSQ). The results show that subjects tested
with three screens experienced less sickness than with one screen,
and that the effect from the Oculomotor (O) component exceeded that
from Disorientation (D), which in turn exceeded Nausea (N),
represented as O>D>N.
Abstract: Adaptive control involves modifying the control law
used by the controller to cope with the fact that the parameters of the
system being controlled change drastically due to changes in
environmental conditions or in the system itself. This technique is based
on the fundamental characteristic of adaptation in living organisms.
The adaptive control process continuously and
automatically measures the dynamic behavior of the plant, compares it
with the desired output, and uses the difference to vary adjustable
system parameters or to generate an actuating signal so
that optimal performance can be maintained regardless of system
changes. This paper deals with the application of a model reference
adaptive control scheme to a first-order system. The rule used
for this application is the MIT rule. This paper also shows the effect of
the adaptation gain on system performance. Simulation is done in
MATLAB and the results are discussed in detail.
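As a rough illustration of the MIT rule mentioned above, the following minimal sketch (in Python rather than MATLAB, with illustrative plant parameters that are not from the paper) adapts a feedforward gain for a first-order plant so that its output tracks a first-order reference model:

```python
# MIT-rule sketch for a first-order system (illustrative parameters only).
# Plant:           dy/dt  = -a*y  + b*u      (b unknown to the controller)
# Reference model: dym/dt = -a*ym + bm*r
# Control law:     u = theta * r
# MIT rule:        dtheta/dt = -gamma * e * ym,  where e = y - ym
a, b, bm = 1.0, 2.0, 1.0   # hypothetical plant/model coefficients
gamma = 0.5                # adaptation gain
dt, T = 0.01, 50.0         # Euler step and simulation horizon
r = 1.0                    # unit step reference

y = ym = theta = 0.0
for _ in range(int(T / dt)):
    u = theta * r
    e = y - ym
    dy = -a * y + b * u
    dym = -a * ym + bm * r
    dtheta = -gamma * e * ym       # MIT rule update
    y += dt * dy
    ym += dt * dym
    theta += dt * dtheta

# theta should approach bm/b = 0.5, at which point e -> 0
print(theta)
```

A larger `gamma` speeds up adaptation but, as the abstract notes, the adaptation gain affects performance: too large a value makes the loop oscillatory.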
Abstract: A manufacturing feature can be defined simply as a
geometric shape together with the manufacturing information needed to
create that shape. In a feature-based process planning system, the
feature library, which consists of pre-defined manufacturing features
and the manufacturing information to create their shapes, plays an
important role in the extraction of manufacturing features with their
proper manufacturing information. To manage the manufacturing
information flexibly, however, it is important to build a feature
library that can be easily modified. In this paper, the use of a
Semantic Wiki for the development of the feature library is proposed.
Abstract: MultiProtocol Label Switching (MPLS) is an
emerging technology that aims to address many of the existing issues
associated with packet forwarding in today's Internetworking
environment. It provides a method of forwarding packets at a high
rate of speed by combining the speed and performance of Layer 2
with the scalability and IP intelligence of Layer 3. In a traditional IP
(Internet Protocol) routing network, a router analyzes the destination
IP address contained in the packet header. The router independently
determines the next hop for the packet using the destination IP
address and the interior gateway protocol. This process is repeated at
each hop to deliver the packet to its final destination. In contrast, in
the MPLS forwarding paradigm routers on the edge of the network
(label edge routers) attach labels to packets based on the Forwarding
Equivalence Class (FEC). Packets are then forwarded through the
MPLS domain, based on their associated FECs, by having routers in
the core of the network, called label switch routers, swap the labels.
The act of simply swapping the label instead of referencing
the IP header of the packet in the routing table at each hop provides
a more efficient manner of forwarding packets, which in turn allows
the opportunity for traffic to be forwarded at tremendous speeds and
to have granular control over the path taken by a packet. This paper
deals with the process of MPLS forwarding mechanism,
implementation of MPLS datapath , and test results showing the
performance comparison of MPLS and IP routing. The discussion
will focus primarily on MPLS IP packet networks, by far the
most common application of MPLS today.
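The edge-labeling and label-swapping behavior described above can be sketched with toy forwarding tables; the topology, prefixes, and label values below are purely illustrative, not from the paper:

```python
# Toy MPLS forwarding sketch: the ingress label edge router (LER) maps a
# packet's destination prefix to a FEC and pushes an initial label; each
# core label switch router (LSR) then swaps the label using only its own
# label table, never re-examining the IP header.

# Ingress LER: FEC (destination prefix) -> initial label
fec_to_label = {"10.1.0.0/16": 100, "10.2.0.0/16": 200}

# Per-LSR label tables: incoming label -> (outgoing label, next hop)
lsr_tables = [
    {100: (17, "LSR2"), 200: (18, "LSR2")},    # LSR1
    {17: (25, "egress"), 18: (26, "egress")},  # LSR2
]

def forward(dst_prefix):
    label = fec_to_label[dst_prefix]   # label push at the network edge
    path = ["LER-ingress"]
    for table in lsr_tables:
        label, hop = table[label]      # O(1) label swap at each core hop
        path.append(hop)
    return path

print(forward("10.1.0.0/16"))  # ['LER-ingress', 'LSR2', 'egress']
```

The dictionary lookup per hop stands in for the exact-match label table that replaces the longest-prefix-match IP routing lookup at every core router.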
Abstract: In this paper a modified version, NXM, of the traditional 5X5 Playfair cipher is introduced, which enables the user to encrypt a message in any natural language by taking an appropriate matrix size for that language. A 5X5 matrix can store only the 26 characters of the English alphabet and cannot store the characters of any language having more than 26 characters. To overcome this limitation, an NXM matrix is introduced. In this paper a special case, the Urdu language, is discussed, where # is used for completing an odd pair and * is used for repeated letters.
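A minimal sketch of the generalized Playfair idea, assuming the standard digraph rules (same row shifts right, same column shifts down, otherwise swap rectangle corners) and the abstract's '#'/'*' conventions; the 4x7 English grid below is only an illustration of choosing N and M to fit an alphabet:

```python
# Generalized N x M Playfair sketch. '*' separates repeated letters in a
# pair and '#' pads an odd-length message, as in the abstract. Any
# language fits by choosing a grid large enough for its character set.

def build_grid(alphabet, cols):
    rows = [alphabet[i:i + cols] for i in range(0, len(alphabet), cols)]
    pos = {ch: (r, c) for r, row in enumerate(rows) for c, ch in enumerate(row)}
    return rows, pos

def make_pairs(msg):
    out, i = [], 0
    while i < len(msg):
        a = msg[i]
        b = msg[i + 1] if i + 1 < len(msg) else "#"  # '#' completes odd pair
        if a == b:
            out.append((a, "*"))                     # '*' splits a repeat
            i += 1
        else:
            out.append((a, b))
            i += 2
    return out

def encrypt(msg, alphabet, cols):
    rows, pos = build_grid(alphabet, cols)
    n, m = len(rows), cols
    cipher = []
    for a, b in make_pairs(msg):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:                                 # same row: shift right
            cipher += [rows[ra][(ca + 1) % m], rows[rb][(cb + 1) % m]]
        elif ca == cb:                               # same column: shift down
            cipher += [rows[(ra + 1) % n][ca], rows[(rb + 1) % n][cb]]
        else:                                        # rectangle: swap columns
            cipher += [rows[ra][cb], rows[rb][ca]]
    return "".join(cipher)

# 28-character alphabet (26 letters plus the two markers) laid out 4 x 7
alphabet = "abcdefghijklmnopqrstuvwxyz*#"
print(encrypt("hello", alphabet, 7))
```

For a script such as Urdu, only `alphabet` and `cols` change; the digraph rules are untouched.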
Abstract: This paper introduces a novel approach to estimate the
clique potentials of Gibbs Markov random field (GMRF) models
using the Support Vector Machines (SVM) algorithm and the Mean
Field (MF) theory. The proposed approach is based on modeling the
potential function associated with each clique shape of the GMRF
model as a Gaussian-shaped kernel. In turn, the energy function of
the GMRF will be in the form of a weighted sum of Gaussian
kernels. This formulation of the GMRF model motivates the use of
SVM learning, combined with Mean Field theory, to estimate the
energy function. The approach has been tested on
synthetic texture images and is shown to provide satisfactory results
in retrieving the synthesizing parameters.
Abstract: Image coding based on clustering provides immediate
access to targeted features of interest in a high quality decoded
image. This approach is useful for intelligent devices, as well as for
multimedia content-based description standards. Image clustering
cannot be precise at some positions, especially at pixels carrying
edge information, which produces ambiguity among the clusters.
Even with a good enhancement operator based on PDE, the quality of
the decoded image will highly depend on the clustering process. In
this paper, we introduce an ambiguity cluster in image coding to
represent pixels with vagueness properties. The presence of such a
cluster allows preserving some details inherent to edges as well as
uncertain pixels. It will also be very useful during the decoding phase
in which an anisotropic diffusion operator, such as Perona-Malik,
enhances the quality of the restored image. This work also offers a
comparative study to demonstrate the effectiveness of a fuzzy
clustering technique in detecting the ambiguity cluster without losing
much of the essential image information. Several experiments have been
carried out to demonstrate the usefulness of ambiguity concept in
image compression. The coding results and the performance of the
proposed algorithms are discussed in terms of the peak signal-to-noise
ratio and the quantity of ambiguous pixels.
Abstract: This paper presents the results of enhancing images from a left and right stereo pair in order to increase the resolution of a 3D representation of a scene generated from that same pair. A new neural network structure known as a Self Delaying Dynamic Network (SDN) has been used to perform the enhancement. The advantage of SDNs over existing techniques such as bicubic interpolation is their ability to cope with motion and noise effects. SDNs are used to generate two high resolution images, one based on frames taken from the left view of the subject, and one based on the frames from the right. This new high resolution stereo pair is then processed by a disparity map generator. The disparity map generated is compared to two other disparity maps generated from the same scene: the first generated from an original high resolution stereo pair, and the second generated using a stereo pair enhanced with bicubic interpolation. The maps generated using the SDN-enhanced pairs match the target maps more closely. The addition of extra noise to the input images is less problematic for the SDN system, which is still able to outperform bicubic interpolation.
Abstract: We propose an improved version of elastic graph matching (EGM) as a face detector, called multi-scale EGM (MS-EGM). In this improvement, a Gabor wavelet-based pyramid reduces the computational complexity of the feature representation often used in conventional EGM, while preserving a critical amount of information about an image. The MS-EGM gives higher detection performance than the Viola-Jones object detection algorithm with its AdaBoost cascade of Haar-like features. We also show that the detection speed of the MS-EGM is comparable to the Viola-Jones method. We find further benefits of the MS-EGM in its topological feature representation of a face.
Abstract: In process control applications, more than 90% of the
controllers are of PID type. This paper proposes a robust PI
controller with a fractional-order integrator. The PI parameters were
obtained using classical Ziegler-Nichols rules, enhanced with an
error filter cascaded to the fractional-order PI. The
controller was applied to a steam temperature process
described by a FOPDT transfer function. The process can be classified
as a lag-dominant process with a very small relative dead-time. The
proposed control scheme was compared with other PI controllers
tuned using the Ziegler-Nichols and AMIGO rules. Another PI controller
with a fractional-order integrator, known as F-MIGO, was also
considered. All the controllers were subjected to set-point change and
load disturbance tests. Performance was measured using the Integral
of Squared Error (ISE) and the Integral of Control Signal (ICO). The
proposed controller produced the best performance in all the tests, with
the least ISE index.
Abstract: In this paper, a pipelined version of genetic algorithm,
called PLGA, and a corresponding hardware platform are described.
The basic operations of conventional GA (CGA) are made pipelined
using an appropriate selection scheme. The selection operator, used
here, is stochastic in nature and is called SA-selection. This helps
maintain the basic generational nature of the proposed pipelined
GA (PLGA). A number of benchmark problems are used to compare
the performance of conventional roulette-wheel selection with that of
SA-selection. These include unimodal and multimodal functions with
dimensionality varying from very small to very large. The SA-selection
scheme gives performance comparable to the classical roulette-wheel
selection scheme on all instances when solution quality and rate of
convergence are considered.
The speedups obtained by PLGA for different benchmarks
are found to be significant. It is shown that a complete hardware
pipeline can be developed using the proposed scheme, if parallel
evaluation of the fitness expression is possible. In this connection
a low-cost but very fast hardware evaluation unit is described.
Results of simulation experiments show that in a pipelined hardware
environment, PLGA will be much faster than CGA. PLGA is also
found to outperform parallel GA (PGA) in terms of efficiency.
Abstract: The Chinese Postman Problem (CPP) is one of the
classical problems in graph theory and is applicable in a wide range
of fields. With the rapid development of hybrid systems and model
based testing, Chinese Postman Problem with Time Dependent Travel
Times (CPPTDT) becomes more realistic than the classical problems.
In previous work, we proposed the first integer programming
formulation for the CPPTDT, namely the circuit formulation,
based on which some polyhedral results were investigated and a cutting
plane algorithm was designed. However, it has a main drawback:
the circuit formulation is only available for solving the special
instances in which all circuits pass through the origin. Therefore, this
paper proposes a new integer programming formulation for solving
all general instances of the CPPTDT. Moreover, the size of the circuit
formulation is too large, and it is reduced dramatically here. This
makes it possible to design more efficient algorithms for solving the
CPPTDT in future research.
Abstract: Since Computed Tomography (CT) normally requires
hundreds of projections to reconstruct an image, patients are exposed
to more X-ray energy, which may cause side effects such as cancer.
Even when the variability of the particles in the object is very low,
CT requires many projections for a good-quality
reconstruction. In this paper, low variability of the particles in an
object is exploited to obtain a good-quality reconstruction.
Though the reconstructed image and the original image have the same
projections, in general they need not be identical. If, in addition
to the projections, a priori information about the image is known,
it is possible to obtain a good-quality reconstructed image. In this
paper, it is shown by experimental results why conventional
algorithms fail to reconstruct from a few projections, and an efficient
polynomial-time algorithm is given to reconstruct a bi-level
image from its projections along rows and columns, a known sub-image
of the unknown image, and smoothness constraints, by reducing the
reconstruction problem to an integral max-flow problem. This paper also
discusses the necessary and sufficient conditions for uniqueness and
the extension of 2D bi-level image reconstruction to 3D bi-level image
reconstruction.
Abstract: Discrete particle swarm optimization (DPSO) is a
powerful stochastic evolutionary algorithm used to solve
large-scale, discrete and nonlinear optimization problems. However,
it has been observed that the standard DPSO algorithm suffers premature
convergence when solving a complex optimization problem like
transmission expansion planning (TEP). To resolve this problem, an
advanced discrete particle swarm optimization (ADPSO) is proposed
in this paper. The simulation results show that ADPSO optimizes
line loading in transmission expansion planning better than DPSO
in terms of precision.
Abstract: In this paper we present a way of controlling the
concurrent access to data in a distributed application using the
Pessimistic Offline Lock design pattern. In our case, the application
processes a complex entity, which contains various other entities
(objects) in a hierarchical structure. It will be shown how the complex
entity and the contained entities must be locked in order to control
the concurrent access to data.
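A minimal sketch of hierarchical pessimistic offline locking, under the assumption (names and structure below are illustrative, not from the paper) that locks are exclusive, session-owned, and must be acquired for the complex entity and all contained entities before processing:

```python
import threading

# Pessimistic Offline Lock sketch: a LockManager hands out exclusive
# locks keyed by entity id; locking a complex entity also locks every
# contained entity, so no other session can modify any part of the
# hierarchy while it is being processed.

class LockManager:
    def __init__(self):
        self._owners = {}              # entity id -> owning session id
        self._mutex = threading.Lock()

    def acquire(self, entity_id, session):
        with self._mutex:
            owner = self._owners.get(entity_id)
            if owner is not None and owner != session:
                return False           # another session holds the lock
            self._owners[entity_id] = session
            return True

    def release_all(self, session):
        with self._mutex:
            self._owners = {k: v for k, v in self._owners.items()
                            if v != session}

def lock_hierarchy(manager, entity, session):
    """Lock the complex entity and, recursively, all contained entities."""
    if not manager.acquire(entity["id"], session):
        manager.release_all(session)   # roll back any partial locks
        return False
    return all(lock_hierarchy(manager, child, session)
               for child in entity.get("children", []))

order = {"id": "order-1", "children": [{"id": "line-1"}, {"id": "line-2"}]}
mgr = LockManager()
print(lock_hierarchy(mgr, order, "session-A"))  # True: hierarchy locked
print(mgr.acquire("line-1", "session-B"))       # False: held by session-A
```

Because the lock is "offline" it outlives a single system transaction; the all-or-nothing acquisition over the hierarchy is what prevents two sessions from each holding part of the same complex entity.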
Abstract: This paper describes the design and results of FROID,
an outbound intrusion detection system built with agent technology
and supported by an attacker-centric ontology. The prototype
features a misuse-based detection mechanism that identifies remote
attack tools in execution. Misuse signatures composed of attributes
selected through entropy analysis of outgoing traffic streams and
process runtime data are derived from execution variants of attack
programs. The core of the architecture is a mesh of self-contained
detection cells, organized non-hierarchically, that group agents in a
functional fashion. The experiments show performance gains when
the ontology is enabled as well as an increase in accuracy achieved
when correlation cells combine detection evidence received from
independent detection cells.
Abstract: Digital watermarking is the process of embedding
information into a digital signal, which can be used in a DRM (digital
rights management) system. A visible watermark (often called a logo)
can indicate the owner of the copyright, is often seen in
TV programs, and protects the copyright in an active way. However,
most schemes do not consider the visible watermark removal
process. To solve this problem, a visible watermarking scheme with
embedding and removing process is proposed under the control of a
secure template. The template generates different versions of the
watermark that appear visually identical to different users.
Users with the right key can completely remove the watermark and
recover the original image, while unauthorized users are prevented
from removing the watermark. Experimental results show that our
watermarking algorithm obtains good visual quality and is hard for
unauthorized users to remove. Additionally, the authorized users can
completely remove the visible watermark and recover the original
image with a good quality.