Abstract: Robust stability and performance are the two most
basic features of feedback control systems. The harmonic balance
analysis technique enables the analysis of the stability of limit cycles
arising in a neural-network-based control system operating over
nonlinear plants. In this work, a robust stability analysis based on
harmonic balance is presented and applied to the neural control
of a nonlinear binary distillation column with unstructured
uncertainty. We develop ways to describe uncertainty in the form of
neglected nonlinear dynamics and higher harmonics for the plant and
controller, respectively. Finally, conclusions about the performance of
the neural control system are drawn using the Nyquist stability
margin together with the structured singular values of the uncertainty
as a robustness measure.
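(For illustration, a minimal harmonic balance sketch: a limit cycle is predicted where the Nyquist locus of the linear part G(jω) intersects the critical locus −1/N(A) of the nonlinearity's describing function. The plant G(s) and the saturation nonlinearity below are illustrative stand-ins, not the paper's distillation column or neural controller.)

```python
import numpy as np

def describing_function_saturation(A, delta=1.0):
    """Describing function N(A) of a unit-slope saturation with limit delta."""
    if A <= delta:
        return 1.0
    r = delta / A
    return (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r * r))

def G(s):
    """Illustrative linear plant G(s) = 8 / (s (s+1) (s+2))."""
    return 8.0 / (s * (s + 1.0) * (s + 2.0))

# Harmonic balance: a limit cycle of amplitude A and frequency w is predicted
# where G(jw) = -1/N(A).
ws = np.linspace(0.1, 10.0, 600)
As = np.linspace(1.01, 20.0, 600)
nyq = np.array([G(1j * w) for w in ws])
crit = np.array([-1.0 / describing_function_saturation(A) for A in As])

# Find the (w, A) pair that brings the two loci closest together.
d = np.abs(nyq[:, None] - crit[None, :])
i, k = np.unravel_index(np.argmin(d), d.shape)
print(f"predicted limit cycle: A ~ {As[k]:.2f}, w ~ {ws[i]:.2f} rad/s")
```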
Abstract: This paper discusses a new heavy-tailed-distribution-based scheme for hiding data in the discrete cosine transform (DCT) coefficients of an image, which provides statistical security as well as robustness against steganalysis attacks. Unlike other data hiding algorithms, the proposed technique introduces little change in the stego-image's DCT coefficient probability plots, thus making the presence of hidden data statistically undetectable. In addition, the proposed method does not compromise on hiding capacity. Compared to the generic block-DCT-based data hiding scheme, our method is found to be more robust against a variety of image manipulation attacks such as filtering, blurring, and JPEG compression.
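(For reference, a minimal sketch of the generic block-DCT baseline mentioned above: one bit per 8×8 block, embedded by forcing the parity of a quantized mid-frequency coefficient. The coefficient position and quantization step are illustrative assumptions, not the proposed heavy-tailed scheme.)

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(b):
    return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_bits(image, bits, step=8.0, pos=(3, 2)):
    """Hide one bit per 8x8 block of a grayscale image by quantization-index
    modulation of the DCT coefficient at `pos`."""
    out = image.astype(float).copy()
    it = iter(bits)
    for r in range(0, image.shape[0] - 7, 8):
        for c in range(0, image.shape[1] - 7, 8):
            b = next(it, None)
            if b is None:
                return out
            B = dct2(out[r:r + 8, c:c + 8])
            q = int(np.round(B[pos] / step))
            if q % 2 != b:            # force coefficient parity to encode the bit
                q += 1
            B[pos] = q * step
            out[r:r + 8, c:c + 8] = idct2(B)
    return out
```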
Abstract: In this work, a special case of the image super-resolution
problem, in which the only type of motion is global translational
motion and the blurs are shift-invariant, is investigated.
The necessary conditions for exact reconstruction of the original
image using finite impulse response (FIR) reconstruction filters are
developed. Given that these conditions are satisfied, a method for exact
super-resolution is presented and some simulation results are shown.
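(A minimal sketch of the simplest instance of this setting: when the low-resolution frames sample complementary integer sub-grids of the high-resolution lattice and there is no blur, exact reconstruction reduces to shift-and-add interleaving. This illustrates the special case only, not the paper's FIR-filter conditions.)

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=2):
    """Interleave globally translated low-resolution frames onto the
    high-resolution grid (blur-free, integer sub-pixel shifts)."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    for f, (dy, dx) in zip(frames, shifts):
        hi[dy::factor, dx::factor] = f    # each frame fills its own sub-grid
    return hi

# Four frames sampled on complementary 2x2 sub-grids reconstruct exactly.
truth = np.random.rand(8, 8)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::2, dx::2] for dy, dx in shifts]
assert np.allclose(shift_and_add_sr(frames, shifts), truth)
```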
Abstract: To improve the efficiency of human 3D tracking, we
present an algorithm to track 3D arm motion. First, the Hierarchy
Limb Model (HLM) is proposed based on the human 3D skeleton
model. Second, via graph decomposition, the arm motion state space,
modeled by the HLM, can be decomposed into two low-dimensional
subspaces: root nodes and leaf nodes. Finally, a Rao-Blackwellised
particle filter is used to estimate the 3D arm motion. Experimental
results show that our algorithm improves computational efficiency.
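(A minimal sketch of the Rao-Blackwellised idea on a toy model: the root variable is propagated with particles, while the conditionally linear-Gaussian leaf state is marginalized with one Kalman filter per particle. The model, dimensions, and noise levels are illustrative, not the HLM arm model.)

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 50                                # particles, time steps
q_root, q_leaf, r_obs = 0.05, 0.02, 0.1       # process/observation noise

theta = rng.normal(0.0, 0.5, N)               # "root" particles
mu, P = np.zeros(N), np.ones(N)               # per-particle leaf mean/variance
w = np.full(N, 1.0 / N)

for t in range(T):
    y = np.sin(0.1 * t) + rng.normal(0, r_obs)       # synthetic observation
    theta += rng.normal(0, np.sqrt(q_root), N)       # sample the root
    mu, P = mu + np.sin(theta), P + q_leaf           # leaf prediction (linear in x)
    S = P + r_obs ** 2                               # innovation variance
    w *= np.exp(-0.5 * (y - mu) ** 2 / S) / np.sqrt(S)
    w /= w.sum()
    K = P / S                                        # Kalman update per particle
    mu, P = mu + K * (y - mu), (1 - K) * P
    if 1.0 / (w ** 2).sum() < N / 2:                 # resample when ESS is low
        idx = rng.choice(N, N, p=w)
        theta, mu, P, w = theta[idx], mu[idx], P[idx], np.full(N, 1.0 / N)

print("posterior leaf estimate:", float(np.sum(w * mu)))
```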
Abstract: In this paper, a new method is suggested for risk
management using numerical patterns in data mining. These patterns
are designed using probability rules in decision trees and are
intended to be valid, novel, useful, and understandable. Considering a
set of functions, the system arrives at a good pattern or at better
objectives. The patterns are analyzed through the produced matrices
and several results are pointed out. By using the suggested method,
the direction of the functionality route in the systems can be
controlled and the best planning for specific objectives can be carried out.
Abstract: An application framework provides a reusable design
and implementation for a family of software systems. Application
developers extend the framework to build their particular
applications using hooks. Hooks are identified places that show
how to use and customize the framework. Hooks define Framework
Interface Classes (FICs) and their possible specifications, which
helps in building reusable test cases for the implementations of these
classes. In applications developed using gray-box frameworks, FICs
inherit framework classes or use them without inheritance. In this
paper, a test-case generation technique is extended to build test cases
for FICs built for gray-box frameworks. A tool is developed to
automate the introduced technique.
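(A minimal sketch of the idea, with hypothetical names throughout: a framework class exposes a hook, an application FIC fills it in by inheritance, and a reusable test case checks any FIC against the hook's specification.)

```python
import unittest

class FrameworkWidget:
    """Illustrative gray-box framework class."""
    def render(self) -> str:
        return "<widget>" + self.content() + "</widget>"
    def content(self) -> str:          # hook: applications must override this
        raise NotImplementedError

class GreetingWidget(FrameworkWidget):
    """A Framework Interface Class (FIC): extends the framework via the hook."""
    def content(self) -> str:
        return "hello"

class FICContractTest(unittest.TestCase):
    """Reusable test case derived from the hook specification; point
    `fic_class` at any FIC implementation to test it."""
    fic_class = GreetingWidget
    def test_content_is_nonempty_string(self):
        self.assertIsInstance(self.fic_class().content(), str)
        self.assertTrue(self.fic_class().content())
    def test_render_wraps_content(self):
        w = self.fic_class()
        self.assertIn(w.content(), w.render())

if __name__ == "__main__":
    unittest.main()
```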
Abstract: AAM (active appearance model) has been successfully
applied to face and facial feature localization. However, its performance is sensitive to initial parameter values. In this paper, we propose a two-stage AAM for robust face alignment, which first fits an
inner face-AAM model to the inner facial feature points of the face and then localizes the whole face and facial features by optimizing the
whole face-AAM model parameters. Experiments show that the proposed face alignment method using the two-stage AAM is more robust against background and head-pose variations than the standard
AAM-based face alignment method.
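(A minimal sketch of the two-stage logic only: stage one aligns the model to the inner facial points with a least-squares similarity transform, and stage two uses that alignment to initialize the whole-face fit. A real AAM would also optimize shape and appearance parameters; this is an illustration, not the paper's fitting procedure.)

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping 2-D points `src` onto `dst`."""
    sc, dc = src.mean(0), dst.mean(0)
    s0, d0 = src - sc, dst - dc
    a = (s0 * d0).sum() / (s0 ** 2).sum()
    b = (s0[:, 0] * d0[:, 1] - s0[:, 1] * d0[:, 0]).sum() / (s0 ** 2).sum()
    M = np.array([[a, -b], [b, a]])
    return M, dc - sc @ M.T

def two_stage_align(model_inner, model_full, observed_inner):
    # Stage 1: fit on the inner feature points only (less sensitive to
    # background and head pose than fitting the whole face directly).
    M, t = fit_similarity(model_inner, observed_inner)
    # Stage 2: project the full model through the stage-1 transform as the
    # starting point for whole-face refinement.
    return model_full @ M.T + t
```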
Abstract: In this paper, a new face recognition method based on
PCA (Principal Component Analysis), LDA (Linear Discriminant
Analysis), and neural networks is proposed. This method consists of
four steps: i) preprocessing, ii) dimension reduction using PCA, iii)
feature extraction using LDA, and iv) classification using a neural
network. The combination of PCA and LDA is used to improve the
capability of LDA when only a few image samples are available, and
a neural classifier is used to reduce the number of misclassifications
caused by classes that are not linearly separable. The proposed method
was tested on the Yale face database. Experimental results on this
database demonstrated the effectiveness of the proposed method for
face recognition, with fewer misclassifications than previous methods.
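(A minimal sketch of the four-step pipeline using scikit-learn; the component sizes and MLP architecture are illustrative choices, not the paper's exact configuration.)

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

def build_face_recognizer(n_pca=50):
    """PCA first keeps LDA well-conditioned when few samples per class are
    available; the MLP then handles classes that are not linearly separable."""
    return make_pipeline(
        StandardScaler(),                       # i) preprocessing
        PCA(n_components=n_pca),                # ii) dimension reduction
        LinearDiscriminantAnalysis(),           # iii) feature extraction
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000),  # iv) classification
    )

# Usage, with X as flattened face images and y as subject labels:
# clf = build_face_recognizer().fit(X_train, y_train)
# print("accuracy:", clf.score(X_test, y_test))
```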
Abstract: DNA microarray technology is widely used by
geneticists to diagnose or treat diseases through gene expression.
This technology is based on the hybridization of a tissue's DNA
sequence onto a substrate and the further analysis of the image
formed by the thousands of genes in the DNA as green, red, or yellow
spots. DNA microarray image analysis involves finding the locations
of the spots and quantifying their expression levels. In this paper,
a tool to perform DNA microarray image analysis is presented,
including a spot addressing method based on the image projections,
spot segmentation through contour-based segmentation, and the
extraction of information relevant to gene expression.
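(A minimal sketch of projection-based spot addressing: spot rows and columns show up as peaks in the vertical and horizontal intensity projections of the image. Grid dimensions are assumed known; segmentation and quantification would follow per spot.)

```python
import numpy as np

def spot_grid_from_projections(img, n_rows, n_cols):
    """Return approximate (row, col) spot centers of an n_rows x n_cols
    microarray grid from the image's intensity projections."""
    def peak_positions(profile, n):
        # Split the profile into n equal bands; take the strongest position
        # in each band as that row's/column's center.
        bands = np.array_split(np.arange(profile.size), n)
        return [int(b[np.argmax(profile[b])]) for b in bands]
    rows = peak_positions(img.sum(axis=1), n_rows)   # vertical projection
    cols = peak_positions(img.sum(axis=0), n_cols)   # horizontal projection
    return [(r, c) for r in rows for c in cols]
```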
Abstract: The purposes of this paper are to (1) promote
excellence in computer science by suggesting a cohesive innovative
approach to fill well-documented deficiencies in current computer
science education, (2) justify (using the authors' and others' anecdotal
evidence from both the classroom and the real world) why this
approach holds great potential to successfully eliminate the
deficiencies, (3) invite other professionals to join the authors in proof
of concept research. The authors' experiences, though anecdotal,
strongly suggest that a new approach involving visual modeling
technologies should allow computer science programs to retain a
greater percentage of prospective and declared majors as students
become more engaged learners, more successful problem-solvers,
and better prepared as programmers. In addition, the graduates of
such computer science programs will make greater contributions to
the profession as skilled problem-solvers. Instead of wearily
re-memorizing code as they move to the next course, students will
have the problem-solving skills to think and work in more
sophisticated and creative ways.
Abstract: In many applications there is a broad variety of
information relevant to a focal “object” of interest, and the fusion of such heterogeneous data types is desirable for classification and
categorization. While these various data types can sometimes be treated as orthogonal (such as the hull number, superstructure color,
and speed of an oil tanker), there are instances where the inference and the correlation between quantities can provide improved fusion
capabilities (such as the height, weight, and gender of a person). A
service-oriented architecture has been designed and prototyped to
support the fusion of information for such “object-centric” situations.
It is modular, scalable, and flexible, and designed to support new data sources, fusion algorithms, and computational resources without affecting existing services. The architecture is designed to simplify
the incorporation of legacy systems, support exact and probabilistic entity disambiguation, recognize and utilize multiple types of
uncertainties, and minimize network bandwidth requirements.
Abstract: Phase-locked loops (PLLs) in 10 Gb/s and faster data links are
low phase noise devices. Characterization of their phase jitter
transfer functions is difficult because the intrinsic noise of the PLLs
is comparable to the phase noise of the reference clock signal. The
problem is solved by using a linear model to account for the intrinsic
noise. This study also introduces a novel technique for measuring the
transfer function. It involves the use of the reference clock as a
source of wideband excitation, in contrast to the commonly used
sinusoidal excitations at discrete frequencies. The data reported here
include the intrinsic noise of a PLL for 10 Gb/s links and the jitter
transfer function of a PLL for 12.8 Gb/s links. The measured transfer
function suggests that the PLL responded like a second order linear
system to a low noise reference clock.
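(For illustration, the classical second-order jitter transfer function such a measurement is compared against, H(s) = (2ζω_n s + ω_n²)/(s² + 2ζω_n s + ω_n²); the damping ratio and natural frequency below are illustrative values, not the measured ones.)

```python
import numpy as np
from scipy import signal

zeta, fn = 0.7, 4e6                     # damping ratio, natural frequency [Hz]
wn = 2 * np.pi * fn
H = signal.TransferFunction([2 * zeta * wn, wn ** 2],
                            [1, 2 * zeta * wn, wn ** 2])

f = np.logspace(4, 8, 400)              # 10 kHz .. 100 MHz
w, mag, phase = signal.bode(H, w=2 * np.pi * f)
print(f"jitter peaking: {mag.max():.2f} dB")   # peaking near the loop bandwidth
```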
Abstract: Fault tree analysis is a well-known method for
reliability and safety assessment of engineering systems. In the last
three decades, a number of methods have been introduced in the
literature for the automatic construction of fault trees. The main
difference between these methods is the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system's Simulink
model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also discussed.
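(For illustration, a minimal static fault tree evaluator with independent basic events: an AND gate multiplies probabilities, an OR gate multiplies complements. The tree below is illustrative; the paper constructs such trees automatically from Simulink models.)

```python
def prob(node):
    """Top-event probability of a static fault tree with independent events.
    Nodes are ("basic", p) or ("AND"/"OR", [children])."""
    kind = node[0]
    if kind == "basic":
        return node[1]
    probs = [prob(child) for child in node[1]]
    if kind == "AND":
        p = 1.0
        for q in probs:
            p *= q
        return p
    if kind == "OR":
        p = 1.0
        for q in probs:
            p *= 1.0 - q
        return 1.0 - p
    raise ValueError(kind)

tree = ("OR", [("AND", [("basic", 0.01), ("basic", 0.02)]), ("basic", 0.005)])
print(f"top event probability: {prob(tree):.6f}")    # ~0.005199
```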
Abstract: This paper presents a novel method that allows an
agent host to delegate its signing power to an anonymous mobile
agent in such a way that the mobile agent does not reveal any information about its host's identity and, at the same time, can be authenticated by the service host, hence ensuring fairness of service
provision. The solution introduces a verification server to verify the
signature generated by the mobile agent in such a way that even if the verification server colludes with the service host, the two parties gain no more information than they already have. The solution incorporates
three methods: an Agent Signature Key Generation method, an Agent
Signature Generation method, and an Agent Signature Verification method.
The most notable feature of the solution is that, in addition to allowing secure and anonymous signature delegation, it enables
the tracking of malicious mobile agents when a service host is attacked. The security properties of the proposed solution are analyzed, and the solution is compared with the most closely related work.
Abstract: Animated graphs make a good impression when
presenting information. However, not many people are able to produce them, because generating an animated graph requires some technical skill. This work presents a Content
Management System with Animated Graph (CMS-AG), a web-based system enabling users to produce effective and interactive
graphical reports in a short time. It allows for three levels of user authentication and provides profile updating, account management, template management, graph management, and change tracking. The system development applies an incremental development approach, object-oriented concepts, and Web programming technologies. The design architecture promotes a new reporting technology. It also helps users cut unnecessary expenses, save time, and learn new things at the different user levels. In this paper, the developed system is described.
Abstract: The convergence of heterogeneous wireless access technologies characterizes 4G wireless networks. In such converged systems, seamless and efficient handoff between
different access technologies (vertical handoff) is essential and remains a challenging problem. The heterogeneous co-existence of access technologies with largely different characteristics creates a decision problem: determining the “best” available network at the
“best” time, so as to reduce unnecessary handoffs. This paper proposes a dynamic decision model to decide the “best” network at the “best”
time to hand off. The proposed dynamic decision model makes the right vertical handoff decisions by determining the “best”
network at the “best” time among the available networks, based on dynamic
factors such as the received signal strength (RSS) of the network and
the velocity of the mobile station, together with static factors such as usage expense, link capacity (offered bandwidth), and power
consumption. This model not only meets individual user needs but also improves overall system performance by reducing unnecessary handoffs.
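(A minimal sketch of such a multi-attribute decision: each candidate network is scored from normalized dynamic factors (RSS, station velocity) and static factors (expense, bandwidth, power), and the highest-scoring network wins. The weights and scoring form are illustrative assumptions, not the paper's exact model.)

```python
WEIGHTS = {"rss": 0.35, "bandwidth": 0.25, "expense": 0.2, "power": 0.2}

def score(net, velocity_penalty):
    s = (WEIGHTS["rss"] * net["rss"] + WEIGHTS["bandwidth"] * net["bw"]
         - WEIGHTS["expense"] * net["cost"] - WEIGHTS["power"] * net["power"])
    # Penalize small-coverage networks for fast-moving stations, to avoid
    # handoffs that would immediately be undone.
    return s - velocity_penalty * net["small_cell"]

def best_network(candidates, velocity_mps):
    penalty = min(velocity_mps / 30.0, 1.0) * 0.3    # saturating velocity factor
    return max(candidates, key=lambda n: score(n, penalty))

nets = [  # all attributes normalized to [0, 1]
    {"name": "WLAN", "rss": 0.9, "bw": 0.9, "cost": 0.1, "power": 0.4, "small_cell": 1.0},
    {"name": "UMTS", "rss": 0.7, "bw": 0.4, "cost": 0.6, "power": 0.5, "small_cell": 0.0},
]
print(best_network(nets, velocity_mps=25)["name"])
```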
Abstract: A new method, based on NormalShrink and a modified
version of the Katsaggelos and Lay algorithm, is proposed for multiscale
blind image restoration. The method deals with both noise and blur in
the images. It is shown that NormalShrink gives the highest
signal-to-noise ratio (S/N) in the image denoising step. The multiscale
blind image restoration is divided into two parts. The first part of
this paper proposes NormalShrink for image denoising, and the
second part proposes a modified version of the Katsaggelos and
Lay algorithm for blur estimation, combining both methods to achieve
multiscale blind image restoration.
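(A minimal sketch of the denoising half, assuming the published NormalShrink threshold T = βσ_n²/σ_y with β = √(ln(L_k/J)), applied as soft-thresholding per detail subband; the wavelet and level count are illustrative.)

```python
import numpy as np
import pywt

def normalshrink_denoise(img, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal subband (MAD estimator).
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for details in coeffs[1:]:
        Lk = details[0].size                     # subband size at this scale
        beta = np.sqrt(max(np.log(Lk / levels), 0.0))
        shrunk = []
        for d in details:
            sigma_y = max(d.std(), 1e-12)
            T = beta * sigma_n ** 2 / sigma_y    # NormalShrink threshold
            shrunk.append(pywt.threshold(d, T, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```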
Abstract: The overall objective of this paper is to retrieve soil
surface parameters, namely roughness and the soil moisture related to
the dielectric constant, by inverting the radar signal backscattered
from natural soil surfaces.
Because the classical description of roughness using statistical
parameters like the correlation length does not lead to satisfactory
predictions of radar backscattering, we used a multi-scale
roughness description based on the wavelet transform and the Mallat
algorithm. In this description, the surface is considered a
superposition of a finite number of one-dimensional Gaussian
processes, each having a spatial scale. A second step in this study
consisted in adapting a forward model simulating radar backscattering,
namely the small perturbation model, to this multi-scale surface
description. We investigated the impact of this description on radar
backscattering through a sensitivity analysis of the backscattering
coefficient to the multi-scale roughness parameters.
To invert the small perturbation multi-scale
scattering model (MLS SPM), we used a multi-layer neural network
architecture trained by the backpropagation learning rule. The inversion
leads to satisfactory results, with a relative uncertainty of 8%.
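(A minimal sketch of the inversion step: simulate (surface parameters → backscatter) pairs with a forward model, then train a backpropagation MLP on the reversed pairs. The `forward_model` below is a smooth stand-in, not the MLS SPM, and the parameter ranges are illustrative.)

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def forward_model(params):
    """Stand-in forward model: backscatter as a smooth function of
    (roughness, soil moisture)."""
    rough, moisture = params.T
    return np.column_stack([np.log1p(rough) + 0.5 * moisture, rough * moisture])

rng = np.random.default_rng(0)
params = rng.uniform([0.1, 0.05], [3.0, 0.45], size=(5000, 2))
sigma0 = forward_model(params)

# Learn the inverse mapping backscatter -> parameters by backpropagation.
inv = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
inv.fit(sigma0, params)

test = rng.uniform([0.1, 0.05], [3.0, 0.45], size=(200, 2))
pred = inv.predict(forward_model(test))
print(f"mean relative error: {np.mean(np.abs(pred - test) / test):.1%}")
```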
Abstract: The classification of random and natural textures is still
one of the biggest challenges in the field of image processing and
pattern recognition. In this paper, texture feature extraction using
the Slant Hadamard Transform (SHT) was studied and compared to other
signal-processing-based texture classification schemes. A
parametric SHT was also introduced and employed for natural
texture feature extraction. We showed that a subtly modified
parametric SHT can outperform the ordinary Walsh-Hadamard
transform and the discrete cosine transform. Experiments were carried
out on a subset of the VisTex random natural texture images using a
kNN classifier.
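(A minimal sketch of transform-based texture classification, using the ordinary Walsh-Hadamard transform as a stand-in for the parametric SHT: coefficient magnitudes of each patch serve as the feature vector for a kNN classifier.)

```python
import numpy as np
from scipy.linalg import hadamard
from sklearn.neighbors import KNeighborsClassifier

def wht_features(patch):
    """2-D Walsh-Hadamard coefficient magnitudes of a 2^n x 2^n patch."""
    n = patch.shape[0]
    H = hadamard(n) / np.sqrt(n)
    return np.abs(H @ patch @ H.T).ravel()

def classify_textures(train_patches, train_labels, test_patches, k=3):
    X_tr = np.array([wht_features(p) for p in train_patches])
    X_te = np.array([wht_features(p) for p in test_patches])
    return KNeighborsClassifier(n_neighbors=k).fit(X_tr, train_labels).predict(X_te)
```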
Abstract: The number of documents being created increases at an
ever-faster pace, while most of them fall within already-known topics
and few of them introduce new concepts. This fact has started a
new era in the information retrieval discipline, one with its own
special requirements: digging into topics and concepts
and finding subtopics or relations between topics. Until now, IR
research has focused on retrieving documents about a general
topic or clustering documents under generic subjects. However, these
conventional approaches cannot go deep into the content of documents,
which makes it difficult for people to reach the right documents they
are searching for. We therefore need new ways of mining document sets,
where the critical point is knowing much about the contents of the
documents. As a solution, we propose to enhance LSI, one of
the proven IR techniques, by augmenting its vector space with n-gram
forms of words. The positive results we have obtained are shown in two
different application areas of the IR domain: querying a document
database and clustering the documents in the document database.