Abstract: This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based super-resolution and for removing speckle of unknown variance (in the wavelet domain) from Synthetic Aperture Radar (SAR) images. Although POSA is valuable as a super-resolution algorithm for image enhancement, image metrology, and biometric identification, here it is used as a despeckling tool; to our knowledge, this is the first time a super-resolution algorithm has been applied to despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands, POSA is applied to the high-frequency subbands, and a SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably with several other despeckling methods on test SAR images.
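Since POSA itself is specified elsewhere, the wavelet pipeline above (decompose, modify the detail subband, reconstruct) can be sketched with a generic stand-in: a minimal 1-D Haar version in which a simple soft threshold on the detail coefficients takes the place of POSA. Even-length signals are assumed.

```python
import math

def haar_1d(signal):
    """One level of the orthonormal 1-D Haar transform: approximation
    and detail subbands (assumes an even-length signal)."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def ihaar_1d(approx, detail):
    """Inverse of haar_1d: rebuild the signal from the two subbands."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out

def soft_threshold(coeffs, t):
    """Shrink detail coefficients toward zero (illustrative stand-in
    for the POSA modification of the high subbands)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def despeckle(signal, t=0.5):
    approx, detail = haar_1d(signal)
    return ihaar_1d(approx, soft_threshold(detail, t))
```

For images the same idea is applied separably along rows and columns, yielding the usual LL/LH/HL/HH subbands.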
Abstract: The wavelet transform provides several important
characteristics which can be used in texture analysis and
classification. In this work, an efficient texture classification
method, which combines concepts from wavelets and co-occurrence
matrices, is presented. A Euclidean distance classifier is used to
evaluate the various classification methods. A comparative study is
essential to determine the ideal method. On this basis, we developed
a novel feature set for texture classification and demonstrate its
effectiveness.
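The two ingredients named above, co-occurrence features and a Euclidean distance classifier, can be sketched as follows; the gray-level quantization and the feature choice (energy, contrast) are illustrative assumptions, not the paper's exact feature set.

```python
import math

def cooccurrence(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one displacement."""
    glcm = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                glcm[img[y][x]][img[y2][x2]] += 1
    total = sum(map(sum, glcm))
    return [[c / total for c in row] for row in glcm]

def energy(glcm):
    """Sum of squared matrix entries (uniformity of the texture)."""
    return sum(p * p for row in glcm for p in row)

def contrast(glcm):
    """Intensity-difference weighted sum (local variation)."""
    return sum(p * (i - j) ** 2
               for i, row in enumerate(glcm) for j, p in enumerate(row))

def classify(features, class_means):
    """Assign the class whose mean feature vector is nearest (Euclidean)."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(features, class_means[c])))
    return min(class_means, key=dist)
```

In the combined scheme, such features would be computed per wavelet subband and concatenated into the feature vector handed to the classifier.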
Abstract: In this research, we propose a weighted class-based
queuing (WCBQ) mechanism to provide class differentiation and to
reduce the load on the IMS (IP Multimedia Subsystem) presence
server (PS). The tasks of the admission controller for the PS are
demonstrated. Analysis and simulation models are developed to
quantify the performance of the WCBQ scheme. An optimized dropping
time frame has been developed, based on which some of the
pre-existing messages are dropped from the PS buffer. Cost functions
are developed, and a simulation comparison has been performed with
the FCFS (First Come First Served) scheme. The results show that the
PS benefits significantly from the proposed queuing and dropping
algorithm (WCBQ) during heavy traffic.
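The dropping idea can be sketched as a buffer whose messages expire after a class-dependent time frame, so lower-priority classes are shed first under load. The class names, weights, and frames below are illustrative assumptions, not the paper's optimized values.

```python
from collections import deque

class WCBQBuffer:
    """Sketch of class-weighted buffering with time-based dropping."""

    def __init__(self, capacity, drop_frame):
        self.capacity = capacity
        self.drop_frame = drop_frame   # class -> max queueing time
        self.queue = deque()           # entries: (arrival, cls, msg)

    def enqueue(self, now, cls, msg):
        """Admit a message, first evicting any expired ones."""
        self._drop_expired(now)
        if len(self.queue) >= self.capacity:
            return False               # admission controller rejects
        self.queue.append((now, cls, msg))
        return True

    def _drop_expired(self, now):
        # Messages older than their class's dropping time frame are shed.
        self.queue = deque(
            (t, c, m) for t, c, m in self.queue
            if now - t <= self.drop_frame[c]
        )

    def dequeue(self):
        return self.queue.popleft()[2] if self.queue else None
```

A short low-priority frame mimics the paper's observation: stale presence updates lose value quickly, so dropping them frees the PS buffer for fresher traffic.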
Abstract: In spite of all advancements in software testing,
debugging remains a labor-intensive, manual, time-consuming, and
error-prone process. A candidate solution for enhancing the
debugging process is to fuse it with the testing process. To achieve
this integration, a possible approach is to categorize common
software tests and errors, followed by an effort to fix the errors
through general solutions for each test/error pair. Our approach to
this issue is based on Christopher Alexander's pattern and
pattern-language concepts. The patterns in this language are grouped
into three major sections and connect the three concepts of test,
error, and debugging. These patterns and their hierarchical
relationships shape a pattern language that introduces a solution
for resolving software errors in a known testing context.
Finally, we introduce our framework ADE as a sample implementation
supporting a pattern of the proposed language, which aims to
automate the whole process of evolving software design via
evolutionary methods.
Abstract: Manufacturing processes demand tight dimensional
tolerances. The paper concerns a transducer for the precise
measurement of displacement, based on a camera containing a
line-scan chip. When tests were conducted using a track of black and
white stripes with a 2 mm pitch, the error in measuring an
individual cycle amounted to 1.75%, suggesting that a precision of
35 microns is achievable.
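The quoted precision follows directly from the per-cycle error and the stripe pitch:

```python
def per_cycle_precision_um(pitch_mm, error_fraction):
    """Absolute per-cycle error: a fraction of one stripe pitch,
    converted from millimetres to microns."""
    return pitch_mm * error_fraction * 1000.0
```

With the reported 2 mm pitch and 1.75% per-cycle error this reproduces the quoted 35 micron figure.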
Abstract: A Web-based learning tool, the Learn IN Context
(LINC) system, designed for and used in some institutions' courses
in mixed-mode learning, is presented in this paper. This mode
combines face-to-face and distance approaches to education. LINC
supports both collaborative and competitive learning. In order to
provide both learners and tutors with a more natural way to interact
with e-learning applications, a conversational interface has been
included in LINC. Hence, the components and essential features of
LINC+, the voice-enhanced version of LINC, are described. We report
evaluation experiments on LINC/LINC+ in the real context of a
computer programming course taught at the Université de Moncton
(Canada). The findings show that when the learning material is
delivered in the form of a collaborative and voice-enabled
presentation, the majority of learners seem to be satisfied with
this new medium and confirm that it does not negatively affect their
cognitive load.
Abstract: This paper presents a new sufficient condition for the
existence, uniqueness, and global asymptotic stability of the
equilibrium point of Cohen-Grossberg neural networks with multiple
time delays. The condition establishes a relationship among the
network parameters of the neural system independently of the delay
parameters. The results are also compared with previously reported
results in the literature.
Abstract: This paper introduces a technique for distortion
estimation in image watermarking using Genetic Programming (GP).
The distortion is estimated by casting the problem of obtaining a
distorted watermarked signal from the original watermarked signal as
a function regression problem. This function regression problem is
solved using GP, where the original watermarked signal is treated as
the independent variable. The GP-based distortion estimation scheme
is evaluated against Gaussian and JPEG compression attacks. We have
used Gaussian attacks of different strengths by varying the standard
deviation. The JPEG compression attack is also varied by introducing
various levels of distortion. Experimental results demonstrate that
the proposed technique is able to detect the watermark even under
strong distortions and is more robust against attacks.
Abstract: Advancement in Artificial Intelligence has led to the
development of various “smart” devices. A character recognition
device is one such smart device, acquiring partial human
intelligence with the ability to capture and recognize characters
in different languages. First, multiscale neural training with
modifications to the input training vectors is adopted in this paper
to exploit its advantage in training on higher-resolution character
images. Second, selective thresholding using a minimum-distance
technique is proposed to increase the accuracy of character
recognition. A simulator program (a GUI) is designed so that the
characters can be located at any spot on the blank paper on which
they are written. The results show that these methods, with a
moderate number of training epochs, can produce accuracies of at
least 85% for handwritten upper-case English characters and
numerals.
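The selective-thresholding step described above can be sketched as a minimum-distance classifier with a reject option: a character is accepted only when even the best matching template is close enough. The templates and threshold below are illustrative.

```python
import math

def min_distance_classify(x, templates, threshold):
    """Nearest-template classification with selective thresholding:
    return the best label, or None (reject) when the minimum
    distance exceeds the threshold."""
    best_label, best_dist = None, float("inf")
    for label, t in templates.items():
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, t)))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None
```

In a full system, x would be the feature vector produced by the trained network for a segmented character, and rejected inputs could be routed back for re-segmentation or manual review.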
Abstract: XML files contain data in a well-formatted manner. Studying the format and semantics of the grammar helps in fast retrieval of the data. There are many algorithms for searching data in XML files. A number of approaches either use data structures or relate to the contents of the document; in the former case the user must know the structure of the document, while information retrieval techniques using NLP relate to the content of the document. Hence the results may be irrelevant or unsuccessful, and the search may take more time. This paper presents fast XML retrieval techniques using a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, the user is not constrained to a fixed query format.
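The idea of indexing content and structure together can be sketched with the standard library's XML parser: each word is indexed under the tag path where it occurs, so a query can exploit either. The path-based index layout is an illustrative assumption, not the RXML scheme itself.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def build_index(xml_text):
    """Inverted index over both content (words) and structure (tag paths)."""
    index = defaultdict(list)  # word -> list of tag paths containing it
    root = ET.fromstring(xml_text)

    def walk(elem, path):
        here = path + "/" + elem.tag
        if elem.text and elem.text.strip():
            for word in elem.text.split():
                index[word.lower()].append(here)
        for child in elem:
            walk(child, here)

    walk(root, "")
    return index
```

A lookup then returns both where a term occurs and under which structural context, without the user having to phrase the query in a fixed format.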
Abstract: In this paper we present a technique to speed up
ICA based on the idea of reducing the dimensionality of the data
set while preserving the quality of the results. In particular, we
refer to the FastICA algorithm, which uses kurtosis as the
statistical property to be maximized. By performing a particular
Johnson-Lindenstrauss-like projection of the data set, we find the
minimum dimensionality reduction rate ρ, defined as the ratio
between the size k of the reduced space and the original size d,
which guarantees a narrow confidence interval for this estimator
with a high confidence level. The derived dimensionality reduction
rate depends on a system control parameter β, easily computed a
priori on the basis of the observations alone. Extensive simulations
have been performed on different sets of real-world signals. They
show that the achievable dimensionality reduction is in fact very
high, preserves the quality of the decomposition, and greatly speeds
up FastICA. On the other hand, a set of signals for which the
estimated reduction rate is greater than 1 exhibits poor
decomposition results when reduced, thus validating the reliability
of the parameter β. We are confident that our method will lead to a
better approach to real-time applications.
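The projection step can be sketched as multiplication by a random Gaussian matrix scaled by 1/√k (one common Johnson-Lindenstrauss construction), alongside the kurtosis estimator that FastICA maximizes. Both are generic sketches, not the paper's specific projection.

```python
import math
import random

def jl_project(data, k, seed=0):
    """Project d-dimensional points onto k dimensions using a random
    Gaussian matrix with entries N(0, 1)/sqrt(k)."""
    rng = random.Random(seed)
    d = len(data[0])
    rows = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
            for _ in range(k)]
    return [[sum(r[j] * x[j] for j in range(d)) for r in rows]
            for x in data]

def kurtosis(x):
    """Sample excess kurtosis, the statistic FastICA maximizes."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / n / var ** 2 - 3
```

With high probability such a projection approximately preserves pairwise distances, which is what lets the reduced-space kurtosis estimate stay within a narrow confidence interval of the full-space one.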
Abstract: In this paper we present a new method for over-height
vehicle detection in low-headroom streets and highways using digital
video processing. The accuracy, the lower price compared to existing
detectors such as laser radars, and the capability of providing
extra information such as speed and height measurements make this
method more reliable and efficient. In this algorithm, features are
selected and tracked using the KLT algorithm. A blob extraction
algorithm is also applied, using background estimation and
subtraction. Then the world coordinates of the features inside the
blobs are estimated using a novel calibration method. Once the
heights of the features are calculated, we apply a threshold to
select over-height features and eliminate the others. The
over-height features are segmented using association criteria,
grouped using an undirected graph, and then tracked through
sequential frames. The resulting groups correspond to over-height
vehicles in the scene.
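The thresholding and graph-based grouping steps can be sketched as follows: features above the height limit become nodes of an undirected proximity graph, and connected components yield the vehicle groups. The Manhattan linking distance is an illustrative association criterion, not the paper's exact one.

```python
def group_overheight(features, height_limit, link_dist):
    """Keep (x, y, h) features above the height limit, then group
    nearby ones via connected components of a proximity graph."""
    pts = [(x, y, h) for x, y, h in features if h > height_limit]
    n = len(pts)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            near = abs(pts[i][0] - pts[j][0]) + abs(pts[i][1] - pts[j][1])
            if near <= link_dist:
                adj[i].append(j)
                adj[j].append(i)
    groups, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:             # depth-first component traversal
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(pts[v])
            stack.extend(adj[v])
        groups.append(comp)
    return groups
```

Each returned group would then be tracked across frames to confirm a moving over-height vehicle rather than a transient artifact.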
Abstract: Evolutionary Programming (EP) represents a
methodology of Evolutionary Algorithms (EA) in which mutation is
considered the main reproduction operator. This paper presents a
novel EP approach for Artificial Neural Network (ANN) learning.
The proposed strategy consists of two components: the self-adaptive
component, which contains phenotype information, and the dynamic
component, which is described by the genotype. Self-adaptation is
achieved by the addition of a value, called the network weight,
which depends on the total number of hidden layers and the average
number of neurons in the hidden layers. The dynamic component
changes its value depending on the fitness of the chromosome exposed
to mutation. Thus, the mutation step size is controlled by two
components, encapsulated in the algorithm, which adjust it according
to the characteristics of a predefined ANN architecture and the
fitness of a particular chromosome. A comparative analysis of the
proposed approach and classical EP (Gaussian mutation) showed that
a significant acceleration of the evolution process is achieved by
using both phenotype and genotype information in the mutation
strategy.
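The two-component step-size control can be sketched as a Gaussian mutation whose standard deviation combines an architecture-dependent term (the network weight) and a fitness-dependent term. The scaling formula below is illustrative, not the paper's exact rule.

```python
import random

def mutate(weights, fitness, network_weight, rng=None):
    """Gaussian mutation with a step size scaled by the architecture
    (network_weight) and the chromosome's fitness: fitter chromosomes
    receive smaller perturbations. The formula is a hypothetical
    stand-in for the paper's self-adaptive/dynamic combination."""
    rng = rng or random.Random(0)
    step = network_weight / (1.0 + fitness)
    return [w + rng.gauss(0, step) for w in weights]
```

The intent matches the abstract: large, architecture-aware steps for poor chromosomes to explore, shrinking steps as fitness improves to fine-tune.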
Abstract: This paper proposes a low-power SRAM based on a
five-transistor SRAM cell. The proposed SRAM uses a novel word-line
decoding scheme such that, during a read/write operation, only the
selected cell is connected to its bit-line, whereas in a
conventional SRAM (CV-SRAM) all cells in the selected row are
connected to their bit-lines, which develops differential voltages
across all bit-lines and wastes energy on the unselected ones. In
the proposed SRAM, the memory array is divided into two halves,
which reduces the data-line capacitance. The proposed SRAM also uses
a single bit-line and thus has lower bit-line leakage than the
CV-SRAM. Furthermore, the proposed SRAM incurs no area overhead and
has read/write performance comparable to the CV-SRAM. Simulation
results in a standard 0.25 μm CMOS technology show that, in the
worst case, the proposed SRAM has 80% lower dynamic energy
consumption per cycle than the CV-SRAM. In addition, the energy
consumption per cycle of the proposed SRAM and the CV-SRAM has been
investigated analytically, and the results are in good agreement
with the simulation results.
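A first-order model of the bit-line energy argument can be sketched as follows: energy per access scales with the number of bit-lines that swing. The capacitance, voltage swing, and array-width numbers are illustrative assumptions, not the paper's 0.25 μm figures.

```python
def dynamic_energy(c_bitline, v_swing, vdd, n_active_bitlines):
    """First-order dynamic energy: E = n * C * dV * Vdd per access."""
    return n_active_bitlines * c_bitline * v_swing * vdd

# Illustrative parameters (hypothetical, not from the paper):
C, DV, VDD = 100e-15, 0.2, 2.5   # 100 fF bit-line, 200 mV swing, 2.5 V
COLS = 32                        # columns per row

# CV-SRAM: two bit-lines per column swing in every selected row.
e_conv = dynamic_energy(C, DV, VDD, 2 * COLS)
# Proposed SRAM: a single bit-line, and only the selected cell.
e_prop = dynamic_energy(C, DV, VDD, 1)
```

Under these assumptions the unselected-column energy dominates the conventional design, which is the mechanism behind the reported worst-case savings.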
Abstract: Recently, many researchers have been attracted to
retrieval from multimedia databases using impression words and their
values. Ikezoe's work is one representative and uses eight pairs of
opposite impression words. We modified its retrieval interface and
proposed '2D-RIB'. In 2D-RIB, after the user selects a single piece
of music as a base, the system visually shows other pieces of music
around the base one, positioned relative to it. The user can select
one of them that fits his/her intention as the retrieval result. The
purpose of this paper is to improve the user's satisfaction with the
retrieval result in 2D-RIB. One of our extensions is to define and
introduce the following two measures: 'melody goodness' and 'general
acceptance'. We implement them in five different combinations.
According to an evaluation experiment, both of these measures can
contribute to the improvement. Another extension is three types of
customization. We have implemented them and clarified which
customization is effective.
Abstract: This article outlines the conceptualization and
implementation of an intelligent system capable of extracting
knowledge from databases. The use of hybridized features of both
Rough and Fuzzy Set theory gives the developed system flexibility
in dealing with discrete as well as continuous datasets. A raw data
set provided to the system is initially transformed into a
computer-legible format, followed by pruning of the data set. The
refined data set is then processed through various Rough Set
operators which enable the discovery of parameter relationships and
interdependencies. The discovered knowledge is automatically
transformed into a rule base expressed in Fuzzy terms. Two exemplary
cancer repository datasets (for breast and lung cancer) have been
used to implement and test the proposed framework.
Abstract: Parsing is important in Linguistics and Natural
Language Processing for understanding the syntax and semantics of a
natural language grammar. Parsing natural language text is
challenging because of problems such as ambiguity and inefficiency.
Moreover, the interpretation of natural language text depends on
context-based techniques. A probabilistic component is essential to
resolve ambiguity in both syntax and semantics, thereby increasing
the accuracy and efficiency of the parser. The Tamil language has
some inherent features which make parsing more challenging. To
address them, a lexicalized and statistical approach is applied in
the parsing with the aid of a language model. Statistical models
mainly focus on the semantics of the language and suit
large-vocabulary tasks, whereas structural methods focus on syntax
and model small-vocabulary tasks. A statistical language model based
on trigrams for Tamil, with a medium vocabulary of 5000 words, has
been built. Though statistical parsing gives better performance
through trigram probabilities and a large vocabulary size, it has
some disadvantages, such as a focus on semantics rather than syntax
and a lack of support for the free ordering of words and long-range
relationships. To overcome these disadvantages, a structural
component is incorporated into the statistical language model,
leading to the implementation of a hybrid language model. This paper
attempts to build a phrase-structured hybrid language model which
resolves the above-mentioned disadvantages. In developing the hybrid
language model, a new part-of-speech tag set for Tamil, with more
than 500 tags offering wide coverage, has been developed. A
phrase-structured treebank has been developed with 326 Tamil
sentences covering more than 5000 words. The hybrid language model
has been trained on the phrase-structured treebank using the
immediate-head parsing technique. A lexicalized and statistical
parser which employs this hybrid language model and the
immediate-head parsing technique gives better results than pure
grammar-based and trigram-based models.
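The statistical component described above, a maximum-likelihood trigram model P(w3 | w1, w2), can be sketched on a toy corpus; English tokens stand in for Tamil words purely for readability.

```python
from collections import defaultdict

def train_trigram(sentences):
    """Maximum-likelihood trigram model: P(w3 | w1, w2) as the ratio
    of trigram counts to their two-word context counts."""
    counts = defaultdict(int)
    contexts = defaultdict(int)
    for s in sentences:
        toks = ["<s>", "<s>"] + s + ["</s>"]  # sentence boundary padding
        for i in range(len(toks) - 2):
            counts[tuple(toks[i:i + 3])] += 1
            contexts[tuple(toks[i:i + 2])] += 1
    return lambda w1, w2, w3: (
        counts[(w1, w2, w3)] / contexts[(w1, w2)]
        if contexts[(w1, w2)] else 0.0
    )
```

The hybrid model augments such probabilities with phrase-structure information from the treebank; a production system would also smooth the zero counts (backoff or interpolation) rather than return 0.0.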
Abstract: The Integrated Performance Modelling Environment
(IPME) is a powerful simulation engine for task simulation and
performance analysis. However, it lacks high-level cognition, such
as memory and reasoning, for complex simulation. This article
introduces a knowledge representation and reasoning scheme that can
accommodate uncertainty in simulations of military personnel with
IPME. This approach demonstrates how advanced reasoning models
that support similarity-based associative process, rule-based abstract
process, multiple reasoning methods and real-time interaction can be
integrated with conventional task network modelling to provide
greater functionality and flexibility when modelling operator
performance.
Abstract: Extensive use of the Internet, coupled with the
marvelous growth in e-commerce and m-commerce, has created a
huge demand for information security. The Secure Socket Layer
(SSL) protocol is the most widely used security protocol on the
Internet that meets this demand. It provides protection against
eavesdropping, tampering, and forgery. The cryptographic algorithms
RC4 and HMAC have been in use for achieving security services such
as confidentiality and authentication in SSL. But recent attacks
against RC4 and HMAC have raised questions about the confidence
placed in these algorithms. Hence two novel cryptographic
algorithms, MAJE4 and MACJER-320, have been proposed as substitutes
for them. The focus of this work is to demonstrate the performance
of these new algorithms and suggest them as dependable alternatives
for providing the security services needed in SSL. The performance
evaluation has been carried out through practical implementation.
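MAJE4 and MACJER-320 are the paper's own proposals, so they are not reproduced here; the authentication role the MAC algorithm plays in SSL can instead be illustrated with the standard HMAC construction they are meant to replace, using Python's stdlib.

```python
import hashlib
import hmac

def sign(key, message):
    """Compute an HMAC-SHA256 authentication tag for a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, tag):
    """Constant-time tag comparison, as required to avoid timing leaks."""
    return hmac.compare_digest(sign(key, message), tag)
```

Any tampering with the message (or the tag) makes verification fail, which is exactly the forgery protection the abstract attributes to the MAC layer of SSL.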