Abstract: Negation is useful in most real-world applications. However, its introduction leads to semantic and canonical problems. SEPN nets are a well-adapted extension of predicate nets for the definition and manipulation of stratified programs. This formalism is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second concerns the optimization of common operations (maximal stratification, incremental updates, ...). In this paper, we propose useful algorithms for manipulating stratified programs using SEPN. These algorithms were implemented and validated with the STRPRO tool.
Abstract: In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index, built by scanning the two relations once. Instead of moving records into buckets, a double index is constructed, which eliminates the collisions that can occur with a conventional hash algorithm. The double index is divided into join buckets of similar categories from the two relations. The algorithm then joins buckets with similar keys to produce joined buckets, leading in the end to a complete join index of the two relations, without actually joining the relations themselves. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, for a total time complexity of O(n log m) over all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
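As a rough illustration of the idea (a hash-based sketch, not the paper's exact double-index structure; relation layout and key names are hypothetical), a join index pairing record ids whose join keys match can be built in a single scan of each relation:

```python
from collections import defaultdict

def build_join_index(R, S, key):
    """Build a join index: (rid, sid) pairs whose join keys match."""
    # one scan of each relation, bucketing record ids by join key
    r_idx, s_idx = defaultdict(list), defaultdict(list)
    for rid, rec in enumerate(R):
        r_idx[rec[key]].append(rid)
    for sid, rec in enumerate(S):
        s_idx[rec[key]].append(sid)
    # join buckets that share a key; the relations themselves are never joined
    return [(rid, sid)
            for k in r_idx.keys() & s_idx.keys()
            for rid in r_idx[k]
            for sid in s_idx[k]]

R = [{"id": 1, "dept": "A"}, {"id": 2, "dept": "B"}]
S = [{"emp": "x", "dept": "B"}, {"emp": "y", "dept": "B"}]
print(build_join_index(R, S, "dept"))  # [(1, 0), (1, 1)]
```

Materializing the joined relation, when required, then amounts to fetching the record pairs named by the index.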
Abstract: With increasing circuit complexity and the demand for portable devices, power consumption is one of the most important design parameters today. Full adders are the basic building block of many circuits; therefore, reducing power consumption in full adders is very important in low-power circuits. One of the most power-consuming modules in a full adder is the XOR/XNOR circuit. This paper presents two new full adders based on two new logic approaches. The proposed approaches use only one XOR or XNOR gate to implement a full adder cell, so delay and power are decreased. Using the two new approaches and XOR and XNOR gates, two new full adders have been implemented in this paper. Simulations were carried out with HSPICE in 0.18 μm bulk technology with a 1.8 V supply voltage. The results show that the proposed ten-transistor full adder has 12% less power consumption and is 5% faster in comparison to the MB12T full adder. The proposed 9T adder is more efficient in area and consumes 24% less power than a similar 10T full adder. The main drawback of the proposed circuits is the output threshold-loss problem.
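At the Boolean level (the paper's contribution is transistor-level, verified in HSPICE; the logic below is only the standard full-adder behavior its cells must realize), a full adder built around a single XOR stage can be sketched as:

```python
def full_adder(a, b, cin):
    # h = a XOR b feeds both the sum and the carry logic,
    # so only one XOR stage is needed on the sum path
    h = a ^ b
    s = h ^ cin                  # sum bit
    cout = (a & b) | (h & cin)   # carry-out (majority of a, b, cin)
    return s, cout

# exhaustive check against integer addition
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert a + b + cin == s + 2 * cout
```

Reusing the intermediate h for both outputs is what lets XOR-based cells keep the transistor count low.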
Abstract: This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extracting suitable features, we apply matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to decrease the search space and thereby increase accuracy and processing speed. The proposed method links the detected feature points into chains and compares some of the feature points from different chains to increase matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, we use dynamic programming to obtain the dense disparity map. The method differs from classical DP approaches in stereo vision, since it employs the sparse disparity map obtained from the feature-based matching stage, and DP is performed only on a scan line, between any two matched feature points on that scan line. Thus our algorithm is truly an optimization method. It offers a good trade-off between accuracy and computational efficiency: in our experiments, the proposed algorithm increases accuracy by 20 to 70% and reduces running time by almost 70%.
Abstract: Electrical distribution systems incur large losses because loads are widespread, reactive power compensation facilities are inadequate, and their control is improper. A comprehensive static VAR compensator, consisting of a capacitor bank in five binary sequential steps in conjunction with a thyristor-controlled reactor (TCR) of the smallest step size, is employed in this investigation. The work deals with performance evaluation through analytical studies and practical implementation on an existing system. A fast-acting error-adaptive controller is developed, suitable for both contactor- and thyristor-switched capacitors. The switching operations achieved are transient-free, with practically no need for inrush-current-limiting reactors; the minimum TCR size produces only small percentages of non-triplen harmonics and facilitates stepless variation of reactive power according to the load requirement, so as to maintain the power factor near unity at all times. It is an elegant, closed-loop microcontroller system with self-regulation in adaptive mode for automatic adjustment. It was successfully tested on a three-phase, 50 Hz, Dy11, 11 kV/440 V, 125 kVA distribution transformer, and its functional feasibility and technical soundness are established. The controller developed is new, adaptable to both LT and HT systems, and has been shown in practice to give reliable performance.
Abstract: Elliptic curve-based certificateless signature is slowly gaining attention due to its ability to retain the efficiency of identity-based signature, eliminating the need for certificate management, while not suffering from the inherent private key escrow problem. Generally, cryptosystems based on elliptic curves offer equivalent security strength at smaller key sizes compared to conventional cryptosystems such as RSA, which results in faster computation and efficient use of computing power, bandwidth, and storage. This paper proposes implementing a certificateless signature based on bilinear pairing to structure the framework of IKE authentication. We perform a comparative analysis of the certificateless signature scheme against the well-known RSA scheme and present experimental results for signing and verification execution times. Generalizing our observations, we discuss the different trade-offs involved in implementing IKE authentication using certificateless signatures.
Abstract: For over a decade, the Pulse Coupled Neural Network
(PCNN) based algorithms have been successfully used in image
interpretation applications including image segmentation. There are
several versions of the PCNN based image segmentation methods,
and the segmentation accuracy of all of them is very sensitive to the
values of the network parameters. Most methods treat PCNN parameters such as the linking coefficient and the primary firing threshold as global parameters and determine them by trial and error. Automatically determining appropriate values for the linking coefficient and the primary firing threshold is a challenging problem that deserves
further research. This paper presents a method for obtaining global as
well as local values for the linking coefficient and the primary firing
threshold for neurons directly from the image statistics. Extensive
simulation results show that the proposed approach achieves
excellent segmentation accuracy comparable to the best accuracy
obtainable by trial-and-error for a variety of images.
Abstract: Since the conception of JML, many tools, applications, and implementations have been developed. In this context, users or developers who want to use JML can feel overwhelmed by the number of these tools and applications. Looking for a common infrastructure and an independent language to provide a bridge between these tools and JML, we developed an approach to embedding contracts in XML for Java: XJML. This approach offers the ability to separate preconditions, postconditions, and class invariants using JML and XML, so we built a front-end which can perform Runtime Assertion Checking, Extended Static Checking, and Full Static Program Verification. Moreover, the capabilities of this front-end can be extended and easily implemented thanks to XML. We believe that XJML is an easy way to start building a graphical user interface, delivering friendliness and IDE independence to the developer community that wants to work with JML.
Abstract: In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique used Hadamard filters in the decomposition step to split the input sequence into low- and high-pass sequences. In the next step, either two DFTs are performed on both bands to compute the full-band DFT, or one DFT on one of the two bands to compute an approximate DFT. A combination network with correction factors is then applied after the DFTs. Another approach was proposed in 1997 [2], using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters. The only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
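The two-band decomposition and combination network can be illustrated numerically (a NumPy sketch; the sum/difference pairing below is the Hadamard, i.e. unnormalized Haar, analysis step, and the exact correction factors used in [1] and [2] may differ by the constant the paper discusses):

```python
import numpy as np

def subband_dft(x):
    """Assemble the full-band DFT from DFTs of the low and high subbands."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    low  = x[0::2] + x[1::2]            # low band: sums of sample pairs
    high = x[0::2] - x[1::2]            # high band: differences
    A = np.fft.fft(low)                 # N/2-point DFT of each band
    B = np.fft.fft(high)
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)     # correction (twiddle) factors
    A, B = np.tile(A, 2), np.tile(B, 2) # subband DFTs are N/2-periodic
    # combination network: X[k] = ((1 + W^k) A[k] + (1 - W^k) B[k]) / 2
    return 0.5 * ((1 + W) * A + (1 - W) * B)

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(subband_dft(x), np.fft.fft(x))
```

Dropping the B (or A) term and keeping only one band's DFT gives the approximate DFT the abstract mentions, at half the transform cost.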
Abstract: This paper proposes a new technique for improving the efficiency of software testing, based on a conventional attempt to reduce the test cases that must be run for any given piece of software. The approach exploits the advantage of regression testing, where fewer test cases lessen the time consumed by testing as a whole. The technique also offers a means to generate test cases automatically. Compared with a technique in the literature where the tester has no option but to generate test cases manually, the proposed technique provides a better alternative. For test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum, and constant values). In this way, variable values are limited to a definite range, resulting in fewer possible test cases to process. The technique can also be applied to program loops and arrays.
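As an illustrative sketch of this kind of reduction (a boundary-value-style simplification, not the paper's exact algebraic conditions), fixing each variable to its minimum, maximum, and one interior value collapses the input space dramatically:

```python
from itertools import product

def reduce_domain(lo, hi):
    # collapse an integer input range to its boundaries plus one midpoint
    return sorted({lo, (lo + hi) // 2, hi})

def reduced_test_cases(domains):
    # cartesian product of the reduced per-variable domains
    return list(product(*(reduce_domain(lo, hi) for lo, hi in domains)))

cases = reduced_test_cases([(0, 100), (-5, 5)])
print(len(cases))  # 9, instead of 101 * 11 = 1111 exhaustive cases
```

The fixed values act as representatives of their ranges, so the reduced suite still exercises the extremes where faults tend to cluster.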
Abstract: This work presents a neural network model for the cluster analysis of data based on Self-Organizing Maps (SOM).
The model evolves during the training stage towards a hierarchical
structure according to the input requirements. The hierarchical structure
symbolizes a specialization tool that provides refinements of the
classification process. The structure behaves like a single map with
different resolutions depending on the region to analyze. The benefits
and performance of the algorithm are discussed in application to the
Iris dataset, a classical example for pattern recognition.
Abstract: This article presents an integrated method for detecting steganographic content embedded by new, unknown programs. The method is based on data mining and aggregated hypothesis testing. The article contains the theoretical basics used to deploy the proposed detection system and describes the improvements proposed for the basic system design. The main experimental results and implementation details are then collected and described. Finally, example test results are presented.
Abstract: Fixed-point simulation results are used to measure the performance of matrix inversion using a reconfigurable processing element. Matrices are inverted using the Cholesky decomposition algorithm. The reconfigurable processing element is capable of all required mathematical operations. The fixed-point word-length analysis is based on simulations over different condition numbers and different matrix sizes.
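In floating point (the paper's focus is on fixed-point word lengths, which this sketch does not model), Cholesky-based inversion reduces to the decomposition plus triangular solves:

```python
import numpy as np

def cholesky_inverse(A):
    """Invert a symmetric positive-definite matrix via A = L L^T."""
    L = np.linalg.cholesky(A)             # lower-triangular Cholesky factor
    n = A.shape[0]
    Linv = np.linalg.solve(L, np.eye(n))  # forward substitution gives L^{-1}
    return Linv.T @ Linv                  # A^{-1} = L^{-T} L^{-1}

A = np.array([[4.0, 2.0], [2.0, 3.0]])
assert np.allclose(cholesky_inverse(A) @ A, np.eye(2))
```

The square roots, divisions, and multiply-accumulates inside the factorization are exactly the operations whose fixed-point word lengths the simulations evaluate, with accuracy degrading as the condition number grows.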
Abstract: Skin color is an important visual cue for computer
vision systems involving human users. In this paper we combine skin
color and optical flow for detection and tracking of skin regions. We
apply these techniques to gesture recognition with encouraging
results. We propose a novel skin similarity measure and a novel mechanism for grouping detected skin regions. The proposed techniques work with any number of skin regions, making them suitable for multi-user scenarios.
Abstract: In this paper, an image adaptive, invisible digital
watermarking algorithm with Orthogonal Polynomials based
Transformation (OPT) is proposed, for copyright protection of digital
images. The proposed algorithm utilizes a visual model to determine
the watermarking strength necessary to invisibly embed the
watermark in the mid frequency AC coefficients of the cover image,
chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges, and luminance of the
cover image in the orthogonal polynomials based transformation
domain. Since the secret key is required for both embedding and
extraction of watermark, it is not possible for an unauthorized user to
extract the embedded watermark. The proposed scheme is robust to
common image processing distortions like filtering, JPEG
compression and additive noise. Experimental results show that the
quality of OPT domain watermarked images is better than its DCT
counterpart.
Abstract: In this paper, the RSA encryption algorithm and its hardware implementation in Xilinx's Virtex Field Programmable Gate Arrays (FPGAs) are analyzed. The issues of scalability, flexible performance, and silicon efficiency for the hardware acceleration of public-key cryptosystems are explored in the present work. Using techniques based on interleaved math for exponentiation, the proposed RSA calculation architecture is compared to existing FPGA-based solutions for speed, FPGA utilization, and scalability. The paper covers the RSA encryption algorithm, interleaved multiplication, the Miller-Rabin algorithm for primality testing, extended Euclidean math, basic FPGA technology, and the implementation details of the proposed RSA calculation architecture. The performance of several alternative hardware architectures is discussed and compared. Finally, conclusions are drawn, highlighting the advantages of a fully flexible and parameterized design.
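The interleaved-math idea can be sketched in software (a bit-serial modular multiplication; in the hardware version the reduction step is typically a small fixed number of conditional subtractions per cycle rather than a loop):

```python
def interleaved_modmul(a, b, n):
    """Compute (a * b) mod n by scanning the bits of a from MSB to LSB,
    keeping the partial result reduced modulo n at every step."""
    r = 0
    for i in range(n.bit_length() - 1, -1, -1):
        r <<= 1               # shift the partial result
        if (a >> i) & 1:
            r += b            # conditionally add the multiplicand
        while r >= n:         # interleaved reduction: r stays below n
            r -= n
    return r

a, b, n = 12345, 67890, 99991
assert interleaved_modmul(a, b, n) == (a * b) % n
```

Because the partial result never exceeds a few multiples of n, the hardware datapath stays as wide as the modulus; RSA exponentiation then applies square-and-multiply using this primitive.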
Abstract: In this paper, we present a new type of pointing interface for computers which provides mouse functionality with near-surface haptic feedback. Further, it can be configured as a haptic display where users may feel basic geometrical shapes in the GUI by moving a finger over the device surface. These functionalities are achieved by tracking the three-dimensional position of a neodymium magnet using a grid of Hall-effect sensors and generating like-polarity haptic feedback using an electromagnet array. This interface brings haptic sensations to 3D space, whereas previously they were felt only on top of the buttons of haptic mouse implementations.
Abstract: In this paper, a hybrid technique of Genetic Algorithm and Simulated Annealing (HGASA) is applied to Fractal Image Compression (FIC). With the help of this hybrid evolutionary algorithm, an effort is made to reduce the search complexity of matching between range blocks and domain blocks. The concept of Simulated Annealing (SA) is incorporated into the Genetic Algorithm (GA) in order to avoid premature convergence of the strings. Fractal Image Compression is a spatial-domain image compression technique, but its main drawback is the large computational time required by the global search. In order to improve the computational time while maintaining acceptable quality of the decoded image, the HGASA technique has been proposed. Experimental results show that the proposed HGASA is a better method than GA in terms of PSNR for Fractal Image Compression.
Abstract: Texture classification is an active and appealing area within texture analysis. Textures, i.e., repeated patterns, have different frequency components along different orientations. Our work is based on texture classification and its applications, which span various fields such as medical image classification, computer vision, remote sensing, agriculture, and the textile industry. Weed control has a major effect on agriculture: large amounts of herbicide are used for controlling weeds in agricultural fields, lawns, golf courses, sports fields, etc. Random spraying of herbicides does not match the actual requirements of the field, since certain areas have more weed patches than estimated. We therefore need a vision system that can discriminate weeds in an image of the field, which would reduce or even eliminate the amount of herbicide used; farmers could then avoid herbicides entirely or apply them only where they are needed. A machine-vision precision automated weed control system could reduce the usage of chemicals in crop fields. In this paper, an intelligent system for an automated weeding strategy, Multi-Resolution Combined Statistical and Spatial Frequency analysis, is used to discriminate weeds from crops and to classify them as narrow, little, or broad weeds.
Abstract: We introduce a new interactive 3D simulator of ocular motion and expressions suitable for: (1) character animation applications in game design, film production, HCI (Human Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (research and education on ophthalmic, neurological, and muscular pathologies); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). Using state-of-the-art computer animation technology, we have modeled and rigged a physiologically accurate 3D model of the eyes, eyelids, and eyebrow regions, and we have 'optimized' it for use with an interactive, web-deliverable platform. In addition, we have realized a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for the applications listed in (1), (2), and (3) above. The 3D simulator of eye motion and ocular expression is, to our knowledge, the most advanced and realistic available so far for applications in character animation and medical pedagogy.