Abstract: Natural Language Understanding (NLU) systems will not be widely deployed unless they are technically mature and cost-effective to develop. Cost-effective development hinges on the availability of tools and techniques that enable the rapid production of NLU applications with minimal human resources. Further, these tools and techniques should allow quick, user-friendly development of applications and should be easy to upgrade so as to keep pace with evolving technologies and standards. This paper presents a visual tool for structuring and editing dialog forms, the key element driving conversation in NLU applications based on IBM technology. The main focus is on the basic component used to describe human-machine interactions of this kind, the Dialogue Manager. In essence, we describe a tool that enables the visual representation of the Dialogue Manager, mainly during the implementation phase.
Abstract: This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data, addressing the problem that most relational databases cannot readily express annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), for querying annotation documents. AnQL is simple to understand and builds on the widely available knowledge and skill set of SQL practitioners.
Abstract: This paper shows the need to increase the security level of document management in the cadastral field by using specific graphical watermarks. Graphical watermarking increases the security of cadastral content management; furthermore, any altered document can later have its originality verified by checking the graphic watermark. If a document is changed for the purpose of counterfeiting, the pixel-level check of the watermark invalidates it and reveals it as an illegal copy.
Abstract: The running logs of a process hold valuable information about its executed activity behavior and the logic structure of the activities it generates. These informative logs can be extracted, analyzed, and utilized to improve the efficiency of the process's execution. One technique used to accomplish such process improvement is process mining, and mining similar processes is one of its tasks. Rather than directly mining similar processes using a single comparison coefficient or a complicated fitness function, this paper presents a simplified heuristic process mining algorithm with two similarity comparisons that relatively conform the activity logic sequences (traces) of the mined processes to those of a normalized (regularized) one. The purpose of relative process conformance is to find which of the mined processes match the required activity sequences and relationships, and further to support necessary and sufficient application of the mined processes to process improvement. The first similarity is defined by the relationships in terms of the number of similar activity sequences existing in different processes; the second expresses the degree of similar (identical) activity sequences among the conforming processes. Since these two similarities are computed with respect to typical behavior (activity sequences) occurring in an entire process, common problems that often appear in other process conformance techniques, such as the inappropriateness of an absolute comparison and the inability to elicit intrinsic information, can be solved by the relative process comparison presented in this paper. A numerical example illustrates the potential of the proposed algorithm.
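A minimal sketch of the two-similarity idea, under assumptions of our own: traces are lists of activity names, the first measure compares direct-succession pairs against a reference (normalized) log, and the second counts identical whole traces. The paper's actual coefficients may differ.

```python
# Hedged sketch: two relative similarity measures over activity traces.
# The exact definitions in the paper may differ; this only illustrates
# comparing a mined log against a normalized (reference) log.

def succession_pairs(trace):
    """Direct-succession pairs (a -> b) occurring in a trace."""
    return {(trace[i], trace[i + 1]) for i in range(len(trace) - 1)}

def behavioural_similarity(candidate, reference):
    """Share of the candidate's succession pairs also seen in the reference."""
    cand = set().union(*(succession_pairs(t) for t in candidate))
    ref = set().union(*(succession_pairs(t) for t in reference))
    return len(cand & ref) / len(cand) if cand else 1.0

def trace_similarity(candidate, reference):
    """Degree of identical (whole) traces between the two logs."""
    ref = {tuple(t) for t in reference}
    hits = sum(1 for t in candidate if tuple(t) in ref)
    return hits / len(candidate) if candidate else 1.0

ref_log = [["a", "b", "c", "d"], ["a", "c", "b", "d"]]
cand_log = [["a", "b", "c", "d"], ["a", "b", "d"]]
print(behavioural_similarity(cand_log, ref_log))
print(trace_similarity(cand_log, ref_log))
```

Because both measures are relative to the reference log, a candidate can score high on behavior even when few of its whole traces are identical, which is the distinction the abstract draws.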
Abstract: We developed a GPS-based navigation device for the blind, with audio guidance in the Thai language. The device is composed of simple and inexpensive hardware components, and its user interface is quite simple. It determines optimal routes to various landmarks on our university campus by using heuristic search for the next waypoints. We tested the device and noted its limitations and possible extensions.
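One plausible reading of "heuristic search for the next waypoints" is A* over a waypoint graph with a straight-line distance heuristic; the graph, coordinates, and costs below are illustrative, not the device's actual campus data.

```python
import heapq, math

# Hedged sketch: A* route search to a landmark. The campus graph and
# coordinates are hypothetical; the device's real data will differ.

def astar(graph, coords, start, goal):
    def h(n):  # straight-line distance heuristic (admissible here)
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best_g.get(node, float("inf")) <= g:
            continue
        best_g[node] = g
        for nxt, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

coords = {"gate": (0, 0), "library": (3, 0), "lab": (3, 4), "canteen": (0, 4)}
graph = {
    "gate": [("library", 3), ("canteen", 4)],
    "library": [("lab", 4), ("gate", 3)],
    "canteen": [("lab", 3), ("gate", 4)],
    "lab": [],
}
path, dist = astar(graph, coords, "gate", "lab")
print(path, dist)
```

With straight-line distances never exceeding edge costs, the heuristic is admissible, so the first time the goal is popped the route cost is optimal.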
Abstract: Using bottom-up image processing algorithms to predict human eye fixations and extract the relevant embedded information in images has been widely applied in the design of active machine vision systems. Scene text is an important feature to be extracted, especially in vision-based mobile robot navigation, as many potential landmarks such as nameplates and information signs contain text. This paper proposes an edge-based text region extraction algorithm that is robust with respect to font sizes, styles, color/intensity, and orientations, as well as the effects of illumination, reflections, shadows, perspective distortion, and complex image backgrounds. The performance of the proposed algorithm is compared against a number of widely used text localization algorithms, and the results show that this method can quickly and effectively localize and extract text regions from real scenes and can be used in mobile robot navigation in indoor environments to detect text-based landmarks.
Abstract: Space Vector Modulation (SVM) is an optimum Pulse Width Modulation (PWM) technique for inverters used in variable-frequency drive applications. It is computationally rigorous and hence limits the inverter switching frequency. Higher switching frequencies can be achieved using neural network (NN) based SVM implemented on application-specific chips. This paper proposes a neural network based SVM technique for a Voltage Source Inverter (VSI). The proposed network is independent of switching frequency. Different architectures are investigated, keeping the total number of neurons constant, and the performance of the inverter is compared across switching frequencies for each NN-based SVM architecture. From the results obtained, the network with minimal resources and an appropriate word length is identified, along with the bit precision required for this application. The network with 8-bit precision is implemented in the XCV 400 FPGA and the results are presented. The performance of NN-based general-purpose SVM with higher bit precision is also discussed.
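For context, the conventional calculation an NN-based SVM learns to approximate is the sector and dwell-time computation. The sketch below follows the standard textbook formulation (symbols T1, T2, T0, modulation index m), not necessarily the paper's exact notation.

```python
import math

# Hedged sketch: conventional SVM dwell-time calculation for a two-level
# VSI; this is the computationally heavy step that the abstract's neural
# network replaces. Notation follows common SVM texts.

def svm_dwell_times(v_ref, theta, v_dc, t_s):
    """Return (sector, T1, T2, T0) for a reference vector of magnitude
    v_ref at angle theta (rad), DC-link voltage v_dc, switching period t_s."""
    sector = int(theta // (math.pi / 3)) % 6 + 1
    alpha = theta % (math.pi / 3)        # angle measured inside the sector
    m = math.sqrt(3) * v_ref / v_dc      # modulation index
    t1 = m * t_s * math.sin(math.pi / 3 - alpha)  # first active vector
    t2 = m * t_s * math.sin(alpha)                # second active vector
    t0 = t_s - t1 - t2                            # zero-vector time
    return sector, t1, t2, t0

sector, t1, t2, t0 = svm_dwell_times(v_ref=200.0, theta=math.pi / 6,
                                     v_dc=400.0, t_s=1e-4)
print(sector, t1, t2, t0)
```

The trigonometric evaluations per switching period are what limit software implementations; a trained feed-forward network maps (v_ref, theta) to the dwell times with fixed-latency arithmetic.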
Abstract: Computer animation is a widely adopted technique used to specify the movement of various objects on screen. The key issue of this technique is the specification of motion; motion control methods are used to specify the actions of objects. This paper discusses the various types of motion control methods, with special focus on behavioral animation. A behavioral model is also proposed that takes into account the emotions and perceptions of an actor, which in turn generate its behavior. This model makes use of an expert system to generate tasks for the actors, specifying the actions to be performed in the virtual environment.
Abstract: We introduce a new interactive 3D simulator of ocular motion and expressions suitable for: (1) character animation applications in game design, film production, HCI (Human Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (ophthalmic, neurological, and muscular pathologies: research and education); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). Using state-of-the-art computer animation technology we have modeled and rigged a physiologically accurate 3D model of the eyes, eyelids, and eyebrow regions and have optimized it for use with an interactive and web-deliverable platform. In addition, we have realized a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for the applications in (1), (2), and (3) above. The 3D simulator of eye motion and ocular expression is, to our knowledge, the most advanced and realistic available so far for applications in character animation and medical pedagogy.
Abstract: Texture classification is a popular and appealing technology in the field of texture analysis. Textures, i.e., repeated patterns, have different frequency components along different orientations. Our work is based on texture classification and its applications, which arise in various fields such as medical image classification, computer vision, remote sensing, agriculture, and the textile industry. Weed control has a major effect on agriculture: large amounts of herbicide are used to control weeds in agricultural fields, lawns, golf courses, sports fields, etc. Random spraying of herbicides does not meet the exact requirements of the field, since certain areas have more weed patches than estimated. We therefore need a visual system that can discriminate weeds in a field image, reducing or even eliminating the amount of herbicide used; farmers could then avoid herbicides altogether or apply them only where needed. A machine-vision precision automated weed control system could reduce the usage of chemicals in crop fields. In this paper, an intelligent system for an automatic weeding strategy, Multi-Resolution Combined Statistical and Spatial Frequency, is used to discriminate weeds from crops and to classify them as narrow, little, or broad weeds.
Abstract: In this paper, a hybrid technique combining a Genetic Algorithm and Simulated Annealing (HGASA) is applied to Fractal Image Compression (FIC). With the help of this hybrid evolutionary algorithm, an effort is made to reduce the search complexity of matching between range blocks and domain blocks. The concept of Simulated Annealing (SA) is incorporated into the Genetic Algorithm (GA) in order to avoid premature convergence of the strings. Fractal Image Compression is a spatial-domain image compression technique, but its main drawback is the large computational time caused by the global search. In order to improve the computational time while maintaining acceptable quality of the decoded image, the HGASA technique has been proposed. Experimental results show that the proposed HGASA is a better method than GA in terms of PSNR for Fractal Image Compression.
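A minimal sketch of the hybrid idea on a toy minimisation problem (not actual range/domain block matching): the SA acceptance rule lets a GA offspring that is worse than its parent survive with probability exp(-delta/T), which is the mechanism that counters premature convergence. The fitness function, genome encoding, and schedule below are our own illustrative assumptions.

```python
import math, random

# Hedged sketch: GA with SA-style acceptance (HGASA idea) on a toy
# problem. The toy fitness stands in for the range/domain block
# matching error of fractal image compression.

def fitness(genome):
    # hypothetical stand-in for block-matching error (lower is better)
    return sum((g - 7) ** 2 for g in genome)

def hgasa(pop_size=20, length=8, generations=200, t0=10.0, cooling=0.97):
    random.seed(1)
    pop = [[random.randint(0, 15) for _ in range(length)] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            child = parent[:]
            i = random.randrange(length)            # point mutation
            child[i] = random.randint(0, 15)
            delta = fitness(child) - fitness(parent)
            # SA acceptance: always take improvements; sometimes worse ones
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        temp *= cooling                             # geometric cooling schedule
    return min(pop, key=fitness)

best = hgasa()
print(best, fitness(best))
```

Early in the run the high temperature keeps diversity; as the temperature cools, the search degenerates into hill climbing, so the population settles near good solutions without having converged prematurely.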
Abstract: In this paper, we present a new type of pointing interface for computers that provides mouse functionality with near-surface haptic feedback. Further, it can be configured as a haptic display where users may feel basic geometrical shapes in the GUI by moving a finger on top of the device surface. These functionalities are achieved by tracking the three-dimensional position of a neodymium magnet using a Hall-effect sensor grid and generating like-polarity haptic feedback using an electromagnet array. This interface brings haptic sensations into 3D space, whereas previously they were felt only on top of the buttons of haptic mouse implementations.
Abstract: In this paper, the RSA encryption algorithm and its hardware implementation in Xilinx's Virtex Field Programmable Gate Arrays (FPGAs) are analyzed. The present work explores the issues of scalability, flexible performance, and silicon efficiency for the hardware acceleration of public-key cryptosystems. Using techniques based on interleaved arithmetic for exponentiation, the proposed RSA calculation architecture is compared to existing FPGA-based solutions for speed, FPGA utilization, and scalability. The paper covers the RSA encryption algorithm, interleaved multiplication, the Miller-Rabin primality test, the extended Euclidean algorithm, basic FPGA technology, and the implementation details of the proposed RSA calculation architecture. The performance of several alternative hardware architectures is discussed and compared. Finally, conclusions are drawn, highlighting the advantages of a fully flexible and parameterized design.
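Two of the building blocks the abstract names are standard and can be sketched directly: interleaved modular multiplication, which processes the multiplier MSB-first so the partial product never exceeds a few multiples of the modulus (the property that maps well onto FPGA shift/add logic), and the Miller-Rabin probabilistic primality test. The word width and round count below are illustrative choices, not the paper's parameters.

```python
import random

# Hedged sketch: interleaved modular multiplication, the bit-serial
# algorithm referenced in the abstract. Requires a < n and b < 2**width.
def interleaved_modmul(a, b, n, width):
    p = 0
    for i in reversed(range(width)):
        p <<= 1                    # shift partial product
        if (b >> i) & 1:
            p += a                 # conditionally add multiplicand
        if p >= n:                 # at most two subtractions restore p < n
            p -= n
        if p >= n:
            p -= n
    return p

# Hedged sketch: Miller-Rabin probabilistic primality test.
def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)           # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False           # a is a witness of compositeness
    return True

print(interleaved_modmul(123, 456, 789, 9))
print(miller_rabin(10007), miller_rabin(10001))
```

In hardware, the interleaved loop becomes one shift/add/subtract stage per multiplier bit, and exponentiation is built by iterating it square-and-multiply fashion.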
Abstract: In this paper, an image-adaptive, invisible digital watermarking algorithm with Orthogonal Polynomials based Transformation (OPT) is proposed for copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges, and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression, and additive noise. Experimental results show that the quality of OPT-domain watermarked images is better than that of their DCT counterparts.
Abstract: Skin color is an important visual cue for computer vision systems involving human users. In this paper we combine skin color and optical flow for the detection and tracking of skin regions, and apply these techniques to gesture recognition with encouraging results. We propose a novel skin similarity measure and a novel mechanism for grouping detected skin regions. The proposed techniques work with any number of skin regions, making them suitable for multi-user scenarios.
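As a baseline for what a skin-color cue looks like in code, here is a simple fixed-range classifier in YCbCr space using the widely cited Chai-Ngan thresholds; this stands in for, and is not, the paper's own similarity measure.

```python
# Hedged sketch: fixed-range skin-color classification in YCbCr space
# (Chai-Ngan thresholds: 77 <= Cb <= 127, 133 <= Cr <= 173). The
# paper's proposed skin similarity measure is more refined than this.

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """True if the pixel's chrominance falls in the skin range."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(224, 172, 138))  # a typical skin tone
print(is_skin(30, 80, 200))    # a saturated blue
```

Working in chrominance only makes the rule largely insensitive to brightness, which is why a measure of similarity in (Cb, Cr), combined here with optical flow, can hold up across lighting changes.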
Abstract: Fixed-point simulation results are used for the performance measure of inverting matrices using a reconfigurable processing element. Matrices are inverted using the Cholesky decomposition algorithm. The reconfigurable processing element is capable of all required mathematical operations. The fixed-point word length analysis is based on simulations of different condition numbers and different matrix sizes.
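The floating-point reference computation behind the abstract can be sketched as follows: decompose a symmetric positive-definite matrix as A = L L^T, then invert by solving two triangular systems per unit vector. The paper's contribution is quantising exactly these operations to fixed point; the sketch below stays in floating point.

```python
import math

# Hedged sketch: Cholesky-based inversion of a symmetric positive-
# definite matrix (floating point). The paper maps these square-root,
# divide, and multiply-accumulate operations onto a fixed-point
# reconfigurable processing element.

def cholesky(a):
    """Lower-triangular L with A = L L^T."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def invert_spd(a):
    """Inverse via forward/back substitution against each unit vector."""
    n = len(a)
    l = cholesky(a)
    inv = [[0.0] * n for _ in range(n)]
    for k in range(n):
        y = [0.0] * n
        for i in range(n):              # solve L y = e_k
            y[i] = ((1.0 if i == k else 0.0)
                    - sum(l[i][j] * y[j] for j in range(i))) / l[i][i]
        x = [0.0] * n
        for i in reversed(range(n)):    # solve L^T x = y
            x[i] = (y[i] - sum(l[j][i] * x[j] for j in range(i + 1, n))) / l[i][i]
        for i in range(n):
            inv[i][k] = x[i]
    return inv

print(invert_spd([[4.0, 2.0], [2.0, 3.0]]))
```

The condition-number dependence the abstract studies shows up here in the divisions by l[i][i]: ill-conditioned matrices yield small pivots, so fixed-point word length must grow to keep the quotients accurate.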
Abstract: This article presents an integrated method for detecting steganographic content embedded by new, unknown programs. The method is based on data mining and aggregated hypothesis testing. The article contains the theoretical basics used to deploy the proposed detection system and describes the improvements proposed for the basic system design. The main experimental results and implementation details are then collected and described, and finally example test results are presented.
Abstract: This work presents a neural network model for the clustering analysis of data based on Self-Organizing Maps (SOM). The model evolves during the training stage towards a hierarchical structure according to the input requirements. The hierarchical structure serves as a specialization tool that provides refinements of the classification process. The structure behaves like a single map with different resolutions depending on the region analyzed. The benefits and performance of the algorithm are discussed in application to the Iris dataset, a classical example for pattern recognition.
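The basic SOM training step underlying such a model can be sketched briefly: find the best-matching unit for each sample and pull it, and its grid neighbours, toward the sample under a shrinking neighbourhood. The growth/refinement mechanism that is the paper's contribution is not reproduced; grid size and schedules below are illustrative.

```python
import math, random

# Hedged sketch: the standard SOM update rule (flat map only). The
# hierarchical specialization described in the abstract builds on top
# of exactly this step and is not reproduced here.

def train_som(data, grid_w=3, grid_h=3, epochs=50, lr0=0.5, sigma0=1.5):
    random.seed(0)
    dim = len(data[0])
    weights = {(i, j): [random.random() for _ in range(dim)]
               for i in range(grid_w) for j in range(grid_h)}
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.1   # shrinking neighbourhood
        for x in data:
            # best matching unit: node with the closest weight vector
            bmu = min(weights, key=lambda u: sum((w - v) ** 2
                                                for w, v in zip(weights[u], x)))
            for u, w in weights.items():
                d2 = (u[0] - bmu[0]) ** 2 + (u[1] - bmu[1]) ** 2
                h = math.exp(-d2 / (2 * sigma ** 2))  # Gaussian neighbourhood
                for k in range(dim):
                    w[k] += lr * h * (x[k] - w[k])
    return weights

data = [[0.1, 0.1], [0.15, 0.1], [0.9, 0.9], [0.85, 0.95]]
som = train_som(data)
bmu_a = min(som, key=lambda u: sum((w - v) ** 2 for w, v in zip(som[u], [0.1, 0.1])))
bmu_b = min(som, key=lambda u: sum((w - v) ** 2 for w, v in zip(som[u], [0.9, 0.9])))
print(bmu_a, bmu_b)
```

After training, well-separated clusters land on different map units; a hierarchical variant would then spawn a finer child map under units whose quantisation error stays high.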
Abstract: This paper proposes a new technique for improving the efficiency of software testing, based on reducing the number of test cases that have to be run for any given software. The approach exploits the advantage of regression testing, where fewer test cases lessen the time consumed by testing as a whole. The technique also offers a means to generate test cases automatically; compared to a technique in the literature where the tester has no option but to generate test cases manually, the proposed technique provides a better option. For test-case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum, and constant variables). By doing this, the variable values are limited to a definite range, resulting in fewer possible test cases to process. The technique can also be applied to program loops and arrays.
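The reduction idea can be sketched as follows, under our own simplifying assumptions: instead of exercising every value in each variable's range, each variable is fixed to its minimum, maximum, or a representative constant, and test cases are generated as combinations of only those values.

```python
import itertools

# Hedged sketch of the reduction idea: fix each variable to its minimum,
# maximum, or a given constant, shrinking the test space from the full
# range product to a few representative combinations. The paper's
# algebraic conditions for choosing these values are not reproduced.

def reduced_test_cases(var_ranges, constants=None):
    """var_ranges: {name: (lo, hi)}; constants: {name: value}."""
    constants = constants or {}
    candidate_values = {}
    for name, (lo, hi) in var_ranges.items():
        vals = {lo, hi}                 # boundary values
        if name in constants:
            vals.add(constants[name])   # plus any fixed constant
        candidate_values[name] = sorted(vals)
    names = sorted(candidate_values)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(candidate_values[n] for n in names))]

cases = reduced_test_cases({"x": (0, 100), "y": (-5, 5)}, constants={"y": 0})
print(len(cases))  # 2 values for x times 3 values for y
```

Here the space shrinks from 101 x 11 = 1111 exhaustive combinations to 6 generated cases, which is the kind of saving that makes repeated regression runs cheap.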
Abstract: In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into low- and high-pass sequences. In the next step, either two DFTs are computed on both bands to obtain the full-band DFT, or one DFT on one of the two bands to obtain an approximate DFT. A combination network with correction factors is then applied after the DFTs. Another approach, proposed in 1997 [2], uses a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all the results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application in image transformation is given using two different types of wavelet filters.
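One way to realize the exact (full-band) relation the abstract describes can be sketched from standard decimation identities: split the input with the unnormalised Hadamard/Haar sum-difference pair, take two half-length DFTs, and recombine with factors (1 +- W_N^k)/2. The Haar case differs only by a 1/sqrt(2) scaling, consistent with the "constant factor" the paper identifies; the paper's actual combination network may be organised differently.

```python
import cmath

# Hedged sketch: exact subband-DFT reconstruction. low/high are the
# (unnormalised) Hadamard sum/difference bands; the combination factors
# (1 +- W_N^k)/2 follow from even/odd decimation identities.

def dft(x):
    """Direct O(N^2) DFT, used as the half-band transform and as reference."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

def subband_dft(x):
    n = len(x)
    low = [x[2 * i] + x[2 * i + 1] for i in range(n // 2)]   # sum (low-pass)
    high = [x[2 * i] - x[2 * i + 1] for i in range(n // 2)]  # difference (high-pass)
    ldft, hdft = dft(low), dft(high)
    out = []
    for k in range(n):
        w = cmath.exp(-2j * cmath.pi * k / n)                # twiddle W_N^k
        kk = k % (n // 2)
        out.append(ldft[kk] * (1 + w) / 2 + hdft[kk] * (1 - w) / 2)
    return out

x = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5]
err = max(abs(a - b) for a, b in zip(dft(x), subband_dft(x)))
print(err)
```

Dropping the high-band term gives the approximate DFT the abstract mentions; the adaptive variant would instead keep whichever band carries more energy.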