Abstract: Computer animation is a widely adopted technique for specifying the movement of objects on screen. The key issue in this technique is the specification of motion. Motion control methods are the techniques used to specify the actions of objects. This paper discusses the various types of motion control methods, with special focus on behavioral animation. A behavioral model is also proposed that takes into account the emotions and perceptions of an actor, which in turn generate its behavior. The model uses an expert system to generate tasks for the actors, specifying the actions to be performed in the virtual environment.
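An expert system of the kind the abstract describes maps perceptions and emotion levels to tasks through rules. The tiny rule base below is purely illustrative; the predicates, emotion names, and thresholds are invented for the sketch and are not taken from the paper.

```python
def select_action(perceptions, emotions):
    """Toy rule base standing in for an expert system: perceptions (a set
    of observed facts) and emotion levels (a dict of 0..1 intensities)
    map to a task for the actor. All rules here are invented examples."""
    if "obstacle" in perceptions:
        return "avoid"                       # perception rules fire first
    if emotions.get("fear", 0.0) > 0.7:
        return "flee"                        # strong emotion overrides
    if emotions.get("curiosity", 0.0) > 0.5:
        return "explore"
    return "idle"                            # default task
```

In a full behavioral model, the returned task would be expanded into concrete motions by a lower-level motion control method.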
Abstract: In this document we study in detail the performance of vertical handover in WLAN, WiMAX, and UMTS networks, then examine the vertical handoff procedure, and conclude with simulations highlighting the performance of handover in heterogeneous networks. The goal of vertical handover is to provide seamless, real-time access across heterogeneous networks. It allows a user to use several networks (such as WLAN, UMTS, and WiMAX) in parallel, with the system switching automatically to another base station without disconnecting, as if there were no interruption and with as little data loss as possible.
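The decision to switch networks is the core of any handover procedure. A common textbook policy, not necessarily the one simulated in the paper, is to hand over only when a candidate network's received signal strength beats the current one by a hysteresis margin, which prevents ping-pong switching:

```python
def handover_decision(current, candidates, hysteresis=5.0):
    """Signal-strength-with-hysteresis handover rule (illustrative policy).
    `candidates` maps network name -> received signal strength in dBm;
    the 5 dB hysteresis value is an arbitrary example."""
    best = max(candidates, key=candidates.get)
    if best != current and candidates[best] > candidates[current] + hysteresis:
        return best          # switch only on a clear improvement
    return current           # otherwise stay, avoiding ping-pong handovers
```

Real vertical-handover schemes typically combine signal strength with cost, bandwidth, and velocity criteria; this sketch shows only the hysteresis mechanism.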
Abstract: Space Vector Modulation (SVM) is an optimum Pulse Width Modulation (PWM) technique for inverters used in variable frequency drive applications. It is computationally intensive and hence limits the inverter switching frequency. An increase in switching frequency can be achieved using Neural Network (NN) based SVM implemented on application-specific chips. This paper proposes an NN-based SVM technique for a Voltage Source Inverter (VSI). The proposed network is independent of switching frequency. Different architectures are investigated, keeping the total number of neurons constant. The performance of the inverter is compared at various switching frequencies for the different architectures of NN-based SVM. From the results obtained, the network with minimum resources and appropriate word length, along with the bit precision required for this application, is identified. The network with 8-bit precision is implemented on the XCV400 IC and the results are presented. The performance of NN-based general-purpose SVM with higher bit precision is also discussed.
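The computational burden the abstract refers to comes from the per-cycle dwell-time calculation of conventional SVM, which is what the neural network is trained to approximate. A textbook two-level SVM sketch (not the authors' NN implementation) looks like this:

```python
import math

def svm_dwell_times(m, theta, Ts=1.0):
    """Textbook dwell-time computation for two-level space vector PWM.
    m: modulation index (0..1), theta: reference-vector angle in radians,
    Ts: switching period. Returns (sector, T1, T2, T0)."""
    theta = theta % (2 * math.pi)
    sector = int(theta // (math.pi / 3)) + 1        # sectors 1..6
    alpha = theta - (sector - 1) * math.pi / 3      # angle inside the sector
    T1 = Ts * m * math.sin(math.pi / 3 - alpha)     # first adjacent active vector
    T2 = Ts * m * math.sin(alpha)                   # second adjacent active vector
    T0 = Ts - T1 - T2                               # zero-vector time
    return sector, T1, T2, T0
```

Each PWM period requires sector identification and two trigonometric evaluations, which is why replacing this with a feed-forward NN evaluation can raise the achievable switching frequency.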
Abstract: Using bottom-up image processing algorithms to predict human eye fixations and extract the relevant embedded information in images has been widely applied in the design of active machine vision systems. Scene text is an important feature to extract, especially in vision-based mobile robot navigation, as many potential landmarks, such as nameplates and information signs, contain text. This paper proposes an edge-based text region extraction algorithm that is robust with respect to font sizes, styles, color/intensity, and orientations, as well as the effects of illumination, reflections, shadows, perspective distortion, and complex image backgrounds. The performance of the proposed algorithm is compared against a number of widely used text localization algorithms, and the results show that this method can quickly and effectively localize and extract text regions from real scenes and can be used in mobile robot navigation in indoor environments to detect text-based landmarks.
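Edge-based text detectors exploit the fact that text regions have unusually high edge density. The paper's actual pipeline is more elaborate; the crude sketch below only illustrates the core idea, with a simple gradient standing in for a proper edge detector and arbitrary thresholds:

```python
import numpy as np

def edge_text_candidates(gray, block=8, density_thresh=0.2):
    """Mark image blocks whose edge density exceeds a threshold, a crude
    stand-in for edge-based text localization. `gray` is a 2-D intensity
    array; both thresholds are illustrative guesses, not tuned values."""
    g = gray.astype(float)
    # simple first-difference gradient magnitude (stand-in for Sobel/Canny)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    edges = (gx + gy) > 50.0                       # arbitrary edge threshold
    h, w = edges.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            cell = edges[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mask[i, j] = cell.mean() > density_thresh
    return mask
```

A real system would follow this with connected-component grouping and geometric filtering to reject non-text high-edge regions such as foliage or fences.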
Abstract: We developed a GPS-based navigation device for the blind, with audio guidance in Thai. The device is composed of simple and inexpensive hardware components, and its user interface is quite simple. It determines optimal routes to various landmarks on our university campus by using heuristic search over the next waypoints. We tested the device and noted its limitations and possible extensions.
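The abstract does not name the heuristic search used; a common choice for routing over a waypoint graph is A* with straight-line distance as the heuristic. The sketch below is such a generic formulation, with illustrative data structures rather than the device's actual code:

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* search over a waypoint graph. `graph[n]` lists (neighbor, cost)
    pairs, `coords[n]` is an (x, y) position; the heuristic is straight-line
    distance to the goal, which never overestimates path length."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    frontier = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best_g.get(node, float("inf")) <= g:
            continue                               # already expanded cheaper
        best_g[node] = g
        for nxt, cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")
```

On a campus map, waypoints would carry GPS coordinates and edges would be walkable path segments; the returned waypoint sequence drives the audio guidance.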
Abstract: The running logs of a process hold valuable information about its executed activity behavior and the logic structure of the activities it generates. These informative logs can be extracted, analyzed, and utilized to improve the efficiency of the process's execution. One of the techniques used to accomplish such process improvement is process mining, and mining similar processes is one of its improvement tasks. Rather than directly mining similar processes using a single comparison coefficient or a complicated fitness function, this paper presents a simplified heuristic process mining algorithm with two similarity comparisons that relatively conform the activity logic sequences (traces) of the mined processes to those of a normalized (regularized) one. Relative process conformance determines which of the mined processes match the required activity sequences and relationships, so that the mined processes can then be applied, as necessary and sufficient, to process improvements. The first similarity is defined by the relationships in terms of the number of similar activity sequences existing in different processes; the second expresses the degree of similar (identical) activity sequences among the conforming processes. Since these two similarities are defined with respect to typical behavior (activity sequences) occurring in an entire process, common problems that often appear in other process conformance techniques, such as the inappropriateness of an absolute comparison and the inability to elicit intrinsic information, can be solved by the relative process comparison presented in this paper. To demonstrate the potential of the proposed algorithm, a numerical example is illustrated.
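One simple way to realize the two similarities described above is over directly-follows pairs of activities: a count of shared sequences, and that count normalized by the reference trace. This is an illustrative reading of the abstract, not the paper's exact definitions:

```python
def bigrams(trace):
    """Directly-follows pairs of a trace:
    ['a', 'b', 'c'] -> {('a', 'b'), ('b', 'c')}."""
    return set(zip(trace, trace[1:]))

def sequence_overlap(trace, reference):
    """First similarity (illustrative): the number of activity sequences
    (here, directly-follows pairs) a trace shares with the reference."""
    return len(bigrams(trace) & bigrams(reference))

def overlap_degree(trace, reference):
    """Second similarity (illustrative): shared pairs as a fraction of the
    reference's pairs, i.e. the degree to which the normalized process's
    behavior is covered."""
    ref = bigrams(reference)
    return len(bigrams(trace) & ref) / len(ref) if ref else 0.0
```

Because both measures are relative to the reference trace, two processes of very different sizes can still be compared on how well each covers the required behavior, which is the point the abstract makes against absolute comparisons.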
Abstract: This paper shows the necessity of increasing the security level of document management in the cadastral field by using specific graphical watermarks. Graphical watermarking increases security in cadastral content management; furthermore, any altered document can later have its originality validated by checking the graphic watermark. If, for any reason, a document is changed for counterfeiting, it is invalidated and identified as an illegal copy through a pixel-level check of the graphic watermark.
Abstract: It is expected that the ubiquitous era will come soon. A ubiquitous environment has peer-to-peer and nomadic features, which can be represented by peer-to-peer (P2P) systems and mobile ad-hoc networks (MANETs). The features of P2P systems and MANETs are similar, which makes implementing P2P systems in MANET environments appealing. It has been shown, however, that P2P systems designed for wired networks do not perform satisfactorily in mobile ad-hoc environments. This paper therefore proposes a method to improve P2P performance using cross-layer design and the goodness of a node as a peer. The proposed method uses a routing metric as well as a P2P metric to choose favorable peers to connect to. It also utilizes a proactive approach for distributing peer information. According to the simulation results, the proposed method provides a higher query success rate, shorter query response time, and less energy consumption by constructing an efficient overlay network.
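Cross-layer peer selection of the kind described combines a routing-layer metric (e.g. hop count, lower is better) with an application-layer P2P metric (e.g. query success rate, higher is better) into one goodness score. The weighting and normalization below are illustrative assumptions, not the paper's formula:

```python
def choose_peers(candidates, k=2, w_route=0.5, w_p2p=0.5):
    """Rank peers by a combined cross-layer score and return the best k.
    `candidates`: list of (peer_id, hop_count, query_success_rate).
    The 1/(1+hops) normalization and equal weights are example choices."""
    scored = [(w_route / (1 + hops) + w_p2p * success, pid)
              for pid, hops, success in candidates]
    scored.sort(reverse=True)                  # highest goodness first
    return [pid for _, pid in scored[:k]]
```

A node with both few hops and a good query history dominates; a nearby but unreliable peer or a reliable but distant peer scores lower, which is the trade-off cross-layer selection is meant to capture.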
Abstract: This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data, to address the inability of most relational databases to express annotations. These models require no structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), for querying annotation documents. AnQL is simple to understand and exploits the already-existing, widespread knowledge and skill set of SQL.
Abstract: Natural Language Understanding (NLU) systems will not be widely deployed unless they are technically mature and cost-effective to develop. Cost-effective development hinges on the availability of tools and techniques enabling the rapid production of NLU applications with minimal human resources. Further, these tools and techniques should allow quick, user-friendly development of applications and should be easy to upgrade in order to continuously follow evolving technologies and standards. This paper presents a visual tool for the structuring and editing of dialog forms, the key element driving conversation in NLU applications based on IBM technology. The main focus is on the basic component used to describe human-machine interactions of this kind, the Dialogue Manager. In essence, we describe a tool that enables the visual representation of the Dialogue Manager, mainly during the implementation phase.
Abstract: Wireless Capsule Endoscopy (WCE) has rapidly found wide application in the medical domain over the last ten years, thanks to its noninvasiveness for patients and its support for thorough inspection of a patient's entire digestive system, including the small intestine. However, one of the main barriers to an efficient clinical inspection procedure is the large amount of effort required of clinicians to inspect the huge volume of data collected during an examination, i.e., over 55,000 video frames. In this paper, we propose a method to compute meaningful motion changes of the WCE by analyzing the obtained video frames based on regional optical flow estimations. The computed motion vectors are used to remove duplicate video frames caused by the WCE's imaging nature, such as repetitive forward-backward motions from peristaltic movements. The motion vectors are derived by calculating directional component vectors in four local regions. Our experiments are performed on the small intestine area, which is of main interest to clinical experts using WCEs, and our experimental results show significant frame reductions compared with a simple frame-to-frame similarity-based image reduction method.
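The overall frame-reduction loop can be sketched as follows. Here mean absolute intensity change per quadrant is a crude stand-in for the paper's regional optical-flow vectors, and the motion threshold is an arbitrary example value:

```python
import numpy as np

def regional_motion(prev, curr):
    """Mean absolute intensity change in four quadrants of the frame,
    a simplified stand-in for per-region optical-flow magnitudes."""
    h, w = prev.shape
    regions = [(slice(0, h // 2), slice(0, w // 2)),
               (slice(0, h // 2), slice(w // 2, w)),
               (slice(h // 2, h), slice(0, w // 2)),
               (slice(h // 2, h), slice(w // 2, w))]
    d = np.abs(curr.astype(float) - prev.astype(float))
    return [float(d[r, c].mean()) for r, c in regions]

def keep_frames(frames, thresh=1.0):
    """Keep the first frame and every frame whose motion versus the last
    *kept* frame exceeds `thresh` in some region; in-between frames are
    treated as duplicates. The threshold is illustrative."""
    kept = [0]
    for i in range(1, len(frames)):
        if max(regional_motion(frames[kept[-1]], frames[i])) > thresh:
            kept.append(i)
    return kept
```

Comparing against the last kept frame, rather than the immediately preceding one, is what lets slow peristaltic drift accumulate until it finally counts as real motion.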
Abstract: The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding their sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information indicating whether a wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grayscale images: Lena, Barbara, and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature, and it is shown to be very successful in terms of PSNR.
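The separation step itself, splitting coefficients into a sign map and a magnitude map and measuring the online sign probability, can be sketched directly. This shows only the decomposition and probability estimate, not the authors' entropy coder:

```python
import numpy as np

def split_sign_magnitude(coeffs):
    """Separate wavelet coefficients into a sign map and a magnitude map,
    and report the online probability of a negative sign among the
    significant (nonzero) coefficients; zeros carry no sign information."""
    signs = np.sign(coeffs)
    mags = np.abs(coeffs)
    nz = signs != 0
    p_neg = float((signs[nz] < 0).mean()) if nz.any() else 0.5
    return signs, mags, p_neg
```

When the measured probability deviates from 0.5, as the paper reports for the first few bit planes, an arithmetic coder fed with this online estimate spends fewer bits on the sign map than the usual one-bit-per-sign assumption.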
Abstract: In this paper, we propose a method to design a model-following adaptive controller for linear/nonlinear plants. Radial basis function neural networks (RBF-NNs), which are known for their stable learning capability and fast training, are used to identify the linear/nonlinear plants. Simulation results show that the proposed method is effective in controlling both linear and nonlinear plants with disturbance at the plant input.
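The fast training that makes RBF-NNs attractive for plant identification comes from their structure: with fixed Gaussian centers, only the linear output weights need fitting, which reduces to least squares. A toy identification sketch (fitting sin(x) stands in for an unknown plant map; all sizes and widths are illustrative):

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian RBF activations for scalar inputs `x` against `centers`."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Toy plant identification: approximate y = sin(x) over [-3, 3].
x = np.linspace(-3, 3, 60)
centers = np.linspace(-3, 3, 10)          # fixed, evenly spaced centers
Phi = rbf_features(x, centers)            # 60 x 10 design matrix
w, *_ = np.linalg.lstsq(Phi, np.sin(x), rcond=None)   # linear-in-weights fit
pred = Phi @ w
err = float(np.max(np.abs(pred - np.sin(x))))
```

Because the optimization is convex in the output weights, training neither diverges nor gets trapped in local minima, which is what "stable learning capability" refers to in this context.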
Abstract: Induction motors are being used in ever greater numbers throughout a wide variety of industrial and commercial applications because they provide many benefits as reliable devices for converting electrical energy into mechanical motion. In some applications it is desirable to control the speed of the induction motor. Because of the physics of the induction motor, the preferred method of controlling its speed is to vary the frequency of the AC voltage driving the motor. In recent years, with microcontrollers incorporated into appliances, it has become possible to use them to generate the variable-frequency AC voltage that controls the motor speed.
This study investigates a microcontroller-based variable-frequency power inverter. The microcontroller provides the variable-frequency pulse width modulation (PWM) signal that controls the voltage applied to the gate drives, producing the required PWM frequency with fewer harmonics at the output of the power inverter.
The fully controlled bridge voltage source inverter has been implemented with insulated gate bipolar transistor (IGBT) power devices, and the PWM technique has been employed in this inverter to supply the motor with AC voltage.
The proposed drive system for three- and single-phase power inverters is simulated using Matlab/Simulink. The Matlab simulation results for the proposed system were obtained with different SPWM settings. From the results, a stable variable-frequency inverter over a wide range has been obtained, and good agreement has been found between the simulation and the hardware of a microcontroller-based single-phase inverter.
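The SPWM signal mentioned above is classically generated by comparing a sinusoidal reference with a triangular carrier: the gate signal is high whenever the reference exceeds the carrier. The sketch below shows this sine-triangle comparison in software; the frequencies and amplitude are illustrative, not the study's parameters:

```python
import math

def spwm_samples(f_ref, f_carrier, amplitude, n=1000):
    """Sine-triangle SPWM over one reference period (normalized time).
    Output sample is 1 when the sinusoidal reference exceeds the
    triangular carrier (carrier swings between -1 and 1)."""
    out = []
    for k in range(n):
        t = k / n
        ref = amplitude * math.sin(2 * math.pi * f_ref * t)
        phase = (f_carrier * t) % 1.0                     # carrier phase 0..1
        carrier = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
        out.append(1 if ref > carrier else 0)
    return out

# Average duty over one reference period; a symmetric sine gives ~0.5.
duty = sum(spwm_samples(1, 21, 0.8)) / 1000
```

On a microcontroller this comparison is done by a hardware PWM timer whose compare register is updated with sampled sine values; varying f_ref varies the motor frequency while the carrier sets the switching frequency.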
Abstract: The evaluation and measurement of human body dimensions is the task of physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. Its main goal is to optimize facial feature points by establishing mathematical relationships among facial features and to use the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes, and eyebrows in the facial images, all desired facial feature points are extracted accurately. According to the proposed method, sixteen Euclidean distances, both vertical and horizontal, are calculated from the eighteen selected facial feature points. The mathematical relationships among the horizontal and vertical distances are established. Moreover, it is also discovered that the facial feature distances follow a constant ratio during age progression: the distances between the specified feature points increase with a person's age from childhood onward, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationships, four independent feature distances involving eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with the Support Vector Machine (SVM) Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. The experimental results show that the proposed system is effective and accurate for age classification.
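The ratio-invariance claim can be checked numerically: if all feature distances grow by the same scale factor, any ratio of two distances is unchanged. The coordinates below are hypothetical feature points invented for the sketch, not measurements from the paper:

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical feature-point coordinates (illustrative only):
eye_l, eye_r, nose, mouth = (30, 40), (70, 40), (50, 60), (50, 80)
d1 = dist(eye_l, eye_r)            # a horizontal feature distance
d2 = dist(nose, mouth)             # a vertical feature distance
ratio = d1 / d2

# Uniform growth, standing in for age progression scaling all distances:
s = 1.3
scaled = [(s * x, s * y) for x, y in (eye_l, eye_r, nose, mouth)]
ratio_scaled = dist(scaled[0], scaled[1]) / dist(scaled[2], scaled[3])
```

Scale-invariant ratios like this make useful classifier features precisely because they factor out overall face size, leaving only proportion changes to discriminate age groups.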
Abstract: Effective evaluation of software development effort is an important issue during project planning. This study provides a model to predict development effort based on software size estimated with function points. We generalize the average amount of effort spent on each phase of development, and give estimates for the effort used in software building, testing, and implementation. Finally, this paper finds a strong correlation between software defects and software size. As the size of software constantly increases, quality remains a matter of major concern.
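A model of this shape converts a function-point count into total effort and splits it across phases. The productivity rate and phase percentages below are placeholders for illustration; the paper derives its own fitted values:

```python
# Illustrative phase split (placeholder shares, not the paper's values).
PHASE_SHARES = {"building": 0.5, "testing": 0.3, "implementation": 0.2}

def estimate_effort(function_points, hours_per_fp=8.0):
    """Convert a function-point size estimate into per-phase effort hours.
    `hours_per_fp` is an assumed productivity rate for the sketch."""
    total = function_points * hours_per_fp
    return {phase: total * share for phase, share in PHASE_SHARES.items()}
```

The same structure supports the defect correlation the abstract mentions: once size drives effort, a size-to-defect-density curve can be layered on top to budget testing effort explicitly.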
Abstract: This paper demonstrates how the soft systems methodology can be used to improve the delivery of a module in data warehousing for fourth-year information technology students. Graduates in information technology need not only academic skills but also good practical skills to meet the skills requirements of the information technology industry. In developing and improving current data warehousing education modules, one has to find a balance in meeting the expectations of various role players, such as the students themselves, industry, and academia. The soft systems methodology, developed by Peter Checkland, provides a methodology for facilitating problem understanding from different world views. In this paper it is demonstrated how the soft systems methodology can be used to plan the improvement of data warehousing education for fourth-year information technology students.
Abstract: In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme so that a tampered region of the color image can be recovered with high quality while the verification result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information. The feature information for recovery is computed by the thresholding technique. In the verification process, we propose a dual-option parity check method to prove the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.
Abstract: Identity verification of authentic persons by their multiview faces is a real-world problem in machine vision. Multiview faces present difficulties due to their non-linear representation in the feature space. This paper illustrates the applicability of the generalization of LDA, in the form of canonical covariates, to face recognition with multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality, and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often occurs due to illumination, pose, and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature spaces to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain the reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
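A Gabor filter bank of the kind used above consists of Gaussian-windowed sinusoidal gratings at several orientations and frequencies. The sketch below builds the real part of such kernels; the sizes, wavelength, and eight orientations are typical example parameters, not necessarily those of the paper:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a cosine
    grating at orientation `theta` with spatial frequency 1/wavelength."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# Example bank: one scale, eight orientations (illustrative parameters).
bank = [gabor_kernel(15, 6.0, k * np.pi / 8, 3.0) for k in range(8)]
```

Convolving a face image with every kernel in the bank and stacking the responses is what produces the high-dimensional "Gabor face" that canonical covariates then compress.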
Abstract: This paper presents new STAKCERT KDD processes for worm detection. The enhancement introduced in the data preprocessing resulted in the formation of a new STAKCERT model for worm detection. In this paper we explain in detail how all the steps involved in the STAKCERT KDD processes are applied within the STAKCERT model for worm detection. Based on the experiment conducted, the STAKCERT model yielded a 98.13% accuracy rate for worm detection by integrating the STAKCERT KDD processes.