On-line Recognition of Isolated Gestures of Flight Deck Officers (FDO)

The paper presents an on-line recognition machine (RM) for continuous/isolated, dynamic and static gestures that arise in Flight Deck Officer (FDO) training. RM is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics. The proposed recognition algorithm exploits temporal and spatial characteristics of gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode. Accumulated consistency in the sequence of predictions provides a similarity measurement (Score) between the input data and the templates. The algorithm provides an intuitive mechanism for automatic detection of the start/end frames of continuous gestures. In the present paper, we consider isolated gestures. The performance of RM is evaluated using four datasets: artificial (WTTest), hand motion (Yang), and FDO (tracker and vision-based). RM achieves results comparable to, and in agreement with, other on-line and off-line algorithms such as the hidden Markov model (HMM) and dynamic time warping (DTW). The proposed algorithm has the additional advantage of providing timely feedback for training purposes.
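
As a rough illustration of the idea of predicting template indices on-line and scoring accumulated consistency, the sketch below uses a greedy monotone predictor in place of the paper's dynamic-programming/Markovian machinery; the function names, the search window, and the scoring rule are our own assumptions, not the published algorithm.

```python
import numpy as np

def online_score(template, stream, window=3):
    """Greedy on-line sketch: for each incoming frame, predict its index in
    the template within a small forward search window (monotone, as in DP
    alignment) and accumulate a consistency-based similarity score."""
    idx, score = 0, 0.0
    for frame in stream:                          # frames arrive incrementally
        lo, hi = idx, min(len(template), idx + window + 1)
        dists = [np.linalg.norm(frame - template[j]) for j in range(lo, hi)]
        j = lo + int(np.argmin(dists))            # predicted template index
        score += 1.0 / (1.0 + dists[j - lo])      # close, consistent matches score high
        idx = j                                   # indices may only move forward
    return score / len(template)

# Toy check: a template should score highly against a noisy copy of itself.
rng = np.random.default_rng(0)
template = rng.normal(size=(20, 3))
print(online_score(template, template + 0.05 * rng.normal(size=(20, 3))))
```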

Pakistan Sign Language Recognition Using Statistical Template Matching

Sign language recognition has been a topic of research since the first data glove was developed. Many researchers have attempted to recognize sign language through various techniques; however, none of them have ventured into the area of Pakistan Sign Language (PSL). The Boltay Haath project aims at recognizing PSL gestures using Statistical Template Matching. The primary input device is the DataGlove5 developed by 5DT. Alternative approaches use camera-based recognition, which, being sensitive to environmental changes, is not always a good choice. This paper explains the use of Statistical Template Matching for gesture recognition in Boltay Haath. The system recognizes one-handed alphabet signs from PSL.
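
A minimal sketch of what statistical template matching over glove readings could look like: each sign is summarized by the mean and standard deviation of its training samples, and a reading is assigned to the nearest template in standard-deviation-normalized distance. The sensor dimensionality and all names are hypothetical, not taken from Boltay Haath.

```python
import numpy as np

def build_template(samples):
    # samples: (n, d) array of flex-sensor readings for one sign
    return samples.mean(axis=0), samples.std(axis=0) + 1e-6

def classify(reading, templates):
    # nearest statistical template under variance-normalized squared distance
    def dist(name):
        mu, sigma = templates[name]
        return np.sum(((reading - mu) / sigma) ** 2)
    return min(templates, key=dist)

rng = np.random.default_rng(0)
signs = {"alif": rng.normal(0.2, 0.05, size=(30, 5)),
         "bay": rng.normal(0.8, 0.05, size=(30, 5))}
templates = {name: build_template(s) for name, s in signs.items()}
print(classify(np.full(5, 0.78), templates))   # -> 'bay'
```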

Trajectory Guided Recognition of Hand Gestures having only Global Motions

One very interesting field of research in Pattern Recognition that has gained much attention in recent times is Gesture Recognition. In this paper, we consider a form of dynamic hand gestures that are characterized by total movement of the hand (arm) in space. For these types of gestures, the shape of the hand (palm) during gesturing does not bear any significance. In our work, we propose a model-based method for tracking hand motion in space, thereby estimating the hand motion trajectory. We employ the dynamic time warping (DTW) algorithm for time alignment and normalization of the spatio-temporal variations that exist among samples belonging to the same gesture class. During training, one template trajectory and one prototype feature vector are generated for every gesture class. The features used in our work include both static and dynamic motion trajectory features. Recognition is accomplished in two stages. In the first stage, all unlikely gesture classes are eliminated by comparing the input gesture trajectory to all the template trajectories. In the next stage, the feature vector extracted from the input gesture is compared to all the class prototype feature vectors using a distance classifier. Experimental results demonstrate that our proposed trajectory estimator and classifier are suitable for a Human Computer Interaction (HCI) platform.
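
For reference, a plain implementation of the DTW distance used for time alignment of trajectories; this is the textbook O(nm) recurrence, not the authors' code, and the (x, y) trajectory representation is an assumption.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two trajectories,
    each an (n, 2) array of (x, y) points."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In the two-stage scheme described above, a gesture class would survive the first stage only if `dtw(input_trajectory, class_template)` falls below some elimination threshold.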

Eye Gesture Analysis with Head Movement for Advanced Driver Assistance Systems

Road traffic accidents are a major cause of death worldwide. In an attempt to reduce accidents, some research efforts have focused on creating Advanced Driver Assistance Systems (ADAS) able to detect vehicle, driver and environmental conditions and to use this information to identify cues for potential accidents. This paper presents continued work on a novel Non-intrusive Intelligent Driver Assistance and Safety System (Ni-DASS) for assessing the driver's point of regard within the vehicle. It uses an on-board CCD camera to observe the driver's face. A template matching approach is used to compare the driver's eye-gaze pattern with a set of eye-gesture templates of the driver looking at different focal points within the vehicle. The windscreen is divided into cells, and comparison of the driver's eye-gaze pattern with templates of the driver's eyes looking at each cell is used to determine the driver's point of regard on the windscreen. Results indicate that the proposed technique could be useful in situations where low-resolution estimates of the driver's point of regard are adequate, for instance to allow ADAS systems to alert the driver if he/she has failed to observe a hazard.
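
A minimal sketch of the cell-selection step, assuming each windscreen cell stores one eye-gesture template and the cell whose template is nearest (smallest sum of squared differences) wins; the data layout is our assumption.

```python
import numpy as np

def point_of_regard(gaze_pattern, cell_templates):
    """cell_templates maps (row, col) -> template array of the driver's
    eyes looking at that windscreen cell; returns the best-matching cell."""
    return min(cell_templates,
               key=lambda c: np.sum((gaze_pattern - cell_templates[c]) ** 2))
```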

Treatment or Re-Victimizing the Victims

Severe symptoms, such as dissociation, depersonalization, self-mutilation, and suicidal ideations and gestures, are the main reasons for a person to be diagnosed with Borderline Personality Disorder (BPD) and admitted to an inpatient psychiatric hospital. However, these symptoms are also indicators of a severe traumatic history, as indicated by the extensive research on the topic. Unfortunately, patients with such a clinical presentation are often treated repeatedly only for their symptomatic behavior, while the main cause of their suffering, the trauma itself, is usually left unaddressed therapeutically. All of the highly structured, replicable, and manualized treatments lack recognition of the uniqueness of the person and fail to respect his/her right to experience and react in an idiosyncratic manner. Thus the communicative and adaptive meaning of such symptomatic behavior is missed. Only its pathological side is recognized and subjected to correction and stigmatization, and the message that the person is damaged goods in need of fixing is conveyed once again. This time, however, the message is even more convincing for the victim, because it is sent by mental health providers, who have the credibility to make such a judgment. The result is a revolving door of very expensive hospitalizations for only a temporary and patchy fix. In this way the patients, once victims of abuse and hardship, are left invalidated, and their re-victimization is perpetuated in their search for understanding and help. Keywords: borderline personality disorder (BPD), complex PTSD, integrative treatment of trauma, re-victimization of trauma victims.

Intention Recognition using a Graph Representation

Human-friendly interaction is the key function of a human-centered system. Over the years, developing convenient interaction through intention recognition has received much attention. Intention recognition processes multimodal inputs including speech, face images, and body gestures. In this paper, we suggest a novel approach to intention recognition using a graph representation called the Intention Graph. A concept of valid intention is proposed as the target of intention recognition. Our approach has two phases: a goal recognition phase and an intention recognition phase. In the goal recognition phase, we generate an action graph based on the observed actions, and then the candidate goals and their plans are recognized. In the intention recognition phase, the intention is recognized from the relevant goals and the user profile. We show that the algorithm has polynomial time complexity. The intention graph is applied to a simple briefcase domain to test our model.
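
The goal recognition phase can be sketched as matching observed actions against stored plans. The subsequence criterion below is an illustrative stand-in for the paper's action-graph construction (all names are ours); like the paper's algorithm, it runs in polynomial time.

```python
def candidate_goals(observed, plans):
    """A goal is a candidate if the observed actions occur, in order,
    within its plan (a subsequence check)."""
    def is_subsequence(obs, plan):
        it = iter(plan)
        return all(a in it for a in obs)   # 'in' consumes the iterator
    return [g for g, plan in plans.items() if is_subsequence(observed, plan)]

plans = {
    "pack_briefcase": ["open", "insert_docs", "close", "lock"],
    "unpack_briefcase": ["unlock", "open", "remove_docs"],
}
print(candidate_goals(["open", "close"], plans))   # -> ['pack_briefcase']
```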

FSM-based Recognition of Dynamic Hand Gestures via Gesture Summarization Using Key Video Object Planes

The use of the human hand as a natural interface for human-computer interaction (HCI) serves as the motivation for research in hand gesture recognition. Vision-based hand gesture recognition involves visual analysis of hand shape, position and/or movement. In this paper, we use the concept of object-based video abstraction for segmenting the frames into video object planes (VOPs), as used in MPEG-4, with each VOP corresponding to one semantically meaningful hand position. Next, the key VOPs are selected on the basis of the amount of change in hand shape: for a given key frame in the sequence, the next key frame is the one in which the hand has changed its shape significantly. Thus, an entire video clip is transformed into a small number of representative frames that are sufficient to represent a gesture sequence. Subsequently, we model a particular gesture as a sequence of key frames, each bearing information about its duration; these constitute a finite state machine (FSM). For recognition, the states of the incoming gesture sequence are matched against the states of all the FSMs contained in the database of the gesture vocabulary. The core idea of our proposed representation is that the redundant frames of a gesture video sequence bear only the temporal information of a gesture and hence are discarded for computational efficiency. The experimental results obtained demonstrate the effectiveness of our proposed scheme for key frame extraction, subsequent gesture summarization, and finally gesture recognition.
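
A toy version of the FSM matching stage, assuming each state holds a shape descriptor for one key VOP; the tolerance value and descriptor format are placeholders, and the per-state duration constraint is only noted in a comment.

```python
import numpy as np

def matches_fsm(key_frames, fsm_states, tol=0.5):
    """Advance through the FSM: a key frame moves the machine to the next
    state when it matches that state's shape descriptor within tol (a full
    implementation would also check the state's duration constraint).
    The gesture is accepted if the final state is reached."""
    s = 0
    for frame in key_frames:
        if s < len(fsm_states) and np.linalg.norm(frame - fsm_states[s]) <= tol:
            s += 1
    return s == len(fsm_states)
```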

Gesture Recognition by Data Fusion of Time-of-Flight and Color Cameras

In recent years, numerous Human-Computer Interaction applications have exploited the capabilities of Time-of-Flight cameras to achieve ever more comfortable and precise interaction. In particular, gesture recognition is one of the most active fields. This work presents a new method for interacting with a virtual object in 3D space. Our approach is based on the fusion of depth data, supplied by a ToF camera, with color information, supplied by an HD webcam. The hand detection procedure does not require any learning phase and is able to concurrently manage gestures of two hands. The system is robust to the presence of other objects or people in the scene, thanks to the use of a Kalman filter for maintaining the tracking of the hands.
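
For the tracking component, a standard constant-velocity Kalman filter over a hand centroid looks like the sketch below; the noise covariances are tuning assumptions, not the paper's values.

```python
import numpy as np

# State = [x, y, vx, vy]; one predict/update cycle per frame.
F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe position only
Q, R = 1e-2 * np.eye(4), 1.0 * np.eye(2)            # process / measurement noise

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (z - H @ x)                         # update with measured centroid z
    P = (np.eye(4) - K @ H) @ P
    return x, P
```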

Development of Online Islamic Medication Expert System (OIMES)

This paper presents an overview of the design and implementation of an online rule-based expert system for Islamic medication. This Online Islamic Medication Expert System (OIMES) focuses on physical illnesses only. The knowledge base of this expert system exhaustively contains the types of illness together with their related cures or treatments/therapies, obtained exclusively from the Quran and Hadith. Extensive research and study were conducted to ensure that the expert system is able to provide the most suitable treatment with reference to the relevant verses cited in the Quran or Hadith. These verses come together with their related 'actions' (bodily actions/gestures or other acts) to be performed by the patient to treat a particular illness/sickness. These verses and the instructions for the 'actions' are displayed unambiguously on the computer screen. The online platform provides the advantage that a patient can get treatment practically anytime and anywhere, as long as a computer and an Internet connection are available. The patient does not need to make an appointment to see an expert for a therapy.
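
A minimal sketch of how a rule-based consultation might be structured; the knowledge-base entries below are empty placeholders only, since the actual content comes exclusively from the Quran and Hadith.

```python
# Illustrative structure only; real entries would hold the cited verses
# and the prescribed 'actions' from the knowledge base.
knowledge_base = {
    "<illness name>": {"reference": "<verse citation>",
                       "actions": ["<prescribed action>"]},
}

def consult(illness):
    rule = knowledge_base.get(illness.lower())
    if rule is None:
        return "No matching treatment found in the knowledge base."
    return f"Reference: {rule['reference']}; actions: {', '.join(rule['actions'])}"
```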

Hand Gesture Recognition: Sign to Voice System (S2V)

Hand gestures are one of the typical methods used in sign language for non-verbal communication. They are most commonly used by people who have hearing or speech problems to communicate among themselves or with others. Various sign language systems have been developed by manufacturers around the globe, but they are neither flexible nor cost-effective for the end users. This paper presents a system prototype that is able to automatically recognize sign language, to help people communicate more effectively with the hearing- or speech-impaired. The Sign to Voice system prototype, S2V, was developed using a feed-forward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured from a video camera and used to train the neural network for classification purposes. The experimental results show that the neural network achieved satisfactory results for sign-to-voice translation.
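
To make the classification step concrete, here is a tiny feed-forward forward pass in numpy; the layer sizes, number of sign classes, and random weights are placeholders standing in for whatever the trained S2V network actually learned.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 10)), np.zeros(10)   # 16 input features -> 10 hidden
W2, b2 = rng.normal(size=(10, 5)), np.zeros(5)     # 10 hidden -> 5 sign classes

def predict(x):
    h = np.tanh(x @ W1 + b1)            # hidden layer
    return int(np.argmax(h @ W2 + b2))  # index of the predicted sign class

print(predict(rng.normal(size=16)))
```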

Real-Time Hand Tracking and Gesture Recognition System Using Neural Networks

This paper introduces a hand gesture recognition system that recognizes gestures in real time in unconstrained environments. Efforts should be made to adapt computers to our natural means of communication: speech and body language. A simple and fast algorithm using orientation histograms is developed. It recognizes a subset of MAL static hand gestures. The pattern recognition system uses a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures. The final system is a Perceptron implementation in MATLAB. This paper includes experiments on 33 hand postures and discusses the results. Experiments show that the system achieves an average recognition rate of 90% and is suitable for real-time applications.
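
A possible rendering of the orientation-histogram feature extraction (the final system described is a MATLAB Perceptron; this numpy version is only a sketch, with an assumed bin count).

```python
import numpy as np

def orientation_histogram(img, bins=36):
    """Magnitude-weighted histogram of local edge orientations, a feature
    vector that is fairly robust to illumination changes."""
    gy, gx = np.gradient(img.astype(float))
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)   # ang in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)                 # normalized feature vector
```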

2-Dimensional Finger Gesture Based Mobile Robot Control Using Touch Screen

The purpose of this study was to present a reliable means of human-computer interfacing based on finger gestures made in two dimensions, which can be interpreted and adequately used to control a remote robot's movement. The gestures are captured and interpreted using an algorithm based on trigonometric functions, which calculates the angular displacement from one point of touch to the next as the user's finger moves within a time interval, thereby allowing pattern spotting of the captured gesture. This paper presents the design and implementation of such a gesture-based user interface utilizing the aforementioned algorithm. These techniques are then used to control a remote mobile robot's movement. A resistive touch screen was selected as the gesture sensor, and a programmed microcontroller is used to interpret the gestures.
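
The angular-displacement computation reduces to atan2 over successive touch samples, as in this sketch (the function name is ours).

```python
import math

def angular_displacements(points):
    """Angle of movement (degrees) between successive touch points; the
    resulting angle sequence is what pattern spotting matches against
    stored gesture signatures."""
    return [math.degrees(math.atan2(y1 - y0, x1 - x0))
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

print(angular_displacements([(0, 0), (1, 0), (1, 1)]))   # -> [0.0, 90.0]
```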

Usability Evaluation Framework for Computer Vision Based Interfaces

Human-computer interaction has progressed considerably from the traditional modes of interaction. Vision-based interfaces are a revolutionary technology, allowing interaction through human actions and gestures. Researchers have developed numerous accurate techniques; however, with the exception of a few, these techniques have not been evaluated using standard HCI methods. In this paper, we present a comprehensive framework to address this issue. Our evaluation of a computer vision application shows that, in addition to accuracy, it is vital to address human factors.

Hand Gesture Recognition Based on Combined Features Extraction

Hand gesture recognition is an active area of research in the vision community, mainly for the purposes of sign language recognition and Human Computer Interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the automatic segmentation and preprocessing stage, color and a 3D depth map are used to detect the hands, whose trajectory is then tracked using the mean-shift algorithm and a Kalman filter. In the feature extraction stage, 3D combined features of location, orientation, and velocity with respect to Cartesian coordinates are used. Then, k-means clustering is employed to obtain the HMM codewords. In the final stage, classification, the Baum-Welch algorithm is used to fully train the HMM parameters, and the gestures of alphabets and numbers are recognized using a Left-Right Banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize hand gestures with a 98.33% recognition rate.
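
For the decoding step, a compact log-domain Viterbi for a discrete HMM is sketched below; in a Left-Right Banded model the transition matrix A is additionally constrained so each state can only stay put or move a bounded number of states forward. Parameter names are generic.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path for a discrete HMM.
    obs: codeword indices (from k-means), pi: (S,) initial probabilities,
    A: (S, S) transition matrix, B: (S, V) emission matrix."""
    logA, logB = np.log(A + 1e-12), np.log(B + 1e-12)
    d = np.log(pi + 1e-12) + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        step = d[:, None] + logA              # score of each (from, to) move
        back.append(step.argmax(axis=0))      # best predecessor per state
        d = step.max(axis=0) + logB[:, o]
    path = [int(d.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1], float(d.max())
```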

Hand Gesture Recognition using Blob Detection for Immersive Projection Display System

We developed a vision interface framework for an immersive projection system (CAVE) in the virtual reality research field, using hand gesture recognition with computer vision techniques. A background image is subtracted from the current image frame from a webcam, and the color space of the image is converted into HSV. We then mask skin regions using a skin color range threshold and apply a noise reduction operation. Blobs are made from the resulting image, and gestures are recognized using these blobs. Using our hand gesture recognition, we could implement an effective interface for CAVE without bothersome devices.
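
The per-frame pipeline can be sketched with OpenCV as below; the skin-color range and difference threshold are common ballpark values, not the ones used in this work.

```python
import cv2
import numpy as np

def skin_blobs(frame_bgr, background_bgr):
    """Background subtraction -> HSV skin masking -> noise reduction ->
    blob (connected component) extraction for one webcam frame."""
    diff = cv2.absdiff(frame_bgr, background_bgr)          # background subtraction
    moving = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY) > 20   # keep changed pixels
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # rough skin range
    mask = ((skin > 0) & moving).astype(np.uint8) * 255
    mask = cv2.medianBlur(mask, 5)                         # noise reduction
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # skip label 0 (background); return each blob's centroid and area
    return [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
            for i in range(1, n)]
```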

Virtual Gesture Screen System Based on 3D Visual Information and Multi-Layer Perceptron

Active research is underway on virtual touch screens that complement the physical limitations of conventional touch screens. This paper discusses a virtual touch screen that uses a multi-layer perceptron to recognize and control three-dimensional (3D) depth information from a time-of-flight (TOF) camera. The system extracts an object's area from the input image and compares it with the trajectory of the object, which is learned in advance, to recognize gestures. The system enables the manipulation of content in virtual space by utilizing human actions.
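
One way the object's area might be extracted from the TOF depth image is to keep pixels within a small band of the nearest valid depth, as sketched here; the band width is an assumed tolerance, not a published parameter.

```python
import numpy as np

def nearest_object_mask(depth, band=0.15):
    """Segment the object (e.g., a hand) closest to the TOF camera: keep
    pixels within 'band' meters of the smallest valid depth reading."""
    valid = depth > 0                       # zero = no return from the sensor
    near = depth[valid].min()               # assumes at least one valid pixel
    return valid & (depth <= near + band)
```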