Abstract: Computational recognition of sign languages aims to enable greater social and digital inclusion of deaf people through computer interpretation of their language. This article presents a model for recognizing two of the global parameters of sign languages: hand configurations and hand movements. Hand motion is captured with infrared technology, and the hand joints are reconstructed in a virtual three-dimensional space. A Multilayer Perceptron (MLP) neural network is used to classify hand configurations, and Dynamic Time Warping (DTW) recognizes hand motion. In addition to the recognition method, we provide a dataset of hand configurations and motion captures built with the help of fluent sign language professionals. Although this technology can be used to translate any sign from any sign dictionary, Brazilian Sign Language (Libras) was used as a case study. Finally, the model presented in this paper achieved a recognition rate of 80.4%.
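The DTW matching step described above can be sketched minimally. This is an illustrative implementation of the standard DTW recurrence, not code from the cited work; the one-dimensional sequences and the absolute-difference local distance are assumptions for simplicity.

```python
# Minimal Dynamic Time Warping (DTW) sketch for comparing two motion
# trajectories; in practice each element could be a joint position and
# the local distance a Euclidean distance between frames.

def dtw_distance(a, b):
    """Return the DTW alignment cost between two numeric sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])             # local distance
            cost[i][j] = d + min(cost[i - 1][j],     # insertion
                                 cost[i][j - 1],     # deletion
                                 cost[i - 1][j - 1]) # match
    return cost[n][m]
```

A captured motion would then be labeled with the sign whose reference trajectory minimizes this distance; note that DTW's warping makes the cost invariant to local differences in signing speed.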
Abstract: In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify 11 feature vectors obtained from the sensors on the glove into one of the 27 classes: the ASL alphabet signs and a predefined gesture for space. Three types of features are used: bending, via six bend sensors; orientation in three dimensions, via accelerometers; and contact at vital points, via contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems, and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant, and portable.
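The classification step over the glove's feature vectors can be illustrated with a nearest-class-mean rule, which is the simplest linear Bayes classifier (equal priors, shared identity covariance). This is a toy sketch under those assumptions, not the cited system's actual classifier; the labels and feature values are made up.

```python
# Nearest-class-mean classifier over fixed-length sensor feature vectors
# (a linear Bayes rule under equal priors and identity covariance).

def train_means(samples):
    """samples: {label: [feature_vector, ...]} -> {label: mean_vector}."""
    means = {}
    for label, vecs in samples.items():
        n = len(vecs)
        means[label] = [sum(col) / n for col in zip(*vecs)]
    return means

def classify(means, x):
    """Return the label whose class mean is closest to x."""
    def sqdist(m):
        return sum((a - b) ** 2 for a, b in zip(x, m))
    return min(means, key=lambda label: sqdist(means[label]))
```

In the glove setting, each vector would hold the 11 sensor readings (bend, orientation, contact) and each label one of the ASL signs.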
Abstract: An adaptive Chinese hand-talking system is presented in this paper. By analyzing three data-collection strategies for new users, an adaptation framework comprising supervised and unsupervised adaptation methods is proposed. For supervised adaptation, affinity propagation (AP) is used to extract exemplar subsets, and enhanced maximum a posteriori / vector field smoothing (eMAP/VFS) is proposed to pool the adaptation data among different models. For unsupervised adaptation, polynomial segment models (PSMs) are used to help hidden Markov models (HMMs) accurately label the unlabeled data; the "labeled" data together with signer-independent models are then input to the MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both improve the recognition rate.
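The core idea behind MAP adaptation of a model parameter can be shown for a single Gaussian mean: the adapted value interpolates between the signer-independent prior mean and the new signer's sample mean, weighted by the amount of adaptation data. This is a generic textbook sketch, not the eMAP/VFS method of the cited paper; the prior weight `tau` and the data are illustrative.

```python
# MAP adaptation of a scalar Gaussian mean: with little data the result
# stays near the signer-independent prior; with much data it approaches
# the new signer's sample mean.

def map_adapt_mean(prior_mean, adaptation_data, tau=10.0):
    """tau is the prior weight (pseudo-count of prior observations)."""
    n = len(adaptation_data)
    sample_mean = sum(adaptation_data) / n
    return (tau * prior_mean + n * sample_mean) / (tau + n)
```

This interpolation behavior is why the framework above works with both small labeled sets (supervised) and large automatically labeled sets (unsupervised).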
Abstract: Sign language recognition has been a topic of research since the first data glove was developed. Many researchers have attempted to recognize sign language through various techniques. However, none of them have ventured into the area of Pakistan Sign Language (PSL). The Boltay Haath project aims at recognizing PSL gestures using statistical template matching. The primary input device is the DataGlove5 developed by 5DT. Alternative approaches use camera-based recognition which, being sensitive to environmental changes, is not always a good choice. This paper explains the use of statistical template matching for gesture recognition in Boltay Haath. The system recognizes one-handed alphabet signs from PSL.
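One common form of statistical template matching for glove input can be sketched as follows: each sign's template stores a per-sensor mean and standard deviation, and a new reading is matched to the template with the smallest variance-normalized distance. This is an assumed formulation for illustration, not necessarily the exact statistics used in Boltay Haath; the sensor counts and values are hypothetical.

```python
# Statistical template matching sketch: build (mean, std) templates per
# sign from training readings, then match by summed squared z-scores.

def build_template(readings):
    """readings: list of equal-length sensor vectors -> (means, stds)."""
    n = len(readings)
    cols = list(zip(*readings))
    means = [sum(col) / n for col in cols]
    stds = [max((sum((v - m) ** 2 for v in col) / n) ** 0.5, 1e-6)
            for col, m in zip(cols, means)]
    return means, stds

def match(templates, x):
    """templates: {sign: (means, stds)}; return the best-matching sign."""
    def score(t):
        means, stds = t
        return sum(((v - m) / s) ** 2 for v, m, s in zip(x, means, stds))
    return min(templates, key=lambda s: score(templates[s]))
```

Normalizing by the per-sensor standard deviation keeps noisy sensors from dominating the distance, which matters for bend sensors with differing dynamic ranges.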
Abstract: Sign language is used by deaf and hard of hearing people for communication. Automatic sign language recognition is a challenging research area, since sign language is often the only means of communication for deaf people. Sign language comprises different components of visual actions made by the signer using the hands, the face, and the torso to convey meaning. To use different aspects of signs, we combine different groups of features extracted from image frames recorded directly by a stationary camera. We combine the features at two levels by employing three techniques. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, or by concatenating feature groups over time and using LDA to choose the most discriminant elements. At the model level, a late fusion of differently trained models can be carried out by a log-linear model combination. In this paper, we investigate these three combination techniques in an automatic sign language recognition system and show that the recognition rate can be significantly improved.
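The feature-level ("early") combination described above can be sketched as scaling each feature group by a weight and concatenating them into one vector. This is an illustrative reading of that step, not code from the cited system; the group names and weights are invented.

```python
# Early feature combination: per-group weighting followed by
# concatenation into a single flat feature vector.

def combine_features(groups, weights):
    """groups: {name: feature list}; weights: {name: scalar} -> flat list.

    Groups are concatenated in sorted-name order so the layout of the
    combined vector is deterministic across frames.
    """
    combined = []
    for name in sorted(groups):
        w = weights.get(name, 1.0)  # unweighted groups default to 1.0
        combined.extend(w * v for v in groups[name])
    return combined
```

The combined vector would then feed the recognizer directly, or (in the second technique) windows of such vectors would be stacked over time and reduced with LDA.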
Abstract: Hand gesture is an active area of research in the vision community, mainly for the purposes of sign language recognition and human-computer interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the automatic segmentation and preprocessing stage, color and a 3D depth map are used to detect the hands, whose trajectory is then tracked using the Mean-shift algorithm and a Kalman filter. In the feature extraction stage, 3D combined features of location, orientation, and velocity with respect to Cartesian coordinate systems are used, and k-means clustering is employed to derive the HMM codewords. In the final classification stage, the Baum-Welch algorithm is used to fully train the HMM parameters, and the alphabet and number gestures are recognized using a Left-Right Banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize hand gestures with a 98.33% recognition rate.
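The recognition step in such a system (one HMM per gesture, scored by Viterbi over the k-means codeword sequence) can be sketched compactly. This is a generic log-domain Viterbi scorer, not the cited implementation; the banded topology is expressed simply by setting off-band transition log-probabilities to negative infinity, and the toy parameters in the test are invented.

```python
# Viterbi scoring of a discrete observation (codeword) sequence against
# an HMM; for a left-right banded model, log_trans allows only self-loops
# and single-step forward transitions (all other entries are -inf).

def viterbi_log_prob(obs, log_init, log_trans, log_emit):
    """Return the log-probability of the best state path for obs.

    log_init[s]     : log P(start in state s)
    log_trans[s][t] : log P(s -> t)
    log_emit[s][o]  : log P(emit codeword o in state s)
    """
    n_states = len(log_init)
    # Initialize with the first observation.
    v = [log_init[s] + log_emit[s][obs[0]] for s in range(n_states)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        v = [max(v[s] + log_trans[s][t] for s in range(n_states))
             + log_emit[t][o]
             for t in range(n_states)]
    return max(v)
```

Classification then reduces to running this scorer against each gesture's trained HMM and picking the model with the highest log-probability.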