Abstract: Hand gesture recognition is an active area of research in the vision
community, mainly for the purposes of sign language recognition and
human-computer interaction. In this paper, we propose a system to
recognize alphabet characters (A-Z) and numbers (0-9) in real time
from stereo color image sequences using Hidden Markov Models
(HMMs). Our system is based on three main stages: automatic segmentation
and preprocessing of the hand regions, feature extraction,
and classification. In the segmentation and preprocessing stage,
color and a 3D depth map are used to detect the hands, whose
trajectory is then tracked using the mean-shift algorithm
and a Kalman filter. In the feature extraction stage, combined 3D features
of location, orientation, and velocity with respect to Cartesian
coordinates are used; k-means clustering is then employed to build
the HMM codebook. In the final stage, classification, the Baum-Welch
algorithm is used to fully train the HMM parameters, and the
alphabet and number gestures are recognized using a Left-Right
Banded model in conjunction with the Viterbi algorithm. Experimental
results demonstrate that our system can successfully recognize hand
gestures with a 98.33% recognition rate.
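As an illustration of the feature-quantization step described above, the sketch below (pure Python; all names and values are hypothetical, not taken from the paper) clusters feature vectors with a minimal k-means and then maps a feature sequence to the discrete codewords an HMM would consume:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: cluster feature vectors, return k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[j].append(p)
        # recompute centroids (keep the old one if a cluster is empty)
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(x) / len(c) for x in zip(*c))
    return centroids

def quantize(sequence, centroids):
    """Map each feature vector to the index of its nearest centroid."""
    return [min(range(len(centroids)), key=lambda i: dist2(v, centroids[i]))
            for v in sequence]
```

In the paper the features are 3D (location, orientation, velocity); the 2D toy vectors below stand in only for brevity.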
Abstract: A word recognition architecture based on a network
of neural associative memories and hidden Markov models has been
developed. The input stream, composed of subword units such as
word-internal triphones consisting of diphones and triphones, is provided
to the network of neural associative memories by the hidden Markov
models. The word recognition network derives words from this input
stream. The architecture is able to handle ambiguities at the
subword-unit level and to add new words to the vocabulary during
operation. It is implemented to perform the word recognition task
in a language processing system for understanding simple command
sentences such as "bot show apple".
Abstract: The motivation for adaptive modulation and coding is
to adjust the method of transmission so that maximum
efficiency is achieved over the link at all times. The receiver
estimates the channel quality and reports it back to the transmitter,
which then maps the reported quality into a link mode. This
mapping, however, is not one-to-one. In this paper we
investigate a method for selecting the proper modulation scheme
that can dynamically adapt the mapping of the Signal-to-Noise
Ratio (SNR) into a link mode. By incorporating the errors observed
in the received data, it enables the use of the right modulation
scheme irrespective of changes in the channel conditions.
We propose a Markov model for this method, and use it to derive
the average switching thresholds and the average throughput.
We show that the average throughput of this method exceeds that
of the conventional threshold method.
Abstract: Gesture recognition, which extracts meaningful gestures
from continuous hand motion, is a challenging task. In this paper, we propose an automatic system that recognizes both isolated gestures
and meaningful gestures within continuous hand motion for the Arabic numbers 0 to 9 in real time, based on Hidden Markov Models (HMMs). To handle isolated gestures, HMMs with
Ergodic, Left-Right (LR), and Left-Right Banded (LRB) topologies are applied to the discrete feature vectors extracted from stereo
color image sequences. These topologies are evaluated with different
numbers of states, ranging from 3 to 10. A new scheme is developed to recognize meaningful gestures within continuous motion, based on zero-codeword detection
with static velocity motion. The LRB topology in conjunction with the Baum-Welch (BW) algorithm for
training, and the forward algorithm with the Viterbi path for testing, yields the best performance. Experimental results show that the proposed system can successfully recognize isolated and meaningful gestures, achieving average recognition rates of 98.6% and 94.29%, respectively.
Abstract: This paper presents a speaker adaptation
method called WMLLR, which is based on maximum likelihood linear
regression (MLLR). In MLLR, a linear regression-based transform
that adapts the HMM mean vectors is calculated to maximize the
likelihood of the adaptation data. In this paper, the prior knowledge of the
initial model is incorporated into the adaptation. A series of
speaker adaptation experiments is carried out on a database of 30 famous
city names to investigate the efficiency of the proposed method.
Experimental results show that the WMLLR method outperforms the
conventional MLLR method, especially when only a few utterances
from a new speaker are available for adaptation.
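For reference, the core MLLR update adapts each Gaussian mean through a shared linear transform, mu' = A*mu + b, where (A, b) is estimated by maximum likelihood from the adaptation data. The sketch below only applies a given transform; the estimation step, and the prior weighting that distinguishes WMLLR, are not reproduced here:

```python
def mllr_adapt_mean(A, b, mu):
    """Apply an MLLR transform to one HMM Gaussian mean: mu' = A @ mu + b.
    (A, b) are assumed already estimated from adaptation data."""
    return [sum(A[i][j] * mu[j] for j in range(len(mu))) + b[i]
            for i in range(len(A))]
```

Because (A, b) is shared across many Gaussians, even a few adaptation utterances move every mean in the model, which is why MLLR-style methods work with sparse data.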
Abstract: In this paper we present an efficient system for
speaker-independent speech recognition based on a neural network
approach. The proposed architecture comprises two phases: a
preprocessing phase, which consists of segmental normalization and
feature extraction, and a classification phase, which uses a neural
network based on nonparametric density estimation, namely the
general regression neural network (GRNN). The performance
of the proposed model is compared to that of similar
recognition systems, which we also implemented, based on the
Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN),
and the well-known discrete Hidden Markov Model (HMM-VQ).
Experimental results obtained with Arabic digits show that the
use of nonparametric density estimation with an appropriate
smoothing factor (spread) improves the generalization power of the
neural network. The word error rate (WER) is reduced significantly
over the baseline HMM method, and GRNN computation is a successful
alternative to the other neural networks and the DHMM.
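A GRNN is a one-pass nonparametric estimator: its output is a Gaussian-kernel-weighted average of the training targets, with the smoothing factor (spread) setting the kernel width. A minimal sketch of this standard formulation (toy 1D data, hypothetical values, not from the paper):

```python
import math

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

def grnn_predict(x, train_x, train_y, spread=1.0):
    """GRNN output: kernel-weighted average of the training targets.
    The spread controls how far each training pattern's influence
    reaches; tuning it trades off smoothing against locality."""
    weights = [math.exp(-dist2(x, xi) / (2.0 * spread ** 2))
               for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

Because every training pattern is kept as a kernel center, GRNN training is a single pass over the data, unlike the iterative training of an MLP or an HMM.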