Abstract: The use of the human hand as a natural interface for human-computer interaction (HCI) serves as the motivation for research in hand gesture recognition. Vision-based hand gesture recognition involves visual analysis of hand shape, position and/or movement. In this paper, we use the concept of object-based video abstraction to segment the frames into video object planes (VOPs), as used in MPEG-4, with each VOP corresponding to one semantically meaningful hand position. Next, the key VOPs are selected on the basis of the amount of change in hand shape: for a given key frame in the sequence, the next key frame is the one in which the hand changes its shape significantly. Thus, an entire video clip is transformed into a small number of representative frames that are sufficient to represent a gesture sequence. Subsequently, we model a particular gesture as a sequence of key frames, each bearing information about its duration; these key frames constitute the states of a finite state machine (FSM). For recognition, the states of the incoming gesture sequence are matched against the states of all the FSMs contained in the database of the gesture vocabulary. The core idea of our proposed representation is that the redundant frames of a gesture video sequence bear only the temporal information of the gesture and are hence discarded for computational efficiency. Experimental results demonstrate the effectiveness of the proposed scheme for key frame extraction, subsequent gesture summarization and, finally, gesture recognition.
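As an illustration of the key-frame selection step described above, the following is a minimal sketch that assumes each frame's hand shape has already been summarized as a fixed-length descriptor vector; the descriptor, the Euclidean distance measure and the threshold value are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch of threshold-based key frame selection. The shape
# descriptor, distance measure and threshold are assumptions made for
# illustration, not the paper's actual shape-change criterion.
import numpy as np

def select_key_frames(shape_descriptors, threshold=0.3):
    """Keep a frame only when the hand shape has changed significantly
    relative to the last selected key frame."""
    if len(shape_descriptors) == 0:
        return []
    key_indices = [0]                 # the first frame is always a key frame
    last_key = shape_descriptors[0]
    for i, desc in enumerate(shape_descriptors[1:], start=1):
        # Euclidean distance as a stand-in for the shape-change measure
        if np.linalg.norm(desc - last_key) > threshold:
            key_indices.append(i)
            last_key = desc
    return key_indices

# Usage: 100 frames, each summarized by a 16-dimensional descriptor
frames = np.random.rand(100, 16)
print(select_key_frames(frames, threshold=1.0))
```

The non-key frames between two selected indices carry only the gesture's timing, which is why they can be discarded while each state of the resulting FSM retains a duration attribute.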
Abstract: This paper addresses the problem of recognizing and interpreting the behavior of human workers in industrial environments, with the aim of integrating humans into software-controlled manufacturing environments. We propose a generic concept from which solutions for task-related manual production applications can be derived. The concept is thus versatile, providing flexible components and remaining largely independent of any specific problem or application. We instantiate the concept in a spot welding scenario in which the behavior of a human worker is interpreted while performing a welding task with a hand welding gun. We acquire signals from inertial sensors, video cameras and triggers, and recognize atomic actions using pose data from a marker-based video tracking system and movement data from the inertial sensors. The recognized atomic actions are then analyzed at a higher evaluation level by a finite state machine.
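The higher-level evaluation stage might be sketched as a finite state machine consuming the stream of recognized atomic actions. The states, action labels and transitions below are hypothetical illustrations for a spot welding task, not the paper's actual model.

```python
# Sketch of FSM-based evaluation of atomic actions. State names, action
# labels and transitions are hypothetical examples for a spot welding task.

# state -> {recognized atomic action -> next state}
TRANSITIONS = {
    "idle":       {"grab_gun": "gun_held"},
    "gun_held":   {"move_to_spot": "positioned"},
    "positioned": {"trigger_weld": "welding"},
    "welding":    {"release_trigger": "gun_held"},
}

def evaluate(actions, start="idle"):
    """Replay a stream of atomic actions through the FSM and flag
    any action that does not fit the expected task flow."""
    state = start
    for action in actions:
        next_state = TRANSITIONS.get(state, {}).get(action)
        if next_state is None:
            print(f"unexpected action {action!r} in state {state!r}")
            continue                  # ignore out-of-sequence actions
        state = next_state
    return state

print(evaluate(["grab_gun", "move_to_spot", "trigger_weld", "release_trigger"]))
```

Separating low-level action recognition (from inertial and tracking data) from this symbolic evaluation layer is what keeps the concept reusable across different manual production tasks.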
Abstract: This article is concerned with the translation of Quranic verses into Braille symbols using a Visual Basic program. The system has the ability to translate the special vibrations of the Quran; this study is limited to the (Noon + Sukoon) vibrations. It builds on an existing translation system that combines a finite state machine with left and right context matching and a set of translation rules. This allows Arabic text to be translated into Braille symbols after detecting the vibrations in the Quran verses.
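For illustration, the left/right context matching idea could be sketched as below. The character table, the special output cell and the encoding of the Noon + Sukoon rule are placeholders for illustration only, not the paper's actual rule set or a faithful Arabic Braille mapping.

```python
# Sketch of right-context matching for text-to-Braille translation.
# The base table and the SPECIAL cell are placeholder symbols, not the
# system's real translation rules.

TABLE = {"ن": "⠝", "ب": "⠃"}   # illustrative base character mappings
SPECIAL = "⠿"                   # placeholder cell for the Noon + Sukoon case

def translate(text):
    out, i = [], 0
    while i < len(text):
        ch = text[i]
        right = text[i + 1] if i + 1 < len(text) else ""
        if ch == "ن" and right == "ْ":   # right-context rule: Noon + Sukoon
            out.append(SPECIAL)
            i += 2
            continue
        out.append(TABLE.get(ch, "?"))   # default character-level mapping
        i += 1
    return "".join(out)

print(translate("نْب"))
```

In the full system, such context-sensitive rules would be driven by the finite state machine rather than hard-coded conditionals, allowing new vibration rules to be added without changing the translation loop.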