Abstract: Eating a meal is among the Activities of Daily Living, but it takes considerable time and effort for people with physical or functional limitations. Dedicated technologies are cumbersome and not portable, while general-purpose assistive robots such as wheelchair-based manipulators are too hard to control for elaborate continuous motions like eating. Eating with such devices had not previously been automated, because no description of a feeding motion for uncontrolled environments existed. In this paper, we introduce a feeding mode for assistive manipulators, including a mathematical description of trajectories for motions that are difficult to perform manually, such as gathering and scooping food at a desired pace. We implement these trajectories in a sequence of movements for a semi-automated feeding mode that can be controlled with a very simple 3-button interface, giving the user control over the feeding pace. Finally, we demonstrate the feeding mode with a JACO robotic arm and compare the eating speed, measured in bites per minute, of three eating methods: a healthy person eating unaided, a person with upper-limb limitations using JACO with manual control, and a person with such limitations using JACO with the feeding mode. We found that the feeding mode allows eating about 5 bites per minute, which should be sufficient to finish a meal in under 30 minutes.
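A scooping motion of the kind described can be expressed as a time-indexed parametric trajectory. The sketch below is a minimal illustration of that idea, not the paper's actual equations: the function name, the sinusoidal dip profile, and the depth, length, and pitch values are all assumptions.

```python
import math

def scooping_trajectory(t, depth=0.03, length=0.08):
    """Hypothetical sketch of a scooping path.

    Maps t in [0, 1] to a spoon-tip position (x, z) in metres and a
    pitch angle in radians. The real trajectory description in the
    paper is not reproduced here.
    """
    x = length * t                        # forward travel along the plate
    z = -depth * math.sin(math.pi * t)    # dip into the food and back out
    pitch = math.radians(60.0) * (1 - t)  # spoon levels out as it exits
    return x, z, pitch
```

Sampling the function at a fixed rate yields waypoints that a manipulator controller could follow, and slowing or speeding the progression of `t` adjusts the feeding pace.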
Abstract: In a single case study, we show how a conversation analysis (CA) approach can shed light on the sequential unfolding of human-robot interaction. Relying on video data, we show that CA allows us to investigate the respective turn-taking systems of humans and a NAO robot in their dialogical dynamics, pointing out relevant differences. Our fine-grained video analysis reveals breakdowns, and how they are overcome, when humans and a NAO robot engage in multimodally uttered multi-party communication during a sports guessing game. Our findings suggest that interdisciplinary work opens up the opportunity to gain new insights into the challenging issues of human-robot communication and to provide resources for developing mechanisms that enable complex human-robot interaction (HRI).
Abstract: This paper addresses the development of an intelligent vision system for human-robot interaction. The two novel contributions of this paper are 1) detection of human faces and 2) localization of the eyes. The method is based on visual attributes of human skin color and geometrical analysis of the face skeleton. This paper introduces a spatial-domain filtering method named the 'Fuzzily skewed filter', which incorporates fuzzy rules for deciding the gray level of each pixel from its neighborhood and takes advantage of both the median and averaging filters. The effectiveness of the method has been demonstrated by issuing eye-tracking commands to an entertainment robot named 'AIBO'.
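To illustrate the general idea of combining median and averaging filters, the sketch below blends the two per pixel using a fuzzy weight derived from the neighborhood's spread. This is a hypothetical reconstruction: the function name, the membership function, and the window handling are assumptions, not the paper's actual 'Fuzzily skewed filter' rules.

```python
import numpy as np

def hybrid_median_mean_filter(image, size=3):
    """Illustrative hybrid filter (assumed design, not the paper's).

    For each pixel, compute the median and mean of its size x size
    neighborhood, then blend them: the noisier the neighborhood
    (larger mean absolute deviation), the more weight the median gets.
    """
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + size, j:j + size].ravel()
            med = np.median(win)
            mean = win.mean()
            spread = np.abs(win - mean).mean()       # local noise estimate
            mu = spread / (spread + 1.0)             # fuzzy "noisy" membership
            out[i, j] = mu * med + (1.0 - mu) * mean # skew toward the median
    return out
```

On smooth regions the mean dominates (preserving gradual shading), while on outlier-heavy neighborhoods the median dominates (suppressing impulse noise), which captures the stated advantage of combining the two filters.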