Abstract: One of the main aims of current social robotic research
is to improve the robots’ abilities to interact with humans. In order
to achieve an interaction similar to that among humans, robots
should be able to communicate in an intuitive and natural way
and appropriately interpret human affects during social interactions.
Similarly to how humans are able to recognize emotions in other
humans, machines are capable of extracting information from the
various ways humans convey emotions—including facial expression,
speech, gesture or text—and using this information to improve
human-computer interaction. This can be described as Affective
Computing, an interdisciplinary field that expands into otherwise
unrelated fields like psychology and cognitive science and involves
the research and development of systems that can recognize and
interpret human affects. To leverage these emotional capabilities
by embedding them in humanoid robots is the foundation of
the concept Affective Robots, which has the objective of making
robots capable of sensing the user’s current mood and personality
traits and adapt their behavior in the most appropriate manner
based on that. In this paper, the emotion recognition capabilities
of the humanoid robot Pepper are experimentally explored, based
on facial expressions of the so-called basic emotions, and compared
against other state-of-the-art approaches using both expression
databases compiled in academic environments and real subjects
showing posed expressions as well as spontaneous emotional
reactions. The experiments’ results show
that the detection accuracy amongst the evaluated approaches differs
substantially. The introduced experiments offer a general structure
and approach for conducting such experimental evaluations. The
paper further suggests that the most meaningful results are obtained
by conducting experiments with real subjects expressing the emotions
as spontaneous reactions.
Abstract: This paper describes preliminary work aimed at
establishing therapeutic support for autistic teenagers using three
NAO humanoid robots shared by subjects with ASD (Autism Spectrum
Disorder). The studied population had successfully completed a
first-year program and was then observed during a second-year
program using the robots. This paper focuses on the content and the effects
of the second-year program. The approach is based on a master
puppet concept: the subjects program the robots, and use them as
an extension for communication. Twenty sessions were organized,
alternating ten preparatory sessions and ten robotics programming
sessions. During the preparatory sessions, the subjects write a story
to be played by the robots. During the robot programming sessions,
the subjects program the motions to be realized to make the robot
tell the story. The program was concluded by a public performance.
The experiment involved five teenagers with ASD, aged 12 to 15, who
had all attended the first-year robotics training. As a result, progress
in the voluntary and organized communication skills of all five subjects
was observed, leading to improvements in social organization,
focus, voluntary communication, programming, reading and writing
abilities. The changes observed in the subjects’ general behavior
took place within a short time and could be seen from one robotics
session to the next. The approach allowed the subjects to
draw the limits of their bodies with respect to the environment, and
therefore helped them confront the world with less anxiety.
Abstract: This paper focuses on a robotic telepresence system built
around a humanoid robot operated with a controller-less Wizard of Oz
technique. The proposed solution makes it possible to quickly start
acting as an operator with short, if any, initial training.
Abstract: Imitation learning is considered an effective way of teaching humanoid robots, and action recognition is the key step in imitation learning. In this paper, an online algorithm for recognizing
parametric actions with object context is presented. Objects are key instruments in understanding an action when there is uncertainty:
ambiguities arising in similar actions can be resolved with object context. We classify actions according to the changes they make to
the object space. Actions that produce the same state change in the object movement space are classified as belonging to the same class. This allows us to define several classes of actions, where the members of
each class are connected with a semantic interpretation.
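The class-formation idea summarized in this abstract (actions that produce the same state change in object space belong to the same class) can be sketched as follows. This is a minimal illustration only; the state labels, action names, and function are assumptions for the sketch, not taken from the paper:

```python
from collections import defaultdict

# Hypothetical observations: each action is paired with the
# (before, after) object-space state change it produces.
observations = [
    ("pick_up_cup",  ("on_table", "in_hand")),
    ("grasp_bottle", ("on_table", "in_hand")),
    ("place_cup",    ("in_hand", "on_table")),
    ("pour",         ("full", "empty")),
]

def classify_by_state_change(obs):
    """Group actions into classes: same object-state change -> same class."""
    classes = defaultdict(list)
    for name, change in obs:
        classes[change].append(name)
    return dict(classes)

classes = classify_by_state_change(observations)
# Each class (e.g. all actions mapping "on_table" to "in_hand") can then
# be tied to one semantic interpretation, such as "acquire object".
```

Under this grouping, "pick_up_cup" and "grasp_bottle" fall into one class because they induce the same state change, which is how ambiguities between visually similar actions can be resolved by object context.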
Abstract: The recent development of humanoid robots has led robot designers to imagine a great variety of anthropomorphic forms for human-like machines. Which form is the best? We try to answer this question from a double meaning of anthropomorphism: a positive anthropomorphism, corresponding to the realization of an effectively anthropomorphic object, and a negative one, corresponding to our natural tendency, in certain circumstances, to give human attributes to non-human beings. We postulate that any humanoid robot is concerned by both kinds of anthropomorphism. We propose to use gestalt theory and Heider’s balance theory in order to analyze how negative anthropomorphism can influence our perception of human-like robots. From our theoretical approach we conclude that an “even shape”, as defined by gestalt theory, is not a sufficient condition for a good integration of future humanoid robots into a human community. Aesthetic perception of the robot cannot be separated from social perception: a humanoid robot, whatever the efforts made to improve its appearance, could be rejected if it is devoted to a task with too high affective implications.