Abstract: OPEN_EmoRec_II is an open multimodal corpus with
experimentally induced emotions. In the first half of the experiment,
emotions were induced with standardized picture material and in the
second half during a human-computer interaction (HCI), realized
with a Wizard-of-Oz design. The induced emotions are based on the
dimensional theory of emotions (valence, arousal and dominance).
These emotional sequences, recorded as multimodal data (facial
reactions, speech, audio and physiological reactions) in a
naturalistic HCI environment, can be used to improve classification
methods on a multimodal level.
This database is the result of an HCI experiment in which a total of
30 subjects agreed to the publication of their data, including the
video material, for research purposes*. The now available open
corpus contains the following sensory signals: video, audio,
physiology (SCL, respiration, BVP, EMG Corrugator supercilii, EMG
Zygomaticus major) and annotations of facial reactions.
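
As a minimal sketch of how one might represent a trial from such a corpus for multimodal classification, the Python snippet below pairs dimensional emotion labels (valence, arousal, dominance) with several physiological channels and derives toy summary features. All class, field and channel names here are hypothetical assumptions for illustration; they do not reflect the actual file layout or annotation format of OPEN_EmoRec_II.

```python
from dataclasses import dataclass, field
from typing import Dict
import numpy as np

@dataclass
class EmoRecTrial:
    """One experimentally induced emotional sequence (hypothetical layout)."""
    subject_id: int
    # Dimensional emotion labels, e.g. on a 1-9 self-assessment scale.
    valence: float
    arousal: float
    dominance: float
    # Physiological channels sampled at a common rate, e.g. SCL,
    # respiration, BVP, EMG Corrugator supercilii, EMG Zygomaticus major.
    physiology: Dict[str, np.ndarray] = field(default_factory=dict)

def feature_vector(trial: EmoRecTrial) -> np.ndarray:
    """Toy per-channel summary features (mean, std) for multimodal fusion."""
    feats = []
    for name in sorted(trial.physiology):
        sig = trial.physiology[name]
        feats.extend([float(np.mean(sig)), float(np.std(sig))])
    return np.asarray(feats)

# Example usage with synthetic signals standing in for real recordings:
trial = EmoRecTrial(
    subject_id=1, valence=6.5, arousal=3.0, dominance=5.0,
    physiology={
        "scl": np.random.randn(2560),             # skin conductance level
        "bvp": np.random.randn(2560),             # blood volume pulse
        "emg_corrugator": np.random.randn(2560),  # EMG Corrugator supercilii
    },
)
print(feature_vector(trial).shape)  # (6,): two features per channel
```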
Abstract: This paper deals with automatic sentence modality
recognition in French. In this work, only prosodic features are
considered. Sentences are classified into three modalities:
declarative, interrogative and exclamatory. This information will be
used to animate a talking head for deaf and hearing-impaired
children. We first conduct a statistical study of a real
radio corpus in order to assess the feasibility of the automatic
modeling of sentence types. Then, we test two sets of prosodic
features as well as two different classifiers and their combination. We
further focus our attention on question recognition, as this modality
is certainly the most important one for the target application.
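
To illustrate the kind of prosody-only modality classification described above, the sketch below derives simple F0-contour statistics (a rising final contour being a classic, language-dependent cue for French questions) and trains a logistic-regression classifier on synthetic contours. The features, data and classifier here are assumptions for illustration only; they are not the feature sets or classifiers evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def prosodic_features(f0: np.ndarray) -> np.ndarray:
    """Toy prosodic features from a voiced-frame F0 contour (Hz)."""
    f0 = f0[f0 > 0]                       # keep voiced frames only
    final = f0[-max(1, len(f0) // 5):]    # last ~20% of the contour
    return np.array([
        np.mean(f0), np.std(f0),          # global register and range
        final[-1] - final[0],             # final slope proxy (rise/fall)
        np.max(f0) - np.min(f0),          # overall F0 excursion
    ])

# Synthetic training data standing in for a labeled radio corpus:
# each modality gets a characteristic final F0 trend (hypothetical values).
rng = np.random.default_rng(0)
X, y = [], []
for label, final_trend in [("declarative", -30.0),
                           ("interrogative", +60.0),
                           ("exclamatory", +10.0)]:
    for _ in range(50):
        base = 120 + 30 * rng.random()
        contour = base + np.linspace(0, final_trend, 100) + rng.normal(0, 5, 100)
        X.append(prosodic_features(contour))
        y.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), y)

# A strongly rising contour should most likely be labeled interrogative.
print(clf.predict([prosodic_features(120 + np.linspace(0, 55, 100))]))
```

A second classifier (e.g. a decision tree) could be trained on the same features and combined with this one by majority vote or score averaging, mirroring the classifier-combination idea mentioned in the abstract.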