Abstract: People with speech disorders may rely on augmentative
and alternative communication (AAC) technologies to help them
communicate. However, the limitations of the current AAC
technologies act as barriers to the optimal use of these technologies in
daily communication settings. The ability to communicate effectively
relies on a number of factors that are not limited to the intelligibility
of the spoken words. In fact, non-verbal cues play a critical role in
the correct comprehension of messages and having to rely on verbal
communication only, as is the case with current AAC technology,
may contribute to problems in communication. This is especially true
for people’s ability to express their feelings and emotions, which are
communicated in large part through non-verbal cues. This paper
focuses on understanding more about the non-verbal communication
ability of people with dysarthria, with the overarching aim of this
research being to improve AAC technology by allowing people
with dysarthria to better communicate emotions. Preliminary survey
results are presented that give an understanding of how people with
dysarthria convey emotions, which emotions are important for
them to get across, which emotions are difficult for them to convey,
and whether there is a difference in communicating emotions when
speaking to familiar versus unfamiliar people.
Abstract: The number of school-aged children with autism in Indonesia has been increasing each year. Autism is a developmental disorder that can be diagnosed in childhood, and one of its symptoms is a lack of communication skills. Music therapy is known as an effective treatment for children with autism: musical elements and structures create a good space for children with autism to express their feelings and communicate their thoughts. School-aged children are expected to communicate non-verbally very well, but children with autism experience difficulties communicating non-verbally. The aim of this research is to analyze the significance of music therapy as a treatment for improving the non-verbal communication of children with autism. This research informs teachers and parents on how music can be used as a medium to communicate with children with autism. A qualitative method is used, and the results are described with a microanalysis technique: they are measured across the whole experiment, the hours of every week, the minutes of every session, and the seconds of every moment. The sample comprises four school-aged children with autism, aged six to 11 years. The research was conducted over four months, comprising observation, interviews, literature research, and a direct experiment. The results demonstrate that music therapy can be used effectively as a non-verbal communication tool for children with autism, as reflected in changes in body gesture, eye contact, and facial expression.
Abstract: People express emotions through different modalities.
Integration of verbal and non-verbal communication channels creates
a system in which the message is easier to understand. Expanding
the focus to several expression forms can facilitate research on
emotion recognition as well as human-machine interaction. In this
article, the authors present a Polish emotional database composed of
three modalities: facial expressions, body movement and gestures,
and speech. The corpus contains recordings made under studio
conditions, acted out by 16 professional actors (8 male and 8 female).
The data is labeled with six basic emotion categories, following
Ekman's taxonomy. To check the quality of the performances,
all recordings were evaluated by experts and volunteers. The database
is available to the academic community and may be useful in studies
of audio-visual emotion recognition.
Abstract: This paper reports on a project to integrate Japanese (as a first language) and English (as a second language) education. This study focuses on the mutual effects of the two languages on the linguistic proficiency of elementary school students. The research team consisted of elementary school teachers and researchers at a university. The participants of the experiment were students between 3rd and 6th grades at an elementary school. The research process consisted of seven steps: 1) specifying linguistic proficiency; 2) developing the cross-curriculum of L1 and L2; 3) forming can-do statements; 4) creating a self-evaluation questionnaire; 5) administering the self-evaluation questionnaire at the beginning of the school year; 6) instructing L1 and L2 based on the curriculum; and 7) administering the self-evaluation questionnaire again at the beginning of the next school year. In Step 1, the members of the research team brainstormed ways to specify elementary school students' linguistic proficiency as it can be observed in various situations. It emerged that the teachers evaluate their students' linguistic proficiency not only on the basis of the students' utterances but also on their non-verbal communication abilities. This led to the idea that the competency for understanding others' minds through the use of physical movement or bodily senses in communication in L1 – to sympathize with others – can be transferred to the same competency in communication in L2. Based on the specification of the linguistic proficiency that L1 and L2 have in common, a cross-curriculum of L1 and L2 was developed in Step 2. In Step 3, can-do statements based on the curriculum were formed, building on the action-oriented approach of the Common European Framework of Reference for Languages (CEFR) used in Europe.
A self-evaluation questionnaire consisting of the main can-do statements was given to the students between 3rd grade and 6th grade at the beginning of the school year (Steps 4 and 5), and all teachers gave L1 and L2 instruction based on the curriculum for one year (Step 6). The same questionnaire was given to the students at the beginning of the next school year (Step 7). Statistical analysis showed an enhancement of the students' linguistic proficiency, supporting the validity of developing a cross-curriculum of L1 and L2 and adopting it in elementary school. It was concluded that elementary school students do not distinguish between L1 and L2, and that they simply try to understand others' minds through physical movement or senses in any language.
Abstract: Hand gesture is one of the typical methods used in
sign language for non-verbal communication. It is most commonly
used by people with hearing or speech impairments to
communicate among themselves or with hearing people. Various sign
language systems have been developed by manufacturers around the
globe but they are neither flexible nor cost-effective for the end
users. This paper presents a system prototype that is able to
automatically recognize sign language, helping hearing people
communicate more effectively with people who are hearing or speech
impaired. The Sign to Voice system prototype, S2V, was developed
using a feed-forward neural network for two-sequence sign
detection. Different sets of universal hand gestures were captured
from a video camera and used to train the neural network for
classification. The experimental results show that the
neural network achieves satisfactory performance for sign-to-voice
translation.
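As context for the approach described above, the following is a minimal sketch of a one-hidden-layer feed-forward network trained to classify gesture feature vectors. It is an illustrative assumption, not the authors' actual S2V implementation: feature extraction from video frames is out of scope, so synthetic vectors stand in for real gesture features, and all class/layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class FeedForwardNet:
    """One hidden layer (tanh) + softmax output, trained with
    full-batch gradient descent on cross-entropy loss."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return softmax(self.h @ self.W2 + self.b2)

    def train_step(self, X, Y, lr=0.5):
        p = self.forward(X)
        n = X.shape[0]
        d2 = (p - Y) / n                      # softmax + cross-entropy gradient
        d1 = (d2 @ self.W2.T) * (1 - self.h ** 2)  # backprop through tanh
        self.W2 -= lr * (self.h.T @ d2)
        self.b2 -= lr * d2.sum(axis=0)
        self.W1 -= lr * (X.T @ d1)
        self.b1 -= lr * d1.sum(axis=0)

# Toy data: two well-separated clusters of 4-D "gesture features",
# one cluster per sign class.
X = np.vstack([rng.normal(-1, 0.3, (20, 4)), rng.normal(1, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
Y = np.eye(2)[y]                              # one-hot labels

net = FeedForwardNet(n_in=4, n_hidden=8, n_out=2)
for _ in range(200):
    net.train_step(X, Y)

pred = net.forward(X).argmax(axis=1)
accuracy = (pred == y).mean()
```

In a real sign-to-voice pipeline, the predicted class index would then be mapped to a word and passed to a speech synthesizer; the network itself only performs the gesture-to-class step.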