Abstract: The concepts of the sacred and nature have long been interlinked. Cultural forces such as religion, faith, and tradition bring people closer to nature and the natural environment. Memorial parks and sacred groves are two examples of such cultural landscapes that exist today. This project deals mainly with the significance of such sites to the environment and the deep-rooted meaning they hold for people. These parks and groves play an important role in biodiversity conservation and environmental protection. Although memorial parks and sacred groves are established in very different ways, their underlying significance is the same. Sentiments and emotions play an important role in landscape planning and management; hence the people and communities living at these sites need to be involved in any planning activity or decision. Conservation of the environment should appeal to people's sentiments, and the need to be 'with nature' should inform both the setting up of memorial forests and the preservation of sacred groves.
Abstract: The face and facial expressions play essential roles in interpersonal communication. Most current work on facial expression recognition attempts to recognize a small set of prototypic expressions such as happiness, surprise, anger, sadness, disgust and fear. However, most human emotions are communicated by changes in only one or two discrete facial features. In this paper, we develop a facial expression synthesis system based on tracking facial characteristic points (FCPs) in frontal image sequences. Selected FCPs are automatically tracked using cross-correlation-based optical flow. The proposed synthesis system uses a simple deformable facial feature model with a small set of control points that can be tracked in the original facial image sequences.
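Cross-correlation tracking of a feature point can be illustrated by matching a small template patch from one frame against a search window in the next frame. The following is a minimal sketch of that generic technique, not the authors' implementation; the patch and search sizes are illustrative defaults.

```python
import numpy as np

def track_point(prev, curr, pt, patch=5, search=10):
    """Track a feature point from frame `prev` to frame `curr` by
    normalized cross-correlation of a small template patch.
    `pt` is (row, col); `patch` is the template half-size and `search`
    the search-window half-size (illustrative defaults)."""
    r, c = pt
    tmpl = prev[r - patch:r + patch + 1, c - patch:c + patch + 1].astype(float)
    tmpl = tmpl - tmpl.mean()
    best, best_pt = -np.inf, pt
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            win = curr[rr - patch:rr + patch + 1, cc - patch:cc + patch + 1].astype(float)
            if win.shape != tmpl.shape:
                continue  # candidate window falls outside the image
            win = win - win.mean()
            denom = np.sqrt((tmpl ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue  # flat window carries no correlation information
            score = (tmpl * win).sum() / denom
            if score > best:
                best, best_pt = score, (rr, cc)
    return best_pt
```

In a full system this per-point search would be repeated for each FCP in every frame, with the template updated as the face deforms.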
Abstract: There has been growing interest in implementing humanoid avatars in networked virtual environments. However, most existing avatar communication systems do not take avatars' social backgrounds into consideration. This paper proposes a novel humanoid avatar animation system that represents the personalities and facial emotions of avatars based on culture, profession, mood, age, taste, and so forth. We extract semantic keywords from the input text through natural language processing, and the animations of personalized avatars are then retrieved and displayed according to the order of the keywords. Our primary work focuses on giving avatars runtime instructions from multiple natural languages. Experiments with Chinese, Japanese and English input based on the prototype show that interactive avatar animations can be displayed in real time and made available online. This system provides a more natural and interesting means of human communication and is therefore expected to be used for cross-cultural communication, multiuser online games, and other entertainment applications.
Abstract: In contrast to negative emotion regulation, coping with positive moods has received less attention in research on adolescent adjustment. However, some research has found that individuals differ in how they deal with their positive emotions, which affects their adaptation and well-being. The purpose of the present study was to investigate the relationship between the dampening of positive emotions and the internalizing behavior problems of adolescents in Taiwan. A survey was conducted in which 208 students (12 to 14 years old) completed the Strengths and Difficulties Questionnaire (SDQ), the Affect Intensity Measure, and a positive-emotion dampening scale. Analysis methods including descriptive statistics, t-tests, Pearson correlations and multiple regression were applied. The results were as follows: emotionality and internalizing problem behavior show significant gender differences. Compared to boys, girls score higher on negative emotionality and are at higher risk for internalizing symptoms. However, there are no gender differences in positive-emotion dampening. Additionally, with negative emotionality as a control variable, the positive-emotion dampening strategy was positively related to internalizing behavior problems. Given these results, it is suggested that coaching adolescents away from such counterproductive positive-emotion strategies be encouraged to assist those with internalizing behavior problems.
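The correlational analyses above rest on Pearson's product-moment coefficient; a minimal standard-library sketch of that textbook statistic (the variable names are illustrative, not drawn from the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```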
Abstract: Emotions are related to learning processes, and physiological signals can be used to detect them in order to personalize learning resources and control the pace of instruction. A model of relevant emotions has been developed in which specific combinations of emotions and cognitive processes are connected and integrated with the concept of 'flow' in order to improve learning. The cardiac pulse is a reliable signal that carries useful information about the subject's emotional condition; it is detected using a classroom chair fitted with a non-invasive EMFi sensor and an acquisition system that generates a ballistocardiogram (BCG). The signal is processed by an algorithm to obtain characteristics that match a specific emotional condition. The complete chair system is presented in this work, along with a framework for the personalization of learning resources.
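Processing a cardiac signal such as a BCG typically begins by locating heartbeat peaks and deriving the heart rate from the inter-beat intervals. The sketch below shows that generic first step only; it is not the authors' algorithm, and the threshold is a hypothetical parameter.

```python
def find_peaks(signal, threshold):
    """Return indices of local maxima above `threshold` in a sampled signal,
    a simple first step toward locating heartbeats in a cardiac trace."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean heart rate in beats per minute from peak indices,
    given the sampling frequency `fs` in Hz."""
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

Heart-rate variability features derived from these intervals are a common basis for emotion-related characteristics.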
Abstract: Emotion recognition is an important research field with many applications today. This work focuses on recognizing different emotions from the speech signal. The extracted features are related to statistics of the pitch, formant, and energy contours, as well as spectral, perceptual and temporal features, jitter, and shimmer. An artificial neural network (ANN) was chosen as the classifier. Our concern is finding a robust and fast ANN classifier suitable for different real-life applications. Several experiments were carried out on different ANNs to investigate the factors that affect the classification success rate. Using a database containing 7 different emotions, it is shown that with proper and careful adjustment of the feature format, training-data sorting, number of selected features, and even the ANN type and architecture used, a success rate of 85% or more can be achieved without increasing the system complexity or the computation time.
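Among the features listed, jitter and shimmer have compact standard definitions: the mean cycle-to-cycle variability of the pitch period and of the cycle amplitude, each normalized by its mean. A small sketch of those textbook definitions (not the paper's exact feature extraction):

```python
def jitter(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, divided by the mean period (a standard definition)."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amps):
    """Local shimmer: the same measure applied to per-cycle peak amplitudes."""
    diffs = [abs(b - a) for a, b in zip(amps, amps[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))
```

Both measures are zero for a perfectly periodic voice and grow with the roughness that often accompanies emotional arousal.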
Abstract: This paper investigates how machine learning techniques can significantly predict the three major dimensions of a learner's emotions (pleasure, arousal and dominance) from brainwaves. The study adopted an experimental design in which participants were exposed to a set of pictures from the International Affective Picture System (IAPS) while their electrical brain activity was recorded with an electroencephalogram (EEG). The pictures had already been rated in a previous study via the Self-Assessment Manikin (SAM) affective rating system to assess the three dimensions of pleasure, arousal, and dominance. For each picture, we took the mean of these values over all subjects in that previous study and associated them with the recorded brainwaves of the participants in our study. Correlation and regression analyses confirmed the hypothesis that brainwave measures can significantly predict emotional dimensions. This can be very useful in the case of impassive, taciturn or disabled learners. Standard classification techniques were used to assess the reliability of automatically detecting the learners' three major dimensions from the brainwaves. We discuss the results and the suitability of such a method for assessing learners' emotions and integrating it into a brainwave-sensing Intelligent Tutoring System.
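The regression step described above can be sketched as an ordinary least-squares fit of one affective dimension (say, arousal) onto brainwave features. This is a generic illustration under the assumption of a linear model; the feature matrix and targets are placeholders, not the study's data.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least-squares fit: predict an affective rating (e.g. arousal)
    from brainwave features. X is (n_samples, n_features); a bias column
    is appended. Returns the weight vector."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    """Apply the fitted weights to new feature rows."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w
```

In practice each row of X would hold band-power or similar EEG measures for one picture presentation, and y the mean SAM rating of that picture.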
Abstract: The purpose of this study is to explore how the emotions present at the moment of conflict escalation are expressed nonverbally, and how they can be detected by the parties involved in the conflict. The study consists of two parts. The first part starts with definitions of "conflict" and "nonverbal communication" and continues with an analysis of the emotions, and types of emotions, that may lead to conflict escalation. Four emotions and emotion constructs are analyzed: fear, anger, guilt and frustration. The second part analyses the general role of nonverbal behavior in interaction and communication, and what information it may convey to the person who sends or receives those signals. The study concludes with an analysis of the nonverbal expression of the analyzed emotions and how it can be used during interaction.
Abstract: In two studies we challenged the well-consolidated position in the regret literature according to which the necessary condition for the emergence of regret is a bad outcome ensuing from a free decision. Without free choice, and consequently personal responsibility, other emotions such as disappointment, but not regret, are supposed to be elicited. In our opinion, a main source of regret is being obliged by circumstances beyond our control to choose an undesired option. We tested the hypotheses that regret resulting from a forced choice is more intense than regret derived from a free choice, and that the outcome affects the latter but not the former. In addition, we investigated whether two other variables, the perceived level of freedom of the choice and the justifiability of the choice, mediated the relationship between choice and regret, as well as the four other emotions we examined: satisfaction, anger toward oneself, disappointment, and anger toward circumstances. The two studies were based on the scenario methodology and employed a 2 × 2 (choice × outcome) between-subjects design. The first study assessed the foreseen short-term effects of the choice; the second assessed the experienced long-term effects. In each study, 160 students of the Second University of Naples participated. The results largely corroborated our hypotheses and are discussed in the light of the main theories of regret and decision making.
Abstract: The purpose of this study is to determine the circumstances affecting elementary school students in their family and school lives and what kinds of emotions children may feel because of these circumstances. The study was carried out according to the survey model. Four Turkish elementary schools provided 123 fourth-grade students for participation in the study. The study's data were collected using worksheets for the activity titled "Important Days in Our Lives", which is part of the Elementary School Social Sciences Course 4th Grade Education Program. Data analysis was carried out using the content analysis technique of qualitative research. The study found that circumstances in their family and school lives caused children to feel emotions such as happiness, sadness, anger, fear and jealousy. These circumstances and the emotions they caused were analyzed by gender and interpreted together with their frequencies.
Abstract: Facial recognition and expression analysis is rapidly becoming an area of intense interest in the computer science and human-computer interaction design communities. The most expressive way humans display emotions is through facial expressions. In this paper, skin and non-skin pixels are first separated, and face regions are extracted from the detected skin regions. Facial expressions are then analyzed by applying the Gabor wavelet transform (GWT) and the Discrete Cosine Transform (DCT) to the face images. A Radial Basis Function (RBF) network is used to identify the person and to classify the facial expressions. Our method works reliably even on faces carrying heavy expressions.
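A Gabor feature bank is built by convolving the face image with 2-D Gabor kernels at several orientations and scales. The kernel itself has a standard closed form, a Gaussian envelope modulating a cosine grating; the sketch below constructs its real part with illustrative parameter choices, not the paper's specific filter bank.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope modulating a
    cosine grating. `theta` is the orientation, `wavelength` the grating
    period, `sigma` the envelope width, `gamma` the spatial aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```

Responses of such kernels at facial landmarks, further compacted by a DCT, would form the feature vector fed to the RBF network.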
Abstract: It is hard to convey emotion through speech alone when we watch a character in a movie or a play, because the audience cannot estimate the kind, size, and quantity of the emotion. This paper therefore proposes an artificial emotion model that visualizes the current emotion by color and location within the model. The artificial emotion model is designed to account for the causality of generated emotions, differences in personality, differences in continual emotional stimuli, and the correlation of various emotions. This paper proposes the Emotion Field for visualizing the current emotion by location: the current emotion is expressed by its location and color within the Emotion Field. To visualize changes in the current emotion, the artificial emotion model is applied to characters in Hamlet.
Abstract: An emotional speech recognition system for applications on smartphones was proposed in this study, to be combined with 3G mobile communications and social networks to provide users and their groups with more interaction and care. The study developed a mechanism using support vector machines (SVM) to recognize speech emotions such as happiness, anger, sadness and a normal state. The mechanism uses a hierarchical classifier to adjust the weights of acoustic features, and divides the various parameters into energy and frequency categories for training. In this study, 28 commonly used acoustic features, including pitch and volume, were proposed for training. In addition, a time-frequency parameter obtained by the continuous wavelet transform was used to identify the accent and intonation of a sentence during the recognition process. The Berlin Database of Emotional Speech was used, with the speech divided into male and female data sets for training. According to the experimental results, the accuracies of the male and female test sets increased by 4.6% and 5.2% respectively after using the time-frequency parameter to classify happy and angry emotions. For the classification of all emotions, the average accuracy, including male and female data, was 63.5% for the test set and 90.9% for the whole data set.
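The energy/frequency split of acoustic features can be illustrated with the simplest member of each category: short-time energy (energy category) and zero-crossing rate (a crude frequency-category feature). This is a generic sketch, not the authors' 28-feature set.

```python
def short_time_energy(frame):
    """Short-time energy of one analysis frame (an energy-category feature)."""
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs that change sign, a simple
    frequency-category feature correlated with spectral content."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)
```

A hierarchical classifier can then weight such energy and frequency features differently at each level of its decision tree.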
Abstract: Avoiding learning failures caused by the emotional problems of students with autism in mathematics e-learning environments has become an important topic in combining special education with information and communications technology. This study presents an adaptive emotional adjustment model for mathematics e-learning for students with autism, addressing the lack of emotional perception in mathematics e-learning systems. In addition, an emotion classification for students with autism was developed by inducing emotions in mathematical learning environments and recording changes in the students' physiological signals and facial expressions. Using these methods, 58 emotional features were obtained. These features were then processed using one-way ANOVA and information gain (IG). After reducing the feature dimension, support vector machines (SVM), k-nearest neighbors (KNN), and classification and regression trees (CART) were used to classify four emotional categories: baseline, happy, angry, and anxious. After testing and comparison, without feature selection the SVM classification accuracy reached as high as 79.3%. After using IG to reduce the feature dimension to 28 features, SVM still achieved a classification accuracy of 78.2%. The results of this research could enhance the effectiveness of e-learning in special education.
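Information gain, the feature-selection criterion named above, is the reduction in label entropy obtained by splitting the samples on a feature. Its textbook form can be sketched as follows (the emotion labels in the example are placeholders):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Information gain of partitioning `labels` into `groups`
    (e.g. by thresholding one physiological feature): parent entropy
    minus the size-weighted entropy of the child groups."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - remainder
```

Ranking the 58 features by this score and keeping the top scorers is the kind of reduction that leaves the 28-feature subset mentioned above.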
Abstract: The study of interaction between humans and computers has grown rapidly in recent years. This interaction will be more powerful if computers are able to perceive and respond to human nonverbal communication such as emotions. In this study, we present an image-based approach to emotion classification through lower facial expressions. We employ a set of feature points in the lower face image, according to the particular face model used, and consider their motion across each emotive expression in the image sequence. The vector of displacements of all feature points is input to an Adaptive Support Vector Machine (A-SVM) classifier, which assigns it to one of seven basic emotion classes: neutral, anger, disgust, fear, happiness, sadness and surprise. The system was tested on the Japanese Female Facial Expression (JAFFE) dataset of frontal-view facial expressions [7]. Our experiments on emotion classification through lower facial expressions demonstrate the robustness of the adaptive SVM classifier and verify the high efficiency of our approach.
Abstract: Detection of human emotions has many potential applications. One application is to quantify the attentiveness of an audience in order to evaluate the acoustic quality of a concert hall, based on the subjective audio preferences reported by the audience. To obtain a fair evaluation of acoustic quality, this research proposes a system for multimodal emotion detection: one modality is based on brain signals measured with an electroencephalogram (EEG), and the second modality is sequences of facial images. In the experiment, a customized audio signal consisting of normal and disordered sounds was played to stimulate positive or negative emotional feedback from volunteers. EEG signals from the temporal lobes (electrodes T3 and T4) were used to measure the brain response, and facial image sequences were used to monitor facial expression while the volunteers listened to the audio signal. From the EEG signal, features were extracted from changes in the brain waves, particularly the alpha and beta bands. Facial expression features were extracted by analyzing motion in the images: an advanced optical flow method detects the most active facial muscles as the face moves from a neutral to an emotional expression, represented as vector flow maps. To reduce the difficulty of detecting the emotional state, the vector flow maps are transformed into a compass mapping that represents the major directions and velocities of facial movement. The results showed that beta-wave power increases when the disordered sound stimulus is presented, although each volunteer gave different emotional feedback. Based on the features derived from the facial images, optical-flow compass mapping is promising as additional information for deciding on emotional feedback.
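The compass-mapping step can be sketched as binning the optical-flow vectors by direction and accumulating their magnitudes, yielding a compact histogram of the major directions and velocities of facial movement. This is a simplified illustration of the idea, not the authors' exact transform; the bin count is an assumed parameter.

```python
from math import atan2, hypot, pi

def compass_mapping(flow, bins=8):
    """Summarize a list of optical-flow vectors (dx, dy) into `bins`
    compass directions, accumulating flow magnitude per direction."""
    hist = [0.0] * bins
    for dx, dy in flow:
        angle = atan2(dy, dx) % (2 * pi)           # direction in [0, 2*pi)
        hist[int(angle / (2 * pi / bins)) % bins] += hypot(dx, dy)
    return hist
```

The dominant bins of this histogram indicate which way, and how strongly, the most active facial regions moved during the expression.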
Abstract: The aim of this research is to understand whether the accuracy of customers' detection of employees' emotional labor strategies influences overall service satisfaction. Path analysis found that employees' positive emotions positively influenced service quality. Service quality in turn influenced customer detection of the employees' deep-acting strategy and of their surface-acting strategy. Finally, customer detection of both the deep-acting and the surface-acting strategy positively influenced service satisfaction. Based on these results, suggestions are proposed as a reference for human resource management and related fields.
Abstract: Chronic conditions carry with them strong emotions and often lead to charged relationships between patients and their health providers and, by extension, between patients and health researchers. Persons are both autonomous and relational, and a purely cognitive model of autonomy neglects the social and relational basis of chronic illness. Ensuring genuine informed consent in research requires a thorough understanding of how participants perceive a study and their reasons for participation. Surveys may not capture the complexities of reasoning that underlie study participation. Contradictory reasons for participation, for instance an initial claim of altruism as the rationale followed by a claim of personal benefit (therapeutic misconception), affect the quality of informed consent. Individuals apply principles through the filter of personal values and lived experience. Authentic autonomy, and hence authentic consent to research, occurs within the context of patients' unique life narratives and illness experiences.
Abstract: Computer animation is a widely adopted technique used to specify the movement of various objects on screen. The key issue of this technique is the specification of motion, and motion control methods are used to specify the actions of objects. This paper discusses the various types of motion control methods, with special focus on behavioral animation. A behavioral model is also proposed that takes into account the emotions and perceptions of an actor, which in turn generate its behavior. The model makes use of an expert system to generate tasks for the actors, specifying the actions to be performed in the virtual environment.
Abstract: One popular approach to recognizing facial expressions such as happiness, sadness and surprise is based on the deformation of facial features. Motion vectors describing these deformations can be obtained by optical flow. In this approach, emotions are detected by comparing the resulting set of motion vectors with standard deformation templates caused by facial expressions. In this paper, a new method is introduced that computes a likeness measure in order to make a decision based on the importance of the vectors obtained from the optical flow. To find the vectors, an efficient optical flow method developed by Gautama and Van Hulle [17] is used. The proposed method has been evaluated on the Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method correctly recognizes the facial expressions in 94% of the case studies. The results also show that only a few image frames (three) are sufficient to detect facial expressions, with a success rate of about 83.3%. This is a significant improvement over the available methods.
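Comparing a motion-vector field against a deformation template can be sketched as a weighted cosine similarity, where per-point weights stand in for the "importance" of each vector. This is a hedged illustration of the general template-matching idea, not the paper's exact likeness measure; the weights and templates are hypothetical.

```python
from math import sqrt

def likeness(flow, template, weights):
    """Weighted cosine similarity between an observed motion-vector field
    and a deformation template. `flow` and `template` are lists of (dx, dy)
    vectors at corresponding face points; `weights` reflects each point's
    assumed importance."""
    dot = sum(w * (fx * tx + fy * ty)
              for w, (fx, fy), (tx, ty) in zip(weights, flow, template))
    nf = sqrt(sum(w * (fx * fx + fy * fy) for w, (fx, fy) in zip(weights, flow)))
    nt = sqrt(sum(w * (tx * tx + ty * ty) for w, (tx, ty) in zip(weights, template)))
    return dot / (nf * nt)

def classify(flow, templates, weights):
    """Pick the expression whose template best matches the observed flow."""
    return max(templates, key=lambda name: likeness(flow, templates[name], weights))
```

A score of 1.0 means the observed flow points exactly along the template at every weighted location; classification simply keeps the best-scoring template.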