Emotions and Message Sharing on the Chinese Microblog

This study explores microblog users' emotion expression and message-sharing behaviors on the Chinese microblog service Sina Weibo. The first theme analyzed whether the emotions in microblog messages affect readers' sharing behavior, specifically how the strength of positive and negative emotion in a message facilitates or inhibits sharing. The second theme compared three types of microblog users (i.e., verified enterprise users, verified individual users, and unverified users) in terms of their profiles and microblogging behaviors. A total of 7,114 microblog messages about 24 hot public events in China were sampled from Sina Weibo. Results of the first study show that the strength of the negative emotions a message carries significantly increases the likelihood of the message being shared. Results of the second study indicate significant differences across the three user types in their emotion expression and its influence on their microblogging behaviors.
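The abstract does not name the statistical model relating emotion strength to sharing; a common choice for a binary share/no-share outcome is logistic regression. The sketch below, using a hypothetical toy dataset (`data`) and a hand-rolled gradient-ascent fit rather than the authors' actual analysis, illustrates how a positive coefficient on negative-emotion strength would correspond to the reported effect:

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy data: (negative-emotion strength, was the message shared?)
# Higher strength is associated with more sharing, mirroring the reported finding.
data = [(0, 0), (0, 0), (1, 0), (1, 1), (2, 0),
        (2, 1), (3, 1), (3, 1), (4, 1), (4, 1)]

# Fit P(share) = sigmoid(b0 + b1 * strength) by simple gradient ascent
b0, b1 = 0.0, 0.0
lr = 0.5
for _ in range(500):
    for x, y in data:
        p = sigmoid(b0 + b1 * x)
        b0 += lr * (y - p)        # gradient of the log-likelihood w.r.t. b0
        b1 += lr * (y - p) * x    # gradient w.r.t. b1

# A positive b1 means stronger negative emotion raises the odds of sharing
print("b1 positive:", b1 > 0)
```

A production analysis would of course use an established library and control for covariates such as user type and follower count; the point here is only the shape of the model.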

Development System for Emotion Detection Based on Brain Signals and Facial Images

Detection of human emotions has many potential applications. One such application is to quantify audience attentiveness in order to evaluate the acoustic quality of a concert hall, based on subjective audio preferences reported by the audience. To obtain a fair evaluation of acoustic quality, this research proposes a system for multimodal emotion detection: one modality is based on brain signals measured by electroencephalography (EEG), and the second on sequences of facial images. In the experiment, a customized audio signal consisting of normal and disordered sounds was played to volunteers to stimulate positive or negative emotional feedback. EEG signals from the temporal lobes (electrodes T3 and T4) were used to measure the brain response, and sequences of facial images were used to monitor facial expressions while the volunteers listened to the audio. From the EEG signal, features were extracted from changes in brain-wave activity, particularly in the alpha and beta bands. Facial-expression features were extracted by analyzing motion in the image sequences: an optical flow method detects the most active facial muscles as the face moves from a neutral to another emotional expression, represented as vector flow maps. To simplify detection of the emotional state, the vector flow maps are transformed into a compass mapping that represents the major directions and velocities of facial movement. The results show that beta-wave power increased when the disorder-sound stimulation was given, although each volunteer gave different emotional feedback. Based on the features derived from facial images, optical-flow compass mapping is promising as additional information for deciding on the emotional feedback.
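The abstract describes two feature-extraction steps without implementation detail: band power in the alpha (roughly 8-13 Hz) and beta (roughly 13-30 Hz) EEG bands, and binning of optical-flow vectors into compass directions. The sketch below is a minimal illustration under those assumptions, using a plain DFT on a synthetic signal in place of real EEG and a single hand-made flow vector in place of an optical-flow field; function names and parameters are the author's own, not the paper's:

```python
import math
import cmath

def band_power(signal, fs, f_lo, f_hi):
    """Total spectral power of `signal` within [f_lo, f_hi] Hz via a plain DFT."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            total += abs(coef) ** 2
    return total

def compass_bin(dx, dy, n_bins=8):
    """Map a flow vector to one of n_bins compass directions (bin 0 = east, CCW)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_bins) + 0.5) % n_bins

fs, n = 128, 256  # assumed sampling rate (Hz) and window length
# Synthetic "EEG": a dominant 20 Hz beta rhythm plus a weaker 10 Hz alpha rhythm,
# mimicking the reported beta increase under disorder-sound stimulation
sig = [math.sin(2 * math.pi * 20 * t / fs) +
       0.3 * math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]

alpha = band_power(sig, fs, 8, 13)
beta = band_power(sig, fs, 13, 30)
print("beta power exceeds alpha power:", beta > alpha)

# A flow vector pointing up and to the right falls in the north-east compass bin
print("compass bin for (1, 1):", compass_bin(1.0, 1.0))
```

In practice the flow field would come from a dense optical-flow algorithm applied to consecutive facial frames, and the compass mapping would aggregate bins over the whole face region; the binning rule above shows only the direction-quantization step.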