Abstract: Identity verification of persons from their multiview faces is a challenging problem in machine vision. Multiview faces are difficult to handle because of their non-linear representation in the feature space. This paper illustrates the use of a generalization of LDA, the canonical covariate, for recognizing multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often occurs due to changes in illumination, pose and facial expression. Convolving the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature space to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain a reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
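As a rough sketch of the feature extraction stage, the snippet below builds a small Gabor filter bank and convolves it with a toy image to produce a high-dimensional feature vector. The kernel size, scales and orientations are illustrative choices, not the paper's actual parameters.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, k):
    """Naive 'valid'-mode 2D convolution (adequate for a sketch)."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Small bank: 2 scales x 4 orientations, applied to a toy 32x32 "face".
rng = np.random.default_rng(0)
face = rng.random((32, 32))
bank = [gabor_kernel(sigma=s, theta=t, lam=2 * s)
        for s in (2.0, 4.0)
        for t in np.arange(4) * np.pi / 4]
features = np.concatenate([convolve2d(face, k).ravel() for k in bank])
print(features.shape)  # high-dimensional Gabor feature vector
```

It is this high dimensionality (here 8 x 24 x 24 = 4608 values for a mere 32x32 image) that motivates the canonical-covariate reduction step before classification.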
Abstract: One popular method for recognizing facial expressions such as happiness, sadness and surprise is based on the deformation of facial features. Motion vectors that describe these deformations can be obtained from the optical flow. In this method, emotions are detected by comparing the resulting set of motion vectors with standard deformation templates caused by facial expressions. In this paper, a new method is introduced to compute a likeness measure in order to make a decision based on the importance of the vectors obtained from an optical flow approach. To find the vectors, an efficient optical flow method developed by Gautama and Van Hulle [17] is used. The suggested method has been evaluated on the Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method correctly recognizes the facial expressions in 94% of the case studies. The results also show that only a small number of image frames (three frames) is sufficient to detect facial expressions, with a success rate of about 83.3%. This is a significant improvement over the available methods.
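The decision step described above — comparing observed motion vectors with deformation templates — can be sketched as follows. The two-point templates and the cosine-similarity score are illustrative assumptions, not the paper's actual likeness measure.

```python
import numpy as np

def expression_score(flow, template):
    """Cosine similarity between an observed motion-vector field and a
    deformation template (both arrays of shape (N, 2): one dx,dy per point)."""
    f, t = flow.ravel(), template.ravel()
    return float(f @ t / (np.linalg.norm(f) * np.linalg.norm(t) + 1e-12))

# Hypothetical templates over two mouth-corner points: upward motion
# (negative y in image coordinates) for 'happiness', downward for 'sadness'.
templates = {
    "happiness": np.array([[0.0, -1.0], [0.0, -1.0]]),
    "sadness":   np.array([[0.0,  1.0], [0.0,  1.0]]),
}

# Toy optical-flow output: both corners moving mostly upward.
observed = np.array([[0.1, -0.9], [-0.1, -1.1]])
best = max(templates, key=lambda e: expression_score(observed, templates[e]))
print(best)  # → happiness
```

In a real system the vector fields would cover many facial feature points and each point's contribution would be weighted by its importance, which is the refinement the abstract describes.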
Abstract: In this paper we present an approach for 3D face recognition based on extracting the principal components of range images using modified PCA methods, namely 2DPCA and bidirectional 2DPCA, also known as (2D)²PCA. A preprocessing stage smooths the images using median and Gaussian filtering. In the normalization stage we locate the nose tip, place it at the center of the image, and then crop each image to a standard size of 100×100. In the face recognition stage we extract the principal components of each image using both 2DPCA and (2D)²PCA. Finally, we use the Euclidean distance to find the minimum distance between a given test image and the training images in the database. We also compare the results of the two methods. The best result achieved in experiments on a public face database is a face recognition rate of 83.3 percent for a random facial expression.
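A minimal sketch of the 2DPCA projection and the minimum-distance matching, using random toy data in place of real range images (the image sizes and number of components are illustrative):

```python
import numpy as np

def twod_pca(images, d=2):
    """2DPCA: project each image onto the top-d eigenvectors of the image
    covariance matrix G = mean((A - Abar)^T (A - Abar))."""
    A = np.asarray(images, dtype=float)           # shape (M, h, w)
    C = A - A.mean(axis=0)
    G = np.einsum('mij,mik->jk', C, C) / len(A)   # (w, w) image covariance
    vals, vecs = np.linalg.eigh(G)                # eigenvalues ascending
    X = vecs[:, ::-1][:, :d]                      # top-d eigenvectors
    return A @ X, X                               # features (M, h, d)

rng = np.random.default_rng(1)
imgs = rng.random((10, 20, 16))                   # 10 toy 20x16 "range images"
feats, X = twod_pca(imgs, d=3)

# Minimum-Euclidean-distance matching of a test image against the gallery.
test_img = imgs[0]
dists = [np.linalg.norm(test_img @ X - f) for f in feats]
best = int(np.argmin(dists))
print(feats.shape, best)
```

The bidirectional variant, (2D)²PCA, additionally computes a second projection over the rows (working on the transposed images), compressing both image dimensions.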
Abstract: A talking head system (THS) is presented that animates the face of a speaking 3D avatar so that it realistically pronounces given Korean text. The proposed system consists of a SAPI-compliant text-to-speech (TTS) engine and an MPEG-4-compliant face animation generator. The input to the THS is Unicode text to be spoken with synchronized lip shapes. The TTS engine generates a phoneme sequence with phoneme durations and audio data. The TTS applies coarticulation rules to the phoneme sequence and sends a mouth animation sequence to the face modeler. By using the face animation generator, the proposed THS produces more natural lip synchronization and facial expressions than systems using conventional visemes only. The experimental results show that our system has great potential for the implementation of a talking head for Korean text.
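The coarticulation step can be illustrated with a toy mouth-parameter timeline. The viseme table, frame rate and blending rule below are hypothetical stand-ins, not the actual SAPI/MPEG-4 pipeline:

```python
from dataclasses import dataclass

# Hypothetical viseme targets: mouth openness in [0, 1] per phoneme class.
VISEME_OPEN = {"a": 1.0, "o": 0.8, "e": 0.6, "m": 0.0, "s": 0.2}

@dataclass
class Phone:
    symbol: str
    duration_ms: int

def mouth_track(phones, frame_ms=40, blend=0.5):
    """Expand a phoneme sequence into per-frame mouth parameters, applying a
    simple coarticulation rule: within each phoneme, frames ramp toward the
    next phoneme's target instead of jumping at the boundary."""
    track = []
    for i, p in enumerate(phones):
        target = VISEME_OPEN[p.symbol]
        nxt = VISEME_OPEN[phones[i + 1].symbol] if i + 1 < len(phones) else target
        n = max(1, p.duration_ms // frame_ms)
        for f in range(n):
            w = blend * f / n                 # blend weight grows within the phone
            track.append((1 - w) * target + w * nxt)
    return track

frames = mouth_track([Phone("m", 80), Phone("a", 120), Phone("s", 80)])
print(len(frames), frames[0])  # 7 frames; first frame fully closed (0.0)
```

Such smoothing of consecutive viseme targets is what lets the mouth animation look more natural than playing each conventional viseme in isolation.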
Abstract: In modern human-computer interaction (HCI) systems, emotion recognition is becoming an essential capability. The quest for effective and reliable emotion recognition in HCI has created a need for better face detection, feature extraction and classification. In this paper we present the results of a feature space analysis after briefly explaining our fully automatic vision-based emotion recognition method. We demonstrate the compactness of the feature space and show how the 2D/3D-based method yields superior features for emotion classification. We also show that feature normalization creates a largely person-independent feature space. As a consequence, the classifier architecture has only a minor influence on the classification result. This is illustrated in particular with the help of confusion matrices. For this purpose, advanced classification algorithms such as Support Vector Machines and Artificial Neural Networks are employed, as well as the simple k-Nearest Neighbor classifier.
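A minimal sketch of feature normalization followed by the simplest of the classifiers mentioned, k-Nearest Neighbor, using synthetic two-class data in place of real emotion features (the z-score normalization is an assumption standing in for the paper's normalization scheme):

```python
import numpy as np

def normalize(feats):
    """Z-score normalization per feature dimension."""
    mu, sd = feats.mean(axis=0), feats.std(axis=0) + 1e-12
    return (feats - mu) / sd

def knn_predict(train_x, train_y, x, k=3):
    """Majority vote among the k nearest training samples."""
    d = np.linalg.norm(train_x - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return int(vals[np.argmax(counts)])

# Two well-separated synthetic "emotion" classes in a 4-D feature space.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
Xn = normalize(X)
pred = knn_predict(Xn, y, Xn[0])
print(pred)  # → 0
```

The abstract's point is exactly this: once the feature space is compact and person-independent, even such a simple classifier performs comparably to SVMs or neural networks.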
Abstract: The scope of this research was to study the relation between the facial expressions of three lecturers in a real academic lecture theatre and the reactions of the students to those expressions. The first experiment investigated the effect of a virtual lecturer's expressions on the students' learning outcome in a virtual pedagogical environment. The second experiment studied the effect of a single facial expression, the smile, on the students' performance. Both experiments involved virtual lectures, with virtual lecturers teaching real students. The results suggest that the students performed better by 86% in the lectures where the lecturer displayed facial expressions, compared with the lectures that did not use facial expressions. However, when simple or basic information was presented, the facial expressions of the virtual lecturer had no substantial effect on the students' learning outcome. Finally, the appropriate use of smiles increased the interest of the students and consequently their performance.
Abstract: Facial expression analysis is rapidly becoming an area of intense interest in the computer science and human-computer interaction design communities. Facial expressions are the most expressive way humans display emotions. In this paper we present a method to analyze facial expressions from images by applying the Gabor wavelet transform (GWT) and the Discrete Cosine Transform (DCT) to face images. A Radial Basis Function (RBF) network is used to classify the facial expressions. As a second stage, the images are preprocessed to enhance edge details, and non-uniform down-sampling is performed to reduce the computational complexity and processing time. Our method works reliably even with faces that carry heavy expressions.
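The DCT stage can be sketched as follows. Keeping a low-frequency block of coefficients as the descriptor is a common practice and an assumption here, not necessarily the paper's exact coefficient selection:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def dct2(img):
    """Separable 2D DCT: transform columns, then rows."""
    D = dct_matrix(img.shape[0])
    E = dct_matrix(img.shape[1])
    return D @ img @ E.T

# Keep only a low-frequency block of coefficients as a compact descriptor.
rng = np.random.default_rng(3)
face = rng.random((16, 16))
coeffs = dct2(face)
features = coeffs[:6, :6].ravel()   # 36-dimensional feature vector
print(features.shape)
```

Because most of a face image's energy concentrates in the low-frequency DCT coefficients, truncating the block reduces dimensionality with little loss, complementing the non-uniform down-sampling mentioned above.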
Abstract: This study explored various types of human smiles and extracted the key factors affecting them. These key factors were then converted into a set of control points that can serve the needs of 3D animators creating facial expressions and can be further applied to face simulation for robots in the future. First, hundreds of pictures of human smiles were collected and analyzed to identify the key factors of the facial expression. Then, the factors were converted into a set of control points, and sizing parameters were calculated proportionally. Finally, two different faces were constructed to validate the parameters by simulating smiles of the same type as the originals.
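The conversion of normalized control points into proportionally scaled coordinates for a particular face might look like the following; the point names and coordinates are hypothetical, not the study's actual parameters:

```python
# Hypothetical smile control points, expressed as fractions of face
# width and height so they transfer proportionally between faces.
SMILE_POINTS = {
    "mouth_left":  (0.35, 0.75),
    "mouth_right": (0.65, 0.75),
    "lip_center":  (0.50, 0.78),
}

def scale_to_face(points, face_w, face_h):
    """Scale normalized control points proportionally to a target face."""
    return {name: (x * face_w, y * face_h) for name, (x, y) in points.items()}

# Apply the same smile to two differently sized faces.
pts_a = scale_to_face(SMILE_POINTS, 200, 260)
pts_b = scale_to_face(SMILE_POINTS, 150, 200)
print(pts_a["mouth_left"])  # → (70.0, 195.0)
```

Expressing the key factors as proportions is what allows the same smile type to be reproduced on the two different validation faces mentioned in the abstract.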