Abstract: Human faces, as important visual signals, convey a significant amount of nonverbal information in human-to-human communication, and age is one of the most significant of these attributes. Human age estimation from facial image analysis is an automated method with numerous potential real-world applications. In this paper, an automated age estimation framework is presented. A Support Vector Regression (SVR) strategy is used for age prediction. The paper describes a feature extraction scheme based on the Gray Level Co-occurrence Matrix (GLCM), which can also be used for a robust face recognition framework. GLCM is applied to extract facial features from images, and Active Appearance Models (AAMs) are used to estimate human age from the image. A fused feature technique and SVR with GA optimization are proposed to reduce the age estimation error.
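For illustration, a minimal sketch of the GLCM-plus-SVR pipeline described above is given below, assuming grayscale face crops and scikit-image/scikit-learn; the GA-based hyperparameter optimization of the abstract is replaced by a plain grid search, and all parameter values are illustrative assumptions.

```python
# Hedged sketch: GLCM texture features feeding an SVR age regressor.
# Assumes grayscale face crops as uint8 arrays; the paper's GA-based
# hyperparameter search is replaced here by a plain grid search.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def glcm_features(face_u8, distances=(1, 2), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Return contrast/homogeneity/energy/correlation statistics of the GLCM."""
    glcm = graycomatrix(face_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_age_regressor(face_images, ages):
    X = np.vstack([glcm_features(f) for f in face_images])
    grid = {"C": [1, 10, 100], "epsilon": [0.5, 1.0, 2.0], "gamma": ["scale", 0.01]}
    model = GridSearchCV(SVR(kernel="rbf"), grid, cv=3)  # stand-in for GA optimisation
    model.fit(X, np.asarray(ages, dtype=float))
    return model
```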
Abstract: In the deep south of Thailand, checkpoints for people
verification are necessary for the security management of risk zones,
such as official buildings in the conflict area. In this paper, we
propose an automatic checkpoint system that verifies persons using
information from ID cards and facial features. Methods for extracting
and verifying a person's information are introduced, based on details
such as the ID number and name read from official cards together with
facial images from videos. The proposed
system shows promising results and has a real impact on the local
society.
Abstract: This paper presents two techniques, local feature
extraction using the image spectrum and low-frequency spectrum
modelling using a GMM, to capture the underlying statistical
information and improve the performance of a face recognition
system. Local spectrum features are extracted using overlapping
sub-block windows mapped onto the face image. For each block, the
spatial domain is transformed to the frequency domain using the
DFT. Low-frequency coefficients are preserved by discarding the
high-frequency coefficients with a rectangular mask applied to the
spectrum of the facial image. The low-frequency information is
non-Gaussian in the feature space, and by combining several Gaussian
functions with different statistical properties, the best feature
representation can be modelled as a probability density function.
Recognition is performed using the maximum likelihood value computed
from the pre-calculated GMM components. The method is tested on the
FERET dataset and achieves a 92% recognition rate.
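A minimal sketch of the block-wise low-frequency DFT features and per-subject GMM likelihood scoring described above is given below, assuming grayscale face arrays; block size, mask size and mixture count are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: low-frequency DFT block features modelled with per-subject GMMs;
# classification picks the subject whose mixture gives the highest log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def block_spectrum_features(face, block=16, step=8, keep=6):
    """Slide an overlapping window, take the DFT, keep only low-frequency magnitudes."""
    feats = []
    for y in range(0, face.shape[0] - block + 1, step):
        for x in range(0, face.shape[1] - block + 1, step):
            spec = np.fft.fft2(face[y:y+block, x:x+block])
            low = np.abs(spec[:keep, :keep]).ravel()   # rectangular low-pass mask
            feats.append(low)
    return np.vstack(feats)

def train_subject_models(faces_by_subject, n_components=4):
    models = {}
    for subject, faces in faces_by_subject.items():
        X = np.vstack([block_spectrum_features(f) for f in faces])
        models[subject] = GaussianMixture(n_components, covariance_type="diag").fit(X)
    return models

def recognise(face, models):
    X = block_spectrum_features(face)
    return max(models, key=lambda s: models[s].score(X))  # maximum likelihood decision
```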
Abstract: The human face has a fundamental role in the appearance
of individuals, so the importance of facial surgery is undeniable.
Thus, there is a need for appropriate and accurate facial skin
segmentation in order to extract different features. Since the Fuzzy
C-Means (FCM) clustering algorithm does not work well for noisy
images and outliers, in this paper we exploit the Possibilistic
C-Means (PCM) algorithm to segment the facial skin. For this
purpose, first, we convert facial images from RGB to YCbCr color
space. To evaluate the performance of the proposed algorithm, the
database of Sahand University of Technology, Tabriz, Iran was used.
To provide a better understanding of the proposed algorithm, the
FCM and Expectation-Maximization (EM) algorithms are also applied to
facial skin segmentation. The proposed method shows better
results than the other segmentation methods. Results include
misclassification error (0.032) and the region’s area error (0.045) for
the proposed algorithm.
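As an illustration of the clustering step described above, a minimal sketch follows, assuming BGR input images and OpenCV for the YCbCr conversion; the initialisation and the scale parameters eta are simplified assumptions (the paper would normally initialise them from an FCM run).

```python
# Hedged sketch of Possibilistic C-Means on CbCr chrominance values.
import numpy as np
import cv2

def pcm_segment(image_bgr, n_clusters=2, m=2.0, iters=50, seed=0):
    ycbcr = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb).astype(float)
    X = ycbcr[..., 1:].reshape(-1, 2)                     # use Cr, Cb only
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)   # squared distances to centers
    eta = d2.mean(0)                                       # crude per-cluster scale estimate
    for _ in range(iters):
        typ = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))  # typicality values
        w = typ ** m
        centers = (w.T @ X) / w.sum(0)[:, None]
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = typ.argmax(1).reshape(image_bgr.shape[:2])
    return labels, centers
```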
Abstract: Face and facial expressions play essential roles in
interpersonal communication. Most current work on facial
expression recognition attempts to recognize a small set of
prototypic expressions such as happy, surprise, anger, sad, disgust
and fear. However, most human emotions are communicated by
changes in only one or two discrete features. In this
paper, we develop a facial expression synthesis system based on
facial characteristic point (FCP) tracking in frontal image
sequences. The selected FCPs are automatically tracked using a
cross-correlation-based optical flow. The proposed synthesis system
uses a simple deformable facial feature model with a small set of
control points that can be tracked in the original facial image sequences.
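A minimal sketch of cross-correlation-based FCP tracking between consecutive frames is given below, using normalised template matching as a stand-in for the paper's optical flow formulation; patch and search-window sizes are assumptions.

```python
# Hedged sketch: tracking facial characteristic points (FCPs) between frames with
# normalised cross-correlation template matching.
import numpy as np
import cv2

def track_fcps(prev_gray, next_gray, points, patch=11, search=25):
    """Return updated (x, y) positions of each FCP in the next frame."""
    half_p, half_s = patch // 2, search // 2
    updated = []
    for (x, y) in points:
        x, y = int(round(x)), int(round(y))
        tmpl = prev_gray[y-half_p:y+half_p+1, x-half_p:x+half_p+1]
        win  = next_gray[y-half_s:y+half_s+1, x-half_s:x+half_s+1]
        if tmpl.size == 0 or win.shape[0] < patch or win.shape[1] < patch:
            updated.append((x, y))                         # keep old position near borders
            continue
        score = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (dx, dy) = cv2.minMaxLoc(score)           # best-match location in window
        updated.append((x - half_s + dx + half_p, y - half_s + dy + half_p))
    return updated
```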
Abstract: Each year, many people are reported missing in most countries of the world for various reasons, and arrangements have to be made to find them after some time, so investigating agencies are compelled to identify these people using manpower. In many cases, however, an investigation into someone who has been missing for a long time may not succeed: it can be difficult to identify the person from old photographs, because facial appearance may have changed, mainly due to the natural aging process. In forensic medicine, if a dead body is found, investigations must establish whether the corpse belongs to a person who disappeared some time ago. With the passage of time the person's face may have changed, so there should be a mechanism to reveal the person's identity; to make this process easier, we must estimate how the person would look now. To address this problem, this paper presents a way of synthesizing a facial image with aging effects.
Abstract: Rotation or tilt present in an image captured by digital
means can be detected and corrected using an Artificial Neural Network
(ANN) for use with a Face Recognition System (FRS). Principal
Component Analysis (PCA) features of faces at different angles are
used to train an ANN that detects the rotation of an input image,
which is then corrected by a set of operations implemented in another
ANN-based system. The work also deals with the recognition of human
faces using features from the forehead, eyes, nose and mouth as
decision support entities of the system, configured using a
Generalized Feed Forward Artificial Neural Network (GFFANN).
These features are combined to provide a reinforced decision for
verifying a person's identity despite illumination variations. The
complete system, performing facial image rotation detection, correction
and recognition with reinforced decision support, achieves a success
rate in the high 90s (percent).
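As a rough illustration of the rotation-detection stage, the sketch below feeds PCA features into a small neural network that predicts a discrete rotation class, using scikit-learn as a stand-in for the ANN described above; the angle set and network size are assumptions.

```python
# Hedged sketch: PCA eigenface features feeding an MLP that predicts the rotation
# class of a face image; the predicted angle can then be used to rotate the face back.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

ANGLES = [-30, -20, -10, 0, 10, 20, 30]   # assumed discrete rotation classes (degrees)

def train_rotation_detector(face_vectors, angle_labels, n_components=50):
    """face_vectors: (n_samples, n_pixels) flattened grayscale faces."""
    model = make_pipeline(PCA(n_components=n_components),
                          MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000))
    model.fit(np.asarray(face_vectors, dtype=float), angle_labels)
    return model

def detect_and_correct(model, face_vector):
    angle = model.predict(face_vector.reshape(1, -1))[0]
    return -angle    # rotate the image back by this amount to correct the tilt
```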
Abstract: Facial expression analysis plays a significant role in
human-computer interaction. Automatic analysis of human facial
expression is still a challenging problem with many applications. In
this paper, we propose a neuro-fuzzy-based automatic facial expression
recognition system to recognize human facial expressions such as
happy, fear, sad, angry, disgust and surprise. Initially, the facial
image is segmented into three regions, from which uniform Local Binary
Pattern (LBP) texture feature distributions are extracted and
represented as a histogram descriptor. The facial expressions are
recognized using a Multiple Adaptive Neuro-Fuzzy Inference System
(MANFIS). The proposed system is designed and tested with the JAFFE
face database and reports 94.29% classification accuracy.
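A minimal sketch of the region-wise uniform LBP histogram descriptor follows, assuming grayscale face images and three horizontal regions; the MANFIS classifier itself is not reproduced here, and any standard classifier could consume the descriptor.

```python
# Hedged sketch: uniform LBP histograms from three horizontal face regions,
# concatenated into a single descriptor.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_region_descriptor(face_gray, P=8, R=1):
    n_bins = P + 2                                   # uniform LBP yields P+2 pattern labels
    h = face_gray.shape[0]
    regions = (face_gray[:h//3], face_gray[h//3:2*h//3], face_gray[2*h//3:])
    hists = []
    for region in regions:
        lbp = local_binary_pattern(region, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)                     # descriptor for the whole face
```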
Abstract: Nowadays, driving support systems, such as car
navigation systems, are becoming common, and they support drivers in
several respects. It is important for driving support systems to detect
the status of the driver's consciousness. In particular, detecting the
driver's drowsiness could prevent collisions caused by drowsy driving.
In this paper, we discuss various methods and processing techniques for
detecting a driver's drowsiness. The system is based on facial image
analysis and warns the driver of drowsiness or inattention in order
to prevent traffic accidents.
Abstract: The segmentation of mouth and lips is a fundamental
problem in facial image analysis. In this paper we propose a method
for lip segmentation based on the rg-color histogram. Statistical
analysis shows that the rg-color space is optimal for this kind of
pure colour-based segmentation. Initially, a rough adaptive threshold
selects a histogram region that ensures all pixels in that region are
skin pixels. Based on those pixels, we build a Gaussian model that
represents the skin pixel distribution and is used to obtain a
refined, optimal threshold. We do not incorporate shape or edge
information. In experiments we show the performance of our lip pixel
segmentation method compared to the ground truth of our dataset and
a conventional watershed algorithm.
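A minimal sketch of the colour-only pipeline follows: rg-chromaticity conversion, a rough seed threshold, a Gaussian skin model and a refined Mahalanobis-distance threshold. The rough threshold values are illustrative assumptions, not the adaptive threshold of the paper; lip pixels are those falling outside the refined skin mask within the mouth region.

```python
# Hedged sketch: rg-chromaticity Gaussian skin model with a refined threshold.
import numpy as np

def rg_chromaticity(image_rgb):
    rgb = image_rgb.astype(float) + 1e-6
    s = rgb.sum(-1, keepdims=True)
    return (rgb / s)[..., :2]                        # keep only the r and g channels

def skin_mask(image_rgb, rough=((0.35, 0.55), (0.25, 0.40)), maha_thresh=3.0):
    rg = rg_chromaticity(image_rgb)
    (rlo, rhi), (glo, ghi) = rough                   # assumed rough seed thresholds
    seed = (rg[..., 0] > rlo) & (rg[..., 0] < rhi) & (rg[..., 1] > glo) & (rg[..., 1] < ghi)
    skin = rg[seed]                                  # conservative set of skin pixels
    mu, cov = skin.mean(0), np.cov(skin.T)           # 2-D Gaussian skin model
    diff = rg.reshape(-1, 2) - mu
    maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return maha.reshape(rg.shape[:2]) < maha_thresh  # refined skin mask
```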
Abstract: Facial recognition and expression analysis are rapidly
becoming areas of intense interest in the computer science and
human-computer interaction design communities. The most expressive way
humans display emotions is through facial expressions. In this paper,
skin and non-skin pixels are first separated and face regions are
extracted from the detected skin regions. Facial expressions are
analyzed by applying the Gabor wavelet transform (GWT) and the
Discrete Cosine Transform (DCT) to the face images. A Radial Basis
Function (RBF) network is used to identify the person and to classify
the facial expressions. Our method works reliably even with faces
that carry heavy expressions.
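For illustration, a sketch of Gabor-plus-DCT feature extraction is given below, assuming OpenCV and SciPy; kernel parameters and the number of retained DCT coefficients are assumptions, and the RBF network stage is omitted.

```python
# Hedged sketch: a small Gabor filter bank followed by a 2-D DCT; only the top-left
# (low-frequency) DCT coefficients are kept as the feature vector.
import numpy as np
import cv2
from scipy.fft import dctn

def gabor_dct_features(face_gray, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), keep=8):
    face = face_gray.astype(np.float32)
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(face, cv2.CV_32F, kernel)       # Gabor response image
        coeffs = dctn(response, norm="ortho")[:keep, :keep]     # low-frequency DCT block
        feats.append(coeffs.ravel())
    return np.concatenate(feats)
```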
Abstract: The detection of human emotions has many potential applications. One application is to quantify the attentiveness of an audience in order to evaluate the acoustic quality of a concert hall, where subjective audio preferences reported by the audience are normally used. To obtain a fairer evaluation of acoustic quality, this research proposes a system for multimodal emotion detection: one modality is based on brain signals measured using electroencephalography (EEG), and the second modality is a sequence of facial images. In the experiment, a customized audio signal consisting of normal and disordered sounds was played in order to stimulate positive or negative emotional feedback from the volunteers. EEG signals from the temporal lobes, i.e. electrodes T3 and T4, were used to measure the brain response, and the sequence of facial images was used to monitor facial expression while the volunteers listened to the audio signal. From the EEG signal, features were extracted from changes in the brain waves, particularly the alpha and beta bands. Features of facial expression were extracted from an analysis of the motion images. We implement an advanced optical flow method to detect the most active facial muscles from the neutral to other emotional expressions, represented as vector flow maps. To reduce the difficulty of detecting the emotional state, the vector flow maps are transformed into a compass mapping that represents the major directions and velocities of facial movement. The results showed that the power of the beta wave increases when the disordered sound stimulus is given, although each volunteer gave different emotional feedback. Based on the features derived from the facial images, the optical flow compass mapping is promising as additional information for deciding on the emotional feedback.
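As a sketch of the compass-mapping idea, the code below computes dense optical flow between a neutral and an expressive frame and summarises it as a direction histogram weighted by magnitude; Farneback flow is used here as a stand-in for the paper's more advanced flow method, and the bin count is an assumption.

```python
# Hedged sketch: dense optical flow summarised as a "compass map" of major
# facial-movement directions and their total velocities.
import numpy as np
import cv2

def compass_map(neutral_gray, expressive_gray, n_directions=8):
    flow = cv2.calcOpticalFlowFarneback(neutral_gray, expressive_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])      # magnitude, angle in radians
    bins = (ang / (2 * np.pi) * n_directions).astype(int) % n_directions
    compass = np.zeros(n_directions)
    for b in range(n_directions):
        compass[b] = mag[bins == b].sum()          # total motion towards each direction
    return compass / (compass.sum() + 1e-9)        # normalised direction histogram
```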
Abstract: This paper proposes a hybrid method for eyes localization
in facial images. The novelty is in combining techniques
that utilise colour, edge and illumination cues to improve accuracy.
The method is based on the observation that eye regions have dark
colour, high density of edges and low illumination as compared
to other parts of face. The first step in the method is to extract
connected regions from facial images using colour, edge density and
illumination cues separately. Some of the regions are then removed
by applying rules that are based on the general geometry and shape
of eyes. The remaining connected regions obtained through these
three cues are then combined in a systematic way to enhance the
identification of the candidate regions for the eyes. The geometry
and shape based rules are then applied again to further remove the
false eye regions. The proposed method was tested using images from
the PICS facial image database and achieves 93.7% and 87% accuracy
for initial blob extraction and final eye detection, respectively.
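A rough sketch of the cue-combination idea follows: darkness, edge-density and low-illumination maps are thresholded, and candidate eye regions are those supported by at least two cues. The thresholds, window size and voting rule are assumptions, and the geometry-based pruning is omitted.

```python
# Hedged sketch: combining colour (darkness), edge-density and illumination cues
# into a candidate mask for eye regions.
import numpy as np
import cv2

def eye_candidate_mask(face_gray, win=15):
    face = face_gray.astype(np.float32)
    dark = face < np.percentile(face, 20)                          # dark-colour cue
    edges = cv2.Canny(face_gray, 50, 150).astype(np.float32) / 255.0
    edge_density = cv2.boxFilter(edges, -1, (win, win))
    dense = edge_density > np.percentile(edge_density, 80)         # edge-density cue
    illum = cv2.boxFilter(face, -1, (win, win))
    dim = illum < np.percentile(illum, 30)                         # low-illumination cue
    votes = dark.astype(int) + dense.astype(int) + dim.astype(int)
    return votes >= 2                            # keep regions supported by >= 2 cues
```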
Abstract: The evaluation and measurement of human body
dimensions are achieved by physical anthropometry. This research
was conducted in view of the importance of anthropometric indices
of the face in forensic medicine, surgery, and medical imaging. The
main goal of this research is the optimization of facial feature points by
establishing a mathematical relationship among facial features, and the
use of the optimized feature points for age classification. Since the
selected facial feature points are located in the areas of the mouth,
nose, eyes and eyebrows on the facial images, all desired facial feature
points are extracted accurately. In the proposed method, sixteen
Euclidean distances are calculated from the eighteen selected facial
feature points, both vertically and horizontally. Mathematical
relationships among the horizontal and vertical distances are established.
Moreover, it is also found that the facial feature distances follow a
constant ratio during age progression: the distances between the
specified feature points increase as a person ages from childhood, but
the ratio of the distances does not change (d = 1.618). Finally,
according to the proposed mathematical relationship, four independent
feature distances related to eight feature points are selected from the
sixteen distances and eighteen feature points, respectively. These four
feature distances are used for age classification with the Support Vector
Machine (SVM)-Sequential Minimal Optimization (SMO) algorithm,
yielding around 96% accuracy. Experimental results show that the
proposed system is effective and accurate for age classification.
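For illustration, a minimal sketch of the distance features and the SVM stage follows; which four distances are selected here is an illustrative assumption, and scikit-learn's SVC (whose underlying libsvm solver is SMO-based) stands in for the SVM-SMO classifier.

```python
# Hedged sketch: Euclidean distances between selected facial feature points used
# as features for an SVM age classifier.
import numpy as np
from sklearn.svm import SVC

def feature_distances(points, pairs):
    """points: (18, 2) array of (x, y) feature points; pairs: list of index pairs."""
    pts = np.asarray(points, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])

# assumed indices of the four independent distances used for age classification
AGE_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

def train_age_classifier(point_sets, age_labels):
    X = np.vstack([feature_distances(p, AGE_PAIRS) for p in point_sets])
    clf = SVC(kernel="rbf", C=10.0)            # libsvm's SMO-based solver underneath
    clf.fit(X, age_labels)
    return clf
```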
Abstract: A fusion classifier composed of two modules, one made by a hidden Markov model (HMM) and the other by a support vector machine (SVM), is proposed to recognize faces with pose variations in open-set recognition settings. The HMM module captures the evolution of facial features across a subject's face using the subject's facial images only, without reference to the faces of others. Because of this captured evolutionary process of facial features, the HMM module retains a certain robustness against pose variations, yielding low false rejection rates (FRR) when recognizing faces across poses. This is, however, at the price of poor false acceptance rates (FAR) when recognizing other faces, because it is built upon within-class samples only. The SVM module in the proposed model is developed following a special design able to substantially diminish the FAR and further lower the FRR. The proposed fusion classifier has been evaluated using the CMU PIE database and proven effective for open-set face recognition with pose variations. Experiments have also shown that it outperforms a face classifier made by an HMM or SVM alone.
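A minimal sketch of the fusion idea follows, using hmmlearn's GaussianHMM for the per-subject model and an SVM over HMM log-likelihood scores for the accept/reject decision; the observation extraction and the score-fusion details are illustrative assumptions rather than the paper's design.

```python
# Hedged sketch: a per-subject HMM over face observation sequences, with an SVM
# trained on HMM log-likelihood scores to sharpen the open-set accept/reject decision.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

def train_hmm(observation_seqs, n_states=5):
    """observation_seqs: list of (T, d) arrays, one per face of the same subject."""
    X = np.vstack(observation_seqs)
    lengths = [len(s) for s in observation_seqs]
    return GaussianHMM(n_components=n_states, covariance_type="diag").fit(X, lengths)

def train_fusion_svm(hmm, genuine_seqs, impostor_seqs):
    scores = [[hmm.score(s)] for s in genuine_seqs] + [[hmm.score(s)] for s in impostor_seqs]
    labels = [1] * len(genuine_seqs) + [0] * len(impostor_seqs)
    return SVC(kernel="rbf").fit(np.array(scores), labels)

def accept(hmm, svm, seq):
    return bool(svm.predict([[hmm.score(seq)]])[0])   # 1 = accept, 0 = reject
```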
Abstract: Choosing the right metadata is critical, as good
information (metadata) attached to an image will facilitate its
visibility among a pile of other images. The image's value is enhanced
not only by the quality of attached metadata but also by the technique
of the search. This study proposes a technique that is simple but
efficient to predict a single human image from a website using the
basic image data and the embedded metadata of the image's content
appearing on web pages. The result is very encouraging with the
prediction accuracy of 95%. This technique may become a great aid
to librarians, researchers and many others for automatically and
efficiently identifying a set of human images out of a larger set of
images.