Abstract: Call centers have been expanding and increasingly influence activity in a variety of markets. Call center work is known as one of the most demanding and stressful jobs. In this paper, we propose a fatigue detection system that detects burnout of call center agents from signs of neck pain and upper back pain. The proposed system is based on a computer vision technique that combines skin color detection with the Viola-Jones object detector. To recognize hand poses caused by signs of stress, the YCbCr color space is used to detect skin-colored regions, including the face and the hand poses around areas associated with neck and upper back pain. A cascade of classifiers (Viola-Jones) is used to detect the face and separate it from the rest of the skin-colored region. Hand poses indicating neck pain or upper back pain are then detected by combining the skin color detection and face detection results. System performance is evaluated on two groups of datasets created in the laboratory to simulate a call center environment. Our burnout detection system is implemented with a web camera and processed in MATLAB. In our experiments, the system achieved 96.3% accuracy for upper back pain detection and 94.2% for neck pain detection.
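The YCbCr skin-color thresholding step described in this abstract can be sketched as follows. The conversion is the standard ITU-R BT.601 one; the Cb/Cr ranges are commonly cited defaults, not the thresholds used in the paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    # Threshold the chroma channels; the default ranges are
    # illustrative literature values, not the paper's own.
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Face and hand regions would then be separated within this mask, e.g. by the Viola-Jones detector mentioned above.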
Abstract: Many articles attempt to establish the role of different facial fragments in face recognition, using various approaches to estimate this role. Frequently, authors calculate the entropy corresponding to a fragment, but this approach gives only an approximate estimate. In this paper, we propose a more direct measure of the importance of different fragments for face recognition: select a recognition method and a face database, and experimentally investigate the recognition rate obtained with different fragments of faces. We present two such experiments. We selected the PCNC neural classifier as the face recognition method and parts of the LFW (Labeled Faces in the Wild) face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.
Abstract: Automatic detection of facial feature points plays an important role in applications such as facial feature tracking, human-machine interaction, and face recognition. The majority of facial feature point detection methods using two-dimensional or three-dimensional data are covered in existing survey papers. In this article, selected approaches to facial feature detection are gathered and described. The overview focuses on the class of research that exploits facial feature point detection to represent the facial surface of two-dimensional or three-dimensional faces. In the conclusion, we discuss the advantages and disadvantages of the presented algorithms.
Abstract: A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame, and many algorithms have been proposed for the task. Vector Quantization (VQ) based face recognition is a novel approach. Here, a new codebook generation method for VQ-based face recognition using Integrated Adaptive Fuzzy Clustering (IAFC) is proposed. IAFC is a fuzzy neural network that incorporates a fuzzy learning rule into a competitive neural network. The performance of the proposed algorithm is demonstrated using the publicly available AT&T, Yale, and Indian Face databases, as well as DCSKU, a small face database created in our lab. On all these databases, the proposed approach achieves a higher recognition rate than most existing methods, and it is also better than existing methods in terms of Equal Error Rate (EER).
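As a rough illustration of VQ-based classification, the sketch below builds one codebook per subject and assigns a face (a set of block vectors) to the subject with the lowest quantization error. Plain k-means stands in here for the paper's IAFC clustering, which is not reproduced.

```python
import numpy as np

def kmeans_codebook(vectors, k, iters=20, seed=0):
    # Plain k-means stands in for the paper's IAFC fuzzy-neural clustering.
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook

def classify(vector_blocks, codebooks):
    # Assign the face to the subject whose codebook yields the
    # smallest average quantization error over its block vectors.
    errors = []
    for cb in codebooks:
        d = np.linalg.norm(vector_blocks[:, None] - cb[None], axis=2)
        errors.append(d.min(axis=1).mean())
    return int(np.argmin(errors))
```

In the paper's setting, `vector_blocks` would be small image blocks flattened to vectors; here they are arbitrary feature vectors.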
Abstract: In this work, we present an efficient approach for face recognition in the infrared spectrum. In the proposed approach, physiological features are extracted from thermal images in order to build a unique thermal faceprint. A distance transform is then used to obtain an invariant representation for face recognition. The extracted physiological features are related to the distribution of blood vessels under the skin of the face. This blood vessel network is unique to each individual and can be used in infrared face recognition. The obtained results are promising and show the effectiveness of the proposed scheme.
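The distance-transform step can be illustrated as follows: given a binary vessel mask, each pixel receives its Euclidean distance to the nearest vessel pixel. This brute-force version is for clarity only; it says nothing about the paper's actual implementation (a production version would use a linear-time transform such as `scipy.ndimage.distance_transform_edt`).

```python
import numpy as np

def distance_transform(mask):
    # Brute-force Euclidean distance from every pixel to the
    # nearest True (vessel) pixel of a small binary mask.
    ys, xs = np.nonzero(mask)
    vessels = np.stack([ys, xs], axis=1)
    h, w = mask.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1)
    d = np.linalg.norm(grid[:, :, None, :].astype(float)
                       - vessels[None, None].astype(float), axis=-1)
    return d.min(axis=2)
```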
Abstract: A novel feature selection strategy is proposed in this paper to improve recognition accuracy on faces affected by nonuniform illumination, partial occlusions, and varying expressions. The technique is especially applicable in scenarios where a reliable intra-class probability distribution cannot be obtained because of the small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of the image are maximally in phase; these features are invariant to the brightness and contrast of the image, which makes lighting-invariant face recognition possible. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented. Small sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are arranged in order of increasing distance between the merged sub-regions. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a substantial improvement in classification performance compared to baseline algorithms.
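The criterion function named above (between-class over within-class variance, i.e. the Fisher score) can be computed per feature dimension as in this sketch; the function name and interface are ours, not the paper's.

```python
import numpy as np

def fisher_score(features, labels):
    # Per-dimension ratio of between-class to within-class variance.
    # Higher scores indicate more discriminative features.
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in classes:
        fc = features[labels == c]
        between += len(fc) * (fc.mean(axis=0) - overall_mean) ** 2
        within += ((fc - fc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)
```

Ranking the merged sub-region features by this score, as the abstract describes, amounts to sorting them by `fisher_score` in decreasing order.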
Abstract: This paper presents an implementation of eigenfaces and the Karhunen-Loève algorithm for face recognition. The program assigns a unique identification number to each face under trial. The faces are kept in a database from which any particular face can be matched and retrieved among the available test faces. The Karhunen-Loève algorithm is used to find the face with matching features for a given input test image carrying a unique identification number; the procedure relies on eigenfaces for recognition.
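The eigenface procedure described above follows standard PCA (the Karhunen-Loève transform). A minimal sketch, with a nearest-neighbour match in coefficient space; names and the SVD route are our choices, not details from the paper:

```python
import numpy as np

def eigenfaces(images, k):
    # images: (n_samples, n_pixels) flattened face images.
    # PCA via SVD of the mean-centered data; rows of vt are the
    # principal axes ("eigenfaces").
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, basis):
    # Coefficients of a face in the eigenface basis
    return basis @ (image - mean)

def match(test_image, train_images, mean, basis):
    # Return the index of the nearest training face in coefficient space
    t = project(test_image, mean, basis)
    coeffs = (train_images - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coeffs - t, axis=1)))
```

The returned index would map to the unique identification number mentioned in the abstract.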
Abstract: This paper reports a new pattern recognition approach for face recognition. It is based on the biological model of light receptors in the human eye, cones and rods, and the way they are associated with pattern vision. The functional model is simulated using CWD and WPD. The paper also discusses experiments performed for face recognition using features extracted from images in the AT&T face database. Artificial neural network and k-Nearest Neighbour classifiers are employed for recognition. A feature vector is formed for each face image in the database, and recognition accuracies are computed and compared across the classifiers. Simulation results show that the proposed method outperforms traditional feature extraction methods for pattern recognition in terms of recognition accuracy on face images with pose and illumination variations.
Abstract: Linearization of graph embedding has emerged as an effective dimensionality reduction technique in pattern recognition. However, due to its linear nature, it may not be optimal for nonlinearly distributed real-world data such as faces. We therefore propose a kernelization of graph embedding as a dimensionality reduction technique for face recognition. To further boost the recognition capability of the proposed technique, Fisher's criterion is adopted in the objective function for better data discrimination. The proposed technique is able to characterize the underlying intra-class structure as well as the inter-class separability. Experimental results on the FRGC database validate the effectiveness of the proposed technique as a feature descriptor.
Abstract: Face recognition in the infrared spectrum has attracted considerable interest in recent years. Many of the techniques used in the infrared are based on their visible-spectrum counterparts, especially linear techniques like PCA and LDA. In this work, we introduce a probabilistic Bayesian framework for face recognition in the infrared spectrum, where variations can occur between face images of the same individual due to pose, metabolic state, time changes, and other factors. Bayesian approaches make it possible to reduce intrapersonal variation, which makes them very attractive for infrared face recognition. This framework is compared with classical linear techniques. Nonlinear techniques we developed recently for infrared face recognition are also presented and compared to the Bayesian framework, and a new SVM-based approach for infrared face extraction is introduced. Experimental results show that the Bayesian technique is promising and leads to interesting results in the infrared spectrum when a sufficient number of face images is used in the intrapersonal learning process.
Abstract: We apply a new accelerated algorithm for linear discriminant analysis (LDA) to face recognition with a support vector machine. The new algorithm has the advantage of optimal selection of the step size. Both the gradient descent method and the new algorithm have been implemented in software and evaluated on the Yale face database B. The eigenfaces obtained by these approaches are used to train a k-nearest-neighbor (KNN) classifier, and the recognition rate of the new algorithm is compared with that of gradient descent.
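For a quadratic objective, the optimal step size along the negative gradient has a closed form. The sketch below illustrates that idea of exact step-size selection (steepest descent with exact line search); it is a generic illustration, not the paper's accelerated LDA algorithm.

```python
import numpy as np

def gd_optimal_step(A, b, x0, iters=50):
    # Steepest descent on f(x) = 0.5 x^T A x - b^T x with the
    # exact minimizing step alpha = (g.g) / (g.A.g) along -g.
    x = x0.astype(float)
    for _ in range(iters):
        g = A @ x - b              # gradient of f
        denom = g @ (A @ g)
        if denom == 0:             # already at the minimum
            break
        alpha = (g @ g) / denom    # exact line-search step
        x = x - alpha * g
    return x
```

With a fixed step size, gradient descent can converge slowly or diverge; the closed-form step removes that tuning for quadratics.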
Abstract: In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial detail, since the light template is discarded. To improve on this, a novel approach for realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, light subspaces of different identities are established from the given reference face images; then, by projecting the original and albedo images into each light subspace, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. Based on the projections in the texture subspaces, facial texture under normal lighting can be synthesized. Because the original image is included in the combination, facial details can be preserved along with the face albedo. In addition, image partitioning is applied to improve synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.
Abstract: Gabor-based face representation has achieved enormous success in face recognition. This paper presents a novel algorithm for face recognition using neural networks trained on Gabor features. The system begins by convolving a face image with a series of Gabor filters at different scales and orientations. The two novel contributions of this paper are the scaling of RMS contrast and the introduction of a fuzzily skewed filter. The neural network employed for face recognition is a multilayer perceptron (MLP) trained with the backpropagation algorithm, and it takes the Gabor jet filter responses as input. The effectiveness of the algorithm is demonstrated on a face database with images captured under different illumination conditions.
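A Gabor filter bank over scales and orientations, as described above, can be sketched with the standard real (cosine) Gabor kernel; all parameter values here (kernel size, wavelengths, aspect ratio) are illustrative assumptions, not the paper's.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    # Standard real 2D Gabor filter: Gaussian envelope times a
    # cosine carrier of wavelength lam at orientation theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def gabor_jet(image, wavelengths=(4.0, 8.0), n_orient=4):
    # Convolve the image (circularly, via the FFT) with filters at
    # several scales and orientations, and stack the responses.
    responses = []
    F = np.fft.fft2(image)
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(15, lam / 2, np.pi * k / n_orient, lam)
            K = np.fft.fft2(kern, s=image.shape)
            responses.append(np.fft.ifft2(F * K).real)
    return np.stack(responses)
```

In the paper's pipeline, the stacked responses (the "Gabor jet") would feed the MLP classifier.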
Abstract: This paper considers the problem of face recognition under variable illumination conditions. Most works in the literature exhibit good performance under strictly controlled acquisition conditions, but performance drops drastically when changes in pose and illumination occur, so a number of approaches have recently been proposed to deal with such variability. The aim of this work is to introduce an efficient local appearance feature extraction method based on the steerable pyramid (SP) for face recognition. Local information is extracted from SP sub-bands using the Local Binary Pattern (LBP). The underlying statistics allow us to reduce the amount of data that needs to be stored. Experiments carried out on different face databases confirm the effectiveness of the proposed approach.
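The basic 8-neighbour LBP operator applied to the SP sub-bands can be sketched as follows; this is the standard definition, with neighbourhood ordering and histogram binning chosen by us for illustration.

```python
import numpy as np

def lbp_image(gray):
    # 8-neighbour Local Binary Pattern: each interior pixel is encoded
    # by thresholding its 8 neighbours against the centre value.
    g = np.asarray(gray).astype(int)
    c = g[1:-1, 1:-1]
    code = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(gray, bins=256):
    # Normalized histogram of LBP codes, the local texture descriptor
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)
```

Concatenating such histograms over sub-bands (or image blocks) yields the local appearance feature vector.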
Abstract: We introduce an algorithm based on the morphological shared-weight neural network (MSNN). Being nonlinear and translation-invariant, the MSNN can generalize better during face recognition. Feature extraction is performed on grayscale images using hit-miss transforms, which are independent of gray-level shifts. The output is then learned jointly with the classification process: the feature extraction and classification networks are trained together, allowing the MSNN to simultaneously learn feature extraction and classification for a face. For evaluation, we test robustness under variations in gray level and noise while varying the network's configuration to optimize recognition efficiency and processing time. Results show that the MSNN performs better on grayscale image pattern classification than ordinary neural networks.
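A binary hit-miss transform, the feature extraction operation named above, can be sketched in plain NumPy (the MSNN uses a grayscale generalization; the structuring elements here are illustrative):

```python
import numpy as np

def erode(img, se):
    # Binary erosion: True where every required (se == 1) neighbour
    # of the pixel is foreground. Border is padded with background.
    h, w = se.shape
    ph, pw = h // 2, w // 2
    p = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    out = np.ones_like(img, dtype=bool)
    for dy in range(h):
        for dx in range(w):
            if se[dy, dx]:
                out &= p[dy:dy + img.shape[0],
                         dx:dx + img.shape[1]].astype(bool)
    return out

def hit_miss(img, hit_se, miss_se):
    # Hit-miss transform: the foreground must match hit_se AND the
    # background must match miss_se at the same location.
    b = img.astype(bool)
    return erode(b, hit_se) & erode(~b, miss_se)
```

For example, a single-pixel "hit" element with a surrounding "miss" ring detects isolated foreground pixels.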
Abstract: This paper deals with the implementation of face recognition on low-resolution images using a neural network classifier. The proposed system contains two parts: preprocessing and face classification. The preprocessing part blurs the original image with an average filter and equalizes its histogram (lighting normalization). A bi-cubic interpolation function is then applied to the equalized image to resize it. The resized image is a low-resolution image, which allows faster processing during training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The strength of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions, respectively, and is trained with gradient descent with momentum and adaptive learning rate back-propagation. The algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results give accuracies of 94.50%, 93.00%, and 90.25% for 20, 30, and 40 subjects, respectively, with a processing time of 0.0934 s per image.
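The preprocessing chain (average filter, histogram equalization, downscaling) can be sketched as follows; plain decimation stands in for the paper's bi-cubic interpolation, and the kernel size is illustrative.

```python
import numpy as np

def box_blur(gray, k=3):
    # Average (box) filter, a simple stand-in for the blur step
    pad = k // 2
    p = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)

def equalize_hist(gray):
    # Classic histogram equalization via the cumulative distribution
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[gray.astype(np.uint8)].astype(np.uint8)

def preprocess(gray, step=2):
    # Blur, equalize, then downsample; plain decimation stands in
    # for the bi-cubic interpolation used in the paper.
    eq = equalize_hist(box_blur(gray).astype(np.uint8))
    return eq[::step, ::step]
```

The downsampled output would then be flattened and fed to the single back-propagation network described above.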
Abstract: Because of increasing demands for security in today's society, and because of growing attention to machine vision, biometrics, pattern recognition, and data retrieval in color images, face detection has found more applications. In this article, we present a scientific approach to modeling human skin color and offer an algorithm that detects faces within color images by combining skin features with thresholds determined in the model. The proposed model is based on statistical data in different color spaces. Using the specified color thresholds, the algorithm first divides image pixels into two groups, skin pixels and non-skin pixels, and then decides which area belongs to a face based on geometric features of the face. Two main results follow from this research. First, the proposed model can easily be applied to different databases and color spaces to establish a proper threshold. Second, our algorithm can adapt itself to runtime conditions, and its results show desirable progress in comparison with similar approaches.
Abstract: In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two other multi-resolution transforms, the Wavelet transform (DWT) and the Contourlet transform, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE, and FERET face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
Abstract: In this paper, we propose a robust face relighting technique that uses spherical space properties to reduce illumination effects on face recognition. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the spherical illumination coefficients of the face. First, an internal training illumination database is generated by computing the face albedo and face normals from 2D images taken under different lighting conditions. Based on the generated database, we analyze the target face pixels and compare them with the training bootstrap set using pre-generated tiles. Practical real-time processing speed and small image size were considered when designing the framework. In contrast to other works, our technique requires no 3D face models for training and takes a single 2D image as input. Experimental results on publicly available databases show that the proposed technique works well under severe lighting conditions, with significant improvements in face recognition rates.
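The nine spherical harmonic bases mentioned above have standard closed forms when evaluated at surface normals (constants from the usual real SH convention for Lambertian relighting). This sketch is generic and does not reproduce the paper's tile-based pipeline.

```python
import numpy as np

def sh_basis(normals):
    # First nine real spherical-harmonic basis functions evaluated at
    # unit surface normals (n x 3), one row of 9 values per normal.
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    c = [0.282095, 0.488603, 1.092548, 0.315392, 0.546274]
    return np.stack([
        c[0] * np.ones_like(nx),        # Y_0^0
        c[1] * ny, c[1] * nz, c[1] * nx,  # band 1
        c[2] * nx * ny, c[2] * ny * nz,   # band 2
        c[3] * (3 * nz**2 - 1),
        c[2] * nx * nz,
        c[4] * (nx**2 - ny**2),
    ], axis=1)
```

Given per-pixel albedo and normals, the image under a given illumination is approximated as the albedo times a linear combination of these nine basis images, with the nine illumination coefficients as weights.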
Abstract: In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are important factors that affect the accuracy of facial localization and face recognition. To address these factors, we propose a robust approach consisting of two phases. In the first phase, face images are preprocessed by the proposed illumination normalization method, and the proposed image blending allows facial features to be located more efficiently and quickly. In addition, building on template matching, we improve the active shape model (called IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.