Abstract: The large pose discrepancy is one of the critical
challenges in face recognition during video surveillance. Due to
the entanglement of pose attributes with identity information, the
conventional approaches for pose-independent representation fall
short in recognizing faces with large pose variations. In
this paper, we propose a practical approach to disentangle the pose
attribute from the identity information followed by synthesis of a face
using a classifier network in latent space. The proposed approach
employs a modified generative adversarial network framework
consisting of an encoder-decoder structure embedded with a classifier
in manifold space for carrying out factorization on the latent
encoding. It can be further generalized to other face and non-face
attributes for real-life video frames containing faces with significant
attribute variations. Experimental results and comparison with state
of the art in the field prove that the learned representation of the
proposed approach synthesizes more compelling perceptual images
through a combination of adversarial and classification losses.
Abstract: We developed a multi-camera control system with which one cameraman can operate several cameras in a compact studio. We analyzed the workflow of a cameraman during program shootings with two cameras and identified the most demanding tasks. The system is based on a dynamic workflow that adapts to the program's progress and gives recommendations to the cameraman. We automate multi-camera control by modeling the studio environment, and perform automatic camera adjustment for a suitable angle of view using face detection. An experiment during a real program shooting showed that one cameraman can sufficiently carry out the shooting task.
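The face-detection-based framing step ("automatic camera adjustment for suitable angle of view with face detection") can be sketched as a simple proportional controller in Python; the function name, gain, and control law below are illustrative assumptions, not the system's actual implementation:

```python
def pan_tilt_correction(face_box, frame_w, frame_h, gain=0.1):
    """Given a detected face bounding box (x, y, w, h), return
    (pan, tilt) offsets, as fractions of the frame size, that move
    the face centre toward the frame centre.  The gain damps the
    motion so the camera does not overshoot between frames."""
    x, y, w, h = face_box
    face_cx = x + w / 2.0
    face_cy = y + h / 2.0
    # normalised error in [-0.5, 0.5]: positive pan means pan right
    pan = gain * (face_cx / frame_w - 0.5)
    tilt = gain * (face_cy / frame_h - 0.5)
    return pan, tilt
```

A face already centred yields zero correction; a face near the right edge yields a positive pan, nudging the camera toward it on the next frame.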
Abstract: This paper presents a new technique for detection of
human faces within color images. The approach relies on image
segmentation based on skin color, features extracted from the two-dimensional
discrete cosine transform (DCT), and self-organizing
maps (SOM). After candidate skin regions are extracted, feature
vectors are constructed using DCT coefficients computed from those
regions. A supervised SOM training session is used to cluster feature
vectors into groups, and to assign “face” or “non-face” labels to those
clusters. Evaluation was performed using a new image database of
286 images, containing 1027 faces. After training, our detection
technique achieved a detection rate of 77.94% during subsequent
tests, with a false positive rate of 5.14%. To our knowledge, the
proposed technique is the first to combine DCT-based feature
extraction with a SOM for detecting human faces within color
images. It is also one of a few attempts to combine a feature-invariant
approach, such as color-based skin segmentation, together with
appearance-based face detection. The main advantage of the new
technique is its low computational requirements, in terms of both
processing speed and memory utilization.
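The DCT feature-extraction step can be sketched in Python with NumPy; the block size, the number of coefficients kept, and the function names below are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square image block, built from
    the 1-D DCT basis matrix (no SciPy needed)."""
    n = block.shape[0]
    k = np.arange(n)
    # basis[u, x] = c(u) * cos(pi * (2x + 1) * u / (2n))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis *= np.sqrt(2.0 / n)
    basis[0, :] = np.sqrt(1.0 / n)
    return basis @ block @ basis.T

def dct_feature_vector(region, num_coeffs=10):
    """Low-frequency DCT coefficient magnitudes of a candidate skin
    region; vectors like this would be clustered by the SOM."""
    coeffs = dct2(region.astype(float))
    flat = np.abs(coeffs[:4, :4]).ravel()   # keep only low frequencies
    return flat[:num_coeffs]
```

Keeping only low-frequency coefficients captures the coarse intensity structure of the region while discarding fine detail, which keeps the feature vector, and hence SOM training, cheap.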
Abstract: In this study, a Loop Back Algorithm for connected
component labeling for detecting objects in a digital image is
presented. The approach uses a loop-back connected component
labeling algorithm that enables the system to distinguish detected
objects according to their labels. Unlike the whole-window
scanning technique, this technique reduces the search time for
locating an object by focusing on suspected objects based on
certain predefined features. In this study, the approach was also
implemented in a face detection system. Face detection is becoming
an active research area, since many devices and systems require
face detection for various purposes. The input can be a still image
or video, so the subprocesses of the system must be simple, efficient,
and accurate to give good results.
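A common baseline for the labeling step the abstract builds on is the classic two-pass connected component algorithm with union-find; the sketch below (Python/NumPy, 4-connectivity) shows that baseline, not the paper's loop-back variant itself:

```python
import numpy as np

def label_components(binary):
    """Two-pass connected-component labelling (4-connectivity).
    Returns a label image where each connected object gets a
    distinct positive integer, so later stages can examine each
    candidate object by its label."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                      # union-find forest; index 0 unused
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if up == 0 and left == 0:       # start a new component
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                nbrs = [l for l in (up, left) if l]
                labels[y, x] = min(nbrs)
                if len(nbrs) == 2:          # two runs meet: merge them
                    ra, rb = find(up), find(left)
                    parent[max(ra, rb)] = min(ra, rb)
    # second pass: flatten equivalences and renumber compactly
    remap = {}
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                r = find(labels[y, x])
                labels[y, x] = remap.setdefault(r, len(remap) + 1)
    return labels
```

Once each object carries a unique label, the search for face candidates can be restricted to the labeled regions instead of scanning the whole window.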
Abstract: Face detection and recognition has many applications
in a variety of fields such as security system, videoconferencing and
identification. Face classification is currently implemented in
software. A hardware implementation allows real-time processing,
but at a higher cost and time-to-market.
The objective of this work is to implement a classifier based on
an MLP (Multi-Layer Perceptron) neural network for face detection.
The MLP is used to classify face and non-face patterns. The system is
described in C on a P4 (2.4 GHz) to extract the weight
values. A hardware implementation is then achieved using a VHDL-based
methodology. We target a Xilinx FPGA as the implementation
platform.
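The face/non-face MLP classification can be illustrated with a minimal forward pass in Python/NumPy; the class name and the one-hidden-layer shape are assumptions for the sketch, standing in for the trained weights that the software phase would export to the FPGA:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MLP:
    """Minimal one-hidden-layer perceptron of the kind described
    for face / non-face classification.  The weights would come
    from the software training phase before being mapped to
    hardware."""
    def __init__(self, w1, b1, w2, b2):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

    def predict(self, x):
        hidden = sigmoid(x @ self.w1 + self.b1)
        out = sigmoid(hidden @ self.w2 + self.b2)
        return out                     # > 0.5 -> "face"
```

The forward pass is just two matrix products and two activation lookups per window, which is what makes a pipelined VHDL realization attractive for real-time operation.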
Abstract: To increase the reliability of a face recognition system,
the system must be able to distinguish a real face from a copy of a face,
such as a photograph. In this paper, we propose a fast and memory-efficient
method of live face detection for embedded face recognition system,
based on the analysis of the movement of the eyes. We detect eyes in
sequential input images and calculate variation of each eye region to
determine whether the input face is a real face or not. Experimental
results show that the proposed approach is competitive and promising
for live face detection.
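The eye-region variation test can be sketched as follows (Python/NumPy); the score definition and the threshold are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def eye_region_variation(frames, eye_box):
    """Mean absolute frame-to-frame change inside one eye region.
    A real face blinks, so this score is high; a printed photograph
    yields a score near zero.  frames is a sequence of grayscale
    images; eye_box is (x, y, w, h) from an eye detector."""
    x, y, w, h = eye_box
    regions = [f[y:y + h, x:x + w].astype(float) for f in frames]
    diffs = [np.abs(b - a).mean() for a, b in zip(regions, regions[1:])]
    return float(np.mean(diffs))

def is_live(frames, eye_box, threshold=5.0):
    """Threshold is an illustrative value, not from the paper."""
    return eye_region_variation(frames, eye_box) > threshold
```

Because only two small eye regions are differenced per frame pair, the check needs no stored models, which suits the embedded, memory-constrained setting the abstract targets.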
Abstract: This paper proposes a new approach to the problem
of real-time face detection. The proposed method combines the
primitive Haar-Like feature with a variance value to construct a new
feature, the so-called Variance based Haar-Like feature. A face in an
image can be represented with a small number of features using this
new feature. We used SVM instead of AdaBoost for training and
new feature. We used SVM instead of AdaBoost for training and
classification. We made a database containing 5,000 face samples
and 10,000 non-face samples extracted from real images for training
purposes. The 5,000 face samples include images captured under widely
varying lighting conditions. Experiments showed that a
face detection system using Variance based Haar-Like feature and
SVM can be much more efficient than face detection system using
primitive Haar-Like feature and AdaBoost. We tested our method on
two face databases and one non-face database. We obtained a
96.17% correct detection rate on the YaleB face database, which is
4.21% higher than that obtained using the primitive Haar-Like feature
with AdaBoost.
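Both the Haar-like rectangle sums and the window variance can be computed in constant time from integral images; the sketch below pairs a two-rectangle feature with the window variance, but the exact way the paper combines the two values is not stated, so that pairing is an assumption:

```python
import numpy as np

def integral(img):
    """Summed-area table with a zero top row / left column, so the
    sum over any rectangle costs four lookups."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, x, y, w, h):
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def variance_haar_feature(img, x, y, w, h):
    """Illustrative variance-augmented two-rectangle Haar-like
    feature: left half minus right half, plus the window variance.
    The variance needs a second integral image over the squared
    pixels, so it is also a constant-time lookup."""
    img = img.astype(float)
    ii, ii2 = integral(img), integral(img ** 2)
    n = w * h
    mean = rect_sum(ii, x, y, w, h) / n
    var = rect_sum(ii2, x, y, w, h) / n - mean ** 2
    half = w // 2
    haar = rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, w - half, h)
    return haar, var
```

The variance term makes the feature sensitive to overall contrast in the window, which is one plausible reason for robustness to the lighting variation the abstract emphasizes.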
Abstract: In the past years a lot of effort has been made in the
field of face detection. The human face contains important features
that can be used by vision-based automated systems in order to
identify and recognize individuals. Face location, the primary step of
the vision-based automated systems, finds the face area in the input
image. An accurate location of the face is still a challenging task.
The Viola-Jones framework has been widely used by researchers in order
to detect the location of faces and objects in a given image. Face
detection classifiers are shared by public communities, such as
OpenCV. An evaluation of these classifiers will help researchers to
choose the best classifier for their particular need. This work focuses
on the evaluation of face detection classifiers with respect to facial
landmarks.