Abstract: One of the popular methods for recognizing facial
expressions such as happiness, sadness and surprise is based on the
deformation of facial features. The motion vectors that describe these
deformations can be obtained from the optical flow. In this method,
emotions are detected by comparing the resulting set of motion vectors
with standard deformation templates caused by facial expressions.
In this paper, a new method is introduced that computes a degree of
similarity, so that decisions can be made based on the importance of
the vectors obtained from an optical flow approach. To find the
vectors, the efficient optical flow method developed by
Gautama and VanHulle [17] is used. The suggested method has been
evaluated on the Cohn-Kanade AU-Coded Facial Expression Database,
one of the most comprehensive collections of test images available.
The experimental results show that our method correctly recognizes
the facial expressions in 94% of the case studies. The results
also show that only a small number of image frames (three frames) is
sufficient to detect facial expressions with a success rate of about
83.3%. This is a significant improvement over the available methods.
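The template comparison this abstract describes can be sketched as a similarity score between an observed motion-vector field and per-expression templates. The following is a minimal illustration under stated assumptions: vectors are weighted equally (the paper's importance-based weighting is not specified here), and the Gautama-VanHulle flow computation itself is not reproduced.

```python
import numpy as np

def flow_similarity(flow, template, eps=1e-8):
    """Cosine similarity between an observed motion-vector field and a
    deformation template, both (H, W, 2) arrays of (dx, dy) vectors.
    Equal weighting of vectors is an assumption of this sketch."""
    f = flow.reshape(-1)
    t = template.reshape(-1)
    return float(np.dot(f, t) / (np.linalg.norm(f) * np.linalg.norm(t) + eps))

def classify_expression(flow, templates):
    """Pick the expression whose deformation template best matches the
    observed flow field (templates: dict name -> (H, W, 2) array)."""
    scores = {name: flow_similarity(flow, tpl) for name, tpl in templates.items()}
    return max(scores, key=scores.get)

# Illustrative templates: uniform upward vs. downward motion fields.
up = np.zeros((4, 4, 2)); up[..., 1] = -1.0
down = np.zeros((4, 4, 2)); down[..., 1] = 1.0
observed = 0.9 * up                      # a flow field resembling "up"
print(classify_expression(observed, {"up": up, "down": down}))  # → up
```

A real system would build the templates from labeled training sequences rather than hand-coded fields.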
Abstract: In this work we develop an object extraction method
and propose efficient algorithms for characterizing object motion.
The set of proposed tools serves as a basis for developing
object-based functionalities for manipulating video content. The
estimates produced by the different algorithms are compared in terms
of quality and performance and are tested on real video sequences.
The proposed method will be useful for the latest standards for
encoding and describing multimedia content, MPEG-4 and MPEG-7.
Abstract: A realistic 3D face model is desired in various
applications such as face recognition, games, avatars, and animation.
Constructing a 3D face model consists of 1) building a face shape
model and 2) rendering the face shape model. Building a realistic 3D
face shape model is therefore an essential step toward a realistic 3D
face model. Recently, the 3D morphable model has been successfully
introduced to deal with the variety of human face shapes. The 3D dense
correspondence problem must first be resolved in order to construct a
realistic 3D dense morphable face shape model. Several approaches to
the 3D dense correspondence problem in 3D face modeling have been
proposed, among which optical flow based algorithms and TPS
(Thin Plate Spline) based algorithms are representative. Optical flow
based algorithms require the texture information of faces, which is
sensitive to variation in illumination. In the TPS based algorithms
proposed so far, the TPS process is performed on a 2D projection of
the 3D face data in cylindrical coordinates, not directly on the 3D
face data, so errors due to distortion of the data during the 2D TPS
process may be inevitable.
In this paper, we propose a new 3D dense correspondence algorithm
for 3D dense morphable face shape modeling. The proposed algorithm
does not need texture information and applies TPS directly to the 3D
face data. The construction procedure shows that the proposed
algorithm builds a realistic 3D morphable face model reliably and
quickly.
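For context on the TPS machinery this abstract builds on, the classic 2D thin plate spline interpolation (kernel U(r) = r² log r) can be sketched as below. This is the textbook 2D formulation, not the paper's direct-on-3D variant; the control points and values are illustrative.

```python
import numpy as np

def tps_kernel(r):
    """2D TPS radial basis U(r) = r^2 log r, with U(0) defined as 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(src, values):
    """Solve the TPS interpolation system for control points `src`
    (N, 2) and target values (N,): [[K, P], [P^T, 0]] [w; a] = [v; 0]."""
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), src])      # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.concatenate([values, np.zeros(3)])
    coef = np.linalg.solve(A, b)
    return coef[:n], coef[n:]                  # RBF weights, affine coeffs

def tps_eval(src, w, a, pts):
    """Evaluate the fitted spline at query points `pts` (M, 2)."""
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return tps_kernel(d) @ w + a[0] + pts @ a[1:]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0, 3.0])
w, a = tps_fit(src, vals)
# The spline interpolates the control values exactly.
print(tps_eval(src, w, a, src))
```

A 3D correspondence pipeline would fit one such spline per coordinate (or use the 3D kernel) to warp a template mesh onto each scanned face.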
Abstract: Motion estimation is a key problem in video
processing and computer vision. Optical flow motion estimation can
achieve high estimation accuracy when the motion vectors are small.
The three-step search algorithm can handle large motion vectors but
is not very accurate. In this paper, a joint algorithm is proposed
that achieves high estimation accuracy regardless of whether the
motion vectors are small or large, while keeping the computational
cost much lower than that of a full search.
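The three-step search mentioned above can be sketched in a few lines. This is the textbook block-matching variant with a SAD cost and a step size halved each round; the paper's exact block size and cost function are not specified, so the parameters below are illustrative.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).sum()

def three_step_search(ref, cur, top, left, bsize=8, step=4):
    """Find where the block of `cur` at (top, left) came from in `ref`.
    Each round tests the 8 neighbours at the current step size, moves to
    the best one, and halves the step. Returns the motion vector (dy, dx)."""
    block = cur[top:top + bsize, left:left + bsize]
    by, bx = top, left
    while step >= 1:
        best = (sad(ref[by:by + bsize, bx:bx + bsize], block), by, bx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + dy, bx + dx
                if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                    c = sad(ref[y:y + bsize, x:x + bsize], block)
                    if c < best[0]:
                        best = (c, y, x)
        _, by, bx = best
        step //= 2
    return by - top, bx - left

# A smooth bump shifted by (+3, -2) between frames: the search recovers
# the displacement of the block back into the reference frame.
yy, xx = np.mgrid[0:40, 0:40]
ref = np.exp(-((yy - 20.0) ** 2 + (xx - 20.0) ** 2) / 60.0)
cur = np.exp(-((yy - 23.0) ** 2 + (xx - 18.0) ** 2) / 60.0)
print(three_step_search(ref, cur, 16, 16))  # → (-3, 2)
```

With step=4 the search covers displacements up to ±7 in 25 SAD evaluations, versus 225 for an exhaustive ±7 full search, which is the cost saving the abstract refers to.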
Abstract: A vision-based tracking problem is solved through a
combination of optical flow, a MACH filter, and log r-θ mapping.
Optical flow is used to detect regions of movement in video
frames acquired under variable lighting conditions. The region of
movement is segmented and then searched for the target. A template
is used for target recognition on the segmented regions to detect
the region of interest. The template is trained offline on a sequence
of target images created using the MACH filter and log r-θ
mapping. The template is applied to areas of movement in
successive frames, and strong correlation is observed for in-class
targets. Correlation peaks above a certain threshold indicate the
presence of the target, which is then tracked over successive frames.
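The correlation-peak decision rule at the end of this abstract can be illustrated with plain normalized cross-correlation. MACH filter training and the log r-θ mapping are not reproduced here; the threshold value is an assumption of this sketch.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of `template` at every valid offset
    of `image` (both 2D, template no larger than image)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            out[y, x] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out

def detect_target(region, template, threshold=0.9):
    """Return (peak_value, (y, x)) when the correlation peak clears the
    threshold, else None -- the presence test described above."""
    c = ncc_map(region, template)
    peak = np.unravel_index(np.argmax(c), c.shape)
    return (c[peak], peak) if c[peak] >= threshold else None

rng = np.random.default_rng(0)
region = rng.random((20, 20))                 # a segmented moving area
template = region[5:13, 7:15].copy()          # target planted at (5, 7)
print(detect_target(region, template))        # peak 1.0 at (5, 7)
```

In practice the template would come from the offline MACH/log r-θ training stage, and an FFT-based correlation would replace the explicit double loop.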
Abstract: Detecting objects in a video sequence is a challenging
task that underlies the identification and tracking of moving
objects. Background removal is considered a basic step in moving
object detection. Two static cameras placed at the front and rear of
a moving platform gather the information used to detect objects. The
background changes with the speed and direction of the moving
platform, which makes distinguishing moving objects complicated. In
this paper, we propose a framework that detects moving objects
dynamically under a variety of speeds and directions. The object
detection technique is built on two levels: the first level applies
background removal and edge detection to generate moving areas; the
second level applies a Moving Areas Filter (MAF) and then calculates
a Correlation Score (CS) for each adjusted moving area. Moving areas
with close CS values are merged and marked as a moving object.
Experimental results were produced on real scenes acquired by two
static cameras with no overlap between their views. The results show
the accuracy of the proposed method in detecting objects compared
with optical flow and the Mixture of Gaussians (MMG) method, and an
accuracy ratio is produced to measure how accurately moving objects
are detected.
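The first-level background removal and the per-area correlation score can be sketched as follows. The paper's exact CS definition is not given, so Pearson correlation of pixel intensities is used here as a stand-in, and the difference threshold is illustrative.

```python
import numpy as np

def moving_areas(prev, cur, diff_thresh=25):
    """First-level step, sketched as simple frame differencing: flag
    pixels whose intensity changes by more than `diff_thresh`."""
    return np.abs(cur.astype(np.int16) - prev.astype(np.int16)) > diff_thresh

def correlation_score(a, b):
    """Hypothetical Correlation Score (CS) between two moving areas of
    equal size: Pearson correlation of their pixel intensities."""
    az, bz = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(az) * np.linalg.norm(bz)
    return float((az * bz).sum() / denom) if denom > 0 else 0.0

prev = np.zeros((20, 20), dtype=np.uint8)
cur = prev.copy()
cur[5:10, 5:10] = 100                       # a 5x5 patch starts moving
mask = moving_areas(prev, cur)
print(mask.sum())                           # → 25 flagged pixels
```

The second level would then merge masked regions whose CS values are close, as the abstract describes.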
Abstract: This paper proposes a way of processing SURF and
Optical Flow in parallel for moving object recognition and tracking.
Object recognition and tracking is one of the most important tasks
in computer vision, but its disadvantage is that the many operations
involved slow down processing, so real-time object recognition and
tracking is not possible. The proposed method uses SURF, a typical
feature extraction technique, together with Optical Flow for moving
objects to reduce this disadvantage and achieve real-time moving
object recognition and tracking, and applies parallel processing
techniques to improve speed. First, images from a database and images
acquired through the camera are analyzed with SURF and compared to
recognize the same object; an ROI (Region of Interest) is then set
for tracking the movement of the feature points using Optical Flow.
Second, multi-threading is used to improve processing speed and
recognition through parallel processing. Finally, the performance is
evaluated and the efficiency of the algorithm is verified through
experiments.
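The Optical Flow tracking stage can be illustrated with a single-window Lucas-Kanade estimate (the SURF recognition and multi-threading stages are not reproduced). This sketch solves the standard normal equations over one window; real trackers apply it per feature point with pyramids.

```python
import numpy as np

def lucas_kanade(prev, cur):
    """Estimate one (dx, dy) translation for a window by solving the
    Lucas-Kanade normal equations A [dx, dy] = -[sum IxIt, sum IyIt]."""
    Ix = np.gradient(prev, axis=1)            # spatial derivatives
    Iy = np.gradient(prev, axis=0)
    It = cur.astype(np.float64) - prev.astype(np.float64)  # temporal
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy

# A smooth bump shifted by (dx, dy) = (0.4, 0.2) between frames.
yy, xx = np.mgrid[0:21, 0:21]
prev = np.exp(-((yy - 10.0) ** 2 + (xx - 10.0) ** 2) / 18.0)
cur = np.exp(-((yy - 10.2) ** 2 + (xx - 10.4) ** 2) / 18.0)
print(lucas_kanade(prev, cur))                # ≈ (0.4, 0.2)
```

The estimate is accurate only for sub-pixel to small motions, which is exactly why the abstract pairs it with SURF-based recognition for the larger-scale decisions.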
Abstract: This paper presents an algorithm for the recognition
and tracking of moving objects; a 1/10 scale model car is used to
verify the algorithm's performance. The presented algorithm is as
follows. The SURF algorithm is merged with the Lucas-Kanade
algorithm. The SURF algorithm is robust to changes in contrast, size,
and rotation and can recognize objects, but it is slow due to its
computational complexity. The Lucas-Kanade algorithm is fast, but it
cannot recognize objects; its optical flow compares the previous and
current frames so that it can track the movement of a pixel. A fusion
algorithm was created to combine the two, and a Kalman filter with
accumulated-error compensation was implemented to solve the problems
that arise when the two algorithms are fused: the Kalman filter
estimates the next location and compensates for the accumulated
error. The resolution of the camera (vision sensor) is fixed at
640x480. To verify the performance of the fusion algorithm, it is
compared to the SURF algorithm in three situations: driving straight,
driving along a curve, and recognizing cars behind obstacles. A
situation similar to actual driving is made possible by using a model
vehicle. The proposed fusion algorithm showed better performance and
accuracy than existing object recognition and tracking algorithms. We
will improve the performance of the algorithm so that it can be
tested on images of an actual road environment.
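The Kalman-filter role described above, predicting the next location and correcting it with each measurement, can be sketched with a constant-velocity model. The paper's state model and noise settings are not given, so the 1-D state and the q, r values here are illustrative.

```python
import numpy as np

class ConstantVelocityKalman:
    """1-D constant-velocity Kalman filter with state [position, velocity].
    Minimal predict/update cycle; q and r are illustrative assumptions."""
    def __init__(self, dt=1.0, q=1e-3, r=0.5):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # observe position only
        self.Q = q * np.eye(2)                       # process noise
        self.R = np.array([[r]])                     # measurement noise
        self.x = np.zeros(2)                         # state estimate
        self.P = np.eye(2) * 10.0                    # initial uncertainty

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                             # predicted position

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ (np.array([z]) - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                             # corrected position

# Track an object moving 2 units per frame; after converging, the filter
# predicts the next position close to the true value of 40.
kf = ConstantVelocityKalman()
for t in range(20):
    kf.predict()
    kf.update(2.0 * t)
print(kf.predict())
```

In the fusion setting, the `update` measurement would come from the SURF/Lucas-Kanade tracker, and the residual between prediction and measurement drives the accumulated-error compensation.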