Abstract: Recognizing human action from videos is an active
field of research in computer vision and pattern recognition. Human
activity recognition has many potential applications, such as video
surveillance, human-machine interaction, sports video retrieval, and
robot navigation. Currently, local descriptors and bag-of-visual-words
models achieve state-of-the-art performance for human action
recognition. The main challenge in feature description is how to
represent local motion information efficiently. Most previous work
focuses on extending 2D local descriptors to 3D ones that describe
the local information around every interest point. In this
paper, we propose a new spatio-temporal descriptor based on a
space-time description of moving points. Our description builds on an
Accordion representation of the video, which is well suited to
recognizing human actions with 2D local descriptors, without the need
for 3D extensions. We use the bag-of-words approach to represent
videos, quantizing a 2D local descriptor that captures both temporal
and spatial features with a good compromise between computational
complexity and action recognition rates. We achieve impressive results
on a publicly available action dataset.
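To make the bag-of-words step concrete, here is a minimal sketch of the
quantization stage, assuming the 2D local descriptors have already been
extracted (the Accordion representation itself is the paper's
contribution and is not reproduced here). The function names and the
scikit-learn k-means codebook are illustrative assumptions, not the
authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_descriptors, k=200):
    # all_descriptors: (N, d) array of local 2D descriptors pooled from
    # training videos; extraction method is assumed. k visual words is
    # an arbitrary illustrative choice.
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_descriptors)

def bow_histogram(codebook, video_descriptors):
    # Assign each descriptor to its nearest visual word, then build a
    # normalized occurrence histogram. This fixed-length vector is what
    # a downstream classifier (e.g. an SVM) would consume.
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```

The histogram normalization makes videos of different lengths
comparable, which is one reason bag-of-words models trade temporal
ordering for a compact, fixed-size representation.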
Abstract: It is important for an autonomous mobile robot to know
where it is at any time in an indoor environment. In this paper, we
design a relative self-localization algorithm. The algorithm compares
the interest points in two images and computes the relative
displacement and orientation to determine the pose. First, we use the
SURF algorithm to extract interest points from the ceiling. Second, in
order to reduce the amount of calculation, a faster replacement for
standard SURF is used to extract the orientation and description of
the interest points. Finally, from the transformation of the interest
points between the two images, the relative self-localization of the
mobile robot can be estimated.
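For readers unfamiliar with this kind of pipeline, the following is a
minimal sketch of recovering relative displacement and orientation from
two ceiling images. It uses OpenCV's stock SURF (available in
opencv-contrib builds) and a RANSAC rigid fit as stand-ins; the paper's
faster SURF replacement is not public, so the function name and
parameter values here are illustrative assumptions.

```python
import cv2
import numpy as np

def relative_pose(img1, img2):
    # Detect SURF keypoints and descriptors in both ceiling images.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # A 2D similarity fit recovers rotation and translation between the
    # two views; RANSAC rejects mismatched points. M is a 2x3 matrix
    # (None if too few consistent matches survive).
    M, _ = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
    if M is None:
        raise ValueError("not enough consistent matches")
    theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))  # relative orientation
    tx, ty = M[0, 2], M[1, 2]                         # relative displacement
    return theta, (tx, ty)
```

Because the camera points at the ceiling, the motion between frames is
approximately a planar rigid transform, which is why a 2D
rotation-plus-translation fit suffices to estimate the robot's relative
pose.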