Abstract: Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is strongly affected by the background detection and elimination process. Local regions of an image frame contain vital information about background and foreground; however, pixel-level processing of local regions consumes considerable computational time and memory under traditional approaches. In our approach, we exploit the concurrent computational ability of General Purpose Graphics Processing Units (GPGPU) to address this problem. A Gaussian Mixture Model (GMM) with adaptive weighted kernels is used to detect the background; the kernel weights are influenced by local regions and are updated from inter-frame variations of the corresponding regions. The proposed system has been tested on GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speedup of 10X over the sequential approach.
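The adaptive background modeling this abstract describes can be illustrated with a minimal sketch. The code below uses a single Gaussian per pixel as a deliberate simplification of the paper's full mixture model, runs on the CPU rather than a GPU, and all parameter values (`alpha`, `k`) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One adaptive-background update step: a pixel is foreground when it
    lies more than k standard deviations from the running Gaussian mean.
    (Single Gaussian per pixel -- a simplification of a full GMM.)"""
    frame = frame.astype(np.float64)
    foreground = np.abs(frame - mean) > k * np.sqrt(var)
    # Update the model only where the pixel matched the background.
    match = ~foreground
    mean[match] += alpha * (frame[match] - mean[match])
    var[match] += alpha * ((frame[match] - mean[match]) ** 2 - var[match])
    return foreground

# Static background with one bright "moving object" blob.
h, w = 32, 32
mean = np.full((h, w), 50.0)
var = np.full((h, w), 4.0)
frame = np.full((h, w), 50.0)
frame[10:14, 10:14] = 200.0
fg = update_background(frame, mean, var)   # True only inside the blob
```

Because only background-matching pixels update the model, a briefly visible object does not corrupt the learned background, which is the property that makes such per-pixel models usable for tracking.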
Abstract: In this paper we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacements, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding exact contours with the greedy snake algorithm. The method uses both region and contour information to build the target candidate model, which is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network that separates the target from the background; its output is used both for exact calculation of the size and center of the target and as the initial contour for the greedy snake algorithm, which finds the exact target edge. The proposed algorithm has been tested on a database containing many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
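The particle-filter localization step named above can be sketched in its generic bootstrap form. This is an illustrative stand-in, not the paper's filter: the state is simply a 2D position, and the motion and observation noise parameters are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_std=2.0, obs_std=5.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    estimating a 2D target position."""
    # Predict: diffuse particles with Gaussian motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by a Gaussian observation likelihood.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = np.exp(-d2 / (2 * obs_std ** 2))
    weights /= weights.sum()
    # Resample: draw particles proportionally to weight, reset to uniform.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(0, 100, size=(500, 2))
weights = np.full(500, 1.0 / 500)
target = np.array([30.0, 70.0])
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, target)
estimate = particles.mean(axis=0)   # concentrates near the target
```

In a real tracker the likelihood would come from comparing each candidate region against the target model (here, the region-and-contour model the abstract describes) rather than from a known target position.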
Abstract: Motion detection is a basic operation in the selection of significant segments of a video signal. For effective human-computer intelligent interaction, the computer needs to recognize motion and track the moving object. Here, an efficient neural network system is proposed for motion detection against a static background. The method consists of four parts: frame separation, rough motion detection, network formation and training, and object tracking. The approach can be used to verify real-time detections in defense, biomedical, and robotics applications, and to obtain detection information on the size, location, and direction of motion of moving objects for assessment purposes. The time taken for video tracking by this neural network is only a few seconds.
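The "rough motion detection" stage against a static background is typically frame differencing followed by thresholding; a minimal sketch (the threshold value and helper names are illustrative assumptions, not from the paper):

```python
import numpy as np

def rough_motion_mask(prev_frame, curr_frame, threshold=25):
    """Rough motion detection against a static background: absolute
    frame differencing followed by thresholding."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_bbox(mask):
    """Bounding box (top, left, bottom, right) of detected motion,
    or None when nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

prev = np.zeros((40, 40), dtype=np.uint8)
curr = prev.copy()
curr[5:15, 20:30] = 255              # an object appears here
mask = rough_motion_mask(prev, curr)
box = motion_bbox(mask)              # (5, 20, 14, 29)
```

The bounding box supplies exactly the size-and-location information the abstract mentions; direction of motion follows from comparing boxes across successive frames.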
Abstract: In this paper we present an algorithm that allows object tracking close to real time in Full HD videos. The frame rate (FR) of a video stream is considered to be between 5 and 30 frames per second; real-time track building is achieved if the algorithm can follow 5 or more frames per second. The principal idea is to use fast algorithms during preprocessing to obtain key points and then track them. The point-matching procedure during assignment depends heavily on the number of points, so we must limit the number of points by keeping only the most informative of them.
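The point-budget idea can be sketched as follows: keep only the top-N key points by detector response, since brute-force descriptor matching grows quadratically with the number of points. This is a generic illustration under assumed data shapes, not the paper's specific detector or matcher:

```python
import numpy as np

def top_keypoints(responses, descriptors, n):
    """Keep only the n most informative key points (highest detector
    response); matching cost grows quadratically with point count."""
    order = np.argsort(responses)[::-1][:n]
    return descriptors[order]

def match_descriptors(desc_a, desc_b):
    """Nearest-neighbour assignment by Euclidean distance: for each
    descriptor in desc_a, the index of its best match in desc_b."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
responses = rng.random(200)          # 200 detected points...
desc = rng.random((200, 32))
strong = top_keypoints(responses, desc, 50)            # ...capped at 50
shifted = strong + rng.normal(0, 0.01, strong.shape)   # same points, next frame
matches = match_descriptors(strong, shifted)           # identity matching
```

Cutting 200 points to 50 reduces the pairwise-distance work by a factor of 16, which is the lever the abstract relies on to stay above 5 frames per second.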
Abstract: This paper addresses the problem of recognizing and interpreting the behavior of human workers in industrial environments for the purpose of integrating humans into software-controlled manufacturing environments. In this work we propose a generic concept from which solutions for task-related manual production applications can be derived. We are thus able to use a versatile concept that provides flexible components and is less restricted to a specific problem or application. We instantiate our concept in a spot welding scenario in which the behavior of a human worker is interpreted while performing a welding task with a hand welding gun. We acquire signals from inertial sensors, video cameras, and triggers, and recognize atomic actions using pose data from a marker-based video tracking system and movement data from the inertial sensors. Recognized atomic actions are then analyzed at a higher evaluation level by a finite state machine.
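The higher-level evaluation by a finite state machine can be sketched as a transition table fed with a stream of recognized atomic actions. The state and action names below are hypothetical placeholders for a welding workflow, not taken from the paper:

```python
# Hypothetical transition table: (current state, atomic action) -> next state.
TRANSITIONS = {
    ("idle", "pick_gun"): "holding_gun",
    ("holding_gun", "position_gun"): "positioned",
    ("positioned", "trigger_weld"): "welding",
    ("welding", "release_trigger"): "holding_gun",
    ("holding_gun", "put_down_gun"): "idle",
}

def run_fsm(actions, state="idle"):
    """Feed recognized atomic actions through the transition table;
    action/state pairs with no entry leave the state unchanged."""
    for action in actions:
        state = TRANSITIONS.get((state, action), state)
    return state

sequence = ["pick_gun", "position_gun", "trigger_weld", "release_trigger"]
final = run_fsm(sequence)            # "holding_gun"
```

The appeal of this layering is that noisy low-level recognition (sensors, tracking) is decoupled from task interpretation: implausible action orderings simply fail to advance the machine.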
Abstract: The Continuously Adaptive Mean-Shift (CamShift) algorithm, incorporating scene depth information, is combined with an l1-minimization sparse-representation-based method to form a hybrid kernel- and state-space-based tracking algorithm. We take advantage of the greater efficiency of the former and the robustness to occlusion of the latter. A simple interchange scheme transfers control between the two algorithms based on drift and occlusion likelihood, which is quantified by projecting target candidates onto a depth map of the 2D scene obtained with a low-cost stereo vision webcam. The results show improved tracking, in terms of drift, over each algorithm individually in a challenging practical outdoor multiple-occlusion test case.
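The mean-shift iteration at the core of CamShift can be sketched as repeatedly moving a search window to the centroid of a target-probability map under it. This minimal version uses a fixed-size window (CamShift additionally adapts the window size and orientation, omitted here), and the map values are synthetic:

```python
import numpy as np

def mean_shift(prob_map, window, max_iter=20):
    """Move a fixed-size window (top, left, height, width) to the
    centroid of the probability mass under it, until it stops moving."""
    top, left, h, w = window
    for _ in range(max_iter):
        patch = prob_map[top:top + h, left:left + w]
        total = patch.sum()
        if total == 0:
            break                                # no target mass in view
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        cy = (ys * patch).sum() / total          # centroid inside the patch
        cx = (xs * patch).sum() / total
        new_top = int(round(top + cy - h / 2))   # recenter window on it
        new_left = int(round(left + cx - w / 2))
        if (new_top, new_left) == (top, left):
            break                                # converged
        top, left = new_top, new_left
    return top, left

# Synthetic probability map with one bright target blob; the window
# starts off-target and climbs onto the blob.
prob = np.zeros((60, 60))
prob[30:40, 30:40] = 1.0
pos = mean_shift(prob, window=(20, 20, 12, 12))
```

In the hybrid scheme the abstract describes, a collapse of probability mass under the window (the `total == 0` branch, or a sharp drop in it) is the kind of signal that would trigger the handover to the occlusion-robust l1 tracker.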