Robot Vision Application based on Complex 3D Pose Computation

This paper presents a technique for robot vision applications in which the object pose cannot be established from a single view. Single-view pose calculation methods usually rely on the correspondence between image features established at a training step and the same image features extracted at the execution step, for a different object pose. When such a correspondence is not feasible because specific features are lacking, a new method is proposed. In the first step, the method computes the 3D pose of feature points from two views. Then, using a registration algorithm, the set of 3D feature points extracted at the execution phase is aligned with the set of 3D feature points extracted at the training phase. The result is a Euclidean transform that the robot head uses to reorient itself at the execution step.
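The alignment step described above amounts to estimating the rigid (Euclidean) transform between two sets of 3D feature points. The abstract does not specify the registration algorithm used; as a hedged illustration, the sketch below uses the standard closed-form SVD-based solution (Kabsch/Horn), which assumes point correspondences are already known. The function name `rigid_align` is an illustrative choice, not taken from the paper.

```python
import numpy as np

def rigid_align(P, Q):
    """Estimate R, t minimizing ||(R @ p + t) - q|| over corresponding
    rows p of P and q of Q, where P and Q are (N, 3) arrays of 3D points.
    Closed-form SVD solution; assumes correspondences are given."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cp).T @ (Q - cq)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a pipeline like the one described, `P` would hold the 3D feature points reconstructed at the training phase and `Q` those reconstructed at the execution phase; the returned `(R, t)` is the Euclidean transform the robot head could use for reorientation. When correspondences are not known in advance, an iterative scheme such as ICP typically wraps a step like this inside a nearest-neighbour matching loop.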
