Abstract: We present BeeBot, the Binus Multi-client Intelligent Telepresence Robot, a custom-built robot system designed for teleconferencing with multiple people using an omnidirectional actuator. The robot is controlled over a computer network, so a manager or supervisor can direct it to the intended person to start a discussion or inspection. People tracking and autonomous navigation are intelligent features of this robot. We built a web application for controlling the multi-client telepresence robot and used an open-source teleconferencing system. Experimental results are presented and the system's performance is evaluated.
Abstract: In this paper, a solution is presented for a robotic
manipulation problem in industrial settings. The problem is sensing
objects on a conveyor belt, identifying the target, and planning and
tracking an interception trajectory between the end effector and the
target. Such a problem can be formulated as combining object
recognition, tracking, and interception. For this purpose, we integrated
a vision system into the manipulation system and employed tracking
algorithms. The control approach is implemented on a real industrial
manipulation setup, which consists of a conveyor belt, objects
moving on it, a robotic manipulator, and a visual sensor above the
conveyor. The trajectory for robotic interception at a rendezvous point
on the conveyor belt is analytically calculated. Test results show that
tracking the target along this trajectory results in interception and
grabbing of the target object.
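The abstract does not give the analytic form of the rendezvous calculation, but under the common simplifying assumptions of a constant-velocity object on the belt and an effector with a bounded speed, the earliest interception time reduces to a quadratic equation. The following sketch is ours, not the paper's: `intercept_time`, its planar 2D model, and all parameter names are illustrative assumptions.

```python
import math

def intercept_time(o0, v, r0, s):
    """Earliest time t >= 0 at which an effector moving at max speed s
    from r0 can meet an object at position o0 + v*t (constant velocity).
    Solving ||o0 + v*t - r0|| = s*t gives a quadratic in t.
    Returns None if interception is impossible."""
    d = (o0[0] - r0[0], o0[1] - r0[1])          # initial offset to object
    a = v[0]**2 + v[1]**2 - s**2                # quadratic coefficient
    b = 2.0 * (d[0]*v[0] + d[1]*v[1])
    c = d[0]**2 + d[1]**2
    if abs(a) < 1e-12:                          # object and effector equally fast
        return -c / b if b < 0 else None
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    roots = ((-b - math.sqrt(disc)) / (2*a),
             (-b + math.sqrt(disc)) / (2*a))
    ts = [t for t in roots if t >= 0]
    return min(ts) if ts else None

# Object starts at (3, 0) moving away at 1 m/s; effector at the origin
# with max speed 2 m/s catches it at t = 3 s, at rendezvous point (6, 0).
t = intercept_time((3.0, 0.0), (1.0, 0.0), (0.0, 0.0), 2.0)
```

The rendezvous point itself is then simply `o0 + v*t`; a real controller would additionally check that the point lies inside the manipulator's workspace before committing to the trajectory.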
Abstract: The paper presents a technique suitable for robot
vision applications where it is not possible to establish the object position from one view. Usually, one-view pose-calculation methods
are based on the correspondence between image features established in a
training step and exactly the same image features extracted in the
execution step, for a different object pose. When such a
correspondence is not feasible because of the lack of specific features,
a new method is proposed. In the first step, the method computes
the 3D pose of feature points from two views. Subsequently, using a
registration algorithm, the set of 3D feature points extracted in the execution phase is aligned with the set of 3D feature points extracted
in the training phase. The result is a Euclidean transform which is
used by the robot head for reorientation at the execution step.
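The abstract does not say which registration algorithm recovers the Euclidean transform between the two 3D point sets; a standard choice for corresponding point sets is the Kabsch (SVD-based) least-squares alignment. The sketch below assumes that method and known point correspondences; the function name and interface are ours.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares Euclidean transform (R, t) mapping point set P onto Q,
    i.e. minimizing sum_i ||R @ P[i] + t - Q[i]||^2 (Kabsch algorithm).
    P and Q are (N, 3) arrays of corresponding 3D feature points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)     # centroids
    H = (P - cp).T @ (Q - cq)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Example: points rotated 90 degrees about z and translated by (1, 2, 3)
# are aligned back, recovering the original rotation and translation.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```

The recovered `(R, t)` is exactly the Euclidean transform the abstract refers to; applied to the robot head's pose, it gives the reorientation needed at the execution step.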