Abstract: Mobile robots are used in a wide range of scenarios,
such as exploring contaminated areas, repairing oil rigs under water,
and finding survivors in collapsed buildings. Currently, there is no
unified, intuitive user interface (UI) for controlling such complex mobile
robots. As a consequence, some missions are carried out without
exploiting the experience and intuition of human teleoperators.
A novel framework has been developed that embeds a flexible and
modular UI into a complete 3-D virtual reality simulation system.
This new approach aims to draw maximum benefit from human
operators. Sensor information received from the robot is prepared for
intuitive visualization, and virtual reality metaphors support the
operator's decisions. These metaphors are integrated into a real-time
stereo video stream. The approach is not restricted to any
specific type of mobile robot and allows different robot types to be
operated with a consistent concept and user interface.
Abstract: An intuitive user interface for the teleoperation of mobile rescue robots is a key feature for the successful exploration of inaccessible and no-go areas. We have therefore developed a novel framework that embeds a flexible and modular user interface into a complete 3-D virtual reality simulation system. Our approach is based on a client-server architecture, which allows multiple clients to collaboratively control the rescue robot on demand. Furthermore, it is important that the user interface is not restricted to any specific type of mobile robot; our flexible approach therefore allows different robot types to be operated with a consistent concept and user interface. In laboratory tests, we evaluated the validity and effectiveness of our approach with two different robot platforms and several input devices. As a result, an untrained person can intuitively teleoperate both robots without needing any familiarization time when switching between robots.