Abstract: In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished with an inertial measurement unit (IMU), which, while precise, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, fuse precise IMU measurements with accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments, where GNSS is unavailable and no a priori information about the environment is known, effective sensor fusion is difficult to achieve because accurate aiding sensors are scarce. However, an opportunity arises by employing a depth camera, which can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces serves as an accurate aiding source that directly combats errors arising from gyroscope imperfections. This sensor fusion configuration leads to a dramatic reduction in PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial performance expectations via simulation, and a hardware implementation that validates the approach. The hardware implementation uses the Quanser Qbot 2™ mobile robot with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
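The core idea of extracting attitude from planar surfaces can be illustrated with a least-squares plane fit: the normal of a floor or wall patch is the singular vector of the centered point cloud associated with the smallest singular value, and the angle between that normal and the body frame yields an attitude observation. The sketch below is illustrative only, assuming NumPy; the function name `plane_normal` and the synthetic floor patch are hypothetical, not part of the paper's implementation.

```python
import numpy as np

def plane_normal(points):
    """Estimate the unit normal of a roughly planar point cloud.

    Least-squares plane fit: the normal is the right singular vector of
    the centered points associated with the smallest singular value.
    """
    centered = points - points.mean(axis=0)
    # Rows of vt are principal directions, ordered by decreasing singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Sign convention (assumed here): make the normal point into the +z hemisphere.
    return normal if normal[2] >= 0 else -normal

# Hypothetical synthetic floor patch: points near the z = 0 plane with small noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.001 * rng.standard_normal(200)
floor = np.column_stack([xy, z])
n = plane_normal(floor)
# Roll and pitch observations follow from the angle between n and the body z-axis.
```

In a full pipeline the depth camera's point cloud would first be segmented into planar patches before fitting; this sketch covers only the per-patch normal estimate.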
Abstract: Sensors are used in a wide range of academic fields and applications. In this article, we propose the idea of modularized sensors that combine multiple sensor modules into a single sensor. We divide a sensor into several units according to functionality. Each unit has different sensor modules, which share the same type of connector and can be serially and arbitrarily connected to each other. A user can combine different sensor modules into a sensor platform according to requirements. Compared with current modularized sensors, the proposed sensor platform is highly flexible and reusable. We have implemented a prototype of the proposed sensor platform, and experimental results show that it works correctly.