Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments

In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve because accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise from gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, establishes initial expectations of its performance benefit via simulation, and verifies the approach through hardware implementation. Hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
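
The core of the aiding step can be illustrated with a short sketch: fit a plane to the floor points in the depth camera's point cloud, recover roll and pitch from the plane normal, and pull the dead-reckoned attitude toward that measurement. This is a minimal sketch under simplifying assumptions (NumPy only, a level floor, and a scalar complementary-gain correction standing in for the paper's Kalman-filter fusion); the function names and gain value are illustrative, not taken from the paper.

```python
import numpy as np

def plane_normal(points):
    """Fit a plane to an N x 3 point cloud (e.g., floor points seen by the
    depth camera) and return its unit normal via SVD of the centered points."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                 # direction of least variance
    return normal if normal[2] > 0 else -normal     # consistent sign convention

def roll_pitch_from_normal(n):
    """Roll and pitch (rad) of the sensor relative to a level floor whose
    normal, expressed in the sensor frame, is n = (nx, ny, nz)."""
    roll = np.arctan2(n[1], n[2])
    pitch = -np.arcsin(np.clip(n[0], -1.0, 1.0))
    return roll, pitch

def attitude_correction(att_dead_reckoned, att_camera, gain=0.02):
    """Scalar complementary-gain correction: pull the gyro-integrated attitude
    toward the camera-derived attitude (a Kalman filter would instead weight
    the correction by the respective error covariances)."""
    return att_dead_reckoned + gain * (att_camera - att_dead_reckoned)

if __name__ == "__main__":
    # Synthetic floor patch tilted by a 3 degree roll, with 2 mm depth noise.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(500, 2))
    true_roll = np.deg2rad(3.0)
    z = -np.tan(true_roll) * xy[:, 1] + 0.002 * rng.standard_normal(500)
    cloud = np.column_stack([xy, z])
    roll, pitch = roll_pitch_from_normal(plane_normal(cloud))
    print(f"camera-derived roll: {np.degrees(roll):.2f} deg, "
          f"pitch: {np.degrees(pitch):.2f} deg")
```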

A Method to Compute Efficient 3D Helicopters Flight Trajectories Based on a Motion Polymorph-Primitives Algorithm

Finding the optimal 3D path of an aerial vehicle under flight mechanics constraints is a major challenge, especially when the algorithm has to produce real-time results in flight. Kinematic models and Pythagorean hodograph curves have been widely used in mobile robotics to address this problem. The level of difficulty is mainly driven by the number of constraints to be satisfied at the same time while minimizing the total length of the path. In this paper, we suggest a pragmatic algorithm capable of simultaneously satisfying most of the constraints that dimension a helicopter's 3D trajectory: curvature, curvature derivative, torsion, torsion derivative, climb angle, climb angle derivative, and position. The trajectory generation algorithm is able to generate versatile, complex 3D motion primitives feasible by a helicopter, with parameterization of the curvature and the climb angle. On top of this, a "motion primitive concatenation" algorithm is presented, based on what we call the "Dubins gliding symmetry conjecture", which introduces a new way of designing three-dimensional trajectories. This high-performance algorithm will soon be integrated into a real-time decision system dealing with in-flight safety issues.
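
As an illustration of the primitive parameterization described above, the sketch below integrates a constant-curvature, constant-climb-angle segment and chains two such segments end to end. It is a simplified stand-in assuming a basic kinematic model in NumPy; it does not enforce the curvature-derivative, torsion, or torsion-derivative constraints, nor implement the Dubins gliding symmetry conjecture, and the function names are invented for this example.

```python
import numpy as np

def motion_primitive(curvature, climb_angle, length, step=0.1):
    """Integrate a constant-curvature, constant-climb-angle 3D segment.
    State: position (x, y, z) and heading psi; gamma is the climb angle.
    Kinematics per unit arc length s:
        x' = cos(gamma) cos(psi),  y' = cos(gamma) sin(psi),
        z' = sin(gamma),           psi' = curvature."""
    n = int(round(length / step))
    pts = np.zeros((n + 1, 3))
    psi = 0.0
    for i in range(n):
        step_vec = np.array([np.cos(climb_angle) * np.cos(psi),
                             np.cos(climb_angle) * np.sin(psi),
                             np.sin(climb_angle)])
        pts[i + 1] = pts[i] + step * step_vec
        psi += step * curvature
    return pts

def concatenate(primitives):
    """Chain primitives end to end by translating each one to the previous
    endpoint.  (A complete concatenation scheme would also align headings and
    check curvature/torsion continuity at every junction.)"""
    path = [primitives[0]]
    for p in primitives[1:]:
        path.append(p[1:] + path[-1][-1])
    return np.vstack(path)

if __name__ == "__main__":
    climbing_turn = motion_primitive(curvature=0.05, climb_angle=np.deg2rad(6.0), length=200.0)
    level_turn = motion_primitive(curvature=-0.08, climb_angle=0.0, length=150.0)
    path = concatenate([climbing_turn, level_turn])
    print(f"{path.shape[0]} samples, final position: {np.round(path[-1], 1)}")
```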

Deadline Missing Prediction for Mobile Robots through the Use of Historical Data

Mobile robotics is gaining an increasingly important role in modern society. Several tasks that are potentially dangerous or laborious for humans are assigned to mobile robots, which are increasingly capable. Many of these tasks need to be performed within a specified period, i.e., they must meet a deadline. Missing the deadline can result in financial and/or material losses. Mechanisms for predicting deadline misses are therefore fundamental, because corrective actions can be taken to avoid or minimize the resulting losses. In this work we propose a simple but reliable deadline-miss prediction mechanism for mobile robots based on historical data. Experiments and simulations are performed with the Pioneer 3-DX, one of the most popular robots in academia.
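
The abstract does not detail the prediction mechanism, so the sketch below illustrates just one plausible use of historical data: scale past completion times of similar missions by the fraction of the current mission still remaining and report the empirical probability of exceeding the deadline. Function names, parameters, and the 0.5 decision threshold are assumptions made for illustration only.

```python
import numpy as np

def miss_probability(history, elapsed, remaining_fraction, deadline):
    """Estimate the probability that the current mission misses its deadline.
    history            : total completion times (s) of similar past missions
    elapsed            : time already spent on the current mission (s)
    remaining_fraction : fraction of the mission still to be executed (0..1)
    deadline           : allowed total time for the current mission (s)
    The remaining time is predicted by scaling each historical total, which
    assumes past missions are representative of the current one."""
    history = np.asarray(history, dtype=float)
    predicted_totals = elapsed + remaining_fraction * history
    return float(np.mean(predicted_totals > deadline))

if __name__ == "__main__":
    past_runs = [118.0, 122.5, 131.0, 125.4, 140.2, 127.9]        # seconds
    p = miss_probability(past_runs, elapsed=70.0,
                         remaining_fraction=0.5, deadline=135.0)
    if p > 0.5:                                                   # illustrative threshold
        print(f"deadline miss likely (p = {p:.2f}): trigger corrective action")
    else:
        print(f"deadline expected to be met (p = {p:.2f})")
```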

Single-Camera EKF-vSLAM

This paper presents an Extended Kalman Filter implementation of a single-camera Visual Simultaneous Localization and Mapping (vSLAM) algorithm, a novel approach to the simultaneous localization and mapping problem widely studied in the mobile robotics field. The algorithm is based on vision and odometry. The odometry data is incremental and therefore accumulates error over time, since the robot may slip or be lifted; consequently, if odometry is used alone, the robot position cannot be estimated accurately. In this paper we show that combining odometry and visual landmarks via the Extended Kalman Filter can improve the robot position estimate. We use a Pioneer II robot and motorized pan-tilt camera models to implement the algorithm.
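
A minimal sketch of the odometry/vision fusion loop is given below: an EKF prediction step driven by incremental odometry (a unicycle model) and an update step from a range-bearing observation of a visual landmark. To keep it short, the landmark position is treated as known rather than being part of the state vector as in full EKF-vSLAM; the noise covariances, function names, and motion model are illustrative assumptions.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """EKF prediction with a unicycle odometry model; state x = [px, py, theta]."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """EKF update from a range-bearing observation z of a visual landmark.
    (In full EKF-vSLAM the landmark position belongs to the state vector;
    here it is treated as known to keep the sketch short.)"""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx ** 2 + dy ** 2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    innovation = z - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ innovation, (np.eye(3) - K @ H) @ P

if __name__ == "__main__":
    x, P = np.zeros(3), np.eye(3) * 0.01
    Q, R = np.diag([1e-3, 1e-3, 1e-4]), np.diag([0.05, 0.02])
    x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
    x, P = ekf_update(x, P, z=np.array([2.0, 0.4]),
                      landmark=np.array([2.0, 1.0]), R=R)
    print("pose estimate:", np.round(x, 3))
```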

Acquiring Contour Following Behaviour in Robotics through Q-Learning and Image-based States

In this work, a visual and reactive contour-following behaviour is learned by reinforcement. With artificial vision the environment is perceived in 3D, and it is possible to avoid obstacles that are invisible to other sensors more common in mobile robotics. Reinforcement learning reduces the need for intervention in behaviour design and simplifies its adjustment to the environment, the robot, and the task. In order to facilitate generalisation to other behaviours and to reduce the role of the designer, we propose a regular, image-based codification of states. Even though learning from image-based states is much more difficult, our implementation converges and is robust. Results are presented with a Pioneer 2 AT in the Gazebo 3D simulator.
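
A sketch of an image-based state codification and the learning update is shown below, assuming a depth image split into vertical bands that are flagged free or blocked and packed into a tabular state index, followed by a standard Q-learning update. The encoding, band count, thresholds, and action set are illustrative and not necessarily those used in the paper.

```python
import numpy as np

def image_to_state(depth_image, bands=5, near=0.6):
    """Discretize an image into a compact state: split the frame into vertical
    bands, flag each band as blocked (1) if anything is closer than `near`
    metres, and pack the flags into an integer index."""
    _, w = depth_image.shape
    state = 0
    for b in range(bands):
        band = depth_image[:, b * w // bands:(b + 1) * w // bands]
        state = (state << 1) | int(band.min() < near)
    return state

def q_learning_step(Q, s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update for one observed transition."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

if __name__ == "__main__":
    n_states, n_actions = 2 ** 5, 3          # 5 binary bands; turn left / ahead / turn right
    Q = np.zeros((n_states, n_actions))
    frame = np.full((48, 64), 2.0)           # synthetic depth frame (metres)
    frame[:, 40:] = 0.4                      # contour/obstacle on the right-hand side
    s = image_to_state(frame)
    q_learning_step(Q, s, a=1, reward=1.0, s_next=s)
    print("state index:", s, "| updated Q row:", Q[s])
```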

Route Training in Mobile Robotics through System Identification

Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks that is very fast, easy to implement, and has the distinct advantage that it generates transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as it emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear, Auto-Regressive, Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can subsequently be analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning: we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
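
The flavour of the approach can be conveyed with a short sketch: build lagged polynomial regressors from the input and output signals and estimate the coefficients by least squares. For brevity the noise (moving-average) terms are omitted, so this is strictly a polynomial NARX fit rather than the full NARMAX estimator used in the paper; the lag, degree, and toy system are assumptions for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def polynomial_regressors(u, y, lag=2, degree=2):
    """Build lagged polynomial regressors from input u and output y.
    Noise (moving-average) terms are omitted, i.e. a polynomial NARX structure."""
    n = len(y)
    lagged = []
    for k in range(1, lag + 1):
        lagged.append(y[lag - k:n - k])           # y(t - k)
        lagged.append(u[lag - k:n - k])           # u(t - k)
    lagged = np.column_stack(lagged)
    cols = [np.ones(lagged.shape[0])]             # constant term
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(lagged.shape[1]), d):
            cols.append(np.prod(lagged[:, list(idx)], axis=1))
    return np.column_stack(cols), y[lag:]

def fit_polynomial_model(u, y, lag=2, degree=2):
    """Least-squares estimate of the polynomial coefficients."""
    X, target = polynomial_regressors(u, y, lag, degree)
    theta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return theta

if __name__ == "__main__":
    # Toy system with a bilinear term, identified from 400 input/output samples.
    rng = np.random.default_rng(1)
    u = rng.uniform(-1.0, 1.0, 400)
    y = np.zeros(400)
    for t in range(2, 400):
        y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1] + 0.1 * y[t - 1] * u[t - 2]
    theta = fit_polynomial_model(u, y)
    print("number of polynomial terms:", theta.size)
    print("largest coefficients:", np.round(np.sort(np.abs(theta))[-3:], 3))
```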

Mobile Robot Navigation Using Local Model Networks

Developing techniques for mobile robot navigation constitutes one of the major trends in current research on mobile robotics. This paper develops a local model network (LMN) for mobile robot navigation. The LMN represents the mobile robot by a set of locally valid submodels that are Multi-Layer Perceptrons (MLPs), trained with the Back Propagation (BP) algorithm. The paper proposes fuzzy C-means (FCM) clustering to divide the input space into subregions; a submodel (MLP) is then identified to represent each region. The submodels are then combined in a unified structure. In the run-time phase, Radial Basis Functions (RBFs) are employed as validity windows for the activated submodels. This proposed structure overcomes the problem of the changing operating regions of mobile robots. Real data are used in all experiments. Results for mobile robot navigation using the proposed LMN reflect the soundness of the proposed scheme.
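
A minimal sketch of the run-time blending is given below: normalized Gaussian RBF activations centred on the operating regions (e.g., the FCM cluster centres) weight the outputs of the locally valid submodels. Simple linear callables stand in for the trained MLPs, and the centres, width, and example inputs are illustrative assumptions.

```python
import numpy as np

def rbf_weights(x, centers, width=1.0):
    """Normalized Gaussian RBF activations used as validity windows for the
    locally valid submodels."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * width ** 2))
    return w / w.sum()

def lmn_output(x, centers, submodels, width=1.0):
    """Blend the submodel outputs with the RBF validity weights.  Each
    submodel is a callable; simple linear maps stand in for the trained MLPs."""
    w = rbf_weights(x, centers, width)
    outputs = np.array([m(x) for m in submodels])
    return float(w @ outputs)

if __name__ == "__main__":
    # Two operating regions, e.g. obtained from FCM clustering of the input space.
    centers = np.array([[0.0, 0.0], [3.0, 3.0]])
    submodels = [lambda x: 0.5 * x[0] - 0.2 * x[1],     # local controller for region A
                 lambda x: -0.3 * x[0] + 0.8 * x[1]]    # local controller for region B
    x = np.array([2.5, 2.7])                            # current operating point
    print("blended command:", round(lmn_output(x, centers, submodels), 3))
```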

A Fuzzy Logic Based Navigation of a Mobile Robot

One of the long-standing challenges in mobile robotics is the ability to navigate autonomously, avoiding modeled and unmodeled obstacles, especially in crowded and unpredictably changing environments. A successful way of structuring the navigation task to deal with this problem is through behavior-based navigation approaches. In this study, issues of individual behavior design and action coordination of the behaviors are addressed using fuzzy logic. A layered approach is employed, in which a supervision layer makes a context-based decision as to which behavior(s) to process (activate), rather than processing all behaviors and then blending the appropriate ones; as a result, time and computational resources are saved.
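
A compact sketch of the layered idea follows: simple fuzzy memberships drive two individual behaviors (obstacle avoidance and goal seeking), while a supervision layer activates only the behavior relevant to the current context instead of evaluating and blending all of them. The membership shapes, gains, and thresholds are illustrative assumptions, not the paper's actual rule base.

```python
def fuzzy_near(dist, full=0.3, none=1.2):
    """Degree (0..1) to which an obstacle is 'near': 1 below `full` metres,
    0 beyond `none` metres, linear in between (a trapezoidal membership)."""
    if dist <= full:
        return 1.0
    if dist >= none:
        return 0.0
    return (none - dist) / (none - full)

def avoid_obstacle(front_dist):
    """Obstacle-avoidance behavior: the stronger the 'near' membership,
    the harder the turn and the slower the forward speed."""
    near = fuzzy_near(front_dist)
    return 0.1 * (1.0 - near), 1.2 * near          # (speed m/s, turn rate rad/s)

def seek_goal(goal_bearing):
    """Goal-seeking behavior: steer proportionally toward the goal bearing."""
    return 0.4, 0.6 * goal_bearing                 # (speed m/s, turn rate rad/s)

def supervisor(front_dist, goal_bearing):
    """Context-based supervision layer: activate only the behavior relevant
    to the current context instead of evaluating and blending all of them."""
    if fuzzy_near(front_dist) > 0.0:               # obstacle context
        return avoid_obstacle(front_dist)
    return seek_goal(goal_bearing)                 # free-space context

if __name__ == "__main__":
    print(supervisor(front_dist=0.4, goal_bearing=0.3))   # avoidance behavior active
    print(supervisor(front_dist=2.5, goal_bearing=0.3))   # goal-seeking behavior active
```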