Robot Navigation and Localization Based on the Rat’s Brain Signals

A mobile robot's ability to navigate autonomously in its environment is very important. Despite advances in technology, robot self-localization and goal-directed navigation in complex environments remain challenging tasks. In this article, we propose a novel method for robot navigation based on the rat's brain signals (local field potentials). It is well known that rats navigate accurately and rapidly in a complex space by localizing themselves with reference to surrounding environmental cues. As a first step toward incorporating the rat's navigation strategy into robot control, we analyzed the strategies of rats navigating a multiple-Y maze while simultaneously recording local field potentials (LFPs) from three brain regions. Next, we processed the LFPs, and the extracted features were used as input to an artificial neural network to predict the rat's next location, particularly at decision-making moments at the Y-junctions. We developed an algorithm by which the robot learned to imitate the rat's decision-making by mapping the rat's brain signals onto its own actions. Finally, the robot learned to integrate internal states as well as external sensors in order to localize and navigate in the complex environment.
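
As a rough illustration of the kind of pipeline the abstract describes (not the authors' actual implementation), the sketch below computes band-power features from three simulated LFP channels and feeds them to a small neural network that predicts the arm chosen at a Y-junction; the sampling rate, frequency bands, and network size are assumptions made for illustration only.

import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 1000  # assumed LFP sampling rate (Hz)
BANDS = {"theta": (4, 12), "beta": (12, 30), "gamma": (30, 80)}  # illustrative bands

def band_powers(lfp_window):
    """lfp_window: (n_channels, n_samples) LFP segment around a Y-junction visit."""
    feats = []
    for ch in lfp_window:
        f, pxx = welch(ch, fs=FS, nperseg=256)
        for lo, hi in BANDS.values():
            feats.append(pxx[(f >= lo) & (f < hi)].mean())
    return np.array(feats)

# toy data: 40 junction visits, 3 recording sites, 1 s of LFP each
rng = np.random.default_rng(0)
X = np.array([band_powers(rng.standard_normal((3, FS))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = left arm, 1 = right arm
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("predicted next arm:", clf.predict(X[:1]))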

Development of 3D Laser Scanner for Robot Navigation

Autonomous robotic systems need equipment analogous to the human eye for their movement. In this study, a 3D laser scanner has been designed and implemented for such autonomous robotic systems. In general, 3D laser scanners use two-dimensional (2D) laser range finders that are moved along one axis (1D) to generate the model. In this study, the model is obtained with a one-dimensional (1D) laser range finder that is moved along two axes (2D), which allows the laser scanner to be produced more cheaply.
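
A minimal sketch of the geometry involved: each (pan, tilt, range) sample from the 1D range finder swept over two axes can be converted to a 3D point, and the collected points form the environment model. The axis ranges, step sizes, and the simulated range value below are illustrative assumptions, not the scanner's actual parameters.

import numpy as np

def scan_point(pan_deg, tilt_deg, range_m):
    """Convert one pan/tilt/range sample from a 1D rangefinder into a 3D point."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    x = range_m * np.cos(tilt) * np.cos(pan)
    y = range_m * np.cos(tilt) * np.sin(pan)
    z = range_m * np.sin(tilt)
    return np.array([x, y, z])

# sweep the two axes and collect a point cloud (ranges here are simulated)
cloud = []
for pan in np.arange(-90, 91, 2):          # horizontal axis (degrees)
    for tilt in np.arange(-30, 31, 2):     # vertical axis (degrees)
        simulated_range = 2.0              # stand-in for a real range reading
        cloud.append(scan_point(pan, tilt, simulated_range))
cloud = np.array(cloud)
print(cloud.shape)  # (N, 3) point cloud forming the environment model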

Kinematics and Control System Design of Manipulators for a Humanoid Robot

In this work, a new approach is proposed to control the manipulators of a humanoid robot. The kinematics of the manipulators, in terms of the position, velocity, acceleration, and torque of each joint, is computed using the Denavit-Hartenberg (D-H) notation. These variables are used to design the manipulator control system proposed in this work. To support the development of the controller, a simulation of the manipulator is designed for the humanoid robot. This simulation is developed using the Virtual Reality Toolbox and Simulink in Matlab. The Virtual Reality Toolbox in Matlab provides the interfacing and controls to an environment developed in the Virtual Reality Modeling Language (VRML). Chains of bones are used to represent the robot.
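
For reference, a minimal sketch of forward kinematics built from standard D-H parameters is shown below; the 3-DOF link table is a made-up example, not the humanoid manipulator's actual parameters, and the paper's Matlab/Simulink implementation is not reproduced here.

import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint transforms to obtain the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# illustrative 3-DOF arm: (d, a, alpha) per link
dh_table = [(0.10, 0.00, np.pi / 2), (0.00, 0.25, 0.0), (0.00, 0.20, 0.0)]
print(forward_kinematics([0.0, np.pi / 4, -np.pi / 6], dh_table))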

SIFT Accordion: A Space-Time Descriptor Applied to Human Action Recognition

Recognizing human actions from videos is an active field of research in computer vision and pattern recognition. Human activity recognition has many potential applications, such as video surveillance, human-machine interaction, sports video retrieval, and robot navigation. Currently, local descriptors and bag-of-visual-words models achieve state-of-the-art performance for human action recognition. The main challenge in feature description is how to represent local motion information efficiently. Most previous work focuses on extending 2D local descriptors to 3D ones to describe local information around each interest point. In this paper, we propose a new spatio-temporal descriptor based on a space-time description of moving points. Our description relies on an Accordion representation of video, which is well suited to recognizing human actions from 2D local descriptors without the need for 3D extensions. We use the bag-of-words approach to represent videos. We quantize 2D local descriptors that describe both temporal and spatial features, achieving a good compromise between computational complexity and action recognition rates. We obtain strong results on publicly available action datasets.
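
A minimal sketch of the bag-of-visual-words step described above, assuming 128-dimensional 2D local descriptors and a 64-word vocabulary (both sizes are illustrative): a k-means vocabulary quantizes the descriptors extracted from a video into a normalized word histogram that can then be fed to any classifier.

import numpy as np
from sklearn.cluster import KMeans

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors against a visual vocabulary and build a word histogram."""
    words = vocabulary.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
train_descriptors = rng.standard_normal((5000, 128))   # e.g. SIFT-like 128-D descriptors
vocabulary = KMeans(n_clusters=64, n_init=10, random_state=0).fit(train_descriptors)
video_descriptors = rng.standard_normal((300, 128))    # descriptors from one video
print(bow_histogram(video_descriptors, vocabulary))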

GPS and Discrete Kalman Filter for Indoor Robot Navigation

This paper discusses the implementation of the Kalman filter along with the Global Positioning System (GPS) for indoor robot navigation. Two-dimensional coordinates are used for map building, expressed in a global coordinate frame attached to a reference landmark from which the robot obtains position and direction information. The Discrete Kalman Filter is used to estimate the robot's position: the time update projects the current state estimate ahead in time, and the measurement update adjusts the projected estimate with an actual measurement at that time. Navigation tests have been performed and the approach has been found to be robust.
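
A minimal sketch of one discrete Kalman filter cycle for a planar robot, assuming a constant-velocity state model and direct position measurements (e.g. GPS-like fixes); the noise covariances are illustrative, not the paper's tuned values.

import numpy as np

# 2D constant-velocity model: state = [x, y, vx, vy], measurement = [x, y]
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)   # process noise (illustrative)
R = 0.25 * np.eye(2)   # measurement noise (illustrative)

def kf_step(x, P, z):
    """One discrete Kalman filter cycle: time update, then measurement update."""
    # time update: project the state and covariance ahead
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # measurement update: correct the projection with the position fix z
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, np.array([1.0, 0.5]))   # one noisy position measurement
print(x)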

Simple Agents Benefit Only from Simple Brains

In order to answer the general question “What does a simple agent with a limited lifetime require for constructing a useful representation of the environment?” we propose a robot platform that includes the simplest probabilistic sensory and motor layers. We then use the platform as a test-bed for evaluating the navigational capabilities of the robot equipped with different “brains”. We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increase in internal complexity and reutilization of minimal sensory information. We show that the most fundamental robot element, short-time memory, is essential for obstacle avoidance. However, in the simplest conditions, with no obstacles, the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for very short lifetimes the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with short lifetimes do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent, this may mean that the control blocks of modern robots are too complicated relative to their lifetimes and mechanical abilities.
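
A toy illustration of the memory claim (not the authors' platform): a purely reflexive controller reacts only to the current sensor reading, while an agent with a short-time memory keeps turning for a few steps after encountering an obstacle, which helps it escape situations the reflex agent immediately re-enters. The memory length and action set are assumptions.

def memoryless_step(obstacle_ahead):
    """Pure reflex: react only to the current sensor reading."""
    return "turn" if obstacle_ahead else "forward"

class ShortMemoryAgent:
    """Keeps a short memory of recent obstacle encounters while choosing actions."""
    def __init__(self, memory_len=5):
        self.memory_len = memory_len
        self.recent_hits = 0
    def step(self, obstacle_ahead):
        if obstacle_ahead:
            self.recent_hits = self.memory_len   # remember the obstacle for a few steps
        action = "turn" if self.recent_hits > 0 else "forward"
        self.recent_hits = max(self.recent_hits - 1, 0)
        return action

agent = ShortMemoryAgent()
readings = [False, True, False, False, False, False]
print([agent.step(r) for r in readings])       # keeps turning for a while after a hit
print([memoryless_step(r) for r in readings])  # turns only at the exact moment of the hit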

A Simulator for Robot Navigation Algorithms

A robot simulator was developed to measure and investigate the performance of a robot navigation system based on the relative position of the robot with respect to random obstacles in any two-dimensional environment. The presented simulator focuses on investigating the ability of a fuzzy-neural system to avoid obstacles. A navigation algorithm is proposed and used to allow random navigation of the robot among obstacles whenever the robot faces an obstacle in the environment. The main features of this simulator can be used to evaluate the performance of any system that can provide the position of the robot with respect to obstacles in the environment. This allows a robot developer to investigate and analyze the performance of a robot without implementing the physical robot.
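
As a simple stand-in for the fuzzy-neural controller evaluated by such a simulator, the sketch below derives an avoidance command purely from the robot's position relative to the nearest obstacle, which is the kind of input the simulator provides; the safe distance and turn rate are arbitrary illustrative values.

import numpy as np

def avoidance_command(robot_xy, robot_heading, obstacles, safe_dist=1.0):
    """Steer away from the nearest obstacle given only relative 2D positions."""
    rel = np.asarray(obstacles) - np.asarray(robot_xy)
    dists = np.linalg.norm(rel, axis=1)
    i = int(np.argmin(dists))
    if dists[i] > safe_dist:
        return 0.0                          # no obstacle nearby: keep the current heading
    bearing = np.arctan2(rel[i, 1], rel[i, 0]) - robot_heading
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi
    return -np.sign(bearing) * 0.5          # turn rate (rad/s) away from the obstacle

print(avoidance_command([0, 0], 0.0, [[0.6, 0.2], [3.0, -2.0]]))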

Mobile Robot Navigation Using Local Model Networks

Developing techniques for mobile robot navigation constitutes one of the major trends in current research on mobile robotics. This paper develops a local model network (LMN) for mobile robot navigation. The LMN represents the mobile robot by a set of locally valid submodels that are multi-layer perceptrons (MLPs). These submodels are trained with the back-propagation (BP) algorithm. The paper proposes using fuzzy C-means (FCM) clustering in this scheme to divide the input space into subregions; a submodel (MLP) is then identified to represent each particular region. The submodels are then combined in a unified structure. In the run-time phase, radial basis functions (RBFs) are employed as validity windows for the activated submodels. The proposed structure overcomes the problem of changing operating regions of mobile robots. Real data are used in all experiments. Results for mobile robot navigation using the proposed LMN demonstrate the soundness of the proposed scheme.
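
A minimal sketch of the run-time blending described above, assuming two input-space regions: normalized Gaussian RBFs act as validity windows over the region centers (e.g. obtained by FCM), and the window weights blend the outputs of the local submodels, which here are simple placeholder functions rather than trained MLPs.

import numpy as np

def rbf_weights(x, centers, width=1.0):
    """Validity windows: normalized Gaussian RBF activations over the region centers."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * width ** 2))
    return w / w.sum()

def lmn_output(x, centers, submodels):
    """Blend the locally valid submodel outputs into one network output."""
    w = rbf_weights(x, centers)
    outputs = np.array([m(x) for m in submodels])
    return float(w @ outputs)

centers = np.array([[0.0, 0.0], [5.0, 5.0]])   # region centers (e.g. from FCM clustering)
submodels = [lambda x: 0.2 * x.sum(),          # placeholder for a trained local MLP
             lambda x: 1.0 - 0.1 * x.sum()]
print(lmn_output(np.array([1.0, 1.0]), centers, submodels))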

An Edge-based Text Region Extraction Algorithm for Indoor Mobile Robot Navigation

Using bottom-up image processing algorithms to predict human eye fixations and extract the relevant embedded information in images has been widely applied in the design of active machine vision systems. Scene text is an important feature to be extracted, especially in vision-based mobile robot navigation, as many potential landmarks, such as nameplates and information signs, contain text. This paper proposes an edge-based text region extraction algorithm that is robust with respect to font size, style, color/intensity, and orientation, as well as to the effects of illumination, reflections, shadows, perspective distortion, and complex image backgrounds. The performance of the proposed algorithm is compared against a number of widely used text localization algorithms, and the results show that this method can quickly and effectively localize and extract text regions from real scenes and can be used in mobile robot navigation in indoor environments to detect text-based landmarks.
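
A rough sketch of the general edge-based idea (the thresholds, kernel size, and aspect-ratio filter below are illustrative assumptions, not the paper's algorithm): text lines produce dense edge clusters, so dilating the edge map with a wide kernel merges characters into blobs whose bounding boxes become candidate text regions.

import cv2
import numpy as np

def candidate_text_regions(gray, min_area=200):
    """Edge-based candidates: dense edges dilated horizontally merge into text-line blobs."""
    edges = cv2.Canny(gray, 100, 200)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))   # wide kernel joins characters
    merged = cv2.dilate(edges, kernel, iterations=1)
    # [-2] keeps this working across OpenCV 3.x and 4.x return conventions
    contours = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area and w > h:       # crude text-like aspect-ratio filter
            boxes.append((x, y, w, h))
    return boxes

gray = np.zeros((120, 320), dtype=np.uint8)
cv2.putText(gray, "EXIT 2", (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 3)
print(candidate_text_regions(gray))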

Sensor Fusion Based Discrete Kalman Filter for Outdoor Robot Navigation

The objective of the presented work is to implement the Kalman filter in an application that reduces the influence of environmental changes on a robot expected to navigate over terrain with varying friction properties. The Discrete Kalman Filter is used to estimate the robot's position: the time update projects the current state estimate ahead in time, and the measurement update adjusts the projected estimate with an actual measurement at that time, using data from the infrared sensors, ultrasonic sensors, and the visual sensor. The navigation test has been performed in a real-world environment and the approach has been found to be robust.
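
A minimal sketch of the sensor-fusion aspect, assuming each sensor directly observes the robot's (here one-dimensional) position with its own noise variance: the measurement update is applied once per sensor in sequence, so the less noisy sensors pull the estimate more strongly. The variances are illustrative, not calibrated values.

import numpy as np

# each sensor observes the same position with its own (assumed) noise variance
R_SENSORS = {"infrared": 0.05, "ultrasonic": 0.20, "visual": 0.10}

def fuse_measurements(x, P, readings):
    """Apply one scalar Kalman measurement update per sensor, in sequence."""
    for name, z in readings.items():
        K = P / (P + R_SENSORS[name])   # Kalman gain for a direct position measurement
        x = x + K * (z - x)
        P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0                         # prior position estimate and variance
x, P = fuse_measurements(x, P, {"infrared": 1.02, "ultrasonic": 0.95, "visual": 1.00})
print(x, P)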

A Hybrid Distributed Vision System for Robot Localization

Localization is one of the critical issues in the field of robot navigation. With an accurate estimate of the robot's pose, robots can navigate in the environment autonomously and efficiently. In this paper, a hybrid Distributed Vision System (DVS) for robot localization is presented. The presented approach integrates odometry data from the robot with images captured by overhead cameras installed in the environment to reduce the likelihood of localization failure due to illumination effects, accumulated encoder errors, and low-quality range data. An odometry-based motion model is applied to predict robot poses, and robot images captured by the overhead cameras are then used to update the pose estimates with an HSV histogram-based measurement model. Experimental results show that the presented approach can localize robots in a global world coordinate system with localization errors within 100 mm.
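
A minimal sketch of the two models mentioned above, with simplified numpy stand-ins rather than the paper's implementation: an odometry-based motion model advances the pose, and a hue-histogram comparison (a Bhattacharyya coefficient over an assumed 16-bin hue histogram) scores how well an overhead-camera image patch matches the robot's appearance model.

import numpy as np

def odometry_predict(pose, d_trans, d_rot):
    """Motion model: advance (x, y, theta) by the odometry increments."""
    x, y, th = pose
    return np.array([x + d_trans * np.cos(th), y + d_trans * np.sin(th), th + d_rot])

def hsv_histogram(hsv_patch, bins=16):
    """Hue histogram of the robot's image patch as seen by an overhead camera."""
    hist, _ = np.histogram(hsv_patch[..., 0], bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def measurement_likelihood(patch, reference_hist):
    """Score a candidate pose by comparing the patch histogram with the robot's model."""
    h = hsv_histogram(patch)
    return float(np.sum(np.sqrt(h * reference_hist)))   # Bhattacharyya coefficient

pose = odometry_predict(np.array([0.0, 0.0, 0.0]), d_trans=0.1, d_rot=0.05)
rng = np.random.default_rng(0)
patch = rng.integers(0, 180, size=(20, 20, 3))           # stand-in HSV image patch
print(pose, measurement_likelihood(patch, hsv_histogram(patch)))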

Supervisory Fuzzy Learning Control for Underwater Target Tracking

This paper presents recent work on improving a vision-based robotic control strategy for an underwater pipeline tracking system. The study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The main goal is to implement a supervisory fuzzy learning control technique to reduce navigation decision errors caused by the pipeline occlusion problem. The system developed is capable of interpreting underwater images containing occluded pipelines, the seabed, and other unwanted noise. The algorithm proposed in previous work does not exploit the cooperation between fuzzy controllers, knowledge, and learned data to improve the outputs for underwater pipeline tracking. Computer simulations and prototype simulations demonstrate the effectiveness of this approach. The system's accuracy is also discussed.
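
A toy single-input fuzzy inference sketch, not the paper's supervisory controller: triangular memberships over an assumed occlusion ratio drive rules deciding how much weight to give a learned prior over the current image-based heading estimate, with centroid-style defuzzification. The membership breakpoints and rule consequents are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def prior_weight(occlusion_ratio):
    """Fuzzy rules: the more the pipeline is occluded, the more the learned prior is trusted."""
    low = tri(occlusion_ratio, -0.5, 0.0, 0.5)
    medium = tri(occlusion_ratio, 0.2, 0.5, 0.8)
    high = tri(occlusion_ratio, 0.5, 1.0, 1.5)
    # rule consequents: weight given to the learned prior (0 = image only, 1 = prior only)
    num = low * 0.1 + medium * 0.5 + high * 0.9
    den = low + medium + high
    return num / den if den else 0.5

print(prior_weight(0.7))   # heavily occluded frame: lean on the learned prior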

A Unified Framework for a Robust Conflict-Free Robot Navigation

Many environment-specific methods and systems for robot navigation exist. However, vast strides in the evolution of navigation technologies and system techniques create the need for a general unified framework that is scalable, modular, and dynamic. In this paper, a unified framework for a robust, conflict-free robot navigation system that can be used in structured or unstructured and indoor or outdoor environments is proposed. The fundamental design aspects and implementation issues encountered during the development of the module are discussed. The results of deploying three major peripheral modules of the framework, namely the GSM-based communication module, the GIS module, and the GPS module, are reported in this paper.