Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments

In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve because accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source that directly combats errors arising from gyroscope imperfections. This sensor fusion configuration leads to a dramatic reduction in PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of its performance benefit via simulation, and a hardware implementation that verifies its validity. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
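
The core aiding step, recovering attitude from planar surfaces seen in the depth camera's point cloud, can be illustrated with a minimal sketch. This is not the authors' implementation; the SVD plane fit, the assumption of a level floor, and the roll/pitch sign conventions below are illustrative assumptions.

```python
import numpy as np

def fit_plane_normal(points):
    """Fit a plane to an N x 3 point cloud and return its unit normal.

    Uses SVD on the centered points; the right singular vector with the
    smallest singular value is the plane normal.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal so it points "up" for a floor plane.
    return normal if normal[2] >= 0 else -normal

def roll_pitch_from_floor_normal(normal):
    """Recover roll and pitch (rad), assuming the floor is level so a
    perfectly leveled camera would measure the normal [0, 0, 1]."""
    nx, ny, nz = normal
    roll = np.arctan2(ny, nz)
    pitch = np.arcsin(np.clip(nx, -1.0, 1.0))
    return roll, pitch

# Example: a synthetic floor patch seen by a camera tilted 2 deg in pitch.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
pts = np.column_stack([xy, np.zeros(len(xy))])
tilt = np.deg2rad(2.0)
R = np.array([[np.cos(tilt), 0.0, np.sin(tilt)],
              [0.0, 1.0, 0.0],
              [-np.sin(tilt), 0.0, np.cos(tilt)]])
pts = pts @ R.T + rng.normal(scale=0.002, size=pts.shape)  # depth noise

roll, pitch = roll_pitch_from_floor_normal(fit_plane_normal(pts))
print(np.rad2deg(roll), np.rad2deg(pitch))  # roll ≈ 0 deg, pitch ≈ 2 deg
```

In a fused solution, the recovered roll and pitch would enter the Kalman filter as attitude measurements, bounding the drift that gyroscope integration would otherwise accumulate.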

Advantages of Neural Network Based Air Data Estimation for Unmanned Aerial Vehicles

Redundancy requirements for unmanned aerial vehicles (UAVs) are difficult to meet because of the generally restricted space and weight available for aircraft systems, which limits their exploitation. Essential equipment such as the Air Data, Attitude and Heading Reference System (ADAHRS) requires several external probes to measure quantities such as the angle of attack or the sideslip angle. Previous research focused on the analysis of a patented technology named Smart-ADAHRS (Smart Air Data, Attitude and Heading Reference System) as an alternative method for obtaining reliable and accurate estimates of the aerodynamic angles. This solution is based on an innovative sensor fusion algorithm implementing soft computing techniques, and it yields a simplified inertial and air data system with fewer external devices; in fact, only one external source of dynamic and static pressure is needed. This paper focuses on the benefits that would be gained by implementing this system in UAV applications. Simplifying the entire ADAHRS architecture reduces the overall cost while improving safety. Smart-ADAHRS has currently reached Technology Readiness Level (TRL) 6. Real flight tests took place on an ultralight aircraft equipped with suitable Flight Test Instrumentation (FTI). The output of the algorithm on the flight test measurements demonstrates the capability of this fusion algorithm to embed multiple physical and virtual sensors in a single device. Any source of dynamic and static pressure can be integrated with this system, yielding a significant improvement in versatility.
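
A hypothetical sketch of the "virtual sensor" idea is shown below. It is not the patented Smart-ADAHRS algorithm: the choice of inputs, the synthetic training targets standing in for FTI vane recordings, and the network size are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Train a small feed-forward network to estimate angle of attack (alpha)
# and sideslip angle (beta) from quantities the simplified system already
# measures: pressures and inertial data. All data below are synthetic.
rng = np.random.default_rng(1)
n = 4000
q_dyn = rng.uniform(200.0, 2000.0, n)      # dynamic pressure [Pa]
p_sta = rng.uniform(8.0e4, 1.02e5, n)      # static pressure [Pa]
n_z = rng.uniform(0.5, 2.5, n)             # normal load factor [g]
a_y = rng.normal(0.0, 0.1, n)              # lateral acceleration [g]
q_rate = rng.normal(0.0, 0.1, n)           # body pitch rate [rad/s]

# Synthetic "truth" [deg], standing in for the reference vane recordings.
alpha = 2.0 + 1.5e3 * n_z / q_dyn + 5.0 * q_rate + rng.normal(0, 0.1, n)
beta = 40.0 * a_y + rng.normal(0, 0.1, n)

X = np.column_stack([q_dyn, p_sta, n_z, a_y, q_rate])
Y = np.column_stack([alpha, beta])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X[:3000], Y[:3000])              # "flight test" training segment
print("validation R^2:", model.score(X[3000:], Y[3000:]))
```

The appeal of such a virtual sensor is that any source of dynamic and static pressure can feed the same trained model, which is what gives the approach its versatility.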

Investigating Activity Recognition Using 9-Axis Sensors and Filters in Wearable Devices

In this paper, we analyze the major components of activity recognition (AR) in wearable devices with 9-axis sensors and sensor fusion filters. A 9-axis sensor commonly comprises a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. We chose the Kalman filter and the Direction Cosine Matrix (DCM) filter as sensor fusion filters. We also construct fused sensor data from the sensor data of each activity and evaluate AR classification accuracy using Naïve Bayes and SVM classifiers. According to the classification results, we observed that the DCM filter and a specific combination of sensing axes are more effective for AR in wearable devices when classifying walking, running, ascending, and descending.
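
A minimal sketch of the classification stage follows. The window length, the statistical features, and the synthetic "walking"/"running" signals are assumptions; the paper's own feature set and fused signals are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def windows_to_features(signal, label, win=50):
    """Reduce fixed-length windows of a (filter-fused) signal to features."""
    feats, labels = [], []
    for start in range(0, len(signal) - win, win):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
        labels.append(label)
    return feats, labels

# Synthetic stand-ins for two activities sampled at 50 Hz.
rng = np.random.default_rng(2)
t = np.arange(5000) / 50.0
walking = 1.0 * np.sin(2 * np.pi * 1.8 * t) + rng.normal(0, 0.3, t.size)
running = 2.5 * np.sin(2 * np.pi * 2.8 * t) + rng.normal(0, 0.6, t.size)

X, y = [], []
for sig, lab in [(walking, 0), (running, 1)]:
    f, l = windows_to_features(sig, lab)
    X += f
    y += l

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.3, random_state=0)
for clf in (GaussianNB(), SVC(kernel="rbf")):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", clf.score(X_te, y_te))
```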

Localization by DKF Multi Sensor Fusion in the Uncertain Environments for Mobile Robot

This paper presents an optimized algorithm for robot localization which increases the correctness and accuracy of the mobile robot's estimated position to more than 150% of previous methods [1] in uncertain and noisy environments. In this method, the odometry and vision sensors are combined by an adapted, well-known discrete Kalman filter (DKF) [2]. The technique also reduces the computational cost of the algorithm through a simple DKF implementation. The experimental trial of the algorithm was performed on a RoboCup middle-size soccer robot; the system can also be used in more general environments.
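
A minimal sketch of the fusion idea (not the paper's implementation) is given below: the state is the planar position, odometry drives the time update as a control input, and the vision sensor supplies the position measurement for the measurement update. The noise covariances are assumed values.

```python
import numpy as np

A = np.eye(2)                 # state transition (position carried forward)
B = np.eye(2)                 # odometry displacement acts as control input
H = np.eye(2)                 # vision measures position directly
Q = np.diag([0.02, 0.02])     # odometry (process) noise covariance
R = np.diag([0.10, 0.10])     # vision (measurement) noise covariance

x = np.zeros(2)               # initial state estimate [x, y]
P = np.eye(2)                 # initial estimate covariance

def dkf_step(x, P, u_odom, z_vision):
    # Time update: project the state ahead using the odometry increment.
    x_pred = A @ x + B @ u_odom
    P_pred = A @ P @ A.T + Q
    # Measurement update: correct with the vision position fix.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z_vision - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# One fused step: odometry says the robot moved 10 cm in x, while the
# vision system reports a slightly different absolute position.
x, P = dkf_step(x, P, u_odom=np.array([0.10, 0.0]),
                z_vision=np.array([0.08, 0.01]))
print(x)
```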

Intelligent Modeling of the Electrical Activity of the Human Heart

The aim of this contribution is to present a new approach to modeling the electrical activity of the human heart. A recurrent artificial neural network is used to reproduce a subset of the dynamics of the electrical behavior of the human heart. The proposed model can also be used, when integrated, as a diagnostic tool for the human heart system. What makes this approach unique is the fact that every model is developed from physiological measurements of an individual. This kind of approach is very difficult to apply successfully in many modeling problems because of the complexity and entropy of the free variables describing the complex system. Differences between the modeled variables and the variables of an individual, measured at specific moments, can be used for diagnostic purposes. The sensor fusion used to optimize the utilization of biomedical sensors is another focus of this paper. Sensor fusion has been known for its advantages in applications such as control and diagnostics of mechanical and chemical processes.
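
The residual-based diagnostic idea can be sketched as follows. This is not the paper's network: the Elman-style recurrent update, the random weights, the window length, and the alarm threshold are placeholders used only to show how model/measurement differences could flag a deviation from an individual's baseline.

```python
import numpy as np

class ElmanPredictor:
    """Toy recurrent predictor: next sample from hidden state and input."""
    def __init__(self, n_hidden=16, rng=np.random.default_rng(3)):
        self.W_in = rng.normal(0, 0.3, (n_hidden, 1))
        self.W_rec = rng.normal(0, 0.3, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.3, (1, n_hidden))
        self.h = np.zeros((n_hidden, 1))

    def step(self, x_prev):
        # One recurrent update: new hidden state from input and old state.
        self.h = np.tanh(self.W_in * x_prev + self.W_rec @ self.h)
        return float(self.W_out @ self.h)

def residual_alarm(measured, model, window=100, threshold=0.5):
    """Flag windows whose mean absolute model/measurement residual is large."""
    residuals = []
    x_prev = measured[0]
    for x in measured[1:]:
        residuals.append(abs(x - model.step(x_prev)))
        x_prev = x
    residuals = np.array(residuals)
    usable = len(residuals) // window * window
    means = residuals[:usable].reshape(-1, window).mean(axis=1)
    return means > threshold

ecg = np.sin(2 * np.pi * 1.2 * np.arange(1000) / 250.0)   # stand-in signal
print(residual_alarm(ecg, ElmanPredictor()))
```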

Design of Tracking Controllers for Medical Equipment Holders Using AHRS and MEMS Sensors

There are various kinds of medical equipment that require relatively accurate positional adjustments for successful treatment. However, patients tend to move without notice during certain spans of an operation, so it is common practice for accompanying operators to adjust the focus of the equipment. In this paper, tracking controllers for medical equipment are suggested to replace these operators. The tracking controllers use AHRS sensor information to recognize the movements of patients. Sensor fusion is applied to reduce the error magnitudes through linear Kalman filters. Image processing of optical markers is included to correct the accumulated errors of the gyroscope data, especially for yaw angles. The tracking controller reduces the positional error between the current focus of a device and the target position on the body of a patient. Since the sensing frequencies of AHRS sensors are very high compared to the physical movements, the control performance is satisfactory. Typical applications include, for example, ESWT or rTMS, which have error ranges of a few centimeters.
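
The yaw-drift correction can be sketched with a one-state linear Kalman filter: the high-rate gyroscope drives the prediction, and an occasional absolute yaw fix from the optical-marker image processing corrects the accumulated drift. This is an illustrative sketch, not the authors' controller; the rates, noise levels, and marker update interval are assumptions.

```python
import numpy as np

dt = 0.005                 # 200 Hz AHRS sample time [s]
q_gyro = 1e-5              # process noise from gyro integration [rad^2]
r_marker = 1e-3            # optical-marker yaw measurement noise [rad^2]

yaw, P = 0.0, 1e-2
rng = np.random.default_rng(4)
true_yaw, bias = 0.0, np.deg2rad(0.5)   # constant gyro bias causes drift

for k in range(2000):
    true_rate = 0.2 * np.sin(0.01 * k)
    true_yaw += true_rate * dt
    gyro = true_rate + bias + rng.normal(0, 0.01)

    # Prediction: integrate the gyro rate at the full AHRS rate.
    yaw += gyro * dt
    P += q_gyro

    # Correction: every 50th sample an optical-marker yaw fix arrives.
    if k % 50 == 0:
        z = true_yaw + rng.normal(0, np.sqrt(r_marker))
        K = P / (P + r_marker)
        yaw += K * (z - yaw)
        P *= (1 - K)

print("final yaw error [deg]:", np.rad2deg(yaw - true_yaw))
```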

Highly Flexible Modularized Sensor Platform

Sensors have been used in various academic fields and applications. In this article, we propose the idea of modularized sensors that combine multiple sensor modules into a single sensor. We divide a sensor into several units according to functionality. Each unit has different sensor modules, which share the same type of connector and can be serially and arbitrarily connected to each other. A user can combine different sensor modules into a sensor platform according to requirements. Compared with current modularized sensors, the proposed sensor platform is highly flexible and reusable. We have implemented a prototype of the proposed sensor platform, and the experimental results show that the proposed platform works correctly.
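
The composition idea can be illustrated in software terms: every unit exposes the same connector interface, and a platform is an arbitrary serial chain of such units. The module names and readings below are placeholders, not the hardware described in the article.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SensorModule:
    name: str
    read: Callable[[], float]        # common connector: every unit can be polled

@dataclass
class SensorPlatform:
    modules: List[SensorModule] = field(default_factory=list)

    def attach(self, module: SensorModule) -> "SensorPlatform":
        # Serial connection: modules are simply appended to the chain.
        self.modules.append(module)
        return self

    def sample(self) -> Dict[str, float]:
        return {m.name: m.read() for m in self.modules}

# A user combines whichever units the application needs.
platform = (SensorPlatform()
            .attach(SensorModule("temperature_C", lambda: 24.8))
            .attach(SensorModule("humidity_pct", lambda: 41.0))
            .attach(SensorModule("light_lux", lambda: 312.0)))
print(platform.sample())
```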

Sensor Fusion Based Discrete Kalman Filter for Outdoor Robot Navigation

The objective of the presented work is to implement a Kalman filter in an application that reduces the influence of environmental changes on a robot expected to navigate terrain of varying friction properties. The discrete Kalman filter is used to estimate the robot position: the time update projects the current state estimate ahead in time, and the measurement update adjusts the projected estimate with an actual measurement at that time, using data from the infrared sensors, ultrasonic sensors, and the visual sensor, respectively. The navigation test has been performed in a real-world environment, and the approach has been found to be robust.
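
The two-step cycle described above is the standard discrete Kalman filter recursion. Written in the usual notation (state estimate x̂, covariance P, and state, control, measurement, process-noise, and measurement-noise matrices A, B, H, Q, R), the time and measurement updates are:

```latex
% Time update (projection ahead in time)
\hat{x}_k^- = A\,\hat{x}_{k-1} + B\,u_{k-1}, \qquad
P_k^- = A\,P_{k-1}A^{\mathsf{T}} + Q
% Measurement update (correction with the sensor reading z_k)
K_k = P_k^- H^{\mathsf{T}}\left(H P_k^- H^{\mathsf{T}} + R\right)^{-1}, \qquad
\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right), \qquad
P_k = \left(I - K_k H\right)P_k^-
```

Here z_k stacks the infrared, ultrasonic, and visual readings through the measurement matrix H; the particular state and measurement models are those of the paper and are not reproduced here.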

Using Data Fusion for Biometric Verification

A wide spectrum of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual. This paper considers multimodal biometric systems and their applicability to access control, authentication, and security applications. Strategies for feature extraction and sensor fusion are considered and contrasted. Issues related to performance assessment, deployment, and standardization are discussed. Finally, future directions of biometric systems development are discussed.

A Unified Framework for a Robust Conflict-Free Robot Navigation

Many environment-specific methods and systems for robot navigation exist. However, vast strides in the evolution of navigation technologies and system techniques create the need for a general unified framework that is scalable, modular, and dynamic. In this paper, a unified framework for a robust conflict-free robot navigation system that can be used in structured or unstructured and indoor or outdoor environments is proposed. The fundamental design aspects and implementation issues encountered during the development of the module are discussed. The results of deploying three major peripheral modules of the framework, namely the GSM-based communication module, the GIS module, and the GPS module, are reported in this paper.