Abstract: This paper investigates the potential use of airborne ultrasonic phased arrays for imaging in outdoor environments as a means of overcoming the limitations of Kinect sensors, which may fail outdoors owing to oversaturation of their infrared photodiodes. Ultrasonic phased arrays have been well studied in static media, yet there appears to be no comparable examination in the literature of the impact of a flowing medium on the focusing behaviour of near-field-focused ultrasonic arrays. This paper presents a method for predicting the sound pressure fields produced by a single ultrasound element or an ultrasonic phased array under the influence of airflow. The approach can be used to determine the actual focal point location of an array exposed to a known flow field. The simulation results based on this model indicate that uniform flows orthogonal to the direction of acoustic propagation have a noticeable influence on the sound pressure field, reflected in a deflection of the array's steering angle, whereas uniform flows in the same direction as the acoustic propagation have a negligible influence on the array. For an array subjected to turbulent flow, locating the focused sound field becomes difficult because of the irregular, continuously changing direction and speed of the flow; in some circumstances, ultrasonic phased arrays in turbulent flows may not be capable of producing a focused sound field at all.
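As an illustration of the focusing principle involved, the sketch below computes per-element firing delays for a linear array focusing through a uniform flow. It is a minimal model, not the paper's method: the element count, pitch, focal point, and flow velocity are arbitrary assumptions, and the travel time simply treats the wind as uniformly advecting the wavefield, solving |d - u*t| = c*t for each element.

using System;

// Minimal sketch (assumed parameters, not the paper's model): firing delays
// for a linear airborne array focusing at (xf, zf) in a uniform wind (ux, uz).
// Sound moves at speed c relative to the flowing medium, so the travel time
// from an element to the focus solves |d - u*t| = c*t, a quadratic in t.
class FocalDelays
{
    const double C = 343.0; // speed of sound in still air, m/s

    static double TravelTime(double dx, double dz, double ux, double uz)
    {
        double du = dx * ux + dz * uz;          // d . u
        double d2 = dx * dx + dz * dz;          // |d|^2
        double u2 = ux * ux + uz * uz;          // |u|^2
        // (c^2 - |u|^2) t^2 + 2 (d.u) t - |d|^2 = 0, positive root:
        return (-du + Math.Sqrt(du * du + (C * C - u2) * d2)) / (C * C - u2);
    }

    static void Main()
    {
        int n = 8; double pitch = 0.01;         // 8 elements, 10 mm pitch
        double xf = 0.05, zf = 0.20;            // nominal focal point, m
        double ux = 5.0, uz = 0.0;              // 5 m/s crossflow

        var t = new double[n];
        double tMax = 0;
        for (int i = 0; i < n; i++)
        {
            double xi = (i - (n - 1) / 2.0) * pitch;
            t[i] = TravelTime(xf - xi, zf, ux, uz);
            tMax = Math.Max(tMax, t[i]);
        }
        // Fire each element early by its time-of-flight deficit so that all
        // wavefronts arrive at the focus simultaneously.
        for (int i = 0; i < n; i++)
            Console.WriteLine($"element {i}: delay {(tMax - t[i]) * 1e6:F2} us");
    }
}

With ux = 0 the delays reduce to the familiar still-air focusing law; a nonzero crossflow skews them asymmetrically across the aperture, which is consistent with the steering-angle deflection reported above.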
Abstract: This paper describes the process used to automate maritime UAV commands using the Kinect sensor. The AR Drone is a quadrocopter manufactured by Parrot [1], designed to be controlled from Apple iOS devices such as iPhones and iPads. This project instead uses the Microsoft Kinect SDK and Microsoft Visual Studio C# (C sharp), which are compatible with the Windows operating system, to automate the navigation and control of the AR Drone.
The navigation and control software for the quadrocopter runs on a Windows 7 computer. The project is divided into two sections: the quadrocopter control system and the Kinect sensor control system. The Kinect sensor is connected to the computer over a USB cable, through which commands and sensor data are exchanged. The AR Drone has Wi-Fi capability, through which it connects to the computer so that commands can be transferred to and from the quadrocopter.
The project was implemented in C#, a programming language commonly used in automation systems. The language was chosen because more established libraries already exist in C# for both the AR Drone and the Kinect sensor.
The study will contribute toward research in the automation of systems that use the quadrocopter and the Kinect sensor for navigation with a human operator in the loop. The prototype has numerous applications, including the inspection of vessels such as ships and airplanes, and of areas that are not accessible to human operators.
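To make the control pipeline concrete, the sketch below shows one plausible way to map a tracked hand position to normalized drone commands. It is a hedged illustration only: IDroneLink, the dead-zone value, and the axis mapping are hypothetical stand-ins; the actual project obtains joint positions from the Microsoft Kinect SDK and sends commands to the AR Drone over Wi-Fi.

using System;

// Hedged sketch of a gesture-to-command mapping only. IDroneLink and the
// joint offsets are hypothetical stand-ins for the project's Kinect SDK
// skeleton input and Wi-Fi command channel.
interface IDroneLink
{
    void Move(float roll, float pitch, float yaw, float gaz); // each in -1..1
}

class ConsoleDroneLink : IDroneLink
{
    public void Move(float roll, float pitch, float yaw, float gaz) =>
        Console.WriteLine($"roll={roll:F2} pitch={pitch:F2} yaw={yaw:F2} gaz={gaz:F2}");
}

class GestureMapper
{
    const float DeadZone = 0.10f; // metres; ignore small hand jitter (assumed)

    // Map the right hand's offset from the shoulder (skeleton-space metres)
    // to normalized drone commands, clamped to [-1, 1].
    public static (float roll, float pitch, float gaz) Map(
        float dx, float dy, float dz)
    {
        float Scale(float v) =>
            Math.Abs(v) < DeadZone ? 0f : Math.Clamp(v * 2f, -1f, 1f);
        return (Scale(dx), Scale(-dz), Scale(dy)); // push forward = pitch down
    }

    static void Main()
    {
        IDroneLink drone = new ConsoleDroneLink();
        // Hand 0.3 m to the right of and 0.25 m in front of the shoulder:
        var (roll, pitch, gaz) = Map(0.30f, 0.00f, -0.25f);
        drone.Move(roll, pitch, 0f, gaz);
    }
}

The dead zone keeps the drone hovering when the operator's hand is near rest, a common choice when a human operator is in the loop.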
Abstract: Robots' visual perception is a field that is gaining increasing attention from researchers. This is partly due to emerging trends in the commercial availability of 3D scanning systems and devices that produce highly accurate information for a variety of applications. In the history of mining, the mortality rate of mine workers has been alarming, and robots exhibit a great deal of potential to tackle safety issues in mines. However, an effective vision system is crucial to safe autonomous navigation in underground terrains. This work investigates robots' perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model.
SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames together with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames, using the Xbox Kinect 3D sensor. The Kinect sensor produces streams of red, green and blue (RGB) and depth images at 640 × 480 resolution and 30 frames per second. Integrating the depth information into the drivability analysis provides a strong cue, yielding 3D results that augment the drivable and non-drivable regions detected in 2D. The results of the 2D and 3D experiments on different terrains, mines and tunnels, together with the qualitative and quantitative evaluation, reveal that a good drivable region can be detected in dynamic underground terrains.
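For reference, the sketch below implements one commonly used simplified form of the SRM merging predicate from Nock and Nielsen's formulation: two regions merge when their per-channel mean difference stays within a size-dependent bound b(R). The constants g = 256 and delta = 1/(6n^2) and the scale parameter Q = 32 are conventional choices, not necessarily the paper's settings, and the depth/drivability integration is not reproduced here.

using System;

// Hedged sketch of a simplified SRM merging predicate; the exact bound in
// the paper and the 3D drivability integration are not reproduced.
// Q controls segmentation coarseness: larger Q yields more regions.
class SrmPredicate
{
    const double G = 256.0;            // number of grey levels per channel

    // b(R): deviation bound for a region of `size` pixels in an image of
    // `n` pixels, using delta = 1 / (6 n^2).
    static double B(int size, int n, double q)
    {
        double delta = 1.0 / (6.0 * n * (double)n);
        return G * Math.Sqrt(Math.Log(1.0 / delta) / (2.0 * q * size));
    }

    // Merge two regions iff the predicate holds on every colour channel.
    static bool ShouldMerge(double[] mean1, int size1,
                            double[] mean2, int size2,
                            int n, double q)
    {
        double bound = Math.Sqrt(Math.Pow(B(size1, n, q), 2) +
                                 Math.Pow(B(size2, n, q), 2));
        for (int ch = 0; ch < mean1.Length; ch++)
            if (Math.Abs(mean1[ch] - mean2[ch]) > bound) return false;
        return true;
    }

    static void Main()
    {
        int n = 640 * 480;             // Kinect RGB frame size from the text
        var a = new[] { 90.0, 95.0, 88.0 };   // mean RGB of region A
        var b = new[] { 96.0, 99.0, 91.0 };   // mean RGB of region B
        Console.WriteLine(ShouldMerge(a, 500, b, 420, n, q: 32.0));
    }
}

Because b(R) shrinks as regions grow, large homogeneous areas such as a tunnel floor merge readily while small high-contrast obstacles stay separate, which is what makes the predicate useful for delineating drivable regions.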