Abstract: In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished with an inertial measurement unit (IMU), which, while precise, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, fuse precise IMU measurements with accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments, where GNSS is unavailable and no a priori information about the environment is known, effective sensor fusion is difficult to achieve because accurate aiding sensors are scarce. A depth camera, however, presents an opportunity in such environments: it can capture point clouds of the surrounding floors and walls, and attitude extracted from these surfaces serves as an accurate aiding source that directly counteracts errors arising from gyroscope imperfections. This sensor fusion configuration yields a dramatic reduction in PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial performance expectations via simulation, and a hardware implementation that verifies the approach. The hardware implementation uses the Quanser Qbot 2™ mobile robot with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
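The aiding idea in this abstract — a precise but drifting gyroscope corrected by occasional absolute attitude fixes — can be illustrated with a minimal one-dimensional Kalman filter sketch. This is not the paper's implementation; it is a simplified planar-heading model in which the state is [heading, gyro bias], the gyro propagates the heading, and a periodic absolute heading measurement (standing in for attitude extracted from wall/floor surfaces) corrects both the heading and the bias. All numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 2000                      # 100 Hz IMU for 20 s (illustrative)
true_heading = np.cumsum(np.full(n, 0.1 * dt))   # robot turning at 0.1 rad/s
gyro_bias = 0.05                                  # rad/s, assumed constant
gyro = 0.1 + gyro_bias + rng.normal(0, 0.02, n)  # biased, noisy rate readings

# State x = [heading, bias]; heading integrates (gyro - bias_estimate).
x = np.zeros(2)
P = np.eye(2)
F = np.array([[1.0, -dt], [0.0, 1.0]])  # bias error leaks into heading
Q = np.diag([1e-6, 1e-8])               # process noise (assumed)
H = np.array([[1.0, 0.0]])              # aiding sensor observes heading only
R = 1e-4                                # aiding measurement variance (0.01 rad std)

for k in range(n):
    # Propagate with the gyro, subtracting the current bias estimate.
    x = np.array([x[0] + (gyro[k] - x[1]) * dt, x[1]])
    P = F @ P @ F.T + Q
    if k % 50 == 0:                     # absolute attitude fix at 2 Hz
        z = true_heading[k] + rng.normal(0, 0.01)
        y = z - H @ x                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T / S                 # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P

# Gyro-only dead reckoning for contrast: error grows roughly as bias * time.
dr_heading = np.cumsum(gyro) * dt
dr_error = abs(dr_heading[-1] - true_heading[-1])
heading_error = abs(x[0] - true_heading[-1])
bias_error = abs(x[1] - gyro_bias)
```

The aided filter both holds the heading error near the aiding sensor's noise floor and estimates the gyro bias, whereas the unaided dead-reckoning error grows without bound; this is the mechanism by which surface-derived attitude "directly counteracts" gyroscope imperfections.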
Abstract: As vision systems are increasingly required for autonomous operation in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in a vision system to recognize industrial objects and is integrated with a 7A6 Series manipulator for automatic object gripping. A PC and a graphics processing unit (GPU) are used to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) extracts images for object recognition and coordinate derivation. The YOLOv2 scheme, a convolutional neural network (CNN) architecture, is adopted for object classification and center-point prediction. Additionally, an image processing strategy finds the object contour to calculate the object's orientation angle. The object's location and orientation are then sent to the robot controller, and the six-axis manipulator grasps the specified object in a random environment based on the user command and the extracted image information. Experimental results show that YOLOv2 detects the object location and category with confidence near 0.9 and a 3D position error of less than 0.4 mm, which is promising for future intelligent robotic applications in Industry 4.0 environments.
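The "coordinate derivation" step this abstract describes — turning a detected pixel center plus a depth reading into a 3D point the manipulator can target — follows the standard pinhole back-projection model. The sketch below uses hypothetical intrinsic parameters (`fx`, `fy`, `cx`, `cy`); real values would come from calibrating the specific camera, and a real pipeline would also transform the camera-frame point into the robot's base frame.

```python
import numpy as np

# Hypothetical intrinsics for illustration; obtain real values from calibration.
fx, fy = 615.0, 615.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point in pixels

def deproject(u, v, depth_m):
    """Back-project pixel (u, v) with depth (metres) to camera-frame XYZ."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# e.g. a YOLO-predicted box center at (400, 260) px, 0.5 m from the camera.
p = deproject(400.0, 260.0, 0.5)
```

The detector supplies (u, v) for the object center, the depth image supplies the range at that pixel, and the result is the 3D grasp target sent to the robot controller.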
Abstract: In fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems; it is therefore critical to develop a 3D mannequin module that meets the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with the Kinect. Ergonomic measurements of an individual human figure are obtained in real time using the Kinect's depth camera, and mesh morphing is then performed by transforming the locations of control points on the model according to these ergonomic data, yielding an individualized 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are corrected for accuracy and smoothed, a complete human figure is reconstructed with the ICP algorithm combined with image processing. The reconstructed figure is then analyzed to obtain real measurements of the subject. Furthermore, the ergonomic measurements are applied to shape morphing of the 3D mannequin subdivisions reconstructed from feature curves. Because subdivision yields a standardized yet customer-oriented 3D mannequin, this research can be applied to fashion design and to the presentation and display of 3D virtual clothing. To examine the practicality of the proposed framework, a 3D mannequin system is implemented in JAVA, and iterative experimental refinement demonstrates the practicability of the results.
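The ICP reconstruction this abstract relies on alternates two steps: finding point correspondences between scans, and computing the rigid transform that best aligns them. The inner alignment step has a closed-form SVD (Kabsch) solution. Below is a minimal sketch of that step only, assuming correspondences are already known; a full ICP loop would re-estimate correspondences (e.g. by nearest neighbor) and repeat until convergence.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B,
    i.e. the SVD/Kabsch solution used inside each ICP iteration.
    A and B are (n, 3) arrays of corresponding points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: rotate a point cloud 30 degrees about z and translate it.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([0.1, -0.2, 0.3])
B = A @ R_true.T + t_true
R_est, t_est = best_fit_transform(A, B)
rot_err = np.linalg.norm(R_est - R_true)
```

With noisy, partially overlapping Kinect scans, this exact-recovery behavior degrades gracefully: the transform becomes a least-squares estimate, which is why the abstract's accuracy correction and smoothing precede registration.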