Virtual Gesture Screen System Based on 3D Visual Information and Multi-Layer Perceptron

Active research is underway on virtual touch screens that compensate for the physical limitations of conventional touch screens. This paper discusses a virtual touch screen that uses a multi-layer perceptron to recognize gestures from the three-dimensional (3D) depth information of a time-of-flight (TOF) camera. The system extracts an object's area from the input image and compares it with the trajectory of the object, which is learned in advance, to recognize gestures. The system enables the manipulation of content in virtual space by utilizing human actions.
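A minimal sketch of the classification stage described above, assuming each gesture trajectory extracted from the depth frames is resampled to a fixed number of 3D points and fed to a multi-layer perceptron; the feature encoding and the use of scikit-learn's MLPClassifier are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): classifying gesture
# trajectories extracted from TOF depth frames with a multi-layer perceptron.
# Assumes each gesture is resampled to a fixed number of (x, y, z) points.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_POINTS = 16  # hypothetical resampling length per gesture trajectory

def trajectory_features(points_xyz):
    """Flatten a resampled 3D trajectory (N_POINTS x 3) into a feature vector."""
    pts = np.asarray(points_xyz, dtype=float)
    pts -= pts.mean(axis=0)              # translation invariance
    scale = np.linalg.norm(pts) or 1.0
    return (pts / scale).ravel()         # scale-normalised, flattened

def train_gesture_mlp(X_train, y_train):
    """X_train: (n_gestures, N_POINTS * 3) features, y_train: gesture labels."""
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(X_train, y_train)
    return clf
```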

Moving Area Filter to Detect Object in Video Sequence from Moving Platform

Detecting objects in a video sequence is a challenging task for identifying and tracking moving objects, and background removal is considered a basic step in moving-object detection. Two static cameras placed at the front and rear of a moving platform gather the information used to detect objects. Because the background changes with the speed and direction of the moving platform, distinguishing moving objects becomes complicated. In this paper, we propose a framework that allows the dynamic detection of moving objects over a variety of speeds and directions. The object detection technique is built on two levels: the first level applies background removal and edge detection to generate moving areas; the second level applies a Moving Areas Filter (MAF) and then calculates a Correlation Score (CS) for each adjusted moving area. Moving areas with close CS values are merged and marked as a moving object. Experimental results are prepared on real scenes acquired by two static cameras with non-overlapping views. The results show accurate object detection compared with optical flow and the mixture-of-Gaussians model (MMG), and an accuracy ratio is produced to measure how accurately moving objects are detected.
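An illustrative sketch of the two-level idea, assuming frame differencing for background removal and a histogram-correlation definition of the CS; the exact MAF and CS used in the paper are not reproduced here.

```python
# Illustrative sketch of the two-level detection idea; the exact Moving Areas
# Filter (MAF) and Correlation Score (CS) definitions below are assumptions.
import cv2
import numpy as np

def moving_areas(prev_gray, curr_gray, diff_thresh=25, min_area=100):
    """Level 1: background removal (frame differencing) plus edge detection."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(curr_gray, 50, 150)
    combined = cv2.dilate(cv2.bitwise_or(mask, edges), np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(combined, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def correlation_score(gray, rect_a, rect_b):
    """Level 2 (assumed CS): correlation of intensity histograms of two areas."""
    def hist(rect):
        x, y, w, h = rect
        h_ = cv2.calcHist([gray[y:y+h, x:x+w]], [0], None, [32], [0, 256])
        return cv2.normalize(h_, h_).flatten()
    return float(np.corrcoef(hist(rect_a), hist(rect_b))[0, 1])
```

Areas whose CS values lie within a small tolerance of each other would then be merged and marked as a single moving object.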

Continuity Microplating using Image Processing

A real-time image-guided electroplating system is proposed in this paper. Unlike previous electroplating systems, which use an intermittent mode to electroplate a 500 µm long copper specimen, a CCD camera and a motion controller are used to adjust the anode-cathode distance and obtain better results. Since the image of the gap distance is highly deteriorated by the complex chemical-electrical processes inside the electrolyte, an image processing algorithm based mainly on entropy and energy values is developed to determine the gap distance. In addition, the color and incidence direction of the light source are also discussed to aid the image processing. The experimental results show that the specimens created by the proposed system have better structure, better uniformity and a better surface finish than those produced by the previous intermittent electroplating setup.
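As a minimal sketch of the image measures named above, the following computes histogram-based entropy and energy of a grayscale region; whether the paper uses histogram-based or co-occurrence-based definitions is an assumption here.

```python
# Histogram-based entropy (H = -sum p*log2 p) and energy (E = sum p^2) of a
# grayscale region; the exact definitions used in the paper are assumed.
import numpy as np

def entropy_energy(region, bins=256):
    """Return (entropy, energy) of a grayscale image region (uint8 array)."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    p = hist.astype(float) / max(hist.sum(), 1)
    p_nonzero = p[p > 0]
    entropy = -np.sum(p_nonzero * np.log2(p_nonzero))
    energy = np.sum(p ** 2)
    return entropy, energy
```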

Geometric Modeling of Illumination on the TFT-LCD Panel using Bezier Surface

In this paper, we propose a geometric model of the illumination on a patterned image containing etched transistors. This image is captured by a commercial camera during the inspection of a TFT-LCD panel. Defect inspection is an important process in the production of LCD panels, but regional differences in brightness, caused by the uneven illumination environment, have a negative effect on the inspection. To solve this problem, we present a geometric model of the illumination consisting of an interpolation using the least squares method and 3D modeling using a Bezier surface. By using a sampling method, our computational time is shorter than that of previous methods. Moreover, the model can be further used to correct the brightness of every patterned image.
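A minimal sketch of the Bezier-surface part of the model, evaluating a surface over a grid of scalar control points with Bernstein polynomials; fitting the control points to sampled brightness by least squares, as the abstract describes, is not reproduced here.

```python
# Evaluate a Bezier surface over an (n+1) x (m+1) grid of scalar control
# points (e.g. brightness values) using Bernstein polynomials. The control
# points themselves would be fitted by least squares to sampled brightness.
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def bezier_surface(control, u, v):
    """Evaluate the surface at (u, v) in [0, 1]^2."""
    n, m = control.shape[0] - 1, control.shape[1] - 1
    bu = np.array([bernstein(n, i, u) for i in range(n + 1)])
    bv = np.array([bernstein(m, j, v) for j in range(m + 1)])
    return bu @ control @ bv
```

Evaluating the surface on a normalized pixel grid gives an illumination map that can be subtracted from, or divided out of, each patterned image to correct brightness.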

Wrap-around View Equipped on Mobile Robot

This paper presents a wrap-around view system composed of four smart camera modules, together with remote motion control of a mobile robot equipped with this smart camera module system. A two-level scheme for remote motion control with a smart pad (iPad) is introduced in this paper. At the low level, the wrap-around view system is controlled and operated to keep the reference points lying on the top-view image plane. At the higher level, an image-based motion controller is used to drive the mobile platform to the desired position or to track the desired motion plan through image feature feedback. The designed wrap-around view system offers the following advantages: 1) a satisfactory solution to the field-of-view (FOV) and affine problems; 2) freedom from complex constraints on the robot pose. The performance of the wrap-around view system for mobile robot remote control is proven by experimental results.
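A common building block for such a top view is to warp each camera image onto the ground plane with a homography, as sketched below; the reference points, output size and blending strategy are placeholders, not the paper's calibration procedure.

```python
# Warp one camera image to the top-view (bird's-eye) plane via a homography.
# Reference points and output size are placeholder assumptions.
import cv2
import numpy as np

def top_view(image, src_pts, dst_pts, out_size=(800, 800)):
    """src_pts: 4 reference points in the camera image (pixels).
    dst_pts: the same 4 points in the top-view plane (pixels)."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, H, out_size)

# A wrap-around view would then blend the four warped camera images,
# e.g. by averaging in their overlap regions (blending choice is assumed).
```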

In-Situ Monitoring the Thermal Forming of Glass and Si Foils for Space X-Ray Telescopes

We developed a non-contact method for the in-situ monitoring of the thermal forming of glass and Si foils to optimize the manufacture of mirrors for high-resolution space x-ray telescopes. Their construction requires precise and lightweight segmented optics with an angular resolution better than 5 arcsec. We used 75×25 mm Desag D263 glass foils 0.75 mm thick and 0.6 mm thick Si foils. The glass foils were shaped by free slumping on a frame at viscosities in the range of 10^9.3 to 10^12 dPa·s, and the Si foils by forced slumping above 1000°C. Using a Nikon D80 digital camera, we took snapshots of a foil's shape every 5 min during its isothermal heat treatment. The obtained results can be used for computer simulations. By comparing the measured and simulated data, we can define the material properties of the foils more precisely and optimize the forming technology.

Mobile to Server Face Recognition: A System Overview

This paper presents a system overview of Mobile to Server Face Recognition, a face recognition application developed specifically for mobile phones. Images taken with mobile phone cameras lack quality due to the low resolution of the cameras. Thus, a prototype is developed to experiment with the chosen method. However, this paper reports the results of the system backbone without the face recognition functionality. The results demonstrated in this paper indicate that the interaction between the mobile phones and the server works successfully. These results were obtained before the database was completely ready. System testing is currently underway using real images and a mock-up database to test the functionality of the face recognition algorithm used in this system. An overview of the whole system, including screenshots and a system flow chart, is presented in this paper. This paper also presents the motivation and justification for developing this system.

Vehicle Velocity Estimation for Traffic Surveillance System

This paper describes an algorithm to estimate real-time vehicle velocity using image processing techniques and known camera calibration parameters. The presented algorithm involves several main steps. First, the moving object is extracted using a frame differencing technique. Second, an object tracking method is applied and the speed is estimated based on the displacement of the object's centroid. Several assumptions are listed to simplify the transformation between the 2D image and the 3D real world. The results obtained from the experiment have been compared to the estimated ground truth. The experiment shows that the proposed algorithm achieves a velocity estimation accuracy of about ±1.7 km/h.
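A simplified sketch of the speed estimation step: the constant pixels-per-metre scale below stands in for the paper's camera calibration parameters and the listed simplifying assumptions.

```python
# Speed from centroid displacement between consecutive frames. A flat-road,
# constant pixels-per-metre scale is assumed in place of the full calibration.
import numpy as np

def speed_kmh(centroid_prev, centroid_curr, fps, pixels_per_metre):
    """Estimate vehicle speed in km/h from two consecutive centroids."""
    disp_px = np.linalg.norm(np.subtract(centroid_curr, centroid_prev))
    metres_per_frame = disp_px / pixels_per_metre
    return metres_per_frame * fps * 3.6   # m/s -> km/h
```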

Partial 3D Reconstruction using Evolutionary Algorithms

When reconstructing a scenario, it is necessary to know the structure of the elements present in the scene in order to interpret it. In this work we link 3D scene reconstruction to evolutionary algorithms through stereo vision theory. We consider stereo vision a method that reconstructs a scene using only a pair of images of the scene and some computation. Through several images of a scene captured from different positions, stereo vision can give us an idea of the three-dimensional characteristics of the world. Stereo vision usually requires two cameras, by analogy with the mammalian visual system. In this work we employ only one camera, which is translated along a path, capturing images at fixed intervals. As we cannot perform all the computations required for an exhaustive reconstruction, we employ an evolutionary algorithm to partially reconstruct the scene in real time. The algorithm employed is the fly algorithm, which uses "flies" to reconstruct the principal characteristics of the world following certain evolutionary rules.
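A highly simplified sketch of the fly-algorithm idea: 3D "flies" are scored by comparing the image neighbourhoods of their projections into two views, and the fitter half survives and mutates. The projection matrices, patch comparison and genetic operators here are simplifying assumptions, not the paper's exact rules.

```python
# Simplified fly-algorithm sketch (illustrative assumptions throughout).
import numpy as np

def project(P, X):
    """Project a 3D point X with a 3x4 projection matrix P to pixel coords."""
    x = P @ np.append(X, 1.0)
    return (x[:2] / x[2]).astype(int)

def fitness(fly, img_a, img_b, P_a, P_b, half=3):
    """Negative SSD between the two projected patches (higher is better)."""
    pa, pb = project(P_a, fly), project(P_b, fly)
    for (u, v), img in ((pa, img_a), (pb, img_b)):
        h, w = img.shape
        if not (half <= u < w - half and half <= v < h - half):
            return -np.inf
    patch_a = img_a[pa[1]-half:pa[1]+half+1, pa[0]-half:pa[0]+half+1]
    patch_b = img_b[pb[1]-half:pb[1]+half+1, pb[0]-half:pb[0]+half+1]
    return -float(np.sum((patch_a.astype(float) - patch_b.astype(float)) ** 2))

def evolve(flies, img_a, img_b, P_a, P_b, generations=50, sigma=0.05):
    """flies: (N, 3) array of 3D points; keep the best half, mutate to refill."""
    for _ in range(generations):
        scores = np.array([fitness(f, img_a, img_b, P_a, P_b) for f in flies])
        parents = flies[np.argsort(scores)[::-1][:len(flies) // 2]]
        children = parents + np.random.normal(0, sigma, parents.shape)
        flies = np.vstack([parents, children])
    return flies
```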

Study on Plasma Creation and Propagation in a Pulsed Magnetoplasmadynamic Thruster

The performance of, and the plasma created by, a pulsed magnetoplasmadynamic thruster for small satellite applications are studied to better understand the ablation and plasma propagation processes occurring during the short-time discharge. The results can be applied to improve the efficiency of the thruster and to tune the propulsion system to the needs of the satellite mission. Therefore, plasma measurements with a high-speed camera and induction probes, and performance measurements of mass bit and impulse bit, were conducted. Values for the current sheet propagation speed, mean exhaust velocity and thrust efficiency were derived from these experimental data. The high-speed camera measurements revealed a maximum in current sheet propagation at a medium energy input, which was confirmed by the induction probes. A quasi-linear relationship was found between the mass bit and the energy input (equivalently, the current action integral), as well as a linear relationship between the created impulse and the discharge energy. The highest mean exhaust velocity and thrust efficiency were found for the highest energy input.

An Experimental Study of Tip Vortex Cavitation Inception in an Axial Flow Pump

The interaction of the blade tip with the casing boundary layer and the leakage flow may lead to a kind of cavitation known as tip vortex cavitation. In this study, the onset of tip vortex cavitation was experimentally investigated in an axial flow pump. For a constant speed and a fixed angle of attack, and by changing the flow rate, the pump head, input power, output power and efficiency were calculated and the pump characteristic curves were obtained. The cavitation phenomenon was observed with a camera and a stroboscope. Finally, the critical flow region, in which tip vortex cavitation might occur, was identified. The results show that simply by adjusting the flow rate to stay outside the specified region, the possibility of tip vortex cavitation occurring decreases to a great extent.

Video Mining for Creative Rendering

More and more home videos are being generated with the ever-growing popularity of digital cameras and camcorders. For many home videos, a photo rendering, whether capturing a moment or a scene within the video, provides a complementary representation of the video. In this paper, a video motion mining framework for creative rendering is presented. The user's capture intent is derived by analyzing video motions, and the corresponding metadata is generated for each capture type. The metadata can be used in a number of applications, such as creating video thumbnails, generating panorama posters, and producing slideshows from video.

Robust Camera Calibration using Discrete Optimization

Camera calibration is an indispensable step for augmented reality or image-guided applications in which quantitative information must be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of the projected calibration marks, enabling the calculation of the projection from 3D world coordinates to 2D image coordinates. Such a procedure thus comprises typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics and, finally, an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm along with a deterministic extension that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.
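A sketch of Monte Carlo image-subset selection for calibration, using the RMS reprojection error returned by OpenCV's cv2.calibrateCamera as the selection criterion; the sampling scheme and the deterministic extension are not the paper's exact algorithm.

```python
# Monte Carlo search over calibration-image subsets; the subset with the
# smallest RMS reprojection error is kept. Criterion and sampling are
# plausible choices, not the paper's exact method.
import random
import cv2

def monte_carlo_calibration(obj_points, img_points, image_size,
                            subset_size=10, trials=200, seed=0):
    """obj_points/img_points: per-image lists of 3D/2D calibration marks
    (float32 arrays, as expected by cv2.calibrateCamera)."""
    rng = random.Random(seed)
    best_err, best_subset, best_calib = float("inf"), None, None
    indices = list(range(len(img_points)))
    for _ in range(trials):
        subset = rng.sample(indices, subset_size)
        err, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            [obj_points[i] for i in subset],
            [img_points[i] for i in subset],
            image_size, None, None)
        if err < best_err:
            best_err, best_subset, best_calib = err, subset, (K, dist)
    return best_subset, best_calib, best_err
```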

Performance Improvement of Moving Object Recognition and Tracking Algorithm using Parallel Processing of SURF and Optical Flow

This paper proposes a method for the parallel processing of SURF and optical flow for moving object recognition and tracking. Object recognition and tracking is one of the most important tasks in computer vision; however, its many operations slow the processing speed, so real-time recognition and tracking becomes infeasible. The proposed method uses SURF, a typical feature extraction technique, together with optical flow for the moving object to reduce this disadvantage and achieve real-time moving object recognition and tracking, and uses parallel processing techniques to improve speed. First, an image from a database and an image acquired through the camera are analyzed using SURF and compared to recognize the same object, and an ROI (region of interest) is then set to track the movement of the feature points using optical flow. Second, multi-threading is used to improve processing speed and recognition through parallel processing. Finally, the performance is evaluated and the efficiency of the algorithm is verified through experiments.
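A sketch of the parallel structure only: one thread matches features between a database image and the current frame while another tracks ROI points with Lucas-Kanade optical flow. ORB stands in for SURF here (SURF requires the non-free opencv-contrib build); all parameters are assumptions.

```python
# Parallel recognition (feature matching) and tracking (optical flow).
# ORB is used as a stand-in for SURF; thresholds/parameters are assumptions.
import cv2
from concurrent.futures import ThreadPoolExecutor

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize(db_gray, frame_gray):
    """Feature-based recognition: match descriptors, return the match count."""
    _, des_db = orb.detectAndCompute(db_gray, None)
    _, des_fr = orb.detectAndCompute(frame_gray, None)
    if des_db is None or des_fr is None:
        return 0
    return len(matcher.match(des_db, des_fr))

def track(prev_gray, curr_gray, prev_pts):
    """Lucas-Kanade optical flow on ROI feature points (float32, Nx1x2)."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    return next_pts[status.flatten() == 1]

def process_frame(db_gray, prev_gray, curr_gray, prev_pts, pool):
    """Run recognition and tracking in parallel on the current frame."""
    rec = pool.submit(recognize, db_gray, curr_gray)
    trk = pool.submit(track, prev_gray, curr_gray, prev_pts)
    return rec.result(), trk.result()

pool = ThreadPoolExecutor(max_workers=2)
```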

A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify the performance of the algorithm. The presented algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. SURF is robust to changes in contrast, size and rotation and can recognize objects, but it is slow due to its high computational complexity. The Lucas-Kanade algorithm is fast but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. A Kalman filter is used in the fusion algorithm to complement the problems that occur when the two algorithms are fused: it estimates the next location and compensates for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared with the SURF algorithm under three situations: driving straight, driving along a curve, and recognizing cars behind obstacles. Situations similar to actual driving are reproduced using the model vehicle. The proposed fusion algorithm showed superior performance and accuracy compared with existing object recognition and tracking algorithms. We will improve the performance of the algorithm so that it can be tested on images of actual road environments.
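A minimal constant-velocity Kalman filter over 2D image coordinates, illustrating the predict/correct cycle used to estimate the next location and compensate for accumulated error; the state-transition and measurement models and the noise levels are illustrative assumptions.

```python
# Constant-velocity Kalman filter for a tracked 2D image position.
# Matrices and noise values are illustrative assumptions.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # measure (x, y)
        self.Q = process_var * np.eye(4)                   # process noise
        self.R = meas_var * np.eye(2)                      # measurement noise
        self.x = np.zeros(4)                               # [x, y, vx, vy]
        self.P = np.eye(4)

    def predict(self):
        """Estimate the next location from the current state."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def correct(self, z):
        """Fuse a measured position z = (x, y) to compensate accumulated error."""
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```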