Abstract: This paper indicates the importance of
telecommunications supervision systems (TSS), discusses the
integration of heterogeneous TSS into a single system through
umbrella systems, and introduces the structure, features, and
requirements of TSS as well as TSS-related intelligent solutions.
Abstract: The scattering of light in fog reduces visibility and
thereby disturbs transport facilities in urban and industrial areas,
causing fatal accidents and public hazards. Developing an enhanced
fog vision system based on radio waves to mitigate these severe
problems is therefore a major challenge for researchers. A series of
experimental studies has already been carried out, and more are in
progress, to determine the effect of weather on radio frequencies
over different ranges. According to the Rayleigh scattering law, the
propagating wavelength should be greater than the diameter of the
particles present in the penetrating medium. A direct-wave RF signal
therefore has a high chance of failing to detect objects in such
weather. An extensive study was thus required to find a suitable
region of the RF band that allows objects to be detected with their
proper shape. This paper presents results on object detection in the
912 MHz band, with successful detection of the presence of any object
entering the trajectory of a vehicle navigating in indoor and outdoor
environments. The resulting images are finally transformed into a
video signal to enable continuous monitoring.
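As a rough illustration of the Rayleigh criterion invoked above, the free-space wavelength at 912 MHz can be compared with a typical fog droplet diameter. The droplet size below is an assumed textbook value (roughly the upper end for fog), not a measurement from the paper:

```python
# Compare the 912 MHz wavelength against a typical fog droplet size
# to see why such a wave traverses fog largely unscattered.
C = 3.0e8            # speed of light in vacuum, m/s
FREQ_HZ = 912e6      # carrier frequency used in the paper

wavelength_m = C / FREQ_HZ          # roughly 0.33 m
fog_droplet_m = 50e-6               # assumed upper-end fog droplet diameter (~50 um)

# Rayleigh-regime condition stated in the abstract:
# propagating wavelength greater than the particle diameter.
rayleigh_ok = wavelength_m > fog_droplet_m
print(f"wavelength = {wavelength_m:.3f} m, "
      f"ratio = {wavelength_m / fog_droplet_m:.0f}x the droplet size")
```

The wavelength exceeds the droplet diameter by several thousand times, which is why the 912 MHz band is a plausible candidate for fog-penetrating object detection.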
Abstract: Utilization of various sensors has made it possible to
extend capabilities of industrial robots. Among these are vision
sensors that are used for providing visual information to assist robot
controllers. This paper presents a method of integrating a vision
system and a simulation program with an industrial robot. The vision
system is employed to detect a target object and compute its location
in the robot environment. Then, the target object's information is sent
to the robot controller via a parallel communication port. The robot
controller uses the extracted object information and the simulation
program to control the robot arm for approaching, grasping and
relocating the object. This paper presents technical details of system
components and describes the methodology used for this integration.
It also provides a case study to prove the validity of the methodology
developed.
Abstract: This article presents the development of efficient
algorithms for comparing tablet copies. Image recognition has
specialized uses in digital systems such as medical imaging,
computer vision, defense, and communication. Comparing two
images that look indistinguishable is a formidable task: two images
taken from different sources might look identical but, due to
different digitizing properties, they are not, while small variations
in image information, such as cropping, rotation, and slight
photometric alteration, render direct matching techniques
unsuitable. In this paper we introduce different matching
algorithms designed to help art centers identify real painting
images from fake ones. Different vision algorithms for local image
features are implemented using MATLAB. In this framework a
Table Comparison Computer Tool (TCCT) is designed to facilitate
our research. The TCCT is a Graphical User Interface (GUI) tool
used to identify images by their shapes and objects. The parameters
of the vision system are fully accessible to the user through this
interface. For matching, the tool applies different description
techniques that can identify the exact figures of objects.
Abstract: This paper addresses the development of an intelligent vision system for human-robot interaction. The two novel contributions of this paper are 1) detection of human faces and 2) localization of the eyes. The method is based on the visual attributes of human skin colors and a geometrical analysis of the face skeleton. This paper introduces a spatial-domain filtering method named the 'Fuzzily skewed filter', which incorporates fuzzy rules for deciding the gray level of pixels from their neighborhoods and takes advantage of both the median and averaging filters. The effectiveness of the method has been demonstrated by implementing eye-tracking commands on an entertainment robot named AIBO.
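The abstract does not specify how the Fuzzily skewed filter blends its two constituents; the sketch below only illustrates the stated idea of combining the median and averaging filters, using a hypothetical fuzzy weight that leans toward the median when the local mean deviates strongly from the local median (a heuristic sign of impulse noise). The weighting rule and the `spread` parameter are our assumptions, not the paper's:

```python
import numpy as np

def fuzzily_skewed_filter(img, k=3, spread=20.0):
    """Blend the median and mean of each k x k neighborhood.

    Hypothetical fuzzy rule: the larger the gap between local mean and
    local median (suggesting impulse noise), the more the output leans
    toward the median; otherwise it behaves like an averaging filter.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            med, mean = np.median(win), win.mean()
            w = min(abs(mean - med) / spread, 1.0)  # fuzzy membership in [0, 1]
            out[i, j] = w * med + (1.0 - w) * mean
    return out
```

On a flat image with a single bright impulse, the filter suppresses the impulse like a median filter while smoothing gently elsewhere like an averaging filter.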
Abstract: Optical 3D measurement of objects is valuable in
numerous industrial applications. In various cases the shape
acquisition of weakly textured objects is essential. Examples are
series-production parts made of plastic or ceramic, such as housing
parts or ceramic bottles, as well as agricultural products like tubers.
These parts are often conveyed in a wobbling way during automated
optical inspection, so conventional 3D shape acquisition methods like
laser scanning might fail. In this paper, a novel approach for
acquiring the 3D shape of weakly textured and moving objects is
presented. To facilitate such measurements, an active stereo vision
system with structured light is proposed. The system consists of
multiple camera pairs and auxiliary laser pattern generators. It
performs the shape acquisition within one shot and is beneficial for
rapid inspection tasks. An experimental setup including hardware and
software has been developed and implemented.
Abstract: This paper proposes a novel stereo vision technique
for top-view book scanners which provides dense 3D point clouds
of page surfaces. This is a precondition for dewarping bound
volumes independently of 2D information on the page. Our method is
based on algorithms that normally require the projection of pattern
sequences with structured light. Instead of an additional light
projection, we use image sequences of the moving stripe lighting of
the top-view scanner. Thus the stereo vision setup is simplified
without losing measurement accuracy. Furthermore, we improve a
surface-model dewarping method by introducing a difference vector
based on real measurements. Although our proposed method is
inexpensive in both calculation time and hardware requirements, it
yields good dewarping results even for difficult examples.
Abstract: Image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours, so it is important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that the pixels of a homogeneous image region usually lie close together on a smooth region, so the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method using a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method labels regions of the input image into the coarse categories "ground", "sky", and "vertical" for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
Abstract: One important objective in Precision Agriculture is to minimize the volume of herbicides applied to fields through the use of site-specific weed management systems. To reach this goal, two major factors need to be considered: 1) the similar spectral signature, shape, and texture of weeds and crops; and 2) the irregular distribution of the weeds within the crop field. This paper outlines an automatic computer vision system for the detection and differential spraying of Avena sterilis, a noxious weed growing in cereal crops. The proposed system involves two processes: image segmentation and decision making. Image segmentation combines basic image processing techniques to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and the weeds. From these attributes, a hybrid decision-making approach determines whether or not a cell must be sprayed. The hybrid approach uses the Support Vector Machines and Fuzzy k-Means methods, combined through fuzzy aggregation theory; this constitutes the main contribution of this paper. The method's performance is compared against other available strategies.
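The abstract names the two classifiers and fuzzy aggregation but not the aggregation operator. The sketch below shows one way such a hybrid spray decision could look, using the algebraic sum (a standard fuzzy t-conorm) as an illustrative choice; the operator, the threshold, and the mapping of the SVM output to [0, 1] are our assumptions:

```python
def spray_decision(svm_score, fkm_membership, threshold=0.5):
    """Hypothetical fuzzy aggregation of two weed-evidence values for a cell.

    svm_score      -- SVM output mapped to [0, 1] (e.g. via a sigmoid)
    fkm_membership -- Fuzzy k-Means membership of the cell in the weed cluster

    The algebraic sum t-conorm below is an illustrative choice, not
    necessarily the aggregation used in the paper.
    """
    aggregated = svm_score + fkm_membership - svm_score * fkm_membership
    return aggregated >= threshold, aggregated
```

For example, a cell with moderate support from both classifiers is sprayed, while a cell with weak support from both is not:

```python
spray, strength = spray_decision(0.7, 0.6)   # aggregated evidence 0.88 -> spray
spray2, _ = spray_decision(0.1, 0.2)         # aggregated evidence 0.28 -> skip
```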
Abstract: Skin color is an important visual cue for computer
vision systems involving human users. In this paper we combine skin
color and optical flow for detection and tracking of skin regions. We
apply these techniques to gesture recognition with encouraging
results. We propose a novel skin similarity measure and a novel
mechanism for grouping detected skin regions. The proposed
techniques work with any number of skin regions, making them
suitable for multiuser scenarios.
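The abstract does not detail its skin similarity measure. As a point of reference, a common baseline skin-color detector is a fixed box in the Cb/Cr plane of YCbCr space; the bounds below are a widely used rule of thumb and are only a stand-in for the paper's measure:

```python
import numpy as np

def skin_mask_ycbcr(img_ycbcr):
    """Baseline skin-color detector on an HxWx3 YCbCr image.

    The Cb/Cr box (77 <= Cb <= 127, 133 <= Cr <= 173) is a common
    rule-of-thumb range for skin tones, used here only as a stand-in
    for the abstract's (unspecified) novel similarity measure.
    """
    cb = img_ycbcr[..., 1]
    cr = img_ycbcr[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

In a full pipeline along the lines the abstract describes, such a mask would be intersected with regions of significant optical-flow magnitude so that only moving skin regions are tracked.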
Abstract: The amplitude response of infrared (IR) sensors
depends on the reflectance properties of the target. Therefore, in
order to use an IR sensor for measuring distances accurately, prior
knowledge of the surface is required. This paper describes the use of
the Phong illumination model for determining the properties of a
surface and subsequently calculating the distance to it. The angular
position of the IR sensor is taken as normal to the surface to
simplify the calculation. An ultrasonic (US) sensor can provide the
initial distance information needed to obtain the parameters for this
method. In addition, the experimental results obtained using
LabVIEW are discussed. Care should be taken when positioning
objects relative to the sensors during data acquisition, since a small
change in angle can yield a distance reading very different from the
actual one. Since stereo camera vision systems do not perform well
under some environmental conditions, such as plain walls, glass
surfaces, or poor lighting, IR and US sensors can be used in addition
to improve the overall vision systems of mobile robots.
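The calibration idea described above can be sketched under a simplifying assumption: at normal incidence the Phong diffuse term reduces to an inverse-square relation A = k / d^2 between received amplitude A and distance d, where k bundles the surface reflectance and sensor gain. The paper's exact model may differ; this is only the normal-incidence special case:

```python
import math

def calibrate_k(amplitude_ref, distance_ref_m):
    """Fit the surface constant k from one ultrasonic reference distance,
    assuming the simplified normal-incidence model A = k / d**2 (an
    inverse-square reduction of the Phong diffuse term)."""
    return amplitude_ref * distance_ref_m ** 2

def ir_distance(amplitude, k):
    """Invert A = k / d**2 to estimate distance from a new IR amplitude."""
    return math.sqrt(k / amplitude)
```

For instance, if the US sensor reports 0.5 m while the IR amplitude reads 4.0, then k = 1.0, and a later amplitude of 1.0 corresponds to an estimated distance of 1.0 m.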
Abstract: Using bottom-up image processing algorithms to predict human eye fixations and extract the relevant embedded information in images has been widely applied in the design of active machine vision systems. Scene text is an important feature to be extracted, especially in vision-based mobile robot navigation, as many potential landmarks, such as nameplates and information signs, contain text. This paper proposes an edge-based text region extraction algorithm that is robust with respect to font sizes, styles, color/intensity, orientations, and the effects of illumination, reflections, shadows, perspective distortion, and the complexity of image backgrounds. The performance of the proposed algorithm is compared against a number of widely used text localization algorithms, and the results show that this method can quickly and effectively localize and extract text regions from real scenes and can be used in mobile robot navigation in an indoor environment to detect text-based landmarks.
Abstract: Omnidirectional mobile robots have been popularly
employed in several applications, especially as soccer player robots
in RoboCup competitions. However, an omnidirectional navigation
system, an omni-vision system, and a solenoid kicking mechanism
have never been combined in such mobile robots. This brings into
existence the idea of a robot with no head direction: a
comprehensive omnidirectional mobile robot. Such a robot can
respond more quickly and would be capable of more sophisticated
behaviors, using a multi-sensor data fusion algorithm for global
localization. This paper focuses on the research improvements in the
mechanical, electrical, and software design of the robots of team
ADRO Iran. The main improvements are the world model, the new
strategy framework, the mechanical structure, the omni-vision sensor
for object detection, robot path planning, the active ball handling
mechanism, the new kicker design, and other subjects related to
mobile robots.
Abstract: In this paper, a vision-based system is used for
controlling an industrial 3P Cartesian robot. The vision system
recognizes the target and controls the robot by obtaining images
from the environment and processing them. In the first stage, the
images are converted to grayscale; objects and noise are then
separated and identified by thresholding, the detected objects are
stored in different frames, and the main object is recognized. This
result is used to control the robot so that it reaches the target. A
vision system can also be an appropriate tool for measuring the
errors of a robot, and such an experimental test is conducted for the
3P robot. Finally, the international standard ANSI/RIA R15.05-2 is
used to evaluate the path-related characteristics of the robot. An
experimental test is carried out to evaluate the performance of the
proposed method.
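The threshold-then-pick-the-main-object stage can be sketched as follows. The abstract gives neither the threshold value nor the object-selection rule; the sketch assumes a fixed threshold and selects the largest connected bright region, discarding smaller components as noise:

```python
import numpy as np
from collections import deque

def largest_blob(gray, threshold=128):
    """Threshold a grayscale image and return the boolean mask of its
    largest connected bright region (4-connectivity, BFS labeling).
    A minimal stand-in for the paper's object/noise separation; the
    threshold value and selection rule are our assumptions."""
    binary = gray >= threshold
    labels = np.zeros(binary.shape, dtype=int)
    best_label, best_size, next_label = 0, 0, 1
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                # Flood-fill one connected component, counting its pixels.
                size, queue = 0, deque([(sy, sx)])
                labels[sy, sx] = next_label
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                if size > best_size:
                    best_label, best_size = next_label, size
                next_label += 1
    return labels == best_label
```

The centroid of the returned mask would then serve as the target position driving the 3P robot.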
Abstract: Localization is one of the critical issues in the field of
robot navigation. With an accurate estimate of the robot pose, robots
can navigate in the environment autonomously and efficiently. In this
paper, a hybrid Distributed Vision System (DVS) for robot
localization is presented. The presented approach integrates
odometry data from the robot and images captured by overhead
cameras installed in the environment to help reduce the possibility
of failed localization due to the effects of illumination, accumulated
encoder errors, and low-quality range data. An odometry-based
motion model is applied to predict robot poses, and robot images
captured by the overhead cameras are then used to update the pose
estimates with an HSV histogram-based measurement model.
Experimental results show that the presented approach can localize
robots in a global world coordinate system with localization errors
within 100 mm.
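A measurement model of the kind described above scores how well an observed HSV histogram matches a reference histogram of the robot. The abstract does not name the comparison function; the Bhattacharyya coefficient below is a common choice for histogram-based measurement models and is assumed here rather than taken from the paper:

```python
import numpy as np

def hsv_hist_likelihood(hist_obs, hist_ref):
    """Bhattacharyya coefficient between two HSV histograms.

    Both histograms are normalized to sum to 1 first; the result lies
    in [0, 1], with 1.0 meaning identical distributions. This specific
    similarity function is our assumption, not necessarily the paper's.
    """
    p = hist_obs / hist_obs.sum()
    q = hist_ref / hist_ref.sum()
    return float(np.sum(np.sqrt(p * q)))
```

In a full localizer, this likelihood would weight the odometry-predicted pose hypotheses against the overhead-camera observations.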
Abstract: Falling has been one of the major concerns and threats
to the independence of the elderly in their daily lives. With the
significant worldwide growth of the aging population, it is essential
to have a fall detection solution that operates at high accuracy in
real time and supports large-scale deployment using multiple
cameras. The Field Programmable Gate Array (FPGA) is a highly
promising hardware accelerator for many emerging embedded
vision-based systems. Thus, the main objective of this paper is to
present an FPGA-based solution for vision-based fall detection that
meets stringent real-time requirements with high accuracy. A
hardware architecture for vision-based fall detection that exploits
pixel locality to reduce memory accesses is proposed. By exploiting
the parallel and pipelined architecture of the FPGA, our hardware
implementation achieves a performance of 60 fps for a series of
video analytics functions at VGA resolution (640x480). The results
of this work show that, owing to its flexibility and scalability, the
FPGA has great potential for enabling large-scale vision systems in
the future healthcare industry.
Abstract: In this paper we propose a method for vision systems
to consistently represent functional dependencies between different
visual routines along with relational short- and long-term knowledge
about the world. Here the visual routines are bound to visual properties
of objects stored in the memory of the system. Furthermore,
the functional dependencies between the visual routines are seen
as a graph also belonging to the object's structure. This graph is
parsed in the course of acquiring a visual property of an object to
automatically resolve the dependencies of the bound visual routines.
Using this representation, the system is able to dynamically rearrange
the processing order while keeping its functionality. Additionally, the
system is able to estimate the overall computational costs of a certain
action. We will also show that the system can efficiently use that
structure to incorporate already acquired knowledge and thus reduce
the computational demand.
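The graph parsing described above amounts to resolving routine dependencies before execution, which a depth-first post-order traversal captures. The routine names and the graph below are hypothetical, purely to illustrate the mechanism:

```python
def resolve_routines(dependencies, target):
    """Return an execution order for visual routines such that every
    routine runs after the routines it depends on (depth-first
    post-order over the dependency graph, assuming the graph is
    acyclic). Mirrors the abstract's idea of parsing the graph when a
    visual property is requested; names are hypothetical."""
    order, visited = [], set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        for dep in dependencies.get(node, []):
            visit(dep)           # resolve prerequisites first
        order.append(node)       # then schedule this routine

    visit(target)
    return order

# Hypothetical routine graph: acquiring "pose" requires "segmentation",
# which in turn requires "edges".
deps = {"pose": ["segmentation"], "segmentation": ["edges"], "edges": []}
```

Requesting the "pose" property yields the order edges, segmentation, pose; summing per-routine cost estimates over such an order is one simple way to estimate the overall computational cost of an action, as the abstract mentions.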
Abstract: In this paper, we propose an architecture for easily
constructing a robot controller. The architecture is a multi-agent
system which has eight agents: the Man-machine interface, Task
planner, Task teaching editor, Motion planner, Arm controller,
Vehicle controller, Vision system and CG display. The controller has
three databases: the Task knowledge database, the Robot database and
the Environment database. Based on this controller architecture, we
are constructing an experimental power distribution line maintenance
robot system and are conducting experiments on maintenance tasks
such as the "Bolt insertion task".
Abstract: This paper presents recent work on the improvement
of the robotics vision based control strategy for underwater pipeline
tracking system. The study focuses on developing image processing
algorithms and a fuzzy inference system for the analysis of the
terrain. The main goal is to implement the supervisory fuzzy learning
control technique to reduce the errors on navigation decision due to
the pipeline occlusion problem. The system developed is capable of
interpreting underwater images containing occluded pipeline, seabed
and other unwanted noise. The algorithm proposed in previous work
does not explore the cooperation between fuzzy controllers,
knowledge and learnt data to improve the outputs for underwater
pipeline tracking. Computer simulations and prototype simulations
demonstrate the effectiveness of this approach. The system's
accuracy level is also discussed.
Abstract: This paper presents a communication network for a
machine vision system to be implemented in control systems and
logistics applications in industrial environments. Real-time
distribution over the network is very important for communication
among the vision node, image processing, and control, as well as the
distributed I/O nodes. A robust implementation, both with respect to
camera packaging and data transmission, has been taken into
account. The network consists of a gigabit Ethernet network, and a
switch with an integrated firewall is used to distribute the data and
provide connections to the imaging control station and IEC
61131-conformant signal integration using the Modbus TCP
protocol. The real-time and delay-time properties of each part of the
network were considered and worked out in this paper.
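For concreteness, a Modbus TCP request such as the ones carried over the network above has a fixed wire format defined by the Modbus specification: an MBAP header (transaction id, protocol id 0, remaining byte count, unit id) followed by the protocol data unit. The sketch below builds a "Read Holding Registers" (function 0x03) request frame; the example register addresses are arbitrary:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    MBAP header per the Modbus TCP spec: transaction id (2 bytes),
    protocol id 0 (2 bytes), remaining length (2 bytes, PDU + unit id),
    unit id (1 byte); then the PDU: function code, start address, count.
    All fields are big-endian.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu
```

For example, reading two registers starting at address 0 from unit 1 produces the 12-byte frame `00 01 00 00 00 06 01 03 00 00 00 02`, which would be sent over TCP port 502 to the signal-integration endpoint.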