Abstract: This paper deals with the application of content-based
image retrieval to extract color features from natural images
stored in an image database by segmenting each image through
clustering. We employ a class of nonparametric techniques in which
the data points are regarded as samples from an unknown probability
density. Explicit computation of the density is avoided by using the
mean shift procedure, a robust clustering technique, which does not
require prior knowledge of the number of clusters, and does not
constrain the shape of the clusters. A non-parametric technique for
the recovery of significant image features is presented, and a
segmentation module is developed using the mean shift algorithm to
segment each image. In these algorithms, the only user-set parameter
is the resolution of the analysis, and either gray-level or color images
are accepted as inputs. Extensive experimental results illustrate
excellent performance.
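The mean shift procedure this abstract relies on can be sketched in a few lines. The following is a minimal NumPy version with a flat (uniform) kernel over 2-D feature points, where the bandwidth plays the role of the user-set resolution parameter; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50, tol=1e-5):
    """Shift each point to the mean of the original points within
    `bandwidth` until convergence; points that settle near the same
    density mode form one cluster (no cluster count needed)."""
    points = np.asarray(points, float)
    shifted = points.copy()
    for _ in range(n_iter):
        moved = 0.0
        for i, p in enumerate(shifted):
            # flat (uniform) kernel: all samples within the bandwidth radius
            d = np.linalg.norm(points - p, axis=1)
            new_p = points[d <= bandwidth].mean(axis=0)
            moved = max(moved, np.linalg.norm(new_p - p))
            shifted[i] = new_p
        if moved < tol:
            break
    # merge points whose modes coincide (within half a bandwidth)
    labels = -np.ones(len(points), dtype=int)
    modes = []
    for i, p in enumerate(shifted):
        for k, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels[i] = k
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels, np.array(modes)
```

In an image-segmentation setting, the rows of `points` would be per-pixel feature vectors (e.g., color, optionally with spatial coordinates), and the discovered modes become the segment colors.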
Abstract: This paper presents a novel template-based method to
detect objects of interest from real images by shape matching. To
locate a target object that has a similar shape to a given template
boundary, the proposed method integrates three components: contour
grouping, partial shape matching, and boundary verification. In the
first component, low-level image features, including edges and
corners, are grouped into a set of perceptually salient closed contours
using an extended ratio-contour algorithm. In the second component,
we develop a partial shape matching algorithm to identify the
fractions of detected contours that partly match given template
boundaries. Specifically, we represent template boundaries and
detected contours using landmarks, and apply a greedy algorithm to
search the matched landmark subsequences. For each matched
fraction between a template and a detected contour, we estimate an
affine transform that transforms the whole template into a hypothesized
boundary. In the third component, we provide an efficient algorithm
based on oriented edge lists to determine the target boundary from
the hypothesized boundaries by checking each of them against image
edges. We evaluate the proposed method on recognizing and
localizing 12 template leaves in a data set of real images with cluttered
backgrounds, illumination variations, occlusions, and image noise.
The experiments demonstrate the high performance of our proposed
method.
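The per-fraction affine transform this abstract mentions can be estimated from matched landmark pairs by least squares. A minimal sketch (the function name and the use of `np.linalg.lstsq` are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping 2D landmarks src -> dst.
    Returns A (2x2) and t (2,) such that dst ~= src @ A.T + t."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # design matrix [x y 1] per landmark, solved jointly for both outputs
    X = np.hstack([src, np.ones((len(src), 1))])
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = sol[:2].T   # 2x2 linear part (rotation/scale/shear)
    t = sol[2]      # translation
    return A, t
```

Applying the recovered `A, t` to every point of the template boundary yields the hypothesized boundary that the third component then verifies against image edges.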
Abstract: The use of machine vision to inspect the outcome of
surgical tasks is investigated, with the aim of incorporating this
approach in robotic surgery systems. Machine vision is a non-contact
form of inspection, i.e., no part of the vision system is in direct contact
with the patient, and is therefore well suited to surgery, where
sterility is an important consideration. As a proof-of-concept, three
primary surgical tasks for a common neurosurgical procedure were
inspected using machine vision. Experiments were performed on
cadaveric pig heads to simulate the two possible outcomes, i.e.,
satisfactory or unsatisfactory, for tasks involved in making a burr
hole, namely incision, retraction, and drilling. We identify low-level
image features to distinguish the two outcomes, as well as report on
results that validate our proposed approach. The potential of using
machine vision in a surgical environment, and the challenges that
must be addressed, are identified and discussed.
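The abstract does not name its features, but the general pattern of thresholding a low-level image feature into a binary outcome can be illustrated with a generic stand-in such as mean gradient magnitude; the feature choice and threshold below are purely illustrative assumptions, not the paper's method.

```python
import numpy as np

def edge_density(gray):
    """Mean gradient magnitude of a grayscale image — one generic
    low-level feature of the kind that might separate task outcomes."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy).mean()

def classify_outcome(gray, threshold):
    """Threshold a single feature into a binary outcome
    (illustrative stand-in, not the paper's classifier)."""
    return "satisfactory" if edge_density(gray) < threshold else "unsatisfactory"
```

In practice each surgical task (incision, retraction, drilling) would need its own task-specific features and a calibrated decision rule.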
Abstract: The number of features required to represent an image
can be very large. Using all available features to recognize objects
can suffer from the curse of dimensionality. Feature selection and
extraction are the pre-processing steps of image mining. The main
issues in analyzing images are the effective identification of
features and their extraction. The mining problem addressed here is
the grouping of features for different shapes. Experiments
have been conducted by using shape outline as the features. Shape
outline readings are put through a normalization and
dimensionality-reduction process using an eigenvector-based method
to produce a new set of readings. After this pre-processing step,
the data are grouped by shape. Through statistical analysis of these
readings together with peak measures, a robust classification and
recognition process is achieved. Tests showed that the suggested
methods are able to automatically recognize objects through their
shapes. Finally, experiments also demonstrate the system invariance
to rotation, translation, scale, reflection and to a small degree of
distortion.
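An eigenvector-based dimensionality reduction of the kind this abstract describes is typically principal component analysis: project the mean-centered readings onto the top eigenvectors of their covariance matrix. A minimal sketch, assuming PCA is the intended method (the abstract does not name it):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal axes
    (eigenvectors of the covariance matrix), after mean-centering."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues ascending
    top = vecs[:, np.argsort(vals)[::-1][:k]]     # k largest-eigenvalue axes
    return Xc @ top
```

Each row of `X` would hold one shape's normalized outline readings; the reduced rows are the "new set of readings" that the grouping step then clusters.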
Abstract: The paper presents a technique suitable for robot
vision applications where the object position cannot be established
from one view. Usually, one-view pose-calculation methods
are based on the correspondence of image features established at a
training step and exactly the same image features extracted at the
execution step, for a different object pose. When such a
correspondence is not feasible because of the lack of specific features,
a new method is proposed. In the first step, the method computes
from two views the 3D pose of feature points. Subsequently, using a
registration algorithm, the set of 3D feature points extracted at the execution phase is aligned with the set of 3D feature points extracted
at the training phase. The result is a Euclidean transform that has
to be used by the robot head for reorientation at the execution step.
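The Euclidean transform aligning two 3D point sets is classically obtained with the Kabsch/Procrustes method via an SVD of the cross-covariance matrix. A minimal sketch, assuming point correspondences are known (the abstract's registration algorithm is not specified, so this is one standard choice, not necessarily the paper's):

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch/Procrustes: rotation R and translation t minimizing
    ||(P @ R.T + t) - Q||, i.e. the Euclidean transform aligning
    execution-phase points P with training-phase points Q."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

The returned `R, t` is exactly the kind of Euclidean transform the robot head would use to reorient at the execution step.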
Abstract: Color constancy algorithms are generally based on
simplified assumptions about the spectral distribution or the
reflection attributes of the scene surface. However, in reality,
these assumptions are too restrictive. A methodology is proposed to
extend existing algorithms by applying color constancy locally to
image patches rather than globally to the entire image.
In this paper, a method based on low-level image features using
superpixels is proposed. Superpixel segmentation partitions an image
into regions that are approximately uniform in size and shape. Instead
of using the entire pixel set for estimating the illuminant, only superpixels
with the most valuable information are used. Based on large-scale
experiments on real-world scenes, it can be derived that the estimation
is more accurate using superpixels than when using the entire image.
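The idea of estimating the illuminant from selected regions only can be sketched with a gray-world estimator. In this sketch, regular grid cells stand in for superpixels and cell contrast stands in for the paper's informativeness criterion; both substitutions, and all names below, are assumptions for illustration (a real implementation would use a proper superpixel algorithm such as SLIC):

```python
import numpy as np

def grayworld(pixels):
    """Gray-world illuminant estimate: mean RGB, normalized to unit length."""
    e = pixels.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def local_grayworld(img, grid=4, keep=0.5):
    """Estimate the illuminant from the most 'informative' regions only.
    Grid cells stand in for superpixels; cell contrast (intensity std)
    stands in for the informativeness measure."""
    h, w, _ = img.shape
    cells = [img[i * h // grid:(i + 1) * h // grid,
                 j * w // grid:(j + 1) * w // grid]
             for i in range(grid) for j in range(grid)]
    # rank cells by contrast and keep the top fraction
    contrast = [c.mean(axis=2).std() for c in cells]
    order = np.argsort(contrast)[::-1][:max(1, int(keep * len(cells)))]
    chosen = np.vstack([cells[k].reshape(-1, 3) for k in order])
    return grayworld(chosen)
```

Dividing each pixel's RGB by the estimated illuminant (up to a scale) then yields the color-corrected image.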