Abstract: Advances in clinical medical imaging have brought about the routine production of vast numbers of medical images that need to be analyzed. As a result, an enormous amount of computer vision research effort has been targeted at achieving automated medical image analysis. Computed Tomography (CT) is highly accurate for diagnosing liver tumors. This study aimed to evaluate the potential role of the wavelet transform and the neural network in the differential diagnosis of liver tumors in CT images. The tumors considered in this study are hepatocellular carcinoma, cholangiocarcinoma, hemangioma and hepatic adenoma. Each suspicious tumor region was automatically extracted from the CT abdominal images, and the textural information obtained was used to train a Probabilistic Neural Network (PNN) to classify the tumors. The results were evaluated with the help of radiologists. The system differentiates the tumors with relatively high accuracy and is therefore clinically useful.
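The PNN classification step described above can be sketched as a Parzen-window density estimate per tumor class. This is a minimal illustration, not the paper's implementation: the texture features, labels and the smoothing width `sigma` below are all placeholder assumptions.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic Neural Network: Gaussian Parzen-window estimate per class.

    x       : (d,) feature vector of a suspicious tumor region (hypothetical)
    train_X : (n, d) training texture-feature vectors
    train_y : (n,) integer class labels
    sigma   : kernel width (an assumed value, normally tuned on held-out data)
    """
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        # Sum of Gaussian kernels centered at the class's training patterns
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    # Decide for the class with the highest estimated density
    return classes[int(np.argmax(scores))]
```

The winner-take-all output layer corresponds to the `argmax` over per-class density estimates.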
Abstract: Nowadays, with the emergence of new applications such as robot control in image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling robots. This paper presents a new algorithm, based on spatio-temporal volumes, for visual servoing aimed at robot control. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification we tested different classifiers, including k-nearest neighbor, learning vector quantization and back-propagation neural networks. We tested the proposed algorithm on the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm on noisy images, where it achieved a correct recognition rate of 97.92 percent.
Abstract: Linear approximation of the point spread function (PSF) is a new method for determining subpixel translations between images. A limitation of the existing algorithm is its inability to determine translations larger than 1 pixel. In this paper a multiresolution technique is proposed to deal with this problem. Its performance is evaluated by comparison with two other well-known registration methods. In the proposed technique the images are downsampled in order to obtain a wider view. By progressively decreasing the downsampling rate up to the initial resolution and applying the linear approximation technique at each step, the algorithm is able to determine translations of several pixels at subpixel accuracy.
Abstract: In this paper we present a novel technique for data hiding in binary document images. We use the concept of entropy to identify document-specific least-distortive areas throughout the binary document image. The document image is treated as any other image, and the proposed method utilizes standard document characteristics for the embedding process. The proposed method minimizes perceptual distortion due to embedding and allows watermark extraction without requiring any side information at the decoder end.
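One simple way to realize the entropy-based selection of least-distortive areas is to rank blocks of the binary image by their binary entropy; high-entropy blocks (mixed black/white texture) hide flipped pixels better than flat ones. The block size and the ranking criterion below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def block_entropies(img, bs=8):
    """Rank non-overlapping bs x bs blocks of a binary image by entropy.

    Blocks with higher entropy (a mix of 0- and 1-pixels) are candidate
    embedding areas: a flipped pixel there is perceptually less visible.
    """
    h, w = img.shape
    scores = {}
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            p = img[y:y + bs, x:x + bs].mean()   # fraction of 1-pixels
            if p in (0.0, 1.0):
                e = 0.0                          # flat block: zero entropy
            else:
                e = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
            scores[(y, x)] = e
    # Highest-entropy blocks first
    return sorted(scores, key=scores.get, reverse=True)
```

Because the ranking is recomputed from the image itself at the decoder, no side information needs to be transmitted.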
Abstract: In this paper, we propose a novel approach for image segmentation via fuzzification of the Rényi Entropy of Generalized Distributions (REGD). The fuzzy REGD is used to precisely measure the structural information of an image and to locate the optimal threshold desired for segmentation. The proposed approach draws upon the postulate that the optimal threshold concurs with the maximum information content of the distribution. The contributions of the paper are as follows: first, the fuzzy REGD is introduced as a measure of the spatial structure of an image. Then, we propose an efficient entropic segmentation approach using the fuzzy REGD. Although the proposed approach belongs to the family of entropic segmentation approaches, which are commonly applied to grayscale images, it is adapted to be viable for segmenting color images. Lastly, diverse experiments on real images are carried out that show the superior performance of the proposed method.
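The underlying postulate, that the optimal threshold maximizes the information content of the split distributions, is shared with classical entropic thresholding. A non-fuzzy Kapur-style baseline (not the paper's fuzzy REGD, which refines this idea) looks like:

```python
import numpy as np

def max_entropy_threshold(img):
    """Classical maximum-entropy (Kapur-style) threshold: choose t that
    maximizes the sum of entropies of the below- and above-threshold
    distributions. Shown as a baseline for the entropic postulate only."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        # Entropy of each class-conditional distribution
        h = -sum(q * np.log(q) for q in q0 if q > 0) \
            - sum(q * np.log(q) for q in q1 if q > 0)
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

On a bimodal histogram the maximizing threshold falls between the two modes, which is the behavior the fuzzy REGD generalizes.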
Abstract: With the world now in the 21st century, the technology of computer graphics and digital cameras is prevalent, and high-resolution displays and printers are available. High-resolution images are therefore needed in order to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is that of preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts. A definitive solution to this is still an open issue. In this paper an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that tries to take into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest-neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC and others.
Abstract: Two algorithms are proposed to reduce the storage requirements for mammogram images. The input image goes through a shrinking process that converts the 16-bit images to 8 bits by using a pixel-depth conversion algorithm, followed by an enhancement process. The performance of the algorithms is evaluated objectively and subjectively. A 50% reduction in size is obtained with no loss of significant data in the breast region.
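The pixel-depth conversion step can be sketched as a linear windowing of the 16-bit range into 8 bits. The window defaults below (image min/max) are an assumption; a clinical implementation would choose a window adapted to the breast region.

```python
import numpy as np

def to_8bit(img16, lo=None, hi=None):
    """Pixel-depth conversion: map a 16-bit mammogram window [lo, hi]
    linearly onto 0..255 and return an 8-bit image.

    lo/hi default to the image's own min/max (an illustrative choice).
    """
    img16 = img16.astype(np.float64)
    lo = img16.min() if lo is None else lo
    hi = img16.max() if hi is None else hi
    scaled = (img16 - lo) / max(hi - lo, 1) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

This alone yields the 50% size reduction (2 bytes per pixel down to 1); the enhancement process is applied afterwards.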
Abstract: The huge development of new technologies and the emergence of increasingly sophisticated open communication systems create a new challenge: protecting digital content from piracy. Digital watermarking is a recent research axis and a new technique suggested as a solution to these problems. This technique consists of inserting identification information (a watermark) into digital data (audio, video, images, databases...) in an invisible and indelible manner, in such a way as not to degrade the original medium's quality. Moreover, we must be able to correctly extract the watermark despite deterioration of the watermarked medium (i.e. attacks). In this paper we propose a system for watermarking satellite images. We chose to embed the watermark in the frequency domain, specifically the discrete wavelet transform (DWT). We applied our algorithm to satellite images of the Tunisian center. The experiments show satisfying results. In addition, our algorithm showed strong resistance to different attacks, notably compression (JPEG, JPEG2000), filtering, histogram manipulation, and geometric distortions such as rotation, cropping and scaling.
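A generic additive DWT embedding of the kind described can be sketched with a one-level Haar transform. This is an illustrative scheme, not necessarily the authors' exact embedding rule: the subband choice (HH) and strength `alpha` are assumptions.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def embed(img, bits, alpha=2.0):
    """Additively embed +/-alpha per watermark bit into the HH detail
    coefficients (illustrative subband and strength)."""
    LL, LH, HL, HH = haar2(img.astype(float))
    flat = HH.ravel().copy()
    flat[:len(bits)] += alpha * (2 * np.asarray(bits) - 1)
    return ihaar2(LL, LH, HL, flat.reshape(HH.shape))
```

Extraction compares the detail coefficients of the received image against the original (a non-blind setup), recovering each bit from the sign of the difference.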
Abstract: One of the main environmental problems affecting extensive areas of the world is soil salinity. Traditional data collection methods are neither sufficient for addressing this important environmental problem nor accurate enough for soil studies. Remote sensing data can overcome most of these problems. Although satellite images are commonly used for such studies, there is still a need to find the best calibration between the data and the real situation in each specific area. The Neyshaboor area, in north-east Iran, was selected as the field study site for this research. Landsat satellite images of this area were used in order to prepare suitable learning samples for processing and classifying the images. 300 locations were selected randomly in the area to collect soil samples, and finally 273 locations were reselected for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images taken of the study area in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that the reflective bands 7, 3, 4 and 1 form the best band composition for preparing color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.
Abstract: Speed estimation is one of the important and practical tasks in machine vision, robotics and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms. Numerous approaches for speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and verify these methods. We then propose a novel algorithm to estimate the speed of a moving object using fuzzy concepts. There is a direct relation between motion blur parameters and object speed. In our new approach we use the Radon transform to find the blur direction in the image, and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision on noisy images. Our method was tested on many images over a wide range of SNR values and gave satisfactory results.
Abstract: This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries over image data. This research describes a new indexing approach which depends on linear modeling of signals, using bases for modeling. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme and presents the findings of our extensive evaluation, which was conducted to optimize (1) the choice of the basis matrix (B), and (2) the size of the index (N). Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme has significantly higher performance.
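The core of the scheme, modeling each image as a least-squares combination of basis images and using the coefficients as its index, can be sketched directly (the nearest-neighbor search shown is an assumed retrieval step, not specified by the abstract):

```python
import numpy as np

def index_vector(image, basis):
    """Index of an image = least-squares coefficients over the basis images.

    basis : (d, N) matrix whose columns are vectorized basis images (B)
    image : (d,)   vectorized image
    Returns a length-N coefficient vector serving as the index.
    """
    coeffs, *_ = np.linalg.lstsq(basis, image, rcond=None)
    return coeffs

def search(query_idx, db_indices):
    """Retrieve the database image whose index is nearest to the query's
    (Euclidean distance over coefficient vectors; an assumed matching rule)."""
    d = np.linalg.norm(db_indices - query_idx, axis=1)
    return int(np.argmin(d))
```

Searching over short length-N coefficient vectors instead of full images is what makes the index fast.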
Abstract: Choosing the right metadata is critical, as good information (metadata) attached to an image facilitates its visibility among a pile of other images. An image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a simple but efficient technique to predict a single human image from a website using the basic image data and the embedded metadata of the image's content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may become a great aid to librarians, researchers and many others in automatically and efficiently identifying a set of human images out of a greater set of images.
Abstract: In image processing and visualization, comparing two bitmapped images requires matching them pixel by pixel, which takes a lot of computational time, while the comparison of two vector-based images is significantly faster. Raster graphics images can sometimes be approximately converted into vector-based images by various techniques. After conversion, the problem of comparing two raster graphics images can be reduced to the problem of comparing vector graphics images. Hence, the problem of pixel-by-pixel comparison can be reduced to the problem of polynomial comparison. In computer-aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces. Curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves. Curves that have been translated or rotated are treated as equivalent, while two curves that differ only in scale are considered similar. This paper proposes an algorithm for comparing polynomial curves for equivalence and similarity by using their control points. In addition, the geometric object-oriented database used to keep the curve information has also been defined in XML format for further use in curve comparisons.
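One possible realization of the control-point test uses pairwise distances between ordered control points, which are invariant under translation and rotation, and proportional under uniform scaling. This is a sketch of the stated equivalence/similarity notions, not the paper's algorithm; note that reflections also pass the rigid test here.

```python
import numpy as np

def _dists(P):
    """Pairwise distance matrix of ordered control points: unchanged by
    translation and rotation, scaled uniformly by a uniform scaling."""
    P = np.asarray(P, float)
    return np.linalg.norm(P[:, None] - P[None, :], axis=-1)

def equivalent(P, Q, tol=1e-9):
    """Equivalent curves (relocated/rotated): identical distance matrices."""
    return np.shape(P) == np.shape(Q) and \
        bool(np.allclose(_dists(P), _dists(Q), atol=tol))

def similar(P, Q, tol=1e-9):
    """Similar curves (differently scaled): proportional distance matrices."""
    if np.shape(P) != np.shape(Q):
        return False
    dP, dQ = _dists(P), _dists(Q)
    s = dQ.max() / max(dP.max(), tol)        # estimated uniform scale factor
    return bool(np.allclose(s * dP, dQ, atol=1e-6))
```

Since Bézier/B-spline curves are determined by their control points, agreement of the control-point geometry implies agreement of the curves themselves.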
Abstract: This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based super-resolution and for removing speckle of unknown variance (in the wavelet domain) from Synthetic Aperture Radar (SAR) images. Although POSA is valuable as a new super-resolution algorithm for image enhancement, image metrology and biometric identification, here it is used as a despeckling tool, marking the first time a super-resolution algorithm has been used for despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands; POSA is applied to the high-frequency subbands, and a SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably with several other despeckling methods on test SAR images.
Abstract: The quest to provide more secure identification systems has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric which has attracted the attention of many researchers of late. Different approaches have been used to extract vein patterns and match them. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the vein patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
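The eigenvein computation follows the eigenfaces recipe: PCA over vectorized vein images, with each image represented by its projection onto the top components. A minimal sketch, assuming images are already extracted, enhanced and flattened to equal-length vectors:

```python
import numpy as np

def eigenveins(images, k=10):
    """PCA over vectorized vein images via SVD (eigenfaces-style).

    images : (n, h*w) matrix, one flattened vein image per row
    Returns the mean image and the top-k components ("eigenveins"),
    one per row. k=10 is an illustrative choice.
    """
    X = np.asarray(images, float)
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(img, mean, components):
    """Low-dimensional feature vector of a vein image."""
    return components @ (np.asarray(img, float) - mean)
```

Matching would then compare projected feature vectors, with the abstract's 0.9 threshold presumably applied to a similarity score between them (the exact score is not specified).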
Abstract: In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of the eigenvalues of covariance matrices, the circular Hough transform (CHT) and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse is identified using the CHT. A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they offer savings in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using the raster scan algorithm, which exploits geometrical symmetry. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noise. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, along with comparisons with the Hough transform, its variants and other tangent-based methods, are reported.
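The eigenvalue/axis relation used above can be made concrete: for edge points sampled uniformly on an ellipse x = a cos t, y = b sin t, the covariance matrix has eigenvalues a²/2 and b²/2, so the semi-axes are recoverable as sqrt(2λ). A small sketch of just this step (not the full detection pipeline):

```python
import numpy as np

def axis_lengths(edge_points):
    """Estimate semi-axis lengths of an elliptical contour from the
    eigenvalues of the covariance matrix of its edge pixels.

    For uniformly sampled points on x=a*cos(t), y=b*sin(t), the
    covariance eigenvalues are a^2/2 and b^2/2, so sqrt(2*lambda)
    recovers (a, b) regardless of rotation or translation.
    """
    P = np.asarray(edge_points, float)
    cov = np.cov(P, rowvar=False)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # large, small
    return np.sqrt(2 * lam)                        # (semi-major, semi-minor)
```

The eigenvectors of the same covariance matrix give the ellipse's orientation, which the full method combines with the CHT center estimate.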
Abstract: In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the circular Hough transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are explored for the purpose of circular object detection. A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they offer savings in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using the raster scan algorithm, which exploits geometrical symmetry. After finding the circular objects, the proposed method uses the texture on the surface of the coins, called textons; textons are unique properties of coins and refer to the fundamental micro-structures in generic natural images. This method has been tested on several real-world images, including coin and non-coin images. Its performance is also evaluated in terms of its noise-withstanding capability.
Abstract: In this work a novel approach for color image segmentation using higher-order entropy as a textural feature for determining thresholds over a two-dimensional image histogram is discussed. A similar approach is applied to achieve multi-level thresholding in both grayscale and color images. The paper discusses two methods of color image segmentation using RGB space as the standard processing space. The threshold for segmentation is decided by maximizing the conditional entropy in the two-dimensional histograms of the color image separated into the three grayscale images of R, G and B. The features are first developed independently for the three (R, G, B) spaces and then combined to obtain different color component segmentations. Considering local maxima instead of the maximum of the conditional entropy yields multiple thresholds for the same image, which forms the basis for multilevel thresholding.
Abstract: An unsupervised classification algorithm is derived
by modeling observed data as a mixture of several mutually
exclusive classes that are each described by linear combinations of
independent non-Gaussian densities. The algorithm estimates the
data density in each class by using parametric nonlinear functions
that fit to the non-Gaussian structure of the data. This improves
classification accuracy compared with standard Gaussian mixture
models. When applied to textures, the algorithm can learn basis
functions for images that capture the statistically significant structure
intrinsic to the images. We apply this technique to the problem of
unsupervised texture classification and segmentation.
Abstract: Medical images require special safety and confidentiality because critical judgments are made based on the information they provide. Transmission of medical images via the internet or mobile phones demands strong security and copyright protection in telemedicine applications. Here, a highly secure and robust watermarking technique is proposed for the transmission of image data via the internet and mobile phones. The Region of Interest (ROI) and Non-Region of Interest (RONI) of the medical image are separated, and only the RONI is used for watermark embedding. This technique achieves exact recovery of the watermark on standard 512x512 medical database images, giving a correlation factor equal to 1. The correlation factor under different attacks such as noise addition, filtering, rotation and compression ranges from 0.90 to 0.95. The PSNR with a weighting factor of 0.02 is up to 48.53 dB. The presented scheme is non-blind and embeds a 64x64 hospital logo.
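The correlation factor reported above is commonly computed as the normalized correlation between the original and extracted watermarks; a value of 1 indicates exact recovery. A sketch of this standard definition (the abstract does not spell out its exact formula):

```python
import numpy as np

def correlation_factor(w, w_extracted):
    """Normalized correlation between original and extracted watermarks.

    Returns 1.0 for exact recovery; values below 1 indicate degradation
    of the extracted watermark (e.g. after noise, filtering, compression).
    """
    w = np.asarray(w, float).ravel()
    we = np.asarray(w_extracted, float).ravel()
    return float(np.sum(w * we) / np.sqrt(np.sum(w ** 2) * np.sum(we ** 2)))
```

Applied to a 64x64 binary logo, this is the metric whose value stays between 0.90 and 0.95 under the listed attacks.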