Abstract: Segmentation techniques based on Active Contour Models have benefited greatly from the use of prior information during their evolution. Shape prior information is captured from a training set and introduced into the optimization procedure to restrict the evolution to allowable shapes. In this way, the evolution can converge on regions even when their boundaries are weak. Although significant effort has been devoted to different ways of capturing and analyzing prior information, very little attention has been paid to how image information is combined with prior information. This paper focuses on a more natural way of incorporating the prior information into the level set framework. As a proof of concept, the method is applied to hippocampus segmentation in T1-MR images. Hippocampus segmentation is a very challenging task, due to the heterogeneous surrounding region and the missing boundary with the neighboring amygdala, whose intensities are identical. The proposed method mimics the way humans perform segmentation and thus improves segmentation accuracy.
Abstract: In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, which are known to be robust to small variations in shape, scale, rotation, distortion, illumination, and pose, are widely used as feature vectors in many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires the localization of facial feature points at which the Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values, and incorrect localization of facial feature points degrades the face recognition rate. AAM is known to be successfully applicable to the localization of facial feature points. In this paper, we devise a facial feature point localization method that first roughly estimates the facial feature points using AAM and then refines them with the Gabor jet similarity-based localization method, using the rough AAM estimates as initial points, and we propose a face recognition algorithm that uses the devised localization method together with Gabor feature vectors. Experiments show that such a cascaded localization method, based on both AAM and Gabor jet similarity, is more robust than localization based on Gabor jet similarity alone. It is also shown that the proposed face recognition algorithm, using the devised localization method and Gabor feature vectors, performs better than conventional face recognition algorithms such as EBGM that rely on Gabor jet similarity-based localization and Gabor feature vectors.
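The refinement stage of such a cascade can be sketched as a local search that maximizes Gabor jet similarity around the AAM estimate. In the minimal Python sketch below, extract_jet is a placeholder for an actual Gabor filter bank, and the search radius is an illustrative choice of ours rather than a value from the paper:

```python
import numpy as np

def jet_similarity(jet_a, jet_b):
    """Normalized dot product of the magnitudes of two Gabor jets,
    the similarity used to compare candidate feature-point locations."""
    a, b = np.abs(jet_a), np.abs(jet_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def refine_point(extract_jet, model_jet, init_xy, radius=5):
    """Local search around the AAM estimate init_xy for the position whose
    Gabor jet is most similar to the model jet. extract_jet(x, y) must
    return the Gabor jet at (x, y); it stands in for a real filter bank."""
    x0, y0 = init_xy
    best_xy, best_sim = init_xy, -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sim = jet_similarity(extract_jet(x0 + dx, y0 + dy), model_jet)
            if sim > best_sim:
                best_xy, best_sim = (x0 + dx, y0 + dy), sim
    return best_xy, best_sim
```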
Abstract: The aim of this paper is to characterize a larger set of wavelet functions for implementation in a still image compression system using the SPIHT algorithm. The paper discusses important features of the wavelet functions and filters used in subband coding to convert an image into wavelet coefficients in MATLAB. Image quality is measured objectively using the peak signal-to-noise ratio (PSNR) and its variation with bit rate (bpp). The effect of different parameters is studied for different wavelet functions. Our results provide a useful reference for designers of wavelet-based coders.
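The two evaluation quantities used here, PSNR and bit rate in bits per pixel, can be computed as in the minimal Python sketch below; the function names and the 8-bit peak value are our assumptions, not details from the paper:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original image and
    the image reconstructed after compression."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bits_per_pixel(compressed_size_bits, image_shape):
    """Bit rate (bpp) of the compressed stream for an image of the given shape."""
    return compressed_size_bits / (image_shape[0] * image_shape[1])
```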
Abstract: In this paper, a new probability density function (pdf)
is proposed to model the statistics of wavelet coefficients, and a
simple Kalman filter is derived from the new pdf using Bayesian estimation theory. Specifically, we decompose the speckled image into wavelet subbands, apply the Kalman filter to the high-frequency subbands, and reconstruct a despeckled image from the modified
detail coefficients. Experimental results demonstrate that our method
compares favorably to several other despeckling methods on test
synthetic aperture radar (SAR) images.
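The decompose-filter-reconstruct pipeline described above can be sketched with PyWavelets as below; the soft threshold is only a stand-in for the paper's pdf-based Kalman estimator, and the wavelet, decomposition level, and threshold value are illustrative assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def despeckle(image, wavelet="db4", level=3, threshold=10.0):
    """Wavelet-domain despeckling skeleton: decompose into subbands,
    filter the detail subbands, reconstruct. The soft threshold below is
    only a placeholder for the pdf-based Kalman estimator."""
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
    filtered = [coeffs[0]]  # approximation subband left untouched
    for (cH, cV, cD) in coeffs[1:]:
        filtered.append(tuple(pywt.threshold(c, threshold, mode="soft")
                              for c in (cH, cV, cD)))
    return pywt.waverec2(filtered, wavelet)
```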
Abstract: Many image watermarking methods that exploit properties of the human visual system (HVS) have been proposed in the literature. The visual threshold component is usually related either to the spatial contrast sensitivity function (CSF) or to visual masking. With respect to contrast masking in particular, most methods do not consider the effect near edge regions, even though the HVS is sensitive to what happens in edge areas. This paper proposes ultrasound image watermarking using a visual threshold corresponding to the HVS, in which the coefficients of each DCT block are classified according to texture, edge, and plain areas. This classification is not only useful for imperceptibility when the watermark is inserted into an image, but also helps achieve robust watermark detection. A comparison of the proposed method with other methods shows that the proposed method is robust to blockwise memoryless manipulations and also robust against noise addition.
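A block classification of the kind described, separating DCT blocks into plain, edge, and texture areas, could be sketched as follows; the energy split and the threshold values are illustrative placeholders of ours, not the paper's actual rule:

```python
import numpy as np
from scipy.fft import dctn

def classify_dct_block(block, plain_thresh=125.0, edge_ratio=2.3):
    """Classify an 8x8 block as 'plain', 'edge' or 'texture' from the
    distribution of its DCT AC energy. The thresholds and the low/high
    frequency split are illustrative, not the paper's exact rule."""
    c = dctn(block.astype(np.float64), norm="ortho")
    ac = np.abs(c)
    ac[0, 0] = 0.0                      # drop the DC coefficient
    low = ac[:4, :4].sum()              # low-frequency AC energy
    high = ac.sum() - low               # mid/high-frequency AC energy
    if low + high < plain_thresh:
        return "plain"                  # little activity anywhere
    return "edge" if low > edge_ratio * high else "texture"
```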
Abstract: This paper presents EART (Extract Association Rules from Text), a system for discovering association rules from collections of unstructured documents. The EART system processes text only, not images or figures. EART discovers association rules among the keywords labeling the collection of textual documents. The main characteristic of EART is that it integrates XML technology (to transform unstructured documents into structured documents) with an information retrieval scheme (TF-IDF) and a data mining technique for association rule extraction. EART relies on word features to extract association rules, and it consists of four phases: a structuring phase, an indexing phase, a text mining phase, and a visualization phase. Our work is based on analyzing the keywords of the extracted association rules according to whether the keywords co-occur within one sentence of the original text or appear without such sentence-level co-occurrence. Experiments were applied to a collection of scientific documents selected from MEDLINE that are related to the outbreak of the H5N1 avian influenza virus.
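The two ingredients the system combines, TF-IDF keyword weighting and support/confidence scoring of candidate rules, can be sketched as follows; the function names, the smoothed idf variant, and the treatment of each document's keyword set as one transaction are our assumptions:

```python
import math
from collections import Counter

def tf_idf(term, doc_tokens, all_docs_tokens):
    """TF-IDF weight of a keyword in one document of the collection
    (a common smoothed idf variant)."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    df = sum(1 for d in all_docs_tokens if term in d)
    idf = math.log(len(all_docs_tokens) / (1 + df))
    return tf * idf

def support_confidence(a, b, transactions):
    """Support and confidence of the candidate rule a -> b, where each
    transaction is the set of keywords labeling one document or sentence."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if a in t)
    n_ab = sum(1 for t in transactions if a in t and b in t)
    support = n_ab / n
    confidence = n_ab / n_a if n_a else 0.0
    return support, confidence
```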
Abstract: In this paper, we present an analytical study of the representation of images by the magnitudes of their discrete wavelet transform. Such a representation serves as a model for complex cells in the early stages of visual processing and is of high technical usefulness for image understanding, because it makes the representation insensitive to small local shifts. We find that if the signals are band-limited and of zero mean, then reconstruction from the magnitudes is unique up to the sign for almost all signals. We also present an iterative reconstruction algorithm which yields very good reconstruction, up to the sign and minor numerical errors in the very low frequencies.
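One way to realize such an iterative reconstruction is an alternating-projection scheme: impose the known coefficient magnitudes while keeping the current signs, invert the transform, and re-enforce the zero-mean constraint. The 1-D PyWavelets sketch below illustrates this idea under our own choices of wavelet, level, and iteration count; it is not the paper's exact algorithm:

```python
import numpy as np
import pywt  # PyWavelets

def reconstruct_from_magnitudes(target_mags, signal_len, wavelet="db2",
                                level=3, n_iter=200, seed=0):
    """Alternating-projection sketch. target_mags are the known magnitudes,
    i.e. [abs(c) for c in pywt.wavedec(x_true, wavelet, level=level)].
    Each step keeps the current coefficient signs, imposes the known
    magnitudes, inverts the transform, and restores the zero-mean property."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(signal_len)
    for _ in range(n_iter):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        signs = [np.where(c >= 0, 1.0, -1.0) for c in coeffs]
        coeffs = [s * m for s, m in zip(signs, target_mags)]
        x = pywt.waverec(coeffs, wavelet)[:signal_len]
        x -= x.mean()                      # enforce the zero-mean assumption
    return x
```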
Abstract: This study investigates the possibility of producing a gully erosion map by supervised classification of satellite (ETM+) images in two land types, mountainous and plain. These land types were part of the Varamin plain, Tehran province, and the Roodbar sub-basin, Guilan province, as the plain and mountainous land types, respectively. The positions of 652 and 124 ground control points were recorded by GPS in the mountainous and plain land types, respectively. Gully erosion, land use, and plant cover were investigated at these points. Based on the ground control points and auxiliary points, training points for gully erosion and the other surface features were introduced into the software (Ilwis 3.3 Academic). The supervised classification map of gully erosion was prepared by the maximum likelihood method, and the overall accuracy of this map was then computed. Results showed that supervised classification of gully erosion is not feasible, although more studies are needed to generalize the results to other mountainous regions. Also, as land uses and other surface features increase in the plain physiography, the classification accuracy decreases.
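The maximum likelihood classification step amounts to fitting one multivariate Gaussian per class to the training pixels and assigning each image pixel to the class with the highest log-likelihood; the sketch below is a generic illustration of that rule, not the Ilwis implementation:

```python
import numpy as np

def train_ml_classifier(training_pixels):
    """Fit a multivariate Gaussian per class from training pixels given as
    {class_name: array of shape (n_samples, n_bands)}."""
    stats = {}
    for name, samples in training_pixels.items():
        mean = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False)
        stats[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify_pixel(x, stats):
    """Assign the class with the highest Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for name, (mean, inv_cov, logdet) in stats.items():
        d = x - mean
        ll = -0.5 * (logdet + d @ inv_cov @ d)
        if ll > best_ll:
            best, best_ll = name, ll
    return best
```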
Abstract: This paper discusses the combination of the EM algorithm with the bootstrap approach to improve the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the bootstrap approach, was successfully implemented and tested. The sensor images are first segmented by a Bayesian segmentation method to determine a joint region map for the fused image. Then, we use the EM algorithm in conjunction with the bootstrap approach to develop the bootstrap EM fusion algorithm, which produces the fused target image. In this research we propose to estimate the statistical parameters in the iterative equations of the EM algorithm from representative bootstrap samples of the images, whose sizes are determined by a new criterion called the 'hybrid criterion'. The results of our work show that using bootstrap EM (BEM) in image fusion improves the accuracy of the estimated parameters, which in turn improves the quality of the fused image, and reduces the computing time of the fusion process.
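The core idea, running the EM updates on a bootstrap resample of the pixels instead of on the full image, can be illustrated with a small Gaussian mixture example; the fixed sample size below stands in for the paper's 'hybrid criterion', and the 1-D mixture model is our simplification:

```python
import numpy as np

def bootstrap_em_gmm(pixels, k=3, sample_size=5000, n_iter=50, seed=0):
    """EM updates for a 1-D Gaussian mixture run on a bootstrap resample of
    the pixel values rather than on the full image. The fixed sample_size
    stands in for the 'hybrid criterion' that sizes the bootstrap samples."""
    rng = np.random.default_rng(seed)
    x = rng.choice(np.asarray(pixels, dtype=np.float64).ravel(),
                   size=sample_size, replace=True)
    w = np.full(k, 1.0 / k)                         # mixture weights
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # crude initial means
    var = np.full(k, x.var() + 1e-6)                # initial variances
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        d2 = (x[:, None] - mu[None, :]) ** 2
        pdf = np.exp(-0.5 * d2 / var) / np.sqrt(2.0 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate weights, means and variances from the sample
        nk = r.sum(axis=0) + 1e-12
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```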
Abstract: Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems, a large amount of data is acquired, stored, and transmitted. A major technology that may help solve the problems associated with massive data storage and data transfer capacity is data compression and decompression. Many image compression methods are available; they are classified as lossless and lossy methods, and in lossy compression the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method in which an image is coded as a set of contractive transformations in a complete metric space, and this set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper, FIC is achieved by partitioned iterated function systems (PIFS) using quadtree partitioning. PIFS is applied to different modalities such as ultrasound, CT scan, angiogram, X-ray, and mammogram images. For each modality, approximately twenty images are considered and the average compression ratio and PSNR values are obtained. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10, keeping the other standard parameters constant. For all image modalities, the compression ratio and peak signal-to-noise ratio (PSNR) are computed and studied; the quality of the decompressed image is assessed by its PSNR. From the results, it is observed that the compression ratio increases with the tolerance factor, and mammograms have the highest compression ratio. Owing to the properties of fractal compression, the image quality is not degraded up to an optimum tolerance factor of Tmax = 8.
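The two mechanisms named above, the contractive range-domain fit and the tolerance-driven quadtree split, can be sketched as follows; the least-squares fit, the contractivity clamp, leaving domain resampling to the caller, and the parameter names are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def best_map(range_block, domain_blocks):
    """Least-squares fit of each candidate domain block onto the range
    block under the affine map s*d + o; returns (rms_error, index, s, o)."""
    r = range_block.ravel().astype(np.float64)
    best = (np.inf, -1, 0.0, 0.0)
    for i, d in enumerate(domain_blocks):
        dv = d.ravel().astype(np.float64)
        s, o = np.polyfit(dv, r, 1)          # brightness scale and offset
        s = float(np.clip(s, -1.0, 1.0))     # keep the map contractive
        err = np.sqrt(np.mean((s * dv + o - r) ** 2))
        if err < best[0]:
            best = (err, i, s, o)
    return best

def quadtree_encode(block, domains_for, t_max, min_size=4):
    """Accept the block's best map if its error is within the tolerance
    T_max, otherwise split into four quadrants and recurse. domains_for(n)
    must return candidate domain blocks already resampled to n x n."""
    err, i, s, o = best_map(block, domains_for(block.shape[0]))
    if err <= t_max or block.shape[0] <= min_size:
        return [(block.shape[0], i, s, o)]
    h = block.shape[0] // 2
    maps = []
    for y in (0, h):
        for x in (0, h):
            maps += quadtree_encode(block[y:y+h, x:x+h],
                                    domains_for, t_max, min_size)
    return maps
```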
Abstract: Among the factors that characterize satellite communication channels is their high bit error rate. We present a system for still image transmission over noisy satellite channels. The system couples image compression with error control codes to improve the received image quality while maintaining its bandwidth requirements. The proposed system is tested using high-resolution satellite imagery simulated over a Rician fading channel. Evaluation results show an improvement in the overall system, including image quality and bandwidth requirements, compared to similar systems with different coding schemes.
Abstract: 2D/3D registration is a special case of medical image registration which is of particular interest to surgeons. Applications of 2D/3D registration include radiotherapy planning and treatment verification, spinal surgery, hip replacement, neurointerventions, and aortic stenting [1]. The purpose of this paper is to provide a literature review of the main methods for image registration in the 2D/3D case. At the end of the paper, an algorithm for 2D/3D registration based on a Chebyshev polynomial iteration loop is proposed.
Abstract: In this paper, we propose an approach to unsupervised segmentation with fuzzy connectedness. Valid seeds are first specified by an unsupervised method based on scale-space theory. A region is then extracted for each seed with a relative object extraction method of fuzzy connectedness. Afterwards, regions are merged according to the values of an introduced measure between them. Theorems and propositions are also provided to show that this measure is a reasonable criterion for merging. Experimental results of our method on a synthetic image, a color image, and a large number of MR images are reported.
Abstract: A new method for color image segmentation using fuzzy logic is proposed in this paper. Our aim is to automatically produce a fuzzy system for color classification and image segmentation with the least number of rules and the minimum error rate. Particle swarm optimization is a subclass of evolutionary algorithms inspired by the social behavior of fish, bees, birds, and other animals that live together in colonies. We use the comprehensive learning particle swarm optimization (CLPSO) technique to find optimal fuzzy rules and membership functions, because it discourages premature convergence. Each particle of the swarm encodes a set of fuzzy rules. During evolution, a population member tries to maximize a fitness criterion that rewards a high classification rate and a small number of rules. Finally, the particle with the highest fitness value is selected as the best set of fuzzy rules for image segmentation. Using this method for soccer field image segmentation in RoboCup contests, our results show 89% performance. Less computational load is needed with this method compared with other methods such as ANFIS, because it generates a smaller number of fuzzy rules. The large and varied training dataset makes the proposed method invariant to illumination noise.
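The fitness criterion described above, a high classification rate combined with a small rule set, can be sketched as follows; classify_fn stands in for applying a particle's fuzzy rules to one sample, and the penalty weight is an illustrative value of our own:

```python
import numpy as np

def fitness(rule_set, classify_fn, samples, labels, rule_penalty=0.01):
    """Fitness used to rank particles in the swarm: reward a high
    classification rate and penalize large rule sets. classify_fn applies
    the particle's fuzzy rules to one sample; the penalty weight is an
    illustrative placeholder, not the paper's value."""
    predictions = np.array([classify_fn(rule_set, s) for s in samples])
    accuracy = float(np.mean(predictions == labels))
    return accuracy - rule_penalty * len(rule_set)
```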
Abstract: Extraction of edge-end pixels is an important step in the edge linking process used to achieve edge-based image segmentation. This paper presents an algorithm to extract edge-end pixels together with their directional sensitivities, as an augmentation to the currently available mathematical models. The algorithm is implemented in the Java environment because of its inherent compatibility with web interfaces, since its main use is envisaged to be remote image analysis on a virtual instrumentation platform.
Abstract: Recently, fast neural networks for object/face detection were presented in [1-3]. The speed-up factor of these networks relies on performing cross-correlation in the frequency domain between the input image and the weights of the hidden layer. However, the equations given in [1-3] for conventional and fast neural networks are not valid, for several reasons presented here. In this paper, correct equations for cross-correlation in the spatial and frequency domains are presented. Furthermore, correct formulas for the number of computation steps required by the conventional and fast neural networks of [1-3] are introduced, and a new formula for the speed-up ratio is established. Corrections for the equations of fast multi-scale object/face detection are also given, and commutative cross-correlation is achieved. Simulation results show that sub-image detection based on cross-correlation in the frequency domain is faster than classical neural networks.
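The underlying equivalence, that cross-correlation with the hidden-layer weights can be computed either directly in the spatial domain or by multiplying the image spectrum with the conjugate spectrum of the zero-padded weights, can be checked with a short sketch (our own illustration, not the paper's code):

```python
import numpy as np

def xcorr_spatial(image, kernel):
    """Valid-mode 2-D cross-correlation computed directly in the spatial domain."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

def xcorr_fft(image, kernel):
    """Same cross-correlation via the frequency domain: multiply the image
    spectrum by the conjugate spectrum of the zero-padded kernel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    F = np.fft.rfft2(image)
    K = np.fft.rfft2(kernel, s=image.shape)   # zero-pad kernel to image size
    full = np.fft.irfft2(F * np.conj(K), s=image.shape)
    return full[:ih - kh + 1, :iw - kw + 1]
```

For any image and kernel of compatible sizes the two functions agree to floating-point precision, i.e. np.allclose(xcorr_spatial(img, w), xcorr_fft(img, w)) holds, while the frequency-domain version scales much better as the kernel grows.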
Abstract: Detection, feature extraction, and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes, and the high dimensionality of articulated body models; they have also become important topics in image, signal, and vision computing in recent years. In this paper, a system that handles four types of people in 2D images is proposed and tested. The system extracts their size from the image and classifies them as tall fat, short fat, tall thin, or short thin. Whether a person is fat or thin is determined from the human body extracted from the image. The system also extracts the dimensions of the human body, such as length and width, and shows them in the output.
Abstract: An un-doped GaN film of thickness 1.90 mm, grown on a sapphire substrate, was uniformly implanted with 325 keV Mn+ ions at fluences ranging from 1.75 x 10^15 to 2.0 x 10^16 ions cm^-2 at a substrate temperature of 350 °C. The structural, morphological, and magnetic properties of the Mn-ion-implanted gallium nitride samples were studied using XRD, AFM, and SQUID techniques. XRD of the samples implanted at various ion fluences showed the presence of different magnetic phases: Ga3Mn, Ga0.6Mn0.4, and Mn4N. However, the compositions of these phases were found to depend on the ion fluence. AFM images of the non-implanted sample showed an rms surface roughness of 2.17 nm, whereas samples implanted at the various fluences showed the presence of nanoclusters on the GaN surface. The shape, size, and density of the clusters were found to vary with ion fluence. Magnetic moment versus applied field curves of the samples implanted at various fluences exhibit hysteresis loops. The Curie temperatures estimated from zero-field-cooled and field-cooled curves for the samples implanted at fluences of 1.75 x 10^15, 1.5 x 10^16, and 2.0 x 10^16 ions cm^-2 were found to be 309 K, 342 K, and 350 K, respectively.
Abstract: In this paper, we present a system for content-based retrieval from a large database of classified satellite images, based on the user's relevance feedback (RF). In the proposed system, each satellite image scene is divided into small subimages, which are stored in the database. A modified radial basis function neural network plays an important role in clustering the subimages of the database according to the Euclidean distance between the query feature vector and the feature vectors of the other subimages. The advantage of using the RF technique in such queries is demonstrated by analyzing the database retrieval results.
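The distance-based ranking and the feedback step can be sketched as follows; the Rocchio-style query update is only a stand-in for the paper's RBF-network-based feedback, and all function names are ours:

```python
import numpy as np

def rank_by_distance(query_vec, subimage_vecs, top_k=10):
    """Rank database subimages by Euclidean distance between the query
    feature vector and each subimage feature vector (smaller is closer).
    subimage_vecs is an (n_subimages, n_features) array."""
    dists = np.linalg.norm(subimage_vecs - query_vec, axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]

def refine_query(query_vec, relevant_vecs, alpha=0.5):
    """One relevance-feedback step: pull the query toward the mean of the
    subimages the user marked relevant (a Rocchio-style stand-in for the
    paper's neural-network-based feedback)."""
    return (1 - alpha) * query_vec + alpha * np.mean(relevant_vecs, axis=0)
```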
Abstract: Drought is a phenomenon caused by environmental and climatic changes and is influenced by rainfall shortage and temperature. Dust is one of the important environmental problems caused by climate change and drought. With the recent multi-year drought, many environmental crises caused by dust have occurred in Iran and the Middle East. Dust events occur with high frequency over vast areas of the provinces, creating many problems in terms of health, society, and the economy. In this study, we examined the most important factors causing dust, using satellite images and meteorological data. Finally, strategies to deal with the dust are presented.