Abstract: Textures are replications, symmetries and
combinations of various basic patterns, usually with some random
variation in the gray-level statistics. This article proposes a
new approach to segmenting texture images. The proposed approach
proceeds in two stages. First, the local texture information
of a pixel is obtained by the fuzzy texture unit, and the global texture
information of an image is obtained by the fuzzy texture spectrum.
The purpose of this paper is to demonstrate the usefulness of the fuzzy
texture spectrum for texture segmentation.
The second stage of the method is devoted to a decision process,
applying a global analysis followed by a fine segmentation
that focuses only on ambiguous points. The proposed
approach was applied to brain images to identify the components
of the brain, which in turn are used to locate brain tumors and
estimate their growth rate.
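The texture unit and texture spectrum described above can be sketched in their crisp form: each of the 8 neighbours of a pixel is compared to the centre with a ternary code, the codes are combined into a texture-unit number, and the histogram of those numbers over the image is the texture spectrum. The fuzzy variant in the paper replaces the hard 0/1/2 comparison with fuzzy memberships; this is only an illustrative sketch, and all names are hypothetical.

```python
def texture_unit(neigh):
    """neigh: 3x3 list of lists; returns a texture-unit number in [0, 6560]."""
    c = neigh[1][1]
    # the 8 neighbours in a fixed order, excluding the centre
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    num = 0
    for k, (i, j) in enumerate(order):
        v = neigh[i][j]
        e = 0 if v < c else (1 if v == c else 2)   # crisp ternary comparison
        num += e * (3 ** k)
    return num

def texture_spectrum(img):
    """Histogram of texture-unit numbers over all interior pixels."""
    h, w = len(img), len(img[0])
    spec = [0] * 6561
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = [row[j - 1:j + 2] for row in img[i - 1:i + 2]]
            spec[texture_unit(neigh)] += 1
    return spec
```

On a perfectly flat region every neighbour equals the centre, so every interior pixel maps to the same texture-unit number; textured regions spread mass across the spectrum, which is what makes it usable for segmentation.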
Abstract: Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper a new technique is introduced that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other pixel-level characteristics of the image, generating a set of p variables. The multivariate data set is approximated with polynomials that minimize the fitting error in the minimax sense (the L∞ norm). With the use of a Genetic Algorithm (GA) we circumvent the problem of dimensionality inherent to higher-degree polynomial approximations. The GA yields the degree and the coefficient values of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and applying an AdaBoost classifier to compare F's polynomials with each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
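The GA-driven minimax fit can be illustrated on a one-dimensional toy problem: a population of coefficient vectors is evolved to minimise the L∞ (maximum absolute) fitting error. The paper works with multivariate polynomials over pixel attributes; this sketch is a simplified, hypothetical stand-in (population size, mutation scale and selection scheme are all assumptions).

```python
import random

def linf_error(coeffs, xs, ys):
    """Maximum absolute fitting error of a polynomial (lowest degree first)."""
    return max(abs(sum(c * x ** k for k, c in enumerate(coeffs)) - y)
               for x, y in zip(xs, ys))

def ga_minimax_fit(xs, ys, degree=2, pop=40, gens=60, seed=1):
    """Evolve polynomial coefficients that minimise the L-infinity error."""
    rng = random.Random(seed)
    n = degree + 1
    popu = [[rng.uniform(-2, 2) for _ in range(n)] for _ in range(pop)]
    history = []
    for _ in range(gens):
        popu.sort(key=lambda c: linf_error(c, xs, ys))
        history.append(linf_error(popu[0], xs, ys))
        elite = popu[:pop // 4]          # elitism: the best never get worse
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append([(u + v) / 2 + rng.gauss(0, 0.1)
                             for u, v in zip(a, b)])
        popu = elite + children
    popu.sort(key=lambda c: linf_error(c, xs, ys))
    return popu[0], history
```

Because the elite are carried over unchanged, the best L∞ error is non-increasing from generation to generation, which is the minimal correctness property of such a scheme.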
Abstract: Prior to the use of the detectors, a comparison study
of their characteristics was performed and a baseline was established.
In patient-specific QA, the portal dosimetry mean values of area gamma,
average gamma and maximum gamma were 1.02, 0.31 and 1.31, with
standard deviations of 0.33, 0.03 and 0.14, for IMRT, and the
corresponding values were 1.58, 0.48 and 1.73, with standard
deviations of 0.31, 0.06 and 0.66, for VMAT. With the ImatriXX 2-D
array system, on average 99.35% of the pixels passed the
3%/3 mm gamma criterion, with a standard deviation of 0.24, for dynamic
IMRT. For VMAT, the average value was 98.16% with a standard
deviation of 0.86. The results showed that both systems can be
used in patient-specific QA measurements for IMRT and VMAT.
The values obtained with the portal dosimetry system were found to
be relatively more consistent than those obtained with the
ImatriXX 2-D array system.
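The 3%/3 mm gamma criterion reported above combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal one-dimensional sketch of a global gamma pass-rate computation (in the style of Low et al.; the clinical systems use full 2-D dose planes and interpolation, so this is illustrative only):

```python
import math

def gamma_pass_rate(ref, eval_dose, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """1-D global gamma: for each reference point, search the evaluated
    profile for the minimum combined dose/distance metric; a point passes
    when that minimum is <= 1."""
    d_max = max(ref)                      # global normalisation dose
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(eval_dose):
            dist = (i - j) * spacing_mm
            g2 = (dist / dta_mm) ** 2 + ((de - dr) / (dd * d_max)) ** 2
            best = min(best, g2)
        if math.sqrt(best) <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref)
```

Identical profiles yield a 100% pass rate, while grossly mismatched doses fail every point regardless of the distance term.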
Abstract: Linear approximation of the point spread function (PSF) is a new method for determining subpixel translations between images. The problem with the existing algorithm is its inability to determine translations larger than 1 pixel. In this paper a multiresolution technique is proposed to deal with this problem. Its performance is evaluated by comparison with two other well-known registration methods. In the proposed technique the images are downsampled in order to obtain a wider view. By progressively decreasing the downsampling rate up to the initial resolution and applying the linear approximation technique at each step, the algorithm is able to determine translations of several pixels at subpixel accuracy.
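The coarse-to-fine idea can be sketched on 1-D signals with integer shifts: match heavily downsampled copies first (so a large shift falls inside a small search window), then refine the estimate at full resolution. The paper refines further to subpixel accuracy via the PSF linear approximation; this sketch stops at integer precision and all parameter choices are illustrative.

```python
def _avg_abs_diff(a, b, s):
    """Mean absolute difference between a and b shifted by s (overlap only)."""
    pairs = [(a[i], b[i + s]) for i in range(len(a)) if 0 <= i + s < len(b)]
    return sum(abs(x - y) for x, y in pairs) / len(pairs)

def coarse_to_fine_shift(a, b, factor=4, refine=2):
    """Estimate a shift larger than the basic search range by first matching
    downsampled ("wider view") signals, then refining at full resolution."""
    a2, b2 = a[::factor], b[::factor]
    search = len(a2) // 2
    coarse = min(range(-search, search + 1),
                 key=lambda s: _avg_abs_diff(a2, b2, s)) * factor
    return min(range(coarse - refine, coarse + refine + 1),
               key=lambda s: _avg_abs_diff(a, b, s))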
Abstract: In this paper a fast motion estimation method for
H.264/AVC, named Triplet Search Motion Estimation (TS-ME), is
proposed. Like some traditional fast motion estimation methods
and their improved variants, which restrict the search
to a few selected candidate points to decrease computational
complexity, the proposed algorithm separates the motion search process
into several steps, but with some new features. First, the algorithm
searches for the true motion area using the proposed triplet patterns
instead of a few selected search points, to avoid falling into a local
minimum. Then, within the localized motion area, a novel three-step
motion search algorithm is performed. The proposed search patterns are
categorized into three rings on the basis of their distance from the
search center, and these rings are adaptively selected by referencing
the surrounding motion vectors so that the motion search can terminate
early. In addition, computation reduction for the sub-pixel motion
search is also discussed, considering the appearance probability of the
sub-pixel motion vector. Simulation results show that the proposed
algorithm speeds up motion estimation by a factor of up to 38 compared
with the H.264/AVC reference software, with negligible picture-quality
loss.
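The multi-step search idea can be illustrated with the classic three-step search (TSS) over a sum-of-absolute-differences (SAD) cost; TS-ME replaces the fixed square pattern with the paper's triplet/ring patterns, so this is only a baseline sketch, not the proposed algorithm.

```python
def sad(frame, block, x, y):
    """Sum of absolute differences between block and the frame patch at (x, y)."""
    n = len(block)
    return sum(abs(frame[y + i][x + j] - block[i][j])
               for i in range(n) for j in range(n))

def three_step_search(frame, block, x0, y0):
    """Classic three-step search: shrink the step (4, 2, 1) around the best
    candidate found so far; returns the motion vector (dx, dy)."""
    bx, by = x0, y0
    n = len(block)
    h, w = len(frame), len(frame[0])
    for step in (4, 2, 1):
        cands = [(bx + dx, by + dy)
                 for dx in (-step, 0, step) for dy in (-step, 0, step)
                 if 0 <= bx + dx <= w - n and 0 <= by + dy <= h - n]
        bx, by = min(cands, key=lambda p: sad(frame, block, p[0], p[1]))
    return bx - x0, by - y0
```

Only 9 candidates are evaluated per step instead of a full window, which is exactly the computation-versus-local-minimum trade-off the abstract discusses.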
Abstract: Bleeding in the digestive tract is an important diagnostic parameter for patients. Blood in an endoscopic image can be detected by investigating its color tone, which varies with the degree of oxygenation, under- or over-illumination, food debris, secretions, etc. We found, however, that pre-processing the raw images obtained from the capsule detectors is very important. We applied various image-processing methods suited to capsule endoscopic images in order to remove noise and correct unbalanced pixel sensitivities. The results showed that these additional pre-processing techniques considerably improved the algorithm for determining bleeding areas.
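A minimal sketch of color-tone-based bleeding detection: flag pixels whose red share of the total intensity exceeds a threshold, skipping under-illuminated pixels whose tone is unreliable. The thresholds and the ratio test are illustrative assumptions, not the paper's calibrated algorithm.

```python
def bleeding_mask(rgb_img, red_ratio=0.6, min_brightness=30):
    """Per-pixel mask: True where the red fraction r/(r+g+b) exceeds the
    threshold and the pixel is bright enough to carry a reliable tone."""
    mask = []
    for row in rgb_img:
        out = []
        for (r, g, b) in row:
            total = r + g + b
            # short-circuit keeps dark (even all-zero) pixels out safely
            out.append(total >= min_brightness and r / total > red_ratio)
        mask.append(out)
    return mask
```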
Abstract: This manuscript presents palmprint recognition that
combines different texture extraction approaches with high accuracy.
The Region of Interest (ROI) is decomposed into different frequency-time
sub-bands by a wavelet transform up to two levels, and only the
level-two approximation image is selected; this is known as the
Approximate Image ROI (AIROI). The AIROI retains the information of the
principal lines of the palm. The Competitive Index is used as the
palmprint feature: six Gabor filters of different
orientations are convolved with the palmprint image to extract
orientation information from the image. A winner-take-all strategy
selects the dominant orientation for each pixel, which is
known as the Competitive Index. Further, PCA is applied to select highly
uncorrelated Competitive Index features, to reduce the dimensionality of
the feature vector, and to project the features onto the eigenspace. The
similarity of two palmprints is measured by the Euclidean distance
metric. The algorithm is tested on the Hong Kong PolyU palmprint
database. AIROIs from different wavelet filter families are also
tested with the Competitive Index and PCA. The AIROI of the db7 wavelet
filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine
Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.
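The winner-take-all step can be sketched directly: given one response map per filter orientation, keep, for each pixel, the index of the winning orientation. Whether the maximum or the minimum response wins depends on the Gabor filter convention; the maximum is shown here as an assumption, and the real pipeline would feed actual Gabor responses in.

```python
def competitive_index(responses):
    """responses: list of equally-sized 2-D filter-response maps (one per
    orientation). Returns, per pixel, the index of the winning orientation."""
    n_maps = len(responses)
    h, w = len(responses[0]), len(responses[0][0])
    return [[max(range(n_maps), key=lambda k: responses[k][i][j])
             for j in range(w)] for i in range(h)]
```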
Abstract: The world has entered the 21st century. The technology of
computer graphics and digital cameras is prevalent, and high-resolution
displays and printers are available. High-resolution images
are therefore needed to produce high-quality displayed images and
high-quality prints. However, since high-resolution images are not
usually available, there is a need to magnify the original images. A
common difficulty in previous magnification techniques is
preserving details, i.e. edges, while at the same time smoothing the
data so as not to introduce spurious artefacts; a definitive solution
to this is still an open issue. In this paper an image magnification
method using adaptive interpolation based on pixel-level data-dependent
geometrical shapes is proposed, which takes into account information
about the edges (sharp luminance variations) and the smoothness of the
image. It calculates a threshold, classifies the interpolation region
into geometrical shapes, and then assigns suitable values to the
undefined pixels inside the interpolation region while preserving both
the sharp luminance variations and the smoothness.
The results of the proposed technique have been compared qualitatively
and quantitatively with five other techniques. The qualitative
results show that the proposed method clearly outperforms nearest-neighbour
(NN), bilinear (BL) and bicubic (BC) interpolation, while the
quantitative results are competitive and consistent with NN, BL, BC
and the others.
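For reference, one of the baselines the paper compares against (bilinear interpolation) can be sketched in a few lines; the proposed adaptive, shape-classifying method is more involved and is not reproduced here.

```python
def bilinear_resize(img, new_h, new_w):
    """Plain bilinear interpolation of a 2-D list of grayscale values."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        y = i * (h - 1) / (new_h - 1)
        y0 = min(int(y), h - 2)           # clamp so y0 + 1 stays in range
        fy = y - y0
        row = []
        for j in range(new_w):
            x = j * (w - 1) / (new_w - 1)
            x0 = min(int(x), w - 2)
            fx = x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
            bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Bilinear interpolation averages across edges, which is precisely the detail-blurring behaviour the proposed edge-aware method is designed to avoid.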
Abstract: Two algorithms are proposed to reduce the storage requirements of mammogram images. The input image goes through a shrinking process that converts the 16-bit images to 8 bits by using a pixel-depth conversion algorithm, followed by an enhancement process. The performance of the algorithms is evaluated objectively and subjectively. A 50% reduction in size is obtained with no loss of significant data in the breast region.
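A simple pixel-depth conversion can be sketched as a linear windowing of the 16-bit range onto 0..255; the paper's exact mapping is not specified in the abstract, so this is an illustrative assumption.

```python
def to_8bit(img16, lo=None, hi=None):
    """Map the 16-bit window [lo, hi] linearly onto 0..255, clamping values
    outside the window; defaults use the image's own min/max."""
    flat = [v for row in img16 for v in row]
    lo = min(flat) if lo is None else lo
    hi = max(flat) if hi is None else hi
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[max(0, min(255, round((v - lo) * scale))) for v in row]
            for row in img16]
```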
Abstract: In image processing and visualization, comparing two
bitmapped images requires matching them pixel by pixel, which
takes a great deal of computational time, whereas
the comparison of two vector-based images is significantly
faster. These raster graphics images can sometimes be approximately
converted into vector-based images by various techniques. After
conversion, the problem of comparing two raster graphics images
can be reduced to the problem of comparing vector graphics images;
hence, the problem of pixel-by-pixel comparison can be reduced to
the problem of polynomial comparison. In computer-aided geometric
design (CAGD), vector graphics images are compositions of
curves and surfaces, and curves are defined by a sequence of control
points and their polynomials. In this paper, the control points are
used to compare curves. Curves that have been translated
or rotated are treated as equivalent, while curves that differ
only in scale are considered similar. This paper proposes an
algorithm for comparing polynomial curves for equivalence and
similarity by using their control points. In addition, the geometric
object-oriented database used to keep the curve information has also
been defined in XML format for further use in curve comparisons.
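The equivalence/similarity tests on control points can be sketched as follows: centring the control polygon removes translation, and additionally normalising its size removes uniform scale. Handling rotation as well would require aligning principal directions, which is omitted here; this is an illustrative reduction of the paper's algorithm, not its full form.

```python
import math

def _centred(points):
    """Subtract the centroid: removes the effect of translation."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return [(x - cx, y - cy) for x, y in points]

def equivalent(pa, pb, tol=1e-9):
    """True if the control polygons differ only by translation."""
    return all(abs(ax - bx) < tol and abs(ay - by) < tol
               for (ax, ay), (bx, by) in zip(_centred(pa), _centred(pb)))

def similar(pa, pb, tol=1e-9):
    """True if the control polygons differ only by translation and a
    uniform scale (size-normalised centred polygons coincide)."""
    a, b = _centred(pa), _centred(pb)
    na = math.sqrt(sum(x * x + y * y for x, y in a))
    nb = math.sqrt(sum(x * x + y * y for x, y in b))
    if na == 0 or nb == 0:
        return na == nb
    return all(abs(ax / na - bx / nb) < tol and abs(ay / na - by / nb) < tol
               for (ax, ay), (bx, by) in zip(a, b))
```

Because Bezier-style curves are affine combinations of their control points, agreeing control polygons imply agreeing curves, which is why the comparison can stay at the control-point level.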
Abstract: Detection and tracking of the lip contour is an important
issue in speechreading. While there are solutions for lip tracking
once a good contour initialization in the first frame is available,
the problem of finding such a good initialization has not yet been
solved automatically; it is still done manually. We have developed a
new tracking solution for lip contour detection using only a few
landmarks (15 to 25) and applying the well-known Active Shape Models
(ASM). The proposed method is a new LMS-like adaptive scheme based on
an autoregressive (AR) model fitted to the landmark
variations in successive video frames. Moreover, we propose an extra
motion compensation model to address more general cases in lip
tracking. Computer simulations demonstrate a fair match between
the true and the estimated spatial pixels. Significant improvements
over the well-known LMS approach have been obtained via a
defined Frobenius-norm index.
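The LMS baseline the paper improves upon can be sketched as a one-step adaptive predictor: each landmark coordinate is predicted from its previous values and the filter weights are nudged by the prediction error. This operates on a single scalar series for illustration; the paper applies an AR-model-based scheme to full landmark vectors, so step size and order here are assumptions.

```python
def lms_predict(signal, order=2, mu=0.2):
    """One-step LMS prediction: predict signal[n] from the previous `order`
    samples and adapt the weights proportionally to the error."""
    w = [0.0] * order
    errors = []
    for n in range(order, len(signal)):
        past = signal[n - order:n]
        pred = sum(wi * xi for wi, xi in zip(w, past))
        e = signal[n] - pred
        errors.append(e)
        w = [wi + mu * e * xi for wi, xi in zip(w, past)]   # LMS update
    return w, errors
```

On a signal that actually follows a low-order AR recursion, the prediction error shrinks as the weights converge toward the true AR coefficients.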
Abstract: Migration in a breast cancer cell wound healing assay
has been studied using image fractal dimension analysis. The
migration of MDA-MB-231 cells (highly motile) in a wound healing
assay was captured using time-lapse phase contrast video microscopy
and compared to MDA-MB-468 cell migration (moderately motile).
The Higuchi fractal method was used to compute the fractal
dimension of the image intensity fluctuation along a single pixel
width region parallel to the wound. The near-wound region fractal
dimension was found to decrease three times faster in the MDA-MB-
231 cells initially as compared to the less cancerous MDA-MB-468
cells. The inner region fractal dimension was found to be fairly
constant for both cell types in time and suggests a wound influence
range of about 15 cell layers. The box-counting fractal dimension
method was also used to study a region of interest (ROI). The
MDA-MB-468 ROI area fractal dimension was found to decrease
continuously up to 7 hours. The MDA-MB-231 ROI area fractal
dimension was found to increase and is consistent with the behavior
of an HGF-treated MDA-MB-231 wound healing assay posted in the
public domain. A fractal dimension based capacity index has been
formulated to quantify the invasiveness of the MDA-MB-231 cells in
the perpendicular-to-wound direction. Our results suggest that image
intensity fluctuation fractal dimension analysis can be used as a tool
to quantify cell migration in terms of cancer severity and treatment
responses.
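The Higuchi fractal dimension used above is a standard algorithm for 1-D series (here, the intensity profile along a one-pixel-wide strip). A minimal sketch, with the usual curve-length normalisation:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series: the slope of
    log L(k) versus log(1/k), where L(k) is the mean normalised curve
    length at subsampling interval k."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = list(range(m + k, n, k))
            if not idx:
                continue
            length = sum(abs(x[i] - x[i - k]) for i in idx)
            norm = (n - 1) / (len(idx) * k)   # compensate for subsampling
            lengths.append(length * norm / k)
        logs.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    # least-squares slope of log L(k) against log(1/k)
    mx = sum(p[0] for p in logs) / len(logs)
    my = sum(p[1] for p in logs) / len(logs)
    num = sum((px - mx) * (py - my) for px, py in logs)
    den = sum((px - mx) ** 2 for px, py in logs)
    return num / den
```

A smooth monotone profile gives a dimension near 1, while rougher intensity fluctuations push it toward 2; the decrease reported for the near-wound region corresponds to the profile smoothing out as cells migrate in.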
Abstract: The Least Significant Bit (LSB) technique is the earliest
watermarking technique and is also the simplest, most
direct and most common. It essentially involves embedding the
watermark by replacing the least significant bit of the image data with
a bit of the watermark data. The disadvantage of LSB is that it is not
robust against attacks. In this study the intermediate significant bit
(ISB) has been used in order to improve the robustness of the
watermarking system. The aim of this model is to replace the
watermarked image pixels with new pixels that protect the watermark
data against attacks while keeping the new pixels very close to the
original pixels, in order to preserve the quality of the watermarked
image. The technique is based on testing the value of the watermark
pixel according to the range of each bit plane.
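The baseline LSB embedding can be sketched in two short functions; the ISB scheme generalises this by targeting an intermediate bit plane and adjusting the pixel within that plane's range, which is not reproduced here.

```python
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel with a watermark bit;
    each pixel changes by at most 1, so image quality is barely affected."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    """Recover the watermark bits from the least significant bit plane."""
    return [p & 1 for p in pixels]
```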
Abstract: In this paper, we introduce a new method for elliptical
object identification. The proposed method adopts a hybrid scheme
consisting of the eigenvalues of covariance matrices, the circular
Hough transform and Bresenham's raster scan algorithm. The
approach exploits the fact that the large and small eigenvalues
of the covariance matrix are associated with the major and minor
axial lengths of the ellipse. The centre of the ellipse is
identified using the circular Hough transform (CHT). A sparse matrix
technique is used to perform the CHT: since sparse matrices squeeze
out zero elements and store only a small number of non-zero elements,
they save matrix storage space and computation time.
A neighborhood suppression scheme is used to find the valid Hough
peaks. The accurate positions of the circumference pixels are identified
using the raster scan algorithm, which exploits the geometrical symmetry
property. The method does not require the evaluation of tangents or of
the curvature of edge contours, which are generally very sensitive to
noise. The proposed method has the advantages of
small storage, high speed and accuracy in identifying the feature. The
new method has been tested on both synthetic and real images, and
several experiments have been conducted on images with
considerable background noise to reveal its efficacy and robustness.
Experimental results on the accuracy of the proposed method, together
with comparisons with the Hough transform, its variants and other
tangent-based methods, are reported.
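The eigenvalue observation above can be sketched directly: for edge points sampled on an ellipse, the square roots of the two eigenvalues of the 2x2 covariance matrix are proportional to the major and minor semi-axes, so their ratio recovers the axis ratio. A minimal sketch assuming a centred, noise-free point set:

```python
def ellipse_axis_ratio(points):
    """Major/minor axis ratio from the eigenvalues of the 2x2 covariance
    matrix of edge points sampled on the ellipse."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((x - mx) ** 2 for x, y in points) / n
    syy = sum((y - my) ** 2 for x, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # closed-form eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = (tr * tr / 4 - det) ** 0.5
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    return (lam1 / lam2) ** 0.5
```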
Abstract: Over the past years, the EMCCD has had a profound
influence on photon-starved imaging applications, relying on its unique
multiplication register based on the impact ionization effect in
silicon. A high signal-to-noise ratio (SNR) means high image quality,
so SNR improvement is important for the EMCCD. This work
analyzes the SNR performance of an EMCCD with the gain off and on. In
each mode, simplified SNR models are established for different
integration times. The SNR curves are divided by integration time into
a readout noise (or CIC) region and a shot noise region. Theoretical
SNR values comparing long frame integration and frame adding in
each region are presented and discussed to determine which method is
more effective. To further improve the SNR performance,
pixel binning is introduced into the EMCCD. The results show that
pixel binning clearly improves the SNR performance, but at the
expense of spatial resolution.
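The binning trade-off can be illustrated with a simplified EMCCD SNR model (the exact noise model in the paper is more detailed; the read noise, gain and excess-noise values below are assumptions for illustration):

```python
import math

def emccd_snr(signal_e, read_noise_e=10.0, gain=300.0, binning=1):
    """Simplified EMCCD SNR in the gain-on mode: n x n binning sums signal
    from n^2 pixels before a single readout, the EM gain divides the
    effective read noise, and the excess-noise factor F ~ sqrt(2) inflates
    the shot noise."""
    s = signal_e * binning ** 2          # binned signal (electrons)
    shot = 2.0 * s                        # F^2 * s with F = sqrt(2)
    read = (read_noise_e / gain) ** 2     # gain-suppressed read noise
    return s / math.sqrt(shot + read)
```

Because the signal grows with the binned area while only one readout is incurred, SNR rises with the binning factor, at the cost of an n-fold loss of spatial resolution in each direction.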
Abstract: In this paper we present a new method for coin
identification. The proposed method adopts a hybrid scheme using the
eigenvalues of the covariance matrix, the Circular Hough Transform (CHT)
and Bresenham's circle algorithm. The statistical and geometrical
properties of the small and large eigenvalues of the covariance
matrix of a set of edge pixels over a connected region of support are
exploited for circular object detection. A sparse matrix
technique is used to perform the CHT: since sparse matrices squeeze
out zero elements and store only a small number of non-zero elements,
they save matrix storage space and computation time.
A neighborhood suppression scheme is used to find the valid
Hough peaks. The accurate positions of the circumference pixels are
identified using a raster scan algorithm that exploits the geometrical
symmetry property. After finding the circular objects, the proposed
method uses the texture on the surface of the coins, described by
textons, which are unique properties of coins and refer to the
fundamental micro-structures in generic natural images. The method has
been tested on several real-world images, including both coin and
non-coin images, and its performance is also evaluated in terms of its
noise-withstanding capability.
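The circumference-pixel step can be sketched with the classic midpoint (Bresenham-style) circle algorithm: rasterise one octant with an integer decision variable and mirror it using the 8-fold symmetry of the circle, which is the geometrical symmetry property the abstract refers to.

```python
def circle_pixels(r):
    """Midpoint circle: integer circumference pixels of a circle of radius r
    centred at the origin, generated from one octant by 8-fold symmetry."""
    pts = set()
    x, y, d = r, 0, 1 - r
    while y <= x:
        for sx, sy in ((x, y), (y, x)):           # mirror across y = x
            pts.update({(sx, sy), (-sx, sy), (sx, -sy), (-sx, -sy)})
        y += 1
        if d < 0:
            d += 2 * y + 1                         # stay on the same column
        else:
            x -= 1
            d += 2 * (y - x) + 1                   # step diagonally inward
    return pts
```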
Abstract: The histogram plays an important statistical role in digital
image processing. However, the existing quantum image models are
ill-suited to this kind of statistical processing because
different gray scales are not distinguishable. In this paper, a novel
quantum image representation model is first proposed in which
pixels with different gray scales can be distinguished and operated on
simultaneously. Based on the new model, a fast quantum algorithm for
constructing the histogram of a quantum image is designed. A performance
comparison reveals that the new quantum algorithm achieves an
approximately quadratic speedup over its classical counterpart. The
proposed quantum model and algorithm are significant for
future research in quantum image processing.
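For reference, the classical counterpart against which the quadratic speedup is measured is the straightforward one-counter-per-pixel histogram, linear in the number of pixels:

```python
def gray_histogram(img, levels=256):
    """Classical O(N) histogram: one counter increment per pixel."""
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    return hist
```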
Abstract: Among all geo-hydrological relationships, the rainfall-runoff
relationship is of utmost importance in any hydrological
investigation and in water resource planning. Spatial variation, and
the lag time involved in obtaining areal estimates for the basin as a
whole, can affect the parameterization in the design stage as well as
in the planning stage. In conventional hydrological data processing,
the spatial aspect is either ignored or interpolated at the sub-basin
level. Temporal variation, when analysed for different stages, can
provide clues to its spatial effectiveness, and the interplay of
space-time variation at the pixel level can provide a better
understanding of basin parameters. The sustenance of design structures
for different return periods, and their spatial auto-correlations,
should be studied at different geographical scales for better
management and planning of water resources.
In order to understand the relative effect of spatio-temporal
variation in a hydrological data network, a detailed geo-hydrological
analysis of the Betwa river catchment, falling in the Lower Yamuna
Basin, is presented in this paper. Moreover, exact estimates of the
availability of water in the Betwa river catchment, especially in the
wake of the recent Betwa-Ken linkage project, need thorough scientific
investigation for better planning. Therefore, an attempt in this
direction is made here to analyse the existing hydrological and
meteorological data with the help of SPSS, GIS and MS-EXCEL
software. A comparison of spatial and temporal correlations at the
sub-catchment level in the upper Betwa reaches has been made to
demonstrate the representativeness of the rain gauges. First, flows at
different locations are used to derive correlation and regression
coefficients. Then, long-term normal water yield estimates based on
pixel-wise regression coefficients of the rainfall-runoff relationship
have been mapped. The areal values obtained from these maps can
definitely improve upon estimates based on point-based
extrapolations or areal interpolations.
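The pixel-wise regression coefficients mentioned above are, for each pixel, the least-squares slope and intercept of runoff against rainfall over the pixel's time series. A minimal sketch for one pixel (the variable names are illustrative; the paper maps such coefficients across the whole catchment):

```python
def rainfall_runoff_fit(rain, runoff):
    """Least-squares slope and intercept of runoff versus rainfall for one
    pixel's time series."""
    n = len(rain)
    mx = sum(rain) / n
    my = sum(runoff) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(rain, runoff))
    sxx = sum((x - mx) ** 2 for x in rain)
    slope = sxy / sxx
    return slope, my - slope * mx
```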
Abstract: Several methods have been proposed for color image
compression, but the reconstructed images had a very low signal-to-noise
ratio, which made them inefficient. This paper describes a lossy
compression technique for color images that overcomes these
drawbacks. The technique works in the spatial domain, where the pixel
values of the RGB planes of the input color image are mapped onto
two-dimensional planes. The proposed technique produced better results
than JPEG2000 and 2DPCA; a comparative study is reported based
on image quality measures such as PSNR and MSE. Experiments
on real images compare this methodology with
previous ones and demonstrate its advantages.
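The two quality measures used in the comparison are standard and can be sketched directly for 8-bit grayscale planes:

```python
import math

def mse_psnr(ref, test, peak=255):
    """Mean squared error and PSNR = 10*log10(peak^2 / MSE) between two
    equally-sized images given as 2-D lists."""
    diffs = [(r - t) ** 2
             for ra, ta in zip(ref, test) for r, t in zip(ra, ta)]
    mse = sum(diffs) / len(diffs)
    psnr = float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
    return mse, psnr
```

A higher PSNR (lower MSE) means the reconstruction is closer to the original, which is the sense in which the proposed technique outperforms the compared methods.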
Abstract: This paper gives a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques, with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the pixels of the current frame, which contains both foreground and background. Third, the traffic lanes are divided into two parts, one for each direction of travel, for parallel processing. Finally, the speeds of the vehicles are estimated by a maximum a posteriori (MAP) estimator. The true ground speed is determined using infrared sensors for three different vehicles, and the results are compared with those of the proposed algorithm, which achieves an accuracy of ±0.74 km/h.
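The first step (background estimation by temporal median) can be sketched directly: moving vehicles occupy any given pixel in only a few frames of the window, so the per-pixel median recovers the road surface. A minimal sketch on 2-D grayscale frames:

```python
def median_background(frames):
    """Per-pixel median over a time window of frames (each a 2-D list):
    transient foreground values are outliers, so the median is the background."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = []
    for i in range(h):
        row = []
        for j in range(w):
            vals = sorted(f[i][j] for f in frames)
            row.append(vals[len(vals) // 2])
        bg.append(row)
    return bg
```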