Abstract: This paper proposes a genetic-algorithm-based approach to computing region-based image similarity. An image is represented by a set of segmented regions reflecting its color and texture properties, and is thus associated with a family of image features corresponding to the regions. The resemblance of two images is then defined as the overall similarity between two families of image features, quantified by a similarity measure that integrates properties of all the regions in the images. A genetic algorithm is applied to determine the most plausible matching. The performance of the proposed method is illustrated using examples from a database of general-purpose images, and is shown to produce good results.
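The matching step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the region features, the similarity measure, and the genetic-algorithm parameters are all assumptions.

```python
# Sketch: a genetic algorithm searching for the region matching between two
# images that maximizes overall similarity. Region feature vectors, the
# similarity measure, and all GA parameters are illustrative assumptions.
import random

def similarity(f1, f2):
    """Similarity of two region feature vectors (inverse Euclidean distance)."""
    d = sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5
    return 1.0 / (1.0 + d)

def fitness(perm, regions_a, regions_b):
    """Overall similarity of a matching: sum over matched region pairs."""
    return sum(similarity(regions_a[i], regions_b[j]) for i, j in enumerate(perm))

def ga_match(regions_a, regions_b, pop_size=30, generations=100, seed=0):
    """Evolve permutations of B's regions to find the most plausible matching."""
    rng = random.Random(seed)
    n = len(regions_a)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: -fitness(p, regions_a, regions_b))
        survivors = pop[: pop_size // 2]          # keep the fittest half
        children = []
        for p in survivors:
            child = p[:]                          # mutate: swap two positions
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = max(pop, key=lambda p: fitness(p, regions_a, regions_b))
    return best, fitness(best, regions_a, regions_b)
```

Each candidate is a permutation assigning every region of one image to a region of the other; swap mutation with elitist selection is one simple way to explore the assignment space.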
Abstract: In this paper, a new robust and efficient algorithm for automatic text extraction from color book and journal cover sheets is proposed. First, a wavelet transform is applied. Next, a dynamic threshold is used to detect edges from the detail wavelet coefficients. Effective edges are then achieved by blurring the approximation coefficients with an alternative heuristic thresholding. Afterward, a binary image is obtained with an ROI technique. Finally, text boxes are extracted with a new projection profile method.
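The final step can be illustrated with the classical projection-profile technique; the paper's "new projection profile" is not specified in the abstract, so this sketch shows only the standard row-then-column variant on a binary image.

```python
# Sketch of classical projection-profile text-box extraction on a binary
# image (list of rows of 0/1). The paper's "new" profile is not detailed in
# the abstract; this is the textbook variant for comparison.
def row_profile(binary):
    return [sum(row) for row in binary]

def runs(profile, thresh=0):
    """Return (start, end) index pairs of consecutive entries above thresh."""
    spans, start = [], None
    for i, v in enumerate(profile):
        if v > thresh and start is None:
            start = i
        elif v <= thresh and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans

def text_boxes(binary):
    """Split the image into boxes: horizontal bands first, then columns."""
    boxes = []
    for top, bottom in runs(row_profile(binary)):
        band = binary[top:bottom]
        col_profile = [sum(col) for col in zip(*band)]
        for left, right in runs(col_profile):
            boxes.append((top, left, bottom, right))
    return boxes
```

Rows whose ink count exceeds the threshold form candidate text bands; each band is then cut vertically the same way to isolate individual boxes.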
Abstract: Noting that an effective cure for infertility becomes
possible when a unique solution can be found, a great deal of study
has been done in this field, and it remains a hot research subject
today. Analyzing men's semen to assess fertility or infertility, and
deriving a true cure from that analysis, would be greatly welcomed,
since it is a non-invasive and low-risk procedure. In this research,
the procedure is based on several image enhancement and segmentation
algorithms, applied to images taken with microscopes at different
fertility institutions. Suitable results have been obtained from the
computed images, which in turn help to distinguish sperm from the
surrounding fluid.
Abstract: Digital watermarking has become an important technique for copyright protection, but its robustness against attacks remains a major problem. In this paper, we propose a normalization-based robust image watermarking scheme. In the proposed scheme, the original host image is first normalized to a standard form. The Zernike transform is then applied to the normalized image to calculate its Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments according to the watermark bit stream. The watermark extraction method is blind. Security analysis and false alarm analysis are then performed. The quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Experimental results show that the proposed scheme has very high robustness against various image processing operations and geometric attacks.
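The dither-modulation step can be sketched on a single moment magnitude. Binary dither modulation (quantization index modulation) embeds a bit by snapping the value to one of two interleaved quantizer lattices; the step size below is an illustrative assumption, not a value from the paper.

```python
# Sketch of binary dither modulation (QIM) on one Zernike moment magnitude.
# The step size DELTA is an assumption; the paper applies this to the
# magnitudes of Zernike moments of the normalized image.
DELTA = 4.0

def embed_bit(m, bit):
    """Quantize magnitude m onto the lattice associated with the bit."""
    d = 0.0 if bit == 0 else DELTA / 2.0
    return DELTA * round((m - d) / DELTA) + d

def extract_bit(m):
    """Blind extraction: pick the bit whose lattice lies nearer to m."""
    e0 = abs(m - embed_bit(m, 0))
    e1 = abs(m - embed_bit(m, 1))
    return 0 if e0 <= e1 else 1
```

Because the two lattices are offset by DELTA/2, extraction tolerates any perturbation smaller than DELTA/4, which is what gives the scheme its robustness margin.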
Abstract: In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are important factors affecting, in particular, the accuracy of facial localization and face recognition. To address these factors, we propose a robust approach consisting of two phases. In the first phase, face images are preprocessed by the proposed illumination normalization method, and the locations of facial features are fitted more efficiently and quickly based on the proposed image blending. In addition, based on template matching, we further improve the active shape model (called IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.
Abstract: Textures are replications, symmetries and
combinations of various basic patterns, usually with some random
variation of their gray-level statistics. This article proposes a
new approach to segment texture images. The proposed approach
proceeds in two stages. First, the local texture information
of a pixel is obtained by a fuzzy texture unit, and the global texture
information of an image is obtained by the fuzzy texture spectrum.
The purpose of this paper is to demonstrate the usefulness of the
fuzzy texture spectrum for texture segmentation.
The second stage of the method is devoted to a decision process,
applying a global analysis followed by a fine segmentation,
which focuses only on ambiguous points. The proposed
approach was applied to brain images to identify the components
of the brain, in turn used to locate a brain tumor and its growth
rate.
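The texture unit and texture spectrum mentioned above can be sketched in their classical crisp form; the paper's fuzzy variant replaces the hard three-way comparison with fuzzy memberships, which is omitted here for brevity.

```python
# Sketch of the classical (crisp) texture unit and texture spectrum.
# The fuzzy version in the paper softens the hard {0, 1, 2} comparison
# with membership functions; that refinement is not shown here.
def texture_unit(img, r, c):
    """Ternary code of the 8 neighbors of (r, c): 0 below, 1 equal, 2 above."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    unit = 0
    for k, (dr, dc) in enumerate(offsets):
        v = img[r + dr][c + dc]
        e = 0 if v < center else (1 if v == center else 2)
        unit += e * 3 ** k
    return unit                      # one of 3**8 = 6561 texture units

def texture_spectrum(img):
    """Histogram of texture units over all interior pixels."""
    hist = {}
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            u = texture_unit(img, r, c)
            hist[u] = hist.get(u, 0) + 1
    return hist
```

The unit encodes the local texture of a pixel; the spectrum, i.e. the histogram of units over the image, carries the global texture information used for segmentation.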
Abstract: Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is preferred to a hypercube as an interconnection network topology for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e., its edges can be partitioned into disjoint Hamiltonian cycles, so a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the star graph 5-star, which is of degree 4, by defining an automorphism for the 5-star and a Hamiltonian cycle that is edge-disjoint with its image under the automorphism.
Abstract: This paper presents a novel iris recognition system
using a 1D log polar Gabor wavelet and Euler numbers. The 1D log
polar Gabor wavelet is used to extract the textural features, and
Euler numbers are used to extract topological features of the iris.
The proposed decision strategy uses these features to authenticate an
individual's identity while maintaining a low false rejection rate.
The algorithm was tested on the CASIA iris image database and found
to perform better than existing approaches, with an overall accuracy
of 99.93%.
Abstract: A fusion classifier composed of two modules, one made by a hidden Markov model (HMM) and the other by a support vector machine (SVM), is proposed to recognize faces with pose variations in open-set recognition settings. The HMM module captures the evolution of facial features across a subject's face using the subject's facial images only, without reference to the faces of others. Because of this captured evolutionary process of facial features, the HMM module retains a certain robustness against pose variations, yielding low false rejection rates (FRR) for recognizing faces across poses. This comes, however, at the price of poor false acceptance rates (FAR) when recognizing other faces, because the module is built upon within-class samples only. The SVM module in the proposed model follows a special design able to substantially diminish the FAR and further lower the FRR. The proposed fusion classifier has been evaluated on the CMU PIE database and proven effective for open-set face recognition with pose variations. Experiments have also shown that it outperforms a face classifier made by an HMM or SVM alone.
Abstract: A combination of image fusion and the quad tree decomposition method is used for detecting the sunspot trajectories in each month and computing the latitudes of these trajectories in each solar hemisphere. Daily solar images taken by the SOHO satellite are fused for each month, and the fused image is decomposed with the quad tree decomposition method in order to classify the sunspot trajectories and obtain precise information about their latitudes. The fusion also yields some remarkable physical conclusions about the behavior of the Sun's magnetic fields. Using quad tree decomposition, we obtain information about the regions on the solar surface and the solid angle through which tremendous flares and hot plasma gases permeate interplanetary space and strike satellites and human technical systems. Here, sunspot images from June, July and August 2001 are used for the study, and a method is given to compute the latitude of the sunspot trajectories in each month from sunspot images.
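The quad tree decomposition used above can be sketched in its standard form: a square image block is recursively split into four quadrants until each block is homogeneous. The homogeneity tolerance is an assumption for illustration.

```python
# Sketch of standard quad tree decomposition on a square image (side a
# power of two). A block is kept when its intensity spread is within tol;
# otherwise it is split into four quadrants. tol is an assumed parameter.
def quadtree(img, r=0, c=0, size=None, tol=0):
    """Return the list of homogeneous blocks as (row, col, size) triples."""
    if size is None:
        size = len(img)
    vals = [img[r + i][c + j] for i in range(size) for j in range(size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        return [(r, c, size)]
    h = size // 2
    blocks = []
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        blocks += quadtree(img, r + dr, c + dc, h, tol)
    return blocks
```

On a fused monthly image, the small blocks concentrate where intensity varies, so the block pattern localizes the sunspot regions.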
Abstract: Advances in clinical medical imaging have brought about the routine production of vast numbers of medical images that need to be analyzed. As a result, an enormous amount of computer vision research effort has been targeted at achieving automated medical image analysis. Computed Tomography (CT) is highly accurate for diagnosing liver tumors. This study aimed to evaluate the potential role of the wavelet and the neural network in the differential diagnosis of liver tumors in CT images. The tumors considered in this study are hepatocellular carcinoma, cholangiocarcinoma, hemangioma and hepatoadenoma. Each suspicious tumor region was automatically extracted from the CT abdominal images, and the textural information obtained was used to train the Probabilistic Neural Network (PNN) to classify the tumors. The results obtained were evaluated with the help of radiologists. The system differentiates the tumors with relatively high accuracy and is therefore clinically useful.
Abstract: Nowadays, with the emergence of new applications
such as robot control in image processing, artificial vision for
visual servoing is a rapidly growing discipline, and human-machine
interaction plays a significant role in controlling the robot. This
paper presents a new algorithm based on spatio-temporal volumes for
visual servoing aimed at controlling robots. In this algorithm, after
applying the necessary pre-processing to the video frames, a
spatio-temporal volume is constructed for each gesture and a feature
vector is extracted. These volumes are then analyzed for matching in
two consecutive stages. For hand gesture recognition and
classification, we tested different classifiers including k-nearest
neighbor, learning vector quantization and back-propagation neural
networks. We tested the proposed algorithm on the collected data set,
and the results showed a correct gesture recognition rate of 99.58
percent. We also tested the algorithm on noisy images, where it
showed a correct recognition rate of 97.92 percent.
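Of the classifiers compared above, k-nearest neighbor is the simplest to sketch. The feature vectors and labels below are dummies; the paper's actual features come from the spatio-temporal volumes.

```python
# Sketch of the k-nearest-neighbor classifier, one of the three classifiers
# compared for gesture recognition. Training pairs here are illustrative
# placeholders, not the paper's spatio-temporal volume features.
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs. Return the majority
    label among the k training vectors closest to the query."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

kNN needs no training phase beyond storing the labeled feature vectors, which makes it a natural baseline against learning vector quantization and back-propagation networks.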
Abstract: Current advancements in nanotechnology are dependent on the capabilities that can enable nano-scientists to extend their eyes and hands into the nano-world. For this purpose, a haptics-based system (haptic devices are capable of recreating tactile or force sensations) for the Atomic Force Microscope (AFM) is proposed. The system enables nano-scientists to touch and feel the sample surfaces viewed through the AFM, in order to provide them with a better understanding of the physical properties of the surface, such as roughness, stiffness and the shape of the molecular architecture. At this stage, the proposed work uses offline images produced using the AFM and performs image analysis to create virtual surfaces suitable for haptic force analysis. The research work is in the process of extension from an offline to an online process, where interaction will be done directly on the material surface for realistic analysis.
Abstract: Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper, a new technique is introduced that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other pixel characteristics of the image, generating a set of p variables. The multivariate data set is approximated with polynomials of different degrees, minimizing the data fitting error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), we are able to circumvent the problem of dimensionality inherent in higher-degree polynomial approximations. The GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and applying an AdaBoost classifier from F's polynomials to each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
Abstract: One of the essential requirements of a realistic
surgical simulator is to reproduce the haptic sensations due to the
interactions in the virtual environment. However, the interactions
need to be performed in real time, since a delay between the user
action and the system reaction reduces the sensation of immersion. In
this paper, a prototype of a coronary stent implant simulator is
presented; this system allows real-time interactions with an artery
by means of a specific haptic device. To improve the realism of the
simulation, the virtual environment is built from real patients'
images, and a Web Portal is used to search the geographically remote
medical centres for a virtual environment with specific features in
terms of pathology or anatomy. The functional architecture of the
system defines several Medical Centres in which virtual environments,
built from the real patients' images and related metadata with
specific features in terms of pathology or anatomy, are stored. The
retrieved data are downloaded from the Medical Centre to the Training
Centre, which is provided with a specific haptic device and with the
software necessary to manage the interaction in the virtual
environment. After the integration of the virtual environment into
the simulation system, it is possible to perform training on the
specific surgical procedure.
Abstract: Webcam systems now function as the new privileged
vantage points from which to view the city. This transformation of
CCTV technology from a surveillance to a promotional tool is
significant because its 'scopic regime' presents, back to the public,
a new virtual 'site' that sits alongside its real-time counterpart.
Significantly, this raw 'image' data can, in fact, be co-opted and
processed so as to disrupt its original purpose. This paper will
demonstrate this disruptive capacity through an architectural
project. It will reveal how the adaptation of the webcam image offers
a technical springboard by which to initiate alternative urban
form-making decisions and subvert the disciplinary reliance on the
'flat' orthographic plan. In so doing, the paper will show how this
'digital material' exceeds the imagistic function of the image,
shifting it from being a vehicle of signification to a site of affect.
Abstract: Coronary artery bypass grafts (CABG) are widely
studied with respect to the hemodynamic conditions which play an
important role in the presence of restenosis. However, papers
concerned with constitutive modeling of CABG are lacking in the
literature. The purpose of this study is to find a constitutive model
for CABG tissue. A sample of CABG obtained within an autopsy
underwent an inflation-extension test. Displacements were recorded by
CCD cameras and subsequently evaluated by digital image correlation.
Pressure-radius and axial force-elongation data were used to fit the
material model. The tissue was modeled as a one-layered composite
reinforced by two families of helical fibers. The material is assumed
to be locally orthotropic, nonlinear, incompressible and
hyperelastic. Material parameters are estimated for two strain energy
functions (SEF). The first is the classical exponential. The second
SEF is logarithmic, which allows interpretation by means of limiting
(finite) strain extensibility. The presented material parameters are
estimated by optimization based on the radial and axial equilibrium
equations of a thick-walled tube. Both material models fit the
experimental data successfully. The exponential model fits the
relationship between axial force and axial strain significantly
better than the logarithmic one.
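The abstract does not state the two SEFs explicitly. Forms commonly used for fiber-reinforced arterial tissue that match the description (a classical exponential, and a logarithmic form with limiting extensibility) are sketched below; the symbols $c$, $k_1$, $k_2$, $\mu$, $J_m$ and the invariants $I_1$, $I_4$, $I_6$ (the last two associated with the two helical fiber families) are assumptions, not the paper's notation.

```latex
% Assumed forms only; the abstract does not give the SEFs.
% Exponential (Holzapfel-type), matrix plus two fiber families:
W_{\mathrm{exp}} = c\,(I_1 - 3)
  + \frac{k_1}{2 k_2} \sum_{i=4,6}
    \left[ e^{\,k_2 (I_i - 1)^2} - 1 \right]
% Logarithmic (Gent-type), with limiting strain extensibility J_m:
W_{\mathrm{log}} = -\frac{\mu J_m}{2}
  \ln\!\left( 1 - \frac{I_1 - 3}{J_m} \right)
```

In the logarithmic form, $W$ diverges as $I_1 - 3 \to J_m$, which is what permits the interpretation in terms of limiting (finite) strain extensibility mentioned above.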
Abstract: Functional Magnetic Resonance Imaging (fMRI) is a
noninvasive imaging technique that measures the hemodynamic response
related to neural activity in the human brain. Event-related fMRI
(efMRI) is a form of fMRI in which a series of fMRI images is
time-locked to a stimulus presentation and averaged together over
many trials. An event-related potential (ERP), in turn, is a measured
brain response that is directly the result of a thought or
perception. Here, the neuronal response of the human visual cortex in
normal healthy subjects has been studied. The subjects were asked to
perform a visual three-choice reaction task; from the relative
response of each subject, the corresponding neuronal activity in the
visual cortex was imaged. The average number of neurons in the adult
human primary visual cortex, in each hemisphere, has been estimated
at around 140 million. The statistical analysis of this experiment
was done with the SPM5 (Statistical Parametric Mapping version 5)
software. The results show a robust design for imaging the neuronal
activity of the human visual cortex.
Abstract: Mammography is the most effective procedure for
early diagnosis of breast cancer. Nowadays, much effort is devoted to
supporting radiologists as far as possible in the diagnosis process.
The most popular approach now being developed is the use of
Computer-Aided Detection (CAD) systems to process digital mammograms
and prompt suspicious regions to the radiologist. In this paper, an
automated CAD system for the detection and classification of massive
lesions in mammographic images is presented. The system consists of
three processing steps: Regions-Of-Interest detection, feature
extraction and classification. Our CAD system was evaluated on the
Mini-MIAS database, consisting of 322 digitized mammograms. The CAD
system's performance is evaluated using Receiver Operating
Characteristic (ROC) and Free-response ROC (FROC) curves. The
achieved results are 3.47 false positives per image (FPpI) and a
sensitivity of 85%.
Abstract: In this paper, a new algorithm for generating a codebook is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted by using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process, the feature vectors are subjected to the inverse transformation, with the help of the basis functions of the proposed Orthogonal Polynomials based transformation, to get back the approximated input image training vectors. The results of the proposed coding are compared with VQ using the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
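The encoding step described above, i.e. walking a binary-tree codebook by comparing one feature per node to a threshold, can be sketched as follows. The tree layout and values are illustrative; in the paper the tree is built from Orthogonal Polynomials based transform coefficients of the training vectors.

```python
# Sketch of encoding with a binary-tree codebook: at each non-terminal node
# a single feature of the vector is compared with a threshold to select a
# descendant, until a leaf holding a codeword index is reached. The tree
# structure here is an illustrative assumption.
class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, codeword=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.codeword = left, right, codeword

def encode(node, vec):
    """Walk the tree: go left when the tested feature is below the threshold."""
    while node.codeword is None:
        node = node.left if vec[node.feature] < node.threshold else node.right
    return node.codeword
```

Because each step inspects only one feature, encoding costs one comparison per tree level instead of a full-search distance computation against every codeword, which is the source of the reported reduction in computation time.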