Abstract: A complex-valued neural network is a neural network whose inputs, weights, thresholds, and/or activation functions are complex-valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics but also in social systems. One of their most important applications is image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function with a built-in distance criterion with respect to a centre. Radial basis functions have often been applied in neural networks as a replacement for the sigmoid transfer characteristic of the hidden layer in multi-layer perceptrons. This paper presents exhaustive results of using RBF units in a complex-valued neural network model that learns with the back-propagation algorithm (called 'Complex-BP'). Our experimental results demonstrate the effectiveness of radial basis functions in a complex-valued neural network for image recognition compared with a real-valued neural network. We study and report various observations, such as the effect of learning rates, the ranges from which initial weights are randomly selected, the error functions used, and the number of iterations required for the error to converge in a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
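As an illustration of the kind of unit the abstract describes, here is a minimal numpy sketch of a complex-valued RBF hidden layer with complex output weights. The Gaussian form, the widths, and the toy dimensions are assumptions for illustration; the paper's exact Complex-BP formulation may differ.

```python
import numpy as np

def complex_rbf_layer(z, centers, widths):
    """Each unit responds to the distance between the complex input vector z
    and a complex centre; the activation itself is real-valued (illustrative
    choice, not necessarily the paper's)."""
    # Squared Euclidean distance in the complex plane: ||z - c||^2
    d2 = np.array([np.sum(np.abs(z - c) ** 2) for c in centers])
    return np.exp(-d2 / widths ** 2)

# Hypothetical toy forward pass: complex inputs, complex output weights
rng = np.random.default_rng(0)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)    # complex input
centers = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
widths = np.ones(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)    # complex weights
y = w @ complex_rbf_layer(z, centers, widths)               # complex output
print(y)
```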
Abstract: A novel algorithm is presented for constructing a seamless video mosaic of an entire panorama by automatically analyzing and managing feature points, including management of their quantity and quality, from the sequence. Since a video contains significant redundancy, not all consecutive video images are required to create a mosaic; only some key images need to be selected. Meanwhile, feature-based mosaicing methods rely heavily on correct feature point correspondences, and if the key images have a large frame interval, the mosaic will often be interrupted by a scarcity of corresponding feature points. A unique characteristic of the method is its ability to handle all of the above problems in video mosaicing. Experiments have been performed under various conditions, and the results show that our method achieves fast and accurate video mosaic construction. Keywords: video mosaic, feature points management, homography estimation.
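For orientation, a minimal OpenCV sketch of the homography-estimation step between two key frames follows. ORB stands in for whichever detector the paper uses, the synthetic frame pair exists only so the snippet runs end to end, and keeping the strongest matches is a crude placeholder for the paper's feature point management.

```python
import cv2
import numpy as np

# Synthetic pair: the second frame is a translated crop of the first
rng = np.random.default_rng(0)
base = (rng.random((400, 500)) * 255).astype(np.uint8)
img1 = cv2.GaussianBlur(base[:300, :400], (5, 5), 1.0)
img2 = cv2.GaussianBlur(base[40:340, 60:460], (5, 5), 1.0)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Keep only the strongest matches: a crude form of feature point management
good = matches[:100]
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)  # maps img1 coordinates into img2's frame for warping/stitching
```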
Abstract: In this paper, we propose an improved fast search algorithm that uses combined histogram features and a temporal division method to find short MPEG video clips in a large video database. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition; an APIDQ histogram is used as the feature vector of the frame image. The second is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video, searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 120 ms, and an Equal Error Rate (EER) of 1% is achieved, which is more accurate and robust than the conventional fast video search algorithm.
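To make the APIDQ idea concrete, here is a hedged numpy sketch of an APIDQ-style frame feature: adjacent-pixel intensity differences are quantized and histogrammed. The uniform quantizer and bin count are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def apidq_histogram(frame, n_bins=32):
    """Sketch of an APIDQ-style feature vector for one grayscale frame."""
    f = frame.astype(np.int16)
    dx = f[:, 1:] - f[:, :-1]            # horizontal neighbour differences
    dy = f[1:, :] - f[:-1, :]            # vertical neighbour differences
    d = np.concatenate([dx.ravel(), dy.ravel()])
    # Uniform quantization of differences in [-255, 255] into n_bins codes
    codes = np.clip((d + 255) * n_bins // 511, 0, n_bins - 1).astype(np.int64)
    hist = np.bincount(codes, minlength=n_bins).astype(np.float64)
    return hist / hist.sum()             # normalized histogram feature
```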
Abstract: It is important for an autonomous mobile robot to know where it is at any time in an indoor environment. In this paper, we design a relative self-localization algorithm. The algorithm compares the interest points in two images and computes the relative displacement and orientation to determine the robot's pose. First, we use the SURF algorithm to extract interest points on the ceiling. Second, in order to reduce the amount of computation, a simplified replacement for SURF is used to extract the orientation and description of the interest points. Finally, from the transformation of the interest points between the two images, the relative self-localization of the mobile robot is estimated.
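Assuming the interest points have already been matched, the relative displacement and orientation can be recovered by a least-squares rigid fit; a minimal numpy sketch (the standard Kabsch solution in 2-D, shown as one possible realization, not necessarily the paper's) follows.

```python
import numpy as np

def relative_pose_2d(p, q):
    """Recover the planar rotation and translation mapping matched ceiling
    points p -> q (both Nx2 arrays) in the least-squares sense."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                            # optimal rotation (Kabsch)
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[1, :] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    theta = np.arctan2(R[1, 0], R[0, 0])      # heading change in radians
    return theta, t
```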
Abstract: An algorithm for estimating the disparity of objects of interest is proposed. This algorithm uses image shifting and the overlapping area to estimate the disparity value; thereby the depth of the objects of interest can be obtained. The algorithm is able to operate at different levels of accuracy; however, as the accuracy increases, the processing speed decreases. The algorithm is tested with static stereo images and sequences of stereo images, and the experimental results are presented in this paper.
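A minimal sketch of the shift-and-overlap idea, assuming rectified grayscale stereo pairs: the right image is slid horizontally and the shift whose overlapping area matches best is kept. The mean-absolute-difference cost is an illustrative choice.

```python
import numpy as np

def estimate_disparity(left, right, max_shift=64):
    """Slide the right image and score the overlapping area; coarser shift
    steps would trade accuracy for speed, as the abstract notes."""
    l = left.astype(np.float64)
    r = right.astype(np.float64)
    best_shift, best_cost = 0, np.inf
    for s in range(max_shift):
        # Cost over the overlapping area after shifting by s pixels
        cost = np.mean(np.abs(l[:, s:] - r[:, : l.shape[1] - s]))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```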
Abstract: We present a genetic algorithm application to the problem of object registration (i.e., object detection, localization, and recognition) in a class of medical images containing various types of blood cells. The genetic algorithm approach taken here is seen to be most appropriate for this type of image, due to the characteristics of the objects. Successful cell registration results on real-life microscope images of blood cells show the potential of the proposed approach.
Abstract: This paper presents a VLSI design approach to high-speed, real-time 2-D Discrete Wavelet Transform computation. The proposed architecture, based on a new and fast convolution approach, reduces the hardware complexity in addition to reducing the critical path to the multiplier delay. Furthermore, an advanced two-dimensional (2-D) discrete wavelet transform (DWT) implementation, with an efficient memory area, is designed to produce one output in every clock cycle. As a result, very high speed is attained. The system is verified, using JPEG2000 filter coefficients, on a Xilinx Virtex-II Field Programmable Gate Array (FPGA) device without accessing any external memory. The resulting computing rate is up to 270 Msamples/s, and the (9,7) 2-D wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out memory) for a 256×256 image size. In this way, the developed design requires reduced memory and provides very high-speed processing as well as high PSNR quality.
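For readers who want a software reference against which such hardware output could be checked, one level of separable 2-D DWT with the JPEG2000 (9,7) filter pair can be computed with PyWavelets; 'bior4.4' is commonly used as that filter pair, an assumption noted here rather than a claim about the paper's coefficients.

```python
import numpy as np
import pywt

# Stand-in 256x256 frame; real verification would use the FPGA's input image
image = np.random.default_rng(1).random((256, 256))
LL, (LH, HL, HH) = pywt.dwt2(image, "bior4.4")   # one 2-D DWT level
print(LL.shape, LH.shape, HL.shape, HH.shape)
```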
Abstract: This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called "standard deviation of mean vectors of colour distribution of rows and columns of images for CBIR". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image content as the feature vector for retrieval of similar images. Several classes of features are used to specify queries: colour, texture, shape, and spatial layout. Colour features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and column means is calculated for the R, G, and B planes; these six values form one feature vector per image. Second, we calculate the variance of each row and column of the R, G, and B planes of an image, and the six standard deviations of these variance sequences form a second feature vector of dimension six. We applied our approach to a database of 300 BMP images and evaluated the capability of automatic indexing by analyzing image content, using colour and texture as features and the Euclidean distance as the similarity measure.
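The two six-dimensional feature vectors described above translate almost directly into numpy; the sketch below assumes an HxWx3 RGB array and leaves out any normalization the paper may apply.

```python
import numpy as np

def cbir_features(img):
    """First vector: std of row means and column means per colour plane.
    Second vector: std of the row and column variance sequences per plane."""
    f1, f2 = [], []
    for c in range(3):                          # R, G, B planes
        plane = img[:, :, c].astype(np.float64)
        f1.append(plane.mean(axis=1).std())     # std of row means
        f1.append(plane.mean(axis=0).std())     # std of column means
        f2.append(plane.var(axis=1).std())      # std of row variances
        f2.append(plane.var(axis=0).std())      # std of column variances
    return np.array(f1), np.array(f2)

def distance(fa, fb):
    """Euclidean distance, the similarity measure named in the abstract."""
    return np.linalg.norm(fa - fb)
```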
Abstract: By exploiting the echoic intensity and its distribution over different organs and the local details of the human body, ultrasonic imaging can capture important medical pathological changes, which unfortunately may be obscured by ultrasonic speckle noise. A feature-preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by the local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is performed by the hyperbolic tangent function. Experiments on real ultrasonic images indicate that our scheme effectively preserves edges, local details, and ultrasonic echoic bright strips during denoising.
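To illustrate the diffusion term only, here is the classical Perona-Malik anisotropic diffusion as a stand-in; the paper's scheme adds curvature-based normal derivatives and a tanh edge-enhancement term that are not reproduced here.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
    """Classical anisotropic diffusion (wrap-around boundaries kept for
    brevity); lam <= 0.25 keeps the explicit update stable."""
    u = img.astype(np.float64).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)       # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u            # north neighbour difference
        ds = np.roll(u, -1, axis=0) - u           # south
        de = np.roll(u, -1, axis=1) - u           # east
        dw = np.roll(u, 1, axis=1) - u            # west
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```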
Abstract: This paper presents a novel method for data hiding that uses neighborhood pixel information to calculate the number of bits that can be used for substitution, together with a modified least significant bits (LSB) technique for data embedding. The modified solution is independent of the nature of the data to be hidden and gives correct results along with unnoticeable image degradation. The technique for finding the number of bits available for data hiding uses the green component of the image, as it is less sensitive to the human eye, making it very difficult for a human observer to tell whether the image carries hidden data. The application further encrypts the data using a custom-designed algorithm before embedding the bits into the image for additional security. The overall process consists of three main modules, namely embedding, encryption, and extraction.
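A hedged sketch of the embedding module follows, with a fixed number of LSBs per pixel; the paper instead derives the usable bit count per pixel from neighbourhood information, which is not reproduced here.

```python
import numpy as np

def embed_bits_green(img, bits, n_lsb=2):
    """Substitute n_lsb low bits of the green channel with message bits.
    `img` is an HxWx3 uint8 array; `bits` is a string of '0'/'1' characters."""
    out = img.copy()
    g = out[:, :, 1].ravel()                        # green channel, flattened
    needed = -(-len(bits) // n_lsb)                 # ceiling division
    assert needed <= g.size, "message too long for this cover image"
    for i in range(needed):
        chunk = bits[i * n_lsb:(i + 1) * n_lsb].ljust(n_lsb, "0")
        g[i] = (int(g[i]) >> n_lsb << n_lsb) | int(chunk, 2)
    out[:, :, 1] = g.reshape(out.shape[:2])         # write channel back
    return out
```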
Abstract: The appearance management behavior of tanning by gay men is examined through the lens of Impression Formation. The study proposes that body image, self-esteem, and internalized homophobia are connected and affect the motives for engaging in sun, salon, and cosmetic tanning. The motives examined were: to look masculine, to look attractive to (potential) partners, to look attractive in general, to socialize, to meet a peer standard, and for personal satisfaction. Using regression analysis to examine data from 103 gay men who engage in at least one method of tanning, the results reveal that components of body image and internalized homophobia, but not self-esteem, are linked to various motives and methods of tanning. These findings support and extend the literature of Impression Formation Theory and provide practitioners in the health and health-related fields new avenues to pursue when dealing with diseases related to tanning.
Abstract: Noise contamination in a magnetic resonance (MR) image can occur during acquisition, storage, and transmission, in which case effective filtering is required to avoid repeating the MR procedure. In this paper, an iterative asymmetrical triangle fuzzy filter with moving average center (ATMAVi filter) is used to reduce different levels of salt-and-pepper noise in a brain MR image. Besides visual inspection of the filtered images, the mean squared error (MSE) is used as an objective measurement. When compared with the median filter, simulation results indicate that the ATMAVi filter is effective especially for filtering higher noise levels (such as noise density = 0.45), using a smaller window size (such as 3x3) when operated iteratively or a larger window size (such as 5x5) when operated non-iteratively.
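For context, the baseline and metric named in the abstract are easy to reproduce; the sketch below sets up salt-and-pepper noise at density 0.45 and scores the median filter by MSE. The ATMAVi filter itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def mse(a, b):
    """Mean squared error, the objective measure used in the abstract."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (128, 128)).astype(np.uint8)   # stand-in image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.45                       # density 0.45
noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))    # salt & pepper
print("median 3x3 MSE:", mse(clean, median_filter(noisy, size=3)))
print("median 5x5 MSE:", mse(clean, median_filter(noisy, size=5)))
```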
Abstract: In this study, we propose a tongue diagnosis method that detects the tongue in a face image, divides the tongue area into six regions, and finally produces the tongue coating ratio of each region. To detect the tongue area in the face image, we use ASM, one of the active shape models. The detected tongue area is divided into the six regions widely used in Korean traditional medicine, and the distribution of tongue coating over the six regions is examined by an SVM (Support Vector Machine). For the SVM, we use a 3-dimensional vector calculated by PCA (Principal Component Analysis) from a 12-dimensional vector consisting of RGB, HSI, Lab, and Luv components. As a result, we detected the tongue area stably using ASM and found that PCA and SVM helped raise the tongue coating detection rate.
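The 12-D-to-3-D-to-SVM pipeline maps directly onto scikit-learn; the sketch below uses random placeholder data rather than real tongue measurements, and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 12))            # 12-D colour vectors (RGB, HSI, Lab, Luv)
y = rng.integers(0, 2, 200)          # 1 = tongue coating present (placeholder)

pca = PCA(n_components=3).fit(X)     # reduce to the 3-D vector fed to the SVM
clf = SVC(kernel="rbf").fit(pca.transform(X), y)
print(clf.predict(pca.transform(X[:5])))
```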
Abstract: In this paper, we construct and implement a new steganography algorithm based on a learning system to hide a large amount of information in a color BMP image. We use adaptive image filtering and adaptive non-uniform image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in a pixel. Following the design steps, we arrived at 16 main cases with their sub-cases, which cover all aspects of embedding the input information into a color bitmap image. High security is provided through four layers of security that make it difficult to break the encryption of the input information and also confuse steganalysis. A learning system is introduced at the fourth security layer through a neural network; this layer is used to increase the difficulty of statistical attacks. Our results against statistical and visual attacks are discussed before and after using the learning system, and we compare with a previous steganography algorithm. We show that our algorithm can efficiently embed a large amount of information, up to 75% of the image size (replacing at most 18 bits per pixel), with high output quality.
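One of the layers described, random rather than sequential pixel selection, can be illustrated with a keyed pseudo-random permutation; the 16-case byte logic and the neural-network layer are not reproduced here, and the seeding scheme is an assumption.

```python
import numpy as np

def keyed_pixel_order(shape, key):
    """Visit pixels in a pseudo-random order derived from a secret key, so
    embedded bits are scattered across the image rather than written
    sequentially."""
    n = shape[0] * shape[1]
    rng = np.random.default_rng(key)     # the key acts as the PRNG seed
    return rng.permutation(n)            # shuffled flat pixel indices

order = keyed_pixel_order((256, 256), key=12345)
```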
Abstract: In this paper we present a new approach to detecting flaws in T.O.F.D. (Time Of Flight Diffraction) ultrasonic images based on texture features. Texture is one of the most important features used in recognizing patterns in an image. The paper describes texture features based on 2-D Gabor functions, i.e., Gaussian-shaped band-pass filters with dyadic treatment of the radial spatial frequency range and multiple orientations, which represent an appropriate choice for tasks requiring simultaneous measurement in both the space and frequency domains. The most relevant features are used as input data to a fuzzy c-means clustering classifier. Only two classes exist: 'defect' or 'no defect'. The proposed approach is tested on T.O.F.D. images acquired in the laboratory and in the industrial field.
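A hedged sketch of such a Gabor filter bank follows, using scikit-image: dyadically spaced radial frequencies and several orientations, each summarized by the mean response magnitude. The summary statistic and parameter values are illustrative, not the paper's.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(img, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Return one feature per (frequency, orientation) pair for a
    grayscale image."""
    feats = []
    for f in frequencies:                        # dyadic frequency spacing
        for k in range(n_orient):                # multiple orientations
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.mean(np.hypot(real, imag)))  # response magnitude
    return np.array(feats)
```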
Abstract: For gamma radiation detection, assemblies consist of scintillation crystals and a photomultiplier tube; a preamplifier is also connected to the detector because the signals from the photomultiplier tube are of small amplitude. After pre-amplification, the signals are sent to the amplifier and then to the multichannel analyser. The multichannel analyser sorts all incoming electrical signals according to their amplitudes and bins the detected photons into channels covering small energy intervals. The energy range of each channel depends on the gain settings of the multichannel analyser and the high voltage across the photomultiplier tube. The output spectrum data of the two main isotopes studied are fed into the biomass program and processed with a Matlab program to obtain the solid holdup image (of the solid spherical nuclear fuel).
Abstract: The article examines methods for protecting citizens' personal data on the Internet using biometric identity authentication technology. Their potential danger is noted, owing to the threat of loss of the underlying biometric template databases. To eliminate the threat of compromised biometric templates, it is proposed to use large and extra-large neural networks, which on the one hand authenticate a person by his biometrics securely (with high reliability), and on the other hand make a person's biometrics unavailable for observation and understanding. The article also describes in detail the transformation of personal biometric data into an access code. Requirements are formulated for the biometrics-to-code converter regarding its behaviour on images of the "Insider", a "Stranger", and all "Strangers". The effect of the dimension of the neural networks on the quality of the converters of biometric secrets into access codes is analyzed.
Abstract: Breast carcinoma is the most common form of cancer in women. Multicolour fluorescence in-situ hybridisation (m-FISH) is a common method for staging breast carcinoma. The interpretation of m-FISH images is complicated by two effects: (i) spectral overlap in the emission spectra of fluorochrome-marked DNA probes and (ii) tissue autofluorescence. In this paper, hyper-spectral images of m-FISH samples are used and spectral unmixing is applied to produce false-colour images with higher contrast and better information content than standard RGB images. The spectral unmixing is realised by combinations of Orthogonal Projection Analysis (OPA), Alternating Least Squares (ALS), Simple-to-use Interactive Self-Modeling Mixture Analysis (SIMPLISMA), and VARIMAX, which are applied to the data to reduce tissue autofluorescence and resolve the spectral overlap in the emission spectra. The results show that the spectral unmixing methods reduce the intensity caused by tissue autofluorescence by up to 78% and enhance image contrast by algorithmically reducing the overlap of the emission spectra.
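To show the shape of the ALS step named above, here is a minimal numpy sketch that factors a pixels-by-bands matrix into abundances and spectra with non-negativity enforced by clipping; the OPA/SIMPLISMA initialization used in the paper is replaced by a random start.

```python
import numpy as np

def als_unmix(X, n_sources, n_iter=100, seed=0):
    """Alternating least squares unmixing: X (pixels x bands) ~ A @ S,
    with A (pixels x sources) and S (sources x bands) kept non-negative."""
    rng = np.random.default_rng(seed)
    A = rng.random((X.shape[0], n_sources))
    for _ in range(n_iter):
        S = np.linalg.lstsq(A, X, rcond=None)[0].clip(min=0)        # spectra
        A = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T.clip(min=0)  # abundances
    return A, S
```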
Abstract: In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features based on the gray level co-occurrence matrix (GLCM) are computed for regions of interest (ROI) considered for each class; the ROIs act as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature at a certain inter-pixel distance (IPD) of the I2 component was found to be the optimal textural feature for further segmentation. In the second stage, the feature matrix obtained is assumed to be a degraded version of the image labels, and a Markov Random Field (MRF) model is used to model the unknown image labels. The labels are estimated under the maximum a posteriori (MAP) criterion using the ICM algorithm. The performance of the proposed approach is compared with that of existing schemes: JSEG and another scheme that uses GLCM and MRF in RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated with synthetic and real textured images.
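Stage one's feature can be sketched with scikit-image; the snippet below computes the GLCM mean statistic for an 8-bit region of interest at a given inter-pixel distance. The paper computes this on the I2 component of the Ohta model (commonly given as I2 = (R - B) / 2), which is assumed to have been rescaled to 8-bit beforehand.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_mean_feature(roi, distance=1):
    """GLCM 'mean' for a uint8 ROI at the given inter-pixel distance (IPD),
    horizontal direction only for brevity."""
    glcm = graycomatrix(roi, distances=[distance], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                  # joint probability matrix
    i = np.arange(256)
    return np.sum(i[:, None] * p)         # mean over reference pixel levels
```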
Abstract: Cluster analysis is the name given to a diverse collection of techniques that can be used to classify objects (e.g., individuals, quadrats, species, etc.). While Kohonen's Self-Organizing Feature Map (SOFM) or Self-Organizing Map (SOM) networks have been successfully applied as classification tools in various problem domains, including speech recognition, image data compression, image or character recognition, robot control, and medical diagnosis, their potential as a robust substitute for cluster analysis remains relatively unresearched. SOM networks combine competitive learning with dimensionality reduction by smoothing the clusters with respect to an a priori grid, and they provide a powerful tool for data visualization. In this paper, a SOM creating a toroidal mapping of a two-dimensional lattice is used to perform cluster analysis on the results of a chemical analysis of wines produced in the same region in Italy but derived from three different cultivars, referred to as the "wine recognition data" in the University of California, Irvine database. The results are encouraging, and it is believed that the SOM would make an appealing and powerful decision-support tool for clustering tasks and data visualization.
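As a hedged sketch of this kind of experiment, the snippet below trains a SOM on the same UCI wine recognition data using the MiniSom library; MiniSom's grid is planar, so the toroidal wrapping described above would require a custom topology and is not reproduced, and the grid size and learning parameters are illustrative.

```python
import numpy as np
from minisom import MiniSom                 # pip install minisom (assumed)
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_wine().data)   # 178 wines, 13 features
som = MiniSom(8, 8, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)                   # competitive learning phase
wins = np.array([som.winner(x) for x in X]) # grid cell assigned to each wine
```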