Abstract: In this paper we consider quantum integrals of motion depending on the algebraic reconstruction of the BPHZ method for perturbative renormalization via two different procedures. Then, based on the Bogoliubov character and the Baker-Campbell-Hausdorff (BCH) formula, we show how an integral-of-motion condition on the components of the Birkhoff factorization of a Feynman rules character on the Connes-Kreimer Hopf algebra of rooted trees determines a family of fixed point equations.
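For context, the Birkhoff factorization and Bogoliubov character referred to above take the following standard form in the Connes-Kreimer framework (a known result quoted for orientation, not this paper's new contribution): for a Feynman rules character \(\phi\) and renormalization scheme projection \(R\), the factorization \(\phi = \phi_-^{*-1} * \phi_+\) is computed on the augmentation ideal by the Bogoliubov recursion,

```latex
% Bogoliubov recursion, with reduced coproduct
% \Delta'(x) = \sum_{(x)} x' \otimes x'':
\phi_-(x) = -R\Big(\phi(x) + \sum_{(x)} \phi_-(x')\,\phi(x'')\Big), \qquad
\phi_+(x) = (\mathrm{id} - R)\Big(\phi(x) + \sum_{(x)} \phi_-(x')\,\phi(x'')\Big).
```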
Abstract: This paper deals with the implementation of face recognition using a neural network classifier on low-resolution images. The proposed system contains two parts: preprocessing and face classification. The preprocessing part converts the original image into a blurred image using an average filter and equalizes its histogram (lighting normalization). A bi-cubic interpolation function is then applied to the equalized image to resize it. The resized image is a low-resolution image, which allows faster training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The strength of the proposed algorithm lies in its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions respectively. The training function used in our work is gradient descent with momentum and adaptive learning rate back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 s per image.
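The preprocessing pipeline described above can be sketched as follows (a minimal numpy sketch; the filter size, the simple stride-based downsampling standing in for bi-cubic interpolation, and the random test image are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def average_filter(img, k=3):
    """Blur with a k x k mean filter (edges handled by replicate padding)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def equalize_histogram(img):
    """Map grey levels through the normalized cumulative histogram."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

face = np.random.default_rng(0).integers(0, 256, (112, 92)).astype(np.uint8)  # ORL image size
blurred = average_filter(face)
equalized = equalize_histogram(blurred)
low_res = equalized[::4, ::4]   # crude stand-in for bi-cubic downsampling
```

The low-resolution result would then be flattened and fed to the three-layer network as its input vector.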
Abstract: In this paper, a novel system for the recognition of human faces in different color photographs is proposed. It mainly consists of face detection, normalization and recognition. A detection method combining Haar-like face detection, segmentation and region-based histogram stretching (RHST) is proposed to achieve more accurate performance than using Haar features alone. Apart from an effective angle normalization, a side-face (pose) normalization, which might be important and beneficial for the preprocessing stage, is introduced. Then histogram-based and photometric normalization methods are investigated, and adaptive single-scale retinex (ASR) is selected for its satisfactory illumination normalization. Finally, a weighted multi-block local binary pattern (LBP) with 3 distance measures is applied for pair-matching. Experimental results show its advantageous performance compared with PCA and multi-block LBP.
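The multi-block LBP descriptor used above builds on the basic local binary pattern. A minimal sketch of the plain 8-neighbour LBP code (illustrative only; the paper's weighted multi-block variant and its distance measures are not reproduced):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbours against the
    centre pixel and pack the results into one byte per pixel."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]                       # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit
    return codes

sample = np.arange(25).reshape(5, 5)          # tiny illustrative patch
codes = lbp_codes(sample)
```

In a multi-block scheme, histograms of these codes are computed per image block and concatenated into the face descriptor.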
Abstract: In this paper we present an approach for 3D face recognition based on extracting the principal components of range images using modified PCA methods, namely 2DPCA and bidirectional 2DPCA, also known as (2D)²PCA. A preprocessing stage smooths the images using median and Gaussian filtering. In the normalization stage we locate the nose tip, place it at the center of the image, and crop each image to a standard size of 100×100. In the face recognition stage we extract the principal components of each image using both 2DPCA and (2D)²PCA. Finally, we use the Euclidean distance to measure the minimum distance between a given test image and the training images in the database. We also compare the results of the two methods. The best recognition rate achieved in experiments on a public face database is 83.3 percent for a random facial expression.
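The 2DPCA step can be sketched as follows (a minimal numpy sketch with illustrative dimensions; the paper's preprocessing and the bidirectional extension are omitted): the image covariance matrix is built directly from the training images, without flattening them to vectors, and each image is projected onto its top eigenvectors.

```python
import numpy as np

def two_d_pca(images, d):
    """2DPCA: eigenvectors of the image covariance matrix
    G = mean((A - Abar)^T (A - Abar)); each image projects as Y = A X."""
    A = np.asarray(images, dtype=float)          # shape (n, h, w)
    Abar = A.mean(axis=0)
    G = np.zeros((A.shape[2], A.shape[2]))
    for Ai in A:
        D = Ai - Abar
        G += D.T @ D
    G /= len(A)
    vals, vecs = np.linalg.eigh(G)               # eigenvalues ascending
    X = vecs[:, ::-1][:, :d]                     # top-d eigenvectors
    return X, np.array([Ai @ X for Ai in A])     # projection matrix, features

rng = np.random.default_rng(1)
imgs = rng.normal(size=(10, 20, 16))             # 10 toy "range images"
X, feats = two_d_pca(imgs, d=4)
```

Recognition then compares the Euclidean distance between a test image's feature matrix and each training feature matrix.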
Abstract: Digital watermarking has become an important technique for copyright protection, but its robustness against attacks remains a major problem. In this paper, we propose a normalization-based robust image watermarking scheme. In the proposed scheme, the original host image is first normalized to a standard form. The Zernike transform is then applied to the normalized image to calculate Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments according to the watermark bit stream. The watermark extraction method is blind. Security analysis and false alarm analysis are then performed. The quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Experimental results show that the proposed scheme has very high robustness against various image processing operations and geometric attacks.
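The dither modulation step above is an instance of quantization-index modulation: each Zernike moment magnitude is snapped to one of two interleaved quantization lattices, selected by the watermark bit. A minimal sketch (the step size Δ and dither offsets are illustrative assumptions, not the paper's values):

```python
import numpy as np

DELTA = 0.5                          # quantization step (assumed)
DITHER = {0: 0.0, 1: DELTA / 2}      # one dither offset per watermark bit

def embed_bit(magnitude, bit):
    """Quantize the magnitude onto the lattice associated with `bit`."""
    d = DITHER[bit]
    return DELTA * np.round((magnitude - d) / DELTA) + d

def extract_bit(magnitude):
    """Blind extraction: pick the bit whose lattice point is closest."""
    return min((0, 1), key=lambda b: abs(embed_bit(magnitude, b) - magnitude))

marked = embed_bit(3.37, 1)          # magnitude carrying watermark bit 1
```

Because the two lattices are Δ/2 apart, the embedded bit survives any perturbation of the magnitude smaller than Δ/4, which is the source of the scheme's robustness.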
Abstract: In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are important factors, especially for the accuracy of facial localization and face recognition. To address these factors, we propose a robust approach consisting of two phases. In the first phase, face images are preprocessed by the proposed illumination normalization method, and the locations of facial features can be fitted more efficiently and quickly based on the proposed image blending. In addition, based on template matching, we further improve the active shape model (called IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method obtains good facial localization and face recognition under varied illumination and local distortion.
Abstract: In modern human-computer interaction (HCI) systems, emotion recognition is becoming an essential capability. The quest for effective and reliable emotion recognition in HCI has resulted in a need for better face detection, feature extraction and classification. In this paper we present results of a feature space analysis after briefly explaining our fully automatic vision-based emotion recognition method. We demonstrate the compactness of the feature space and show how the 2D/3D-based method achieves superior features for the purpose of emotion classification. We also show that feature normalization creates a largely person-independent feature space. As a consequence, the classifier architecture has only a minor influence on the classification result. This is elucidated in particular with the help of confusion matrices. For this purpose, advanced classification algorithms such as support vector machines and artificial neural networks are employed, as well as the simple k-nearest neighbor classifier.
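Feature normalization of the kind described above is commonly done by standardizing each feature dimension; once the feature space is well separated, even a simple k-nearest-neighbor classifier performs comparably to more advanced ones. A minimal sketch (the two-cluster toy data and parameters are illustrative, not the paper's features):

```python
import numpy as np

def zscore_fit(X):
    """Per-dimension mean and standard deviation from training features."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard against constant dimensions
    return mu, sigma

def knn_predict(Xtr, ytr, x, k=3):
    """Majority vote among the k nearest training samples."""
    dists = np.linalg.norm(Xtr - x, axis=1)
    nearest = ytr[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

rng = np.random.default_rng(0)
happy = rng.normal([0, 0], 0.3, size=(20, 2))   # two toy emotion clusters
angry = rng.normal([3, 3], 0.3, size=(20, 2))
X = np.vstack([happy, angry])
y = np.array([0] * 20 + [1] * 20)
mu, sigma = zscore_fit(X)
Xn = (X - mu) / sigma                            # standardized feature space
pred = knn_predict(Xn, y, (np.array([2.9, 3.1]) - mu) / sigma)
```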
Abstract: Robust face recognition under various illumination environments is very difficult and needs to be accomplished for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. Illumination normalization based on anisotropic smoothing is well known to be effective among illumination normalization methods, but it deteriorates the intensity contrast of the original image and produces less sharp edges. The method proposed in this paper improves the previous anisotropic smoothing-based illumination normalization so that it increases the intensity contrast and enhances the edges while diminishing the effect of illumination variations. As a result of these improvements, face images preprocessed by the proposed illumination normalization method have more distinctive feature vectors (Gabor feature vectors) for face recognition. Through face recognition experiments based on Gabor feature vector similarity, the effectiveness of the proposed illumination normalization method is verified.
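The family of smoothing-based normalizations the paper builds on can be sketched as a quotient image: the slowly varying illumination field is estimated by smoothing and divided out, leaving an approximation of reflectance. The sketch below uses a simple box filter as a stand-in for anisotropic smoothing (an assumption for brevity; the paper's method preserves edges far better):

```python
import numpy as np

def box_smooth(img, k=15):
    """Box-filter estimate of the slowly varying illumination field.
    A crude stand-in for the anisotropic smoothing used in the paper."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def illumination_normalize(img, eps=1e-6):
    """Quotient image: reflectance ~ image / estimated illumination."""
    L = box_smooth(img)
    return img / (L + eps)

# The same face pattern under two global lighting levels normalizes alike.
pattern = np.tile(np.linspace(1, 2, 32), (32, 1))
dark, bright = pattern * 10, pattern * 100
```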
Abstract: In this paper, the spatial variability of some chemical and physical soil properties was investigated in the mountain rangelands of Nesho, Mazandaran province, Iran. 110 soil samples from 0-30 cm depth were taken by a systematic method on a 30×30 m grid in regions with different vegetation cover and transported to the laboratory. Soil chemical and physical parameters including acidity (pH), electrical conductivity, CaCO3, bulk density, particle density, total phosphorus, total nitrogen, available potassium, organic matter, saturation moisture, soil texture (percentage of sand, silt and clay), sodium, calcium and magnesium were then measured. After data normalization, statistical analysis was performed to describe the soil properties, geostatistical analysis was used to identify spatial correlation between these properties, and maps of the spatial distribution of soil properties were produced using the kriging method. Results indicated that in the study area saturation moisture and percentage of sand had the highest and lowest spatial correlation, respectively.
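Geostatistical analysis of the kind described above typically starts from the empirical semivariogram, which quantifies how dissimilar a soil property becomes with increasing separation distance. A minimal sketch on synthetic one-dimensional sample locations (kriging itself is omitted; the lags and data are illustrative):

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs whose separation
    distance falls within tol of lag h."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.abs(coords[:, None] - coords[None, :])       # pairwise distances
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)              # count each pair once
    gamma = []
    for h in lags:
        mask = np.abs(d[iu] - h) <= tol
        gamma.append(sq[iu][mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

x = np.arange(0, 300, 30.0)            # sample points every 30 m along a transect
z = np.sin(x / 100.0)                  # smooth, spatially correlated toy property
g = empirical_semivariogram(x, z, lags=[30, 60, 120], tol=1.0)
```

For a spatially correlated property, the semivariance rises with lag distance; fitting a model to this curve provides the weights used by kriging.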
Abstract: This paper proposes a robust character segmentation method for license plates under topological transforms such as twist and rotation. The first step of the proposed method is to find candidate regions for characters and the license plate; a character or license plate appears as a closed loop in the edge image. When detecting candidate character regions, the detected regions are evaluated using the topological relationship between the characters. When the method decides on a license plate candidate region, character features in the binarized region are used. After binarization of the detected candidate region, each character region is decided again; in this step, each character region is fitted more tightly than in the previous step. In the next step, the method checks for other character regions at different scales near the detected character regions, because most license plates have license numbers with some meaningful characters around them. The method uses perspective projection for geometric normalization: if there is topological distortion in a character region, the method projects the region onto a template defined as a standard license plate. In this step, the method is able to separate each number region and the small meaningful characters. The method is evaluated on a number of test images.
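The perspective normalization step can be sketched with a four-point homography: the corners of the distorted plate are mapped onto a rectangular standard template via the direct linear transform (the corner coordinates and template size below are illustrative assumptions):

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: solve for the 3x3 matrix H mapping four
    source points to four destination points (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply H to a 2-D point in homogeneous coordinates."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

# Distorted plate corners mapped onto a 200x50 standard template (assumed sizes).
plate = [(12, 8), (215, 22), (208, 74), (5, 60)]
template = [(0, 0), (200, 0), (200, 50), (0, 50)]
H = homography(plate, template)
```

Warping every pixel of the candidate region through H yields the geometrically normalized plate on which the individual character regions are separated.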
Abstract: This research focuses on the effects of polyphenols extracted from Sambucus nigra fruit in an experimental arterial hypertension model, as well as their influence on oxidative stress. The results reveal the normalization of the reduced glutathione concentration, as well as a considerable reduction in the serum malondialdehyde concentration under polyphenolic protection. The rat blood pressure values were recorded using a CODA™ system, which uses a non-invasive blood pressure measuring method. All the measured blood pressure components revealed a biostatistically significant (p
Abstract: On-line handwritten scripts are usually captured as pen-tip traces from pen-down to pen-up positions. The time evolution of the pen coordinates is considered along with the trajectory information. However, the data obtained needs a lot of preprocessing, including filtering, smoothing, slant removal and size normalization, before the recognition process. Instead of such lengthy preprocessing, this paper presents a simple approach to extracting the useful character information. This work evaluates the use of the counter-propagation neural network (CPN) and presents the feature extraction mechanism in full detail for on-line handwriting recognition. The obtained recognition rates were 60% to 94% using the CPN for different sets of character samples. This paper also describes a performance study in which a recognition mechanism with multiple thresholds is evaluated for the counter-propagation architecture. The results indicate that the application of multiple thresholds has a significant effect on the recognition mechanism. The method is applicable to off-line character recognition as well. The technique is tested on upper-case English alphabets in a number of different styles from different writers.
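The size normalization step mentioned above can be sketched by rescaling a pen trajectory into a unit bounding box (an illustrative stand-in with made-up pen samples, not the paper's exact preprocessing):

```python
import numpy as np

def normalize_size(stroke):
    """Translate a pen trajectory to the origin and scale its larger
    bounding-box side to 1, preserving the aspect ratio."""
    pts = np.asarray(stroke, dtype=float)        # shape (n, 2): (x, y) samples
    lo = pts.min(axis=0)
    span = (pts.max(axis=0) - lo).max()
    if span == 0:                                # degenerate: a single dot
        return pts - lo
    return (pts - lo) / span

stroke = [(40, 100), (60, 160), (80, 100), (50, 130), (70, 130)]  # toy pen samples
norm = normalize_size(stroke)
```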
Abstract: The quality of 2D and 3D cross-sectional images produced by computed tomography depends primarily on the precision of primary and secondary X-ray intensity detection. The traditional method of primary intensity detection is prone to errors. Recently, an X-ray intensity measurement system with smart X-ray sensors, able to detect the primary X-ray intensity accurately, was developed by our group. In this study, a new smart X-ray sensor is developed using the TSL230 light-to-frequency converter from Texas Instruments, which has numerous advantages in terms of noiseless data acquisition and transmission. The TSL230 is based on a silicon photodiode which converts incoming X-ray radiation into a proportional current signal. A current-to-frequency converter is attached to this photodiode on a single monolithic CMOS integrated circuit, which outputs a pulse train whose frequency is proportional to the incoming current. The frequency count is delivered to a PICDEM FS USB board with a PIC18F4550 microcontroller mounted on it. With highly compact electronic hardware, this demo board efficiently reads the smart sensor output. The frequency-output approach overcomes the nonlinear behavior of sensors with analog output, so unattenuated X-ray intensities can be measured precisely and better normalization can be achieved in order to attain high resolution.
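Because the sensor's output frequency is proportional to intensity, normalization reduces to dividing attenuated pulse counts by the unattenuated (primary) count over the same gate time. A tiny sketch with made-up counts (illustrative numbers, not measured data):

```python
# Relative transmitted intensity from light-to-frequency pulse counts.
# The TSL230 output frequency is proportional to light intensity, so
# counts accumulated over a fixed gate time can be normalized directly.

def normalize_counts(attenuated_counts, primary_count):
    """I/I0 per detector channel; primary_count is the unattenuated reading."""
    return [c / primary_count for c in attenuated_counts]

primary = 80_000                       # pulses in one gate with no object (assumed)
channel_counts = [40_000, 20_000, 80_000]
transmission = normalize_counts(channel_counts, primary)
```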
Abstract: Images of the human iris contain specular highlights due to the reflective properties of the cornea. This corneal reflection causes many errors, not only in iris and pupil center estimation but also in locating the iris and pupil boundaries, especially for methods that use active contours. Each iris recognition system has four steps: segmentation, normalization, encoding and matching. To address the corneal reflection, a novel reflection removal method is proposed in this paper. Comparative experiments with two existing reflection removal methods are carried out on the CASIA V3 iris image database. The experimental results reveal that the proposed algorithm provides higher performance in reflection removal.
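A baseline reflection removal approach of the kind the paper compares against can be sketched as threshold-and-inpaint: near-saturated pixels are detected as highlights and filled from their non-highlight neighbours (the threshold and window size are illustrative assumptions, not the paper's method):

```python
import numpy as np

def remove_highlights(img, thresh=230, k=5):
    """Detect near-saturated pixels and replace each with the mean of
    the non-highlight pixels in its k x k neighbourhood."""
    img = img.astype(float)
    out = img.copy()
    mask = img >= thresh                    # specular highlight candidates
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    pmask = np.pad(mask, pad, mode="edge")
    for y, x in zip(*np.nonzero(mask)):
        win = padded[y:y + k, x:x + k]
        ok = ~pmask[y:y + k, x:x + k]       # neighbours that are not highlights
        if ok.any():
            out[y, x] = win[ok].mean()      # simple local-mean inpainting
    return out

iris = np.full((9, 9), 100.0)               # toy iris patch
iris[4, 4] = 255.0                          # one specular spot
clean = remove_highlights(iris)
```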
Abstract: The number of features required to represent an image can be very large, and using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the preprocessing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the feature. Shape outline readings are put through a normalization and dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this preprocessing step, the data are grouped by shape. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and, to a small degree, distortion.
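One common way to obtain normalized shape-outline readings of the kind used above is the centroid-distance signature, which is invariant to translation by construction and is made scale-invariant by dividing by its mean (an illustrative choice; the paper does not specify its exact outline encoding):

```python
import numpy as np

def centroid_distance_signature(outline, n_samples=16):
    """Distance from the shape centroid to evenly sampled outline points,
    divided by its mean so the signature is scale-invariant."""
    pts = np.asarray(outline, dtype=float)       # shape (n, 2) outline points
    step = max(1, len(pts) // n_samples)
    pts = pts[::step][:n_samples]                # even subsampling
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    return d / d.mean()

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]     # unit-circle outline
big_circle = 10 * circle                         # same shape, 10x the scale
sig_small = centroid_distance_signature(circle)
sig_big = centroid_distance_signature(big_circle)
```

Signatures like these can then be fed to the eigenvector-based dimensionality reduction described in the abstract.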
Abstract: The join dependency provides the basis for obtaining a lossless join decomposition in a classical relational schema. The existence of a join dependency shows that the tables always represent the correct data after being joined. Since classical relational databases cannot handle imprecise data, they were extended to fuzzy relational databases so that uncertain, ambiguous, imprecise and partially known information can also be stored in a formal way. However, like classical databases, fuzzy relational databases also undergo decomposition during normalization, and the issue of joining the decomposed fuzzy relations remains. Our effort in the present paper is to emphasize this issue. We define fuzzy join dependency in the framework of type-1 and type-2 fuzzy relational databases using the concept of fuzzy equality, which is defined using fuzzy functions. We use the fuzzy equi-join operator to compute the fuzzy equality of two attribute values. We also discuss the dependency preservation property on execution of this fuzzy equi-join and derive the necessary condition for fuzzy functional dependencies to be preserved on joining the decomposed fuzzy relations. We further derive the conditions for a fuzzy join dependency to exist in the context of both type-1 and type-2 fuzzy relational databases. We find that, unlike in classical relational databases, even the existence of a trivial join dependency does not ensure a lossless join decomposition in type-2 fuzzy relational databases. Finally, we derive the conditions for the fuzzy equality to be nonzero and for the qualification of an attribute as a fuzzy key.
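The fuzzy equi-join discussed above can be sketched with a similarity-based fuzzy equality: two attribute values join when their membership-style similarity reaches a threshold, and each joined tuple carries that degree (the membership shape, threshold and relations below are illustrative assumptions, not the paper's definitions):

```python
def fuzzy_equal(a, b, spread=10.0):
    """Fuzzy equality degree in [0, 1]: 1 when a == b, decaying
    linearly with |a - b| (an assumed triangular membership)."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def fuzzy_equi_join(r, s, attr_r, attr_s, threshold=0.8):
    """Pair tuples whose join-attribute values are fuzzily equal to at
    least `threshold`, tagging each result tuple with that degree."""
    out = []
    for t in r:
        for u in s:
            deg = fuzzy_equal(t[attr_r], u[attr_s])
            if deg >= threshold:
                out.append({**t, **u, "degree": deg})
    return out

employees = [{"name": "Ann", "salary": 50}, {"name": "Bob", "salary": 70}]
grades = [{"grade": "A", "pay": 51}, {"grade": "B", "pay": 65}]
joined = fuzzy_equi_join(employees, grades, "salary", "pay")
```

Under such a join, two decomposed fuzzy relations may recombine into spurious tuples, which is why a trivial join dependency alone does not guarantee losslessness in the type-2 case.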
Abstract: In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses a neural network based on nonparametric density estimation, namely the general regression neural network (GRNN). The performance of the proposed model is compared to similar recognition systems based on the multilayer perceptron (MLP), the recurrent neural network (RNN) and the well-known discrete hidden Markov model (HMM-VQ), which we also implemented. Experimental results obtained with Arabic digits show that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and to the DHMM.
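The GRNN prediction step is compact enough to sketch directly: it is a Gaussian-kernel weighted average of the training targets, with the spread acting as the smoothing factor mentioned above (the 2-D toy features stand in for real speech features, which this sketch does not reproduce):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, spread=0.5):
    """General regression neural network: a Nadaraya-Watson style
    Gaussian-kernel weighted average of the training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to x
    w = np.exp(-d2 / (2.0 * spread ** 2))         # pattern-layer activations
    return np.dot(w, y_train) / w.sum()           # normalized weighted output

# Toy 2-D features for two spoken digits (illustrative, not real speech data).
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0.0, 0.0, 1.0, 1.0])                # digit labels as targets
pred = grnn_predict(X, y, np.array([0.95, 1.0]))
```

A small spread makes the network interpolate tightly around training samples, while a larger spread smooths the density estimate, which is exactly the generalization trade-off the abstract refers to.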