Abstract: As biometric systems become widely deployed, identification systems are increasingly exposed to attacks using various spoof materials. This paper contributes toward a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), focusing on the choice of loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By combining various loss functions (Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss) with various optimizers (Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam), we observed significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach on a subset of the LivDet 2017 database to compare generalization power; the same subset is used for training and testing across all models, so performance on unseen data can be compared fairly across them. The best CNN (AlexNet), with the appropriate loss function and optimizer, achieves a performance gain of more than 3% over the other CNN models with their default loss function and optimizer. In addition to generalization performance, this paper also reports parameter counts and mean average error rates for the high-accuracy models, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. A practical anti-spoofing system, once deployed, should use a small amount of memory and run very fast while maintaining high anti-spoofing performance. For our deployed version on smartphones, additional processing steps such as quantization and pruning were applied to the final model.
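As a toy illustration of why loss choice matters, the sketch below scores the same live/spoof predictions with two of the losses the abstract compares. This is not from the paper; the predictions and labels are made up.

```python
import numpy as np

# Hypothetical anti-spoofing predictions: p = predicted P(live), y = truth.
p = np.array([0.9, 0.6, 0.2, 0.8])
y = np.array([1, 1, 0, 1])

# Binary cross-entropy over the batch.
bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hinge loss expects labels in {-1, +1} and signed scores; map both.
y_pm = 2 * y - 1
s = 2 * p - 1
hinge = np.mean(np.maximum(0.0, 1.0 - y_pm * s))

print(bce, hinge)  # the two losses rank the same predictions differently
```

The two values penalize the same borderline prediction (0.6 for a live sample) very differently, which is the effect the abstract attributes to loss choice.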
Abstract: With their extensive application, unimodal biometric systems face a variety of problems such as signal and background noise, distortion, and environmental differences. Multimodal biometric systems have therefore been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology that identifies people by their palm features, and the iris is a strong competitor, together with the face and fingerprints, for inclusion in multimodal recognition systems. In this research, we introduce an algorithm that combines palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, they are concatenated to form a single feature vector. Particle swarm optimization (PSO) is then used as a feature selection technique to reduce the dimensionality of this vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database and its performance is compared with various iris recognition algorithms from the literature.
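A minimal sketch of binary PSO for feature selection on a concatenated vector. The paper's fitness function, swarm parameters, and data are not specified, so everything below is an assumed toy setup: random stand-in features, a simple class-separation fitness, and a standard sigmoid-transfer binarization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_part, n_iter = 20, 10, 30
X = rng.normal(size=(100, n_feat))        # stand-in concatenated features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels: only features 0, 1 matter

def fitness(mask):
    # Assumed fitness: class-mean separation on selected features,
    # minus a small penalty per selected feature.
    if mask.sum() == 0:
        return -np.inf
    sel = X[:, mask.astype(bool)]
    d = np.linalg.norm(sel[y == 1].mean(0) - sel[y == 0].mean(0))
    return d - 0.05 * mask.sum()

pos = rng.integers(0, 2, size=(n_part, n_feat)).astype(float)  # binary positions
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Sigmoid transfer: velocity becomes a probability of selecting the bit.
    pos = (rng.random(pos.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("selected features:", np.flatnonzero(gbest))
```

The selected mask is what would then index into the concatenated SIFT feature vector before matching.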
Abstract: For electrocardiogram (ECG) biometric systems, pre-installing a user's high-intensity heart rate (HR) templates is a tedious process. Using only resting enrollment templates, it is a challenge to identify humans from ECG signals with high-intensity HRs caused by exercise and stress. This research presents a heartbeat segmentation method with slope-oriented neural networks that is robust against the ECG morphology changes caused by high-intensity HRs. The method achieves an overall system accuracy of 97.73% across six levels of HR intensity. A cumulative match characteristic curve is also used to compare the method with traditional ECG biometric methods.
Abstract: The inherent skin patterns created at the joints on the finger exterior are referred to as the finger knuckle print. It can be exploited to identify a person uniquely because the finger knuckle print is rich in texture. In a biometric system, a region of interest is used for feature extraction. In this paper, local and global features are extracted separately. The Fast Discrete Orthonormal Stockwell Transform is exploited to extract the local features, and the global feature is obtained by extending the window size of the Fast Discrete Orthonormal Stockwell Transform to infinity. The two features are fused to increase recognition accuracy. A matching distance is calculated for each feature individually, and the two distances are then merged to obtain the final matching distance. The proposed scheme gives better performance in terms of equal error rate and correct recognition rate.
Abstract: Recent growth in digital multimedia technologies has provided many facilities for information transmission, reproduction, and manipulation. The concept of information security has therefore become a prominent concern, and biometric information security is one of its mechanisms. It has advantages as well as disadvantages: a biometric system is vulnerable to a range of attacks intended to bypass the security system or to disrupt its normal functioning, and various hazards have been identified in the use of biometric systems. Proper use of steganography greatly reduces the risk that hackers pose to biometric systems. Steganography is a popular information-hiding technique whose goal is to hide information inside a cover medium, such as text, image, audio, or video, so that the existence of the secret information cannot be detected. In this paper, a new security concept is established that makes the system more secure by combining steganography with biometric security: the biometric information is embedded into the skin-tone portion of an image using the proposed steganographic technique.
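The abstract does not detail its embedding algorithm. As a hedged, minimal stand-in, the sketch below performs plain LSB embedding restricted to pixels passing a common RGB skin-tone heuristic; both the embedding scheme and the skin rule are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)  # stand-in cover image

# Assumed simple skin-tone heuristic on RGB channels.
r, g, b = img[..., 0].astype(int), img[..., 1].astype(int), img[..., 2].astype(int)
skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

payload = np.array([1, 0, 1, 1, 0], dtype=np.uint8)  # secret bits to hide
ys, xs = np.nonzero(skin)                            # skin-pixel coordinates
stego = img.copy()
n = min(len(payload), len(ys))

# Write each payload bit into the LSB of the red channel of a skin pixel.
stego[ys[:n], xs[:n], 0] = (stego[ys[:n], xs[:n], 0] & 0xFE) | payload[:n]

# Extraction reads the LSBs back from the same positions.
recovered = stego[ys[:n], xs[:n], 0] & 1
print(recovered)
```

Because only the least significant bit of selected pixels changes, the cover image is visually unchanged while the payload is exactly recoverable from the agreed positions.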
Abstract: Fragile watermarking has been proposed as a means of adding security or functionality to biometric systems, particularly for authentication and tamper detection. In this paper we describe an experimental study on the effect of watermarking iris images with a particular class of fragile algorithms, reversible algorithms, and on the ability to correctly perform iris recognition. We investigate two scenarios: matching watermarked images to unmodified images, and matching watermarked images to watermarked images. We show that different watermarking schemes give very different results for a given capacity, highlighting the importance of investigating each scheme. At high embedding rates most algorithms cause a significant reduction in recognition performance; however, in many cases, at low embedding rates, recognition accuracy is actually improved by the watermarking process.
Abstract: Multimodal biometric identification combines several biometric systems; the challenge of this combination is to reduce the limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and protection of a multimodal biometric database dedicated to an identification system. We use a topological watermarking to hide the relation between the face image and the registered descriptors extracted from other modalities of the same person, enabling more secure user identification.
Abstract: Multimodal biometric systems integrate data presented by multiple biometric sources, hence offering better performance than systems based on a single biometric modality. Although biometric systems can be coupled at different levels, fusion at the score level is the most common, since it has proven more effective than the other fusion levels. However, scores from different modalities are generally heterogeneous, so a normalization step is needed to transform them into a common domain before combining them. In this paper, we study the performance of several normalization techniques with various fusion methods in the context of merging three unimodal systems based on the face, the palmprint, and the fingerprint. We also propose a new adaptive normalization method that takes into account the distributions of client and impostor scores. Experiments conducted on a database of 100 people show that the performance of a multimodal system depends on the choice of both the normalization method and the fusion technique; the proposed normalization method gave the best results.
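The baseline pipeline the abstract describes, min-max normalization followed by sum-rule fusion, can be sketched in a few lines. The scores and their ranges below are invented for illustration; the paper's adaptive normalization method is not reproduced here.

```python
import numpy as np

# Made-up matcher scores for three probes; each modality has its own range.
face   = np.array([0.2, 0.8, 0.5])     # similarity scores in [0, 1]
palm   = np.array([120., 300., 210.])  # scores on a different scale
finger = np.array([30., 90., 60.])

def min_max(s):
    # Map scores linearly onto [0, 1] using the observed min and max.
    return (s - s.min()) / (s.max() - s.min())

# Sum rule: add the normalized scores per probe to get one fused score.
fused = min_max(face) + min_max(palm) + min_max(finger)
print(fused)
```

After normalization all three modalities contribute on the same [0, 1] scale, so the sum is not dominated by the matcher with the largest raw range.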
Abstract: Iris-based biometric systems are gaining importance in several applications. However, processing iris biometrics is a challenging and time-consuming task. Detecting the iris in an eye image poses a number of challenges, such as inferior image quality and occlusion by eyelids and eyelashes. Due to these problems, it is not possible to achieve a 100% accuracy rate in any iris-based biometric authentication system. Furthermore, iris detection is a computationally intensive part of the overall iris biometric processing. In this paper, we address these two problems and propose a technique to localize the iris efficiently and accurately. For pupil boundary detection, we propose scaling and a color-level transform followed by thresholding and locating pupil boundary points; for iris boundary detection, we apply dilation, thresholding, vertical edge detection, and removal of unnecessary edges from the eye images. Scaling reduces the search space significantly, and the intensity-level transform is helpful for image thresholding. Experimental results show that our approach is comparable with existing approaches: it detects the iris with 95-99% accuracy, as substantiated by our experiments on the CASIA Ver-3.0, ICE 2005, UBIRIS, Bath, and MMU iris image databases.
Abstract: Prior research has shown that unimodal biometric systems suffer from several limitations, such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. For a biometric system to be more secure and to provide high accuracy, more than one form of biometrics is required; hence the need arises for multimodal biometrics using combinations of different biometric modalities. This paper introduces a multimodal biometric system (MMBS) based on the fusion of whole dorsal hand geometry and fingerprints, acquiring right and left (Rt/Lt) near-infrared (NIR) dorsal hand geometry (HG) shape and Rt/Lt index and ring fingerprints (FP). A database of 100 volunteers was acquired using the designed prototype. The acquired images were found to be of sufficient quality for feature and pattern extraction across all modalities. HG features based on anatomical landmarks of the hand shape were extracted, and robust, fast algorithms were used for FP minutia point extraction and matching. Feature vectors belonging to similar biometric traits were fused using feature fusion methodologies, and scores obtained from the different biometric trait matchers were fused using the Min-Max transformation-based score fusion technique. The final normalized scores were merged using the sum-of-scores method to obtain a single decision about the personal identity based on multiple independent sources. The high individuality of the fused traits and the user acceptability of the designed system, along with its experimentally demonstrated high biometric performance measures, show that this MMBS can be considered for medium-to-high-security biometric identification purposes.
Abstract: Fingerprint-based identification is one of the best-known biometric systems in the area of pattern recognition and has long been studied for its important role in forensic science, where it helps the government criminal justice community. In this paper, we propose a framework for identifying individuals by means of fingerprints. Unlike most conventional fingerprint identification frameworks, the extracted Geometrical Element Features (GEFs) go through a discretization process. The intention of discretization in this study is to obtain unique individual features that reflect individual variance, in order to discriminate one person from another. Discretization has previously been shown to be particularly effective for identification from English handwriting, with an accuracy of 99.9%, and for discriminating twins' handwriting, with an accuracy of 98%. Due to its high discriminative power, this method is adopted into the framework as an independent basis for fingerprint identification. The experimental results show that the identification accuracy of the proposed system using discretization is 100% for FVC2000, 93% for FVC2002, and 89.7% for FVC2004, which is much better than the conventional, existing fingerprint identification system (72% for FVC2000, 26% for FVC2002, and 32.8% for FVC2004). The results indicate that the discretization approach effectively boosts classification and may therefore be suitable for biometric features beyond handwriting and fingerprints.
Abstract: The functioning of a biometric system depends in large part on the performance of its similarity measure function. Frequently, a generalized similarity distance measure such as the Euclidean or Mahalanobis distance is applied to the task of matching biometric feature vectors. However, the accuracy of a biometric system can often be greatly improved by designing a customized matching algorithm optimized for a particular biometric application. In this paper we propose a tailored similarity measure function for behavioral biometric systems based on expert knowledge of the feature-level data in the domain. We compare the performance of the proposed matching algorithm to that of other well-known similarity distance functions and demonstrate its superiority with respect to the chosen domain.
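For reference, the two generic distances the abstract mentions can be sketched as below; the feature data and dimensions are made up, and the paper's tailored, domain-specific measure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in gallery of 3-dimensional feature vectors with unequal scales,
# the situation where Mahalanobis and Euclidean distances disagree most.
gallery = rng.normal(size=(50, 3)) * np.array([1.0, 5.0, 0.5])
probe, template = gallery[0], gallery[1]

# Euclidean distance treats every feature dimension equally.
euclidean = np.linalg.norm(probe - template)

# Mahalanobis distance whitens by the gallery covariance, so high-variance
# dimensions are down-weighted.
cov_inv = np.linalg.inv(np.cov(gallery, rowvar=False))
d = probe - template
mahalanobis = np.sqrt(d @ cov_inv @ d)

print(euclidean, mahalanobis)
```

A customized measure of the kind the abstract proposes would replace the covariance-based weighting with weights derived from expert knowledge of the behavioral features.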
Abstract: In this paper, a novel method for an ECG-signal-based biometric system is proposed, using spectral coefficients computed through linear predictive coding (LPC). ECG biometric systems have traditionally used characteristics of fiducial points of the ECG signal as the feature set; such systems have been shown to contain loopholes, so a non-fiducial system allows for tighter security. In the proposed system, incorporating non-fiducial features from the LPC spectrum produced segment and subject recognition rates of 99.52% and 100%, respectively. The proposed system outperformed a biometric system based on the wavelet packet decomposition (WPD) algorithm in both recognition rate and computation time, which allows LPC to be used in a practical ECG biometric system that requires fast, stringent, and accurate recognition.
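A minimal sketch of LPC coefficient extraction via the autocorrelation method and the Levinson-Durbin recursion. The signal, model order, and segment length below are illustrative assumptions, not the paper's settings; a synthetic sinusoid stands in for an ECG segment.

```python
import numpy as np

def lpc(x, order):
    # Autocorrelation of the signal at lags 0..order.
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    # Levinson-Durbin recursion: build the predictor order by order.
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err  # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]          # update coefficients
        err *= (1.0 - k * k)                        # shrink prediction error
    return a

t = np.arange(512)
segment = np.sin(2 * np.pi * 0.01 * t) \
    + 0.1 * np.random.default_rng(3).normal(size=512)  # stand-in ECG segment
coeffs = lpc(segment, order=8)
print(coeffs)
```

The spectral envelope implied by these coefficients (or the coefficients themselves) would then serve as the non-fiducial feature vector for matching.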
Abstract: Heart sound is an acoustic signal, and many techniques used nowadays for human recognition tasks borrow from speech recognition. One popular choice for feature extraction from acoustic signals is the Mel Frequency Cepstral Coefficients (MFCC), which map the signal onto a non-linear Mel scale that mimics human hearing. However, the Mel scale is almost linear in the frequency region of heart sounds and thus should produce results similar to standard cepstral coefficients (CC). In this paper, MFCC is investigated to see whether it produces superior results for a PCG-based human identification system compared to CC. Results show that the MFCC system is still superior to CC despite the near-linear filter banks in the lower frequency range, giving up to a 95% correct recognition rate for MFCC versus 90% for CC. Further experiments show that the high recognition rate is due to the implementation of the filter banks and not to the Mel scaling.
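The abstract's premise, that the Mel scale is nearly linear at heart-sound frequencies, can be checked directly from the standard HTK-style Mel formula; the band limits below (roughly 20-150 Hz) are assumed.

```python
import numpy as np

def mel(f):
    # Standard HTK-style Mel mapping of frequency in Hz.
    return 2595.0 * np.log10(1.0 + f / 700.0)

# Low end, midpoint, and high end of an assumed heart-sound band.
f = np.array([20.0, 85.0, 150.0])
m = mel(f)

# If the mapping were linear, the Mel value at the midpoint frequency
# would equal the average of the endpoint Mel values.
linear_mid = (m[0] + m[2]) / 2
print(m[1], linear_mid)  # nearly equal -> near-linear in this band
```

The two printed values differ by only a few percent, consistent with the claim that at these frequencies the Mel filter banks are effectively linear.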
Abstract: The quest to provide more secure identification systems has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric that has attracted the attention of many researchers of late, and different approaches have been used to extract and match vein patterns. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein-pattern features. Low-cost CCD cameras were used to obtain the vein images, the vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the vein patterns. The system was successfully tested on a database of 200 images using a threshold value of 0.9, and the results obtained are encouraging.
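A minimal sketch of the eigenvein idea, PCA on flattened vein images via SVD. The image size, database size, and component count are made-up stand-ins; real input would be the extracted, enhanced vein patterns.

```python
import numpy as np

rng = np.random.default_rng(4)
images = rng.random((200, 32 * 32))  # 200 flattened stand-in vein patterns

# Center the data, then take principal axes via SVD.
mean_vein = images.mean(axis=0)
centered = images - mean_vein
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Rows of Vt are the eigenveins; keep a low-dimensional subset.
eigenveins = Vt[:20]

# Project a probe image into eigenvein space; matching would then
# threshold the distance between probe and enrolled projections.
probe = images[0]
weights = eigenveins @ (probe - mean_vein)
print(weights.shape)  # (20,)
```

Each image is thus reduced from 1024 pixels to 20 projection weights, which is the dimensionality reduction that makes thresholded matching cheap.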
Abstract: As its applications broaden, single-modality biometric recognition is in most cases unable to meet high performance requirements, and multimodal biometric identification has recently emerged as a trend. This paper investigates a novel algorithm based on the fusion of fingerprint and finger-vein biometrics. For both modalities, we employ the Monogenic Local Binary Pattern (MonoLBP), an operator that integrates the original LBP (Local Binary Pattern) with two other rotation-invariant measures: local phase and local surface type. Experimental results confirm that the proposed weighted-sum fusion achieves excellent identification performance compared with unimodal biometric systems, with the AUC of the approach combining the two modalities reaching 0.93.
Abstract: A wide spectrum of systems require reliable personal recognition schemes to either confirm or determine the identity of an individual. This paper considers multimodal biometric systems and their applicability to access control, authentication, and security applications. Strategies for feature extraction and sensor fusion are considered and contrasted, and issues related to performance assessment, deployment, and standardization are discussed. Finally, future directions of biometric system development are outlined.
Abstract: This paper discusses the effectiveness of the EEG signal for human identification using four or fewer channels from two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because it varies from person to person and is impossible to replicate or steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed, in 5 separate sessions conducted over the course of two weeks. Features were extracted using wavelet packet decomposition and analyzed to obtain the feature vectors; a neural network algorithm was then used to classify them. Results show that whether the subjects' eyes were open or closed is insignificant for a 4-channel biometric system, which achieved a classification rate of 81%. However, for a 2-channel system, the P4 channel should not be included if data are acquired with the subjects' eyes open; using only the C3 and C4 channels, a 2-channel system achieved a classification rate of 71%.
Abstract: Iris localization is a very important step in biometric identification systems. The identification process is usually implemented in three stages: iris localization, feature extraction, and finally pattern matching. The accuracy of iris localization, as the first step, affects all subsequent stages, which shows its importance in an iris-based biometric system. In this paper, we take Daugman's iris localization method as the standard, propose a new method in this field, and then analyze and compare the results of both on a standard set of iris images. The proposed method is based on detecting the circular edge of the iris and is improved with fuzzy circles and surface energy difference contexts. The method is easy to implement and, compared with other methods, offers rather high accuracy and speed. Test results show that the accuracy of our proposed method is about that of Daugman's method, while its computation speed is 10 times faster.
Abstract: Electronic commerce is growing rapidly, with on-line sales already heading for hundreds of billions of dollars per year. Due to the huge amount of money transferred every day, an increased security level is required. In this work we present the architecture of an intelligent speaker verification system that can accurately verify the registered users of an e-commerce service using only their voices as input. According to the proposed architecture, a transaction-based e-commerce application should be complemented by a biometric server where each customer's unique set of speech models (voiceprint) is stored. The verification procedure asks the user to pronounce a personalized sequence of digits; the speech is captured and voice features are extracted at the client side, then sent to the biometric server. The biometric server uses pattern recognition to decide whether the received features match the stored voiceprint of the claimed customer, and grants verification accordingly. The proposed architecture can provide e-commerce applications with a higher degree of certainty regarding the identity of a customer and prevent impostors from executing fraudulent transactions.