This paper considers the effect of using different probe and gallery sensors on the performance of 3D face recognition. We report results of recognition experiments using face scans of 120 different persons, taken with two different commercial scanners, each at two different times. Our matching algorithm is a version of ICP, a popular approach to 3D face recognition. We find substantial differences in recognition rate between the sensors considered, in part due to the different types of imaging artifacts they produce. When matching data across sensors, the higher-quality data should be used as the enrollment data.
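For readers unfamiliar with ICP, the matching loop alternates between closest-point correspondence and least-squares rigid alignment. The following is only a minimal sketch of the generic algorithm, not the paper's implementation; the point-cloud sizes and iteration count are illustrative:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Least-squares rotation + translation mapping src onto dst (Kabsch/SVD).
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    # Iterate: nearest-neighbour correspondences, then rigid re-alignment.
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    rms = np.sqrt(((cur - matched) ** 2).sum(-1)).mean()
    return cur, rms
```

In a real 3D face matcher the residual RMS distance after convergence serves as the (dis)similarity score between probe and gallery scans.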
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
Face recognition technology has been a focus in both academia and industry over the last several years because of its wide potential applications and its importance in meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation and positioning of the subject. However, 3D face recognition must still tackle the deformation of facial geometry that results from a subject's expression changes. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results demonstrate the feasibility of this framework.
This paper presents a systematic study of face recognition performance as a function of light level using intensified near infrared imagery. This technology is the most prevalent in both civilian and military night vision equipment, and provides enough intensification for human operators to perform standard tasks under extremely low-light conditions. We describe a comprehensive data collection effort undertaken by the authors to image subjects under carefully controlled illumination and quantify the performance of standard face recognition algorithms on visible and intensified imagery as a function of light level. Performance comparisons for automatic face recognition are reported using the standardized implementations from the CSU Face Identification Evaluation System. The results contained in this paper should constitute the initial step for analysis and deployment of face recognition systems designed to work in low-light level conditions.
Face recognition performance has always been affected by the different facial expressions a subject may display. In this paper, we present an extension to the UR3D face recognition algorithm, which enables us to decrease the discrepancy in its performance for datasets from subjects with and without a neutral facial expression, from 15% to 3%.
This paper presents a new method of face verification for vision applications. There are many approaches to detecting and tracking a face in a sequence of images; however, the high computational cost of image algorithms, together with face detection and head tracking failures in unrestricted environments, remains a difficult problem. We present a robust algorithm that improves face detection and tracking in video sequences by using geometrical facial information and a recurrent neural network verifier. Two types of neural networks are proposed for face detection verification. A new method, the three-face reference model (TFRM), and its advantages, such as allowing a better match for face verification, are discussed in this paper.
The Face Recognition Grand Challenge (FRGC) dataset is one of the most challenging datasets in the face recognition community; within it, we focus on the hardest experiment, conducted under harsh, uncontrolled conditions. In this paper we examine how popular face recognition algorithms such as Direct Linear Discriminant Analysis (D-LDA) and Gram-Schmidt LDA compare to traditional eigenfaces and fisherfaces. We also show that all of these linear subspace methods cannot discriminate faces well due to large nonlinear distortions in the face images. We therefore present our proposed Class-dependence Feature Analysis (CFA) method, which we demonstrate to produce superior performance compared to the other methods by representing nonlinear features well. We do this by extending the traditional CFA framework to use kernel methods, and we propose a choice of kernel parameters that improves overall face recognition performance significantly over the competing algorithms. We present results of this approach on the large-scale FRGC v2 database, which contains over 36,000 images, focusing on Experiment 4, the harshest scenario, with images captured under uncontrolled indoor and outdoor conditions yielding significant illumination variations.
Fingerprint datasets may in many cases exceed millions of samples. Thus, the required size of a biometric evaluation test sample is an important issue in terms of both efficiency and accuracy. In this article, an empirical method, namely Chebyshev's inequality in combination with simple random sampling, is applied to determine the sample size for biometric applications. No parametric model is assumed, since the underlying distribution functions of the similarity scores are unknown. The performance of a fingerprint-image matcher is measured by a Receiver Operating Characteristic (ROC) curve. Both the area under an ROC curve and the True Accept Rate (TAR) at an operational False Accept Rate (FAR) are employed. The Chebyshev greater-than-95% intervals for these two criteria, based on 500 Monte Carlo iterations, are computed for different sample sizes and for both high- and low-quality fingerprint-image matchers. The stability of such Monte Carlo calculations with respect to the number of iterations is also explored. The choice of sample size depends on the matcher's quality as well as on which performance criterion is invoked. In general, for 6,000 match similarity scores, 50,000 to 70,000 scores randomly selected from 35,994,000 non-match similarity scores can ensure accuracy with greater-than-95% probability.
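As a back-of-the-envelope illustration of the distribution-free bound involved: Chebyshev's inequality gives P(|X̄ − μ| ≥ ε) ≤ σ²/(nε²), so solving for n yields a sample size guaranteeing the desired confidence. A sketch (the variance and tolerance values in the test are hypothetical, not the paper's):

```python
import math

def chebyshev_sample_size(variance, epsilon, confidence=0.95):
    # Chebyshev: P(|sample mean - true mean| >= epsilon) <= variance / (n * epsilon^2).
    # Smallest integer n with variance / (n * epsilon^2) <= 1 - confidence.
    return math.ceil(variance / ((1.0 - confidence) * epsilon ** 2))
```

Because the bound assumes nothing about the score distribution, it is conservative: the Monte Carlo experiments in the paper tighten the picture empirically.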
Fingerprint mosaicing entails the reconciliation of information presented by two or more impressions of a finger in order to generate composite information. It can be accomplished by blending these impressions into a single mosaic, or by integrating the feature sets (viz., minutiae information) pertaining to these impressions. In this work, we use Thin-plate Splines (TPS) to model the relative transformation between two impressions of a finger, thereby accounting for the non-linear distortion present between them. The estimated deformation is used (a) to register the two images and blend them into a single entity before extracting minutiae from the resulting mosaic (image mosaicing); and (b) to register the minutiae point sets corresponding to the two images and integrate them into a single master minutiae set (feature mosaicing). Experiments conducted on the FVC 2002 DB1 database indicate that both mosaicing schemes result in improved matching performance, although feature mosaicing is observed to outperform image mosaicing.
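The thin-plate spline step can be sketched as follows: solve a linear system for kernel and affine coefficients from control-point correspondences, then warp arbitrary points. A minimal 2-D version (the control points here are illustrative; the paper would estimate them from matched minutiae):

```python
import numpy as np

def tps_fit(src, dst):
    # Solve for thin-plate-spline coefficients mapping src control points
    # onto dst control points.  Kernel U(r) = r^2 log r, written here as
    # d2 * log(d2) / 2 where d2 is the squared distance.
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-12)) / 2.0, 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, b)

def tps_apply(coeffs, src, pts):
    # Warp arbitrary pts with the fitted spline (kernel part + affine part).
    n = len(src)
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-12)) / 2.0, 0.0)
    return U @ coeffs[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ coeffs[n:]
```

The spline interpolates the control points exactly while bending the rest of the plane as smoothly as possible, which is what lets it absorb the non-linear skin distortion between two impressions.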
The fingerprint verification task answers the question of whether or not two fingerprints belong to the same finger. This paper focuses on the classification aspect of fingerprint verification. Classification is the third and final step, after the two earlier steps of feature extraction, where a known set of features (minutiae points) has been extracted from each fingerprint, and scoring, where a matcher has determined a degree of match between the two sets of features. Since this is a binary classification problem involving a single variable, the commonly used threshold method is related to the so-called receiver operating characteristic (ROC). In the ROC approach, the optimal threshold on the score is determined so as to declare a match or non-match. Such a method works well when there is a well-registered fingerprint image. On the other hand, more sophisticated methods are needed when only a partial imprint of a finger exists, as in the case of latent prints in forensics, or due to limitations of the biometric device. In such situations it is useful to consider classification methods based on computing the likelihood ratio of match/non-match. Such methods are commonly used in some biometric and forensic domains, such as speaker verification, where there is a much higher degree of uncertainty. This paper compares the two approaches empirically for the fingerprint classification task when the number of available minutiae is varied. In both the ROC-based and likelihood-ratio methods, learning is from a general population of an ensemble of pairs, each of which is labeled as being from the same finger or from different fingers. In the ROC-based method, the best operating point is derived from the ROC curve. In the likelihood method, the distributions of same-finger and different-finger scores are modeled using Gaussian and Gamma distributions. The performances of the two methods are compared for varying numbers of available minutiae points.
Results show that the likelihood method performs better than the ROC-based method when fewer minutiae points are available. Both methods converge to the same accuracy as more minutiae points are available.
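The likelihood-ratio classifier described above can be sketched with genuine (same-finger) scores modeled as Gaussian and impostor (different-finger) scores as Gamma, both fitted by the method of moments; the score values in the test are synthetic, and the paper's actual fitting procedure may differ:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gamma_pdf(x, shape, scale):
    return x ** (shape - 1) * math.exp(-x / scale) / (math.gamma(shape) * scale ** shape)

def fit_gamma_moments(xs):
    # Method of moments: shape k = mean^2 / var, scale theta = var / mean.
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m * m / v, v / m

def likelihood_ratio(score, genuine, impostor):
    # LR > 1 favours "same finger"; LR < 1 favours "different fingers".
    mu = sum(genuine) / len(genuine)
    sigma = (sum((x - mu) ** 2 for x in genuine) / len(genuine)) ** 0.5
    k, theta = fit_gamma_moments(impostor)
    return gaussian_pdf(score, mu, sigma) / gamma_pdf(score, k, theta)
```

Unlike a fixed ROC threshold, the ratio carries the strength of the evidence, which is what makes it attractive when few minutiae are available.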
Fingerprint scanners can be spoofed by fake fingers made of moldable plastic, clay, Play-Doh, wax or gelatin. Liveness detection is an anti-spoofing method which can detect physiological signs of life from fingerprints to ensure that only live fingers can be captured for enrollment or authentication. Our laboratory has demonstrated that the time-varying perspiration pattern can be used as a measure to detect liveness for fingerprint systems. Unlike spoof or cadaver fingers, live fingers exhibit a distinctive spatial perspiration phenomenon both statically and dynamically. In this paper, a new intensity-based approach is presented which quantifies the grey-level differences using histogram distribution statistics and finds distinct differences between live and non-live fingerprint images. Based on these static and dynamic features, we generate decision rules to perform liveness classification. These methods were tested on optical, capacitive DC and electro-optical scanners using a dataset of about 58 live fingerprints, 50 spoofs (made from Play-Doh and gelatin) and 25 cadaver fingerprints. The dataset was divided into three sets: a training set, a validation set and a test set. The training set was used to generate the classification tree model, while the best tree model was selected using the validation set. Finally, the test set was used to estimate performance. The results are compared with the earlier ridge signal algorithm using newly extracted features. The outcome shows that the intensity-based approach and the ridge signal approach can extract simple features which yield excellent classification (about 90%-100%) for some scanners using a classification tree. The proposed liveness detection methods are purely software based, efficient and easy to implement for commercial use. Application of these methods can provide anti-spoofing protection for fingerprint scanners.
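The feature-extraction step of such an intensity-based approach can be illustrated with simple histogram statistics; the exact statistics and decision-tree thresholds used by the authors are not reproduced here, so the following is only a sketch:

```python
import numpy as np

def histogram_features(img, bins=256):
    # Grey-level histogram distribution statistics of a fingerprint image:
    # mean, standard deviation, and skewness of the intensity distribution.
    hist, edges = np.histogram(img, bins=bins, range=(0, 256))
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    mean = (w * centers).sum()
    std = np.sqrt((w * (centers - mean) ** 2).sum())
    skew = 0.0 if std == 0 else (w * ((centers - mean) / std) ** 3).sum()
    return mean, std, skew
```

In a perspiration-based scheme, the change in such statistics between the first and last frames of a capture sequence would supply the dynamic features; only the static extraction is shown here.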
Biometric sensor interoperability refers to the ability of a system to compensate for the variability introduced in the biometric data of an individual due to the deployment of different sensors. We demonstrate that a simple non-linear calibration scheme, based on Thin Plate Splines (TPS), is sufficient to facilitate sensor interoperability in the context of fingerprints. In the proposed technique, the variation between the images acquired using two different sensors is modeled using non-linear distortions. Experiments indicate that the proposed calibration scheme can significantly improve inter-sensor matching performance.
The use of fingerprints as a biometric is both the oldest mode of computer-aided personal identification and the most relied-upon technology in use today. However, current fingerprint scanning systems face some challenging and peculiar difficulties. Often, skin conditions and imperfect acquisition circumstances cause the captured fingerprint image to be far from ideal. Some of the acquisition techniques can also be slow and cumbersome to use and may not provide the complete information required for reliable feature extraction and fingerprint matching. Most of these difficulties arise from the contact of the fingerprint surface with the sensor platen. To attain a fast-capture, non-contact fingerprint scanning technology, we are developing a scanning system that employs structured light illumination as a means of acquiring a 3-D scan of the finger with sufficiently high resolution to record ridge-level detail. In this paper, we describe the postprocessing steps used to convert the acquired 3-D scan of the subject's finger into a 2-D rolled-equivalent image.
Iris recognition, the ability to recognize and distinguish individuals by their iris patterns, is the most reliable biometric in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this work, we extend previous research on iris quality assessment by analyzing the effect of seven quality factors: defocus blur, motion blur, off-angle, occlusion, specular reflection, lighting, and pixel count, on the performance of a traditional iris recognition system. We conclude that defocus blur, motion blur, and off-angle are the factors that affect recognition performance the most. We further designed a fully automated iris image quality evaluation block that operates in two steps: first, each factor is estimated individually; second, the estimated factors are fused using the Dempster-Shafer theory of evidential reasoning. The designed block is tested on two datasets, CASIA 1.0 and a dataset collected at WVU. Considerable improvement in recognition performance is demonstrated when poor-quality images, as evaluated by our quality metric, are removed. The upper bound on the processing complexity required to evaluate the quality of a single image is O(n² log n), that of a 2D Fast Fourier Transform.
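Dempster's rule of combination, the fusion step mentioned above, can be sketched for a two-hypothesis frame ('good' vs. 'poor' quality), with the key 'ALL' carrying the unassigned ignorance mass; the mass values in the test are illustrative, not the paper's estimates:

```python
def dempster_combine(m1, m2):
    # Dempster's rule of combination over the frame {'good', 'poor'}.
    # 'ALL' denotes mass assigned to the whole frame (ignorance).
    def intersect(a, b):
        if a == 'ALL':
            return b
        if b == 'ALL' or a == b:
            return a
        return None                      # conflicting singleton hypotheses

    combined = {'good': 0.0, 'poor': 0.0, 'ALL': 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            c = intersect(a, b)
            if c is None:
                conflict += pa * pb      # mass lost to conflict
            else:
                combined[c] += pa * pb
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

Combining the per-factor quality estimates this way lets agreeing evidence reinforce itself while discounting contradictory estimates, which is the appeal of the Dempster-Shafer framework over a plain average.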
Like many visual patterns, captured images from the same iris experience relative nonlinear deformations and partial occlusions. These distortions are difficult to normalize for when comparing iris images for match evaluation. We define a probabilistic framework in which an iris image pair constitutes the observed variables, while parameters of relative deformation and occlusion constitute unobserved latent variables. The relation between these variables is specified in a graphical model, allowing maximum a posteriori (MAP) approximate inference to estimate the values of the hidden states. To define the generative probability of the observed iris patterns, we rely on the similarity values produced by correlation filter outputs. As a result, we are able to develop an algorithm that returns a robust match metric at the end of the estimation process and works reasonably quickly. We show recognition results on two sets of real iris images: the CASIA database, collected by the Chinese Academy of Sciences, and a database collected by the authors at Carnegie Mellon University.
The iris is the colored ring of tissue behind the cornea and in front of the lens. The pattern within the iris is unique to each person; even a person's left and right eyes have distinct patterns. Compared with other biometric features (such as face and voice), the iris is more stable and reliable. This paper proposes a new approach to iris recognition using 2D Log-Gabor spatial filters. The Gabor filter has the disadvantage of a nonzero DC component whenever the bandwidth is larger than one octave; the Log-Gabor function removes this DC component. While the 1D Log-Gabor filter captures only horizontal patterns, the 2D approach captures the two-dimensional characteristics of the iris patterns. Preliminary experimental results show that the proposed method has encouraging performance.
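The zero-DC property that motivates the log-Gabor choice is easiest to see in the frequency domain, where the filter is a log-Gaussian in radius times a Gaussian in orientation. A sketch follows; the centre frequency and bandwidth parameters are illustrative defaults, not the paper's tuned values:

```python
import numpy as np

def log_gabor_2d(size, f0=0.1, sigma_ratio=0.55, theta0=0.0, sigma_theta=0.4):
    # Frequency-domain 2D log-Gabor filter: a log-Gaussian radial profile
    # times a Gaussian angular profile.  Unlike the classic Gabor filter,
    # the radial term vanishes at f = 0, so there is no DC component at
    # any bandwidth.
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                         indexing='ij')
    f = np.sqrt(fx ** 2 + fy ** 2)
    radial = np.exp(-np.log(np.where(f > 0, f, 1.0) / f0) ** 2
                    / (2.0 * np.log(sigma_ratio) ** 2))
    radial[f == 0] = 0.0                  # explicit zero-DC
    theta = np.arctan2(fy, fx)
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    return radial * np.exp(-dtheta ** 2 / (2.0 * sigma_theta ** 2))
```

Filtering is then a multiplication of this transfer function with the FFT of the unwrapped iris texture; the phase of the result would be quantized to form the iris code.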
The human iris is an attractive biometric due to its high discrimination capability. However, capturing good quality images of human irises is challenging and requires considerable user cooperation. Iris capture systems with large depth of field, large field of view and excellent capacity for light capture can help considerably in such scenarios. In this paper we apply Wavefront Coding to increase the depth of field without increasing the optical F/# of an iris recognition system when the subject is at least 2 meters away. This computational imaging system is designed and optimized using the spectral-SNR as the fundamental metric. We present simulation and experimental results that show the benefits of this technology for biometric identification.
Iris recognition has been demonstrated to be an efficient technology for personal identification. The performance of an iris recognition system depends on the isolation of the iris region from the rest of the eye image. In this work, the effective use of active shape models (ASMs) for iris segmentation is demonstrated. A method for building a flexible model by learning patterns of iris variability from a well-organized training set is described. The specific approach taken in this work sacrifices generality in order to accommodate better iris segmentation. The algorithm was initially applied to the on-angle, noise-free CASIA database and then extended to the off-axis iris images collected at the WVU eye center. A direct comparison with Canny-based iris segmentation in terms of error rates demonstrates the effectiveness of ASM segmentation. For the selected threshold value of 0.4, FAR and FRR values were 0.13% and 0.09% using Canny detectors and 0% each using the proposed ASM-based method.
A face recognition system consists of two integrated parts: one is the face recognition algorithm; the other is the classifier and features derived by the algorithm from a data set. The face recognition algorithm plays a central role, but this paper does not aim at evaluating the algorithm; rather, it derives the best features for the algorithm from a specific database through sampling design of the training set, which directs how the sample should be collected and dictates the sample space. Sampling design can help exert the full potential of the face recognition algorithm without overhauling it. Conventional statistical analyses usually assume some distribution in order to draw inferences, but design-based inference assumes neither a distribution of the data nor independence between the sample observations. The simulations illustrate that the systematic sampling scheme performs better than the simple random sampling scheme, and that systematic sampling is comparable, in recognition performance, to using all available training images. Meanwhile, the sampling schemes can save system resources and alleviate the overfitting problem. However, post-stratification by sex is not shown to significantly improve recognition performance.
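The systematic sampling scheme the simulations favour can be sketched in a few lines: order the enrolment images, then take every (n/k)-th one so the sample spreads evenly over the data, instead of drawing k indices at random. A minimal sketch (list contents are placeholders for training images):

```python
def systematic_sample(items, k):
    # Systematic sampling: pick k items at a fixed stride from the start,
    # so the sample covers the ordered list evenly.
    stride = len(items) / k
    return [items[int(i * stride)] for i in range(k)]
```

The even coverage is what makes the scheme competitive with using all training images while shrinking the training set.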
In this paper, we measure the effect of the lighting direction in facial images on the performance of two well-known face recognition algorithms: an appearance-based method and a facial-feature-based method. We collect hundreds to thousands of facial images of subjects with a fixed pose under different lighting conditions through a unique facial acquisition laboratory designed specifically for this purpose. We then present a methodology for automatically detecting the lighting direction of different face images based on statistics derived from the image, and for detecting whether glare regions appear under some lighting directions. Finally, we determine the most reliable lighting direction, i.e., the one leading to good-quality, high-performance facial images for both techniques, based on our experiments with the acquired data.
In real-life scenarios, we may need to perform face recognition for identification when only a sketch of the face is available: for example, when police try to identify a criminal from a sketch of the suspect drawn by an artist according to witnesses' descriptions, what they have in hand is a sketch, together with many real face images acquired from video surveillance. So far, the state-of-the-art approach to this problem transforms all real face images into sketches and performs recognition in the sketch domain. We propose the opposite, which we argue is a better approach: generate a realistic face image from the composite sketch using a hybrid subspace method, and then build an illumination-tolerant correlation filter that can recognize the person under different illumination variations. We show experimental results on the CMU PIE (Pose, Illumination, and Expression) database demonstrating the effectiveness of our approach.
The use of biometric systems in physical access scenarios is gaining popularity. In such scenarios, users are enrolled under well-controlled conditions, usually indoors. To gain access to the building, a user provides biometric samples in an outdoor environment over which there is little control. This adversely affects the quality of the samples, and as a result system performance is sub-optimal. This study evaluates the performance of a multimodal biometric system in a physical access scenario. We evaluate leading commercial algorithms on an indoor-outdoor multimodal database comprising face and voice samples. The indoor-outdoor nature of the database and the choice of modalities result in the individual systems performing poorly. Popular normalization and fusion techniques are used to improve the performance of the overall system. Multimodal fusion results in an average improvement of approximately 20% at a 1% false acceptance rate over the individual modalities.
A change in the classification error rates of a biometric device over time is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first is the use of a generalized linear model to determine whether these error rates change linearly over time; this approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not on the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1, and the results of these applications are discussed.
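As a simplified stand-in for the generalized linear model described above, one can regress error rate on time by ordinary least squares and inspect the t-statistic of the slope; this is only an illustration of the "is the trend significant?" question, not the paper's GLM, and the data below are synthetic:

```python
import math

def slope_significance(times, error_rates):
    # OLS regression of error rate on time.  Returns the slope and its
    # t-statistic; |t| well above ~2 suggests the error rate changes
    # significantly over time (template aging).
    n = len(times)
    tm = sum(times) / n
    em = sum(error_rates) / n
    sxx = sum((t - tm) ** 2 for t in times)
    sxy = sum((t - tm) * (e - em) for t, e in zip(times, error_rates))
    slope = sxy / sxx
    resid = [e - (em + slope * (t - tm)) for t, e in zip(times, error_rates)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, slope / se
```

A GLM with a logit link would replace the linear fit when the error rates are proportions near 0 or 1; the structure of the significance test is the same.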
Based on recent work showing the feasibility of key generation using biometrics, we study the application of handwritten signatures to cryptography. Our signature-based key generation scheme implements the cryptographic construction known as the fuzzy vault. The use of distinctive signature features suited to the fuzzy vault is discussed and evaluated. Experimental results are reported, including the error rates for unlocking the secret data using both random and skilled forgeries from the MCYT database.
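The fuzzy vault construction can be sketched as follows: the secret is the coefficient vector of a polynomial, genuine feature points are projected onto the polynomial, chaff points are added off it, and unlocking re-interpolates the polynomial from vault points matching the query features. This toy version uses exact integer features rather than quantized signature features, and small parameter values, purely for illustration:

```python
from fractions import Fraction
import random

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def lock_vault(features, secret_coeffs, n_chaff=20, seed=1):
    # Genuine feature points lie on the secret polynomial; chaff points
    # are random points deliberately placed off the polynomial.
    rng = random.Random(seed)
    vault = [(x, poly_eval(secret_coeffs, x)) for x in features]
    used = set(features)
    while len(vault) < len(features) + n_chaff:
        x = rng.randrange(1000)
        if x in used:
            continue
        used.add(x)
        vault.append((x, poly_eval(secret_coeffs, x) + rng.randrange(1, 50)))
    rng.shuffle(vault)
    return vault

def unlock_vault(vault, query_features, degree):
    # Vault points whose abscissae match the query features are assumed
    # genuine; degree+1 of them suffice to re-interpolate the polynomial
    # (Lagrange interpolation with exact rational arithmetic).
    pts = [(x, y) for x, y in vault if x in set(query_features)][:degree + 1]
    if len(pts) < degree + 1:
        return None
    coeffs = [Fraction(0)] * (degree + 1)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [Fraction(1)], Fraction(1)
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            denom *= xi - xj
            nb = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):   # multiply basis by (x - xj)
                nb[k + 1] += c
                nb[k] -= xj * c
            basis = nb
        for k in range(degree + 1):
            coeffs[k] += Fraction(yi) * basis[k] / denom
    return [int(c) for c in coeffs]
```

A real scheme additionally error-corrects (e.g. with Reed-Solomon decoding) because a forger's or even a genuine user's features only approximately match the enrolled ones.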
In this article we test a number of score fusion methods for multimodal biometric authentication. These tests were made for the SecurePhone project, whose aim is to develop a prototype mobile communication system enabling biometrically authenticated users to sign legally binding m-contracts during a mobile phone call on a PDA. The three biometrics of voice, face and signature were selected because they are all traditional, non-intrusive and easy-to-use means of authentication that can readily be captured on a PDA. By combining multiple biometrics of relatively low individual security, it may be possible to obtain a combined level of security at least as high as that provided by a PIN or handwritten signature, traditionally used for user authentication. Because the relative success of different fusion methods depends on the database used and the tests made, the database we used was recorded on a suitable PDA (the Qtek2020) and the test protocol was designed to reflect the intended application scenario, which is expected to use short text prompts. Not all of the fusion methods tested are original; they were selected for their suitability for implementation within the constraints imposed by the application. All of the methods tested are based on fusion of the match scores output by each modality. Though computationally simple, the methods tested have shown very promising results: all four fusion methods tested obtain a significant performance increase.
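Score-level fusion of the kind tested here can be sketched as follows. The abstract does not name the four methods, so the rules below (mean, product, max, weighted sum) and all scores, ranges, weights, and the threshold are generic illustrations, not the paper's actual choices.

```python
def minmax_normalize(score, lo, hi):
    """Map a raw match score onto [0, 1] using a modality-specific score range."""
    return (score - lo) / (hi - lo)

# Hypothetical raw match scores and score ranges for one access attempt.
voice     = minmax_normalize(4.0, 0.0, 5.0)    # -> 0.8
face      = minmax_normalize(30.0, 0.0, 50.0)  # -> 0.6
signature = minmax_normalize(0.9, 0.0, 1.0)    # -> 0.9
scores = [voice, face, signature]

def fuse_mean(scores):
    return sum(scores) / len(scores)

def fuse_product(scores):
    p = 1.0
    for s in scores:
        p *= s
    return p

def fuse_max(scores):
    return max(scores)

def fuse_weighted(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

fused = fuse_mean(scores)
accept = fused >= 0.7   # threshold would be set to trade off FAR against FRR
print(round(fused, 3), accept)
```

Because each rule operates only on the per-modality match scores, it fits the computational constraints of a PDA, which is the suitability criterion the paper mentions.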
A palmprint can be represented using different features, and the different representations reflect different characteristics of the palmprint. Fusion of multiple palmprint features may therefore enhance the performance of a palmprint authentication system. This paper investigates the fusion of two types of palmprint information: the phase (called PalmCode) and the orientation (called OrientationCode). The PalmCode is extracted using a 2-D Gabor-filter-based algorithm, and the OrientationCode is computed using several directional templates. Several fusion strategies are then investigated and compared. The experimental results show that fusing the PalmCode and OrientationCode using the Product, Sum and Weighted Sum strategies can greatly improve the accuracy of palmprint authentication, to as high as 99.6%.
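Binary feature codes such as PalmCode and OrientationCode are typically compared bitwise, and the resulting similarity scores can then be fused. The toy 16-bit codes and the fusion weights below are illustrative, not the paper's actual codes or tuned parameters.

```python
def hamming_similarity(code_a, code_b):
    """Fraction of matching bits between two binary feature codes."""
    assert len(code_a) == len(code_b)
    same = sum(1 for a, b in zip(code_a, code_b) if a == b)
    return same / len(code_a)

# Toy 16-bit codes standing in for PalmCode (phase) and OrientationCode features.
palm_query  = [1,0,1,1,0,0,1,0,1,1,0,1,0,0,1,1]
palm_enroll = [1,0,1,1,0,1,1,0,1,1,0,1,0,0,1,1]
ori_query   = [0,1,1,0,1,0,0,1,1,0,1,0,1,1,0,0]
ori_enroll  = [0,1,1,0,1,0,0,1,1,0,1,1,1,1,0,0]

s_palm = hamming_similarity(palm_query, palm_enroll)
s_ori  = hamming_similarity(ori_query, ori_enroll)

fused_sum      = (s_palm + s_ori) / 2             # Sum rule
fused_product  = s_palm * s_ori                   # Product rule
fused_weighted = 0.6 * s_palm + 0.4 * s_ori       # Weighted Sum (weights from training)
print(fused_sum, fused_product, fused_weighted)
```

Fusing the two codes this way lets an error in one representation (a flipped phase bit, say) be compensated by agreement in the other, which is the mechanism behind the reported accuracy gain.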
Behavior-based intrusion detection is a frequently used approach for ensuring network security. We extend the behavior-based intrusion detection approach to a new domain: game networks. Specifically, our research shows that a unique behavioral biometric can be generated based on the strategy an individual uses to play a game. We wrote software capable of automatically extracting a behavioral profile for each player in a game of poker. Once a behavioral signature is generated for a player, it is continuously compared against the player's current actions. Any significant deviations in behavior are reported to the game server administrator as potential security breaches. Our algorithm addresses the well-known problem of user verification and can be reapplied to fields beyond game networks, such as operating system and network security.
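One plausible way to realize such a profile-and-compare loop, sketched with hypothetical poker actions, a total-variation distance, and an illustrative threshold (the paper's actual features and detection rule may differ):

```python
from collections import Counter

def action_profile(actions):
    """Relative frequency of each poker action (fold/call/raise) for a player."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def deviation(profile, recent_actions):
    """Total variation distance between the stored profile and recent behavior."""
    recent = action_profile(recent_actions)
    keys = set(profile) | set(recent)
    return 0.5 * sum(abs(profile.get(k, 0) - recent.get(k, 0)) for k in keys)

# Enrollment: a tight player who mostly folds.
stored = action_profile(["fold"] * 6 + ["call"] * 3 + ["raise"])

# A session consistent with the profile vs. a suspiciously aggressive session.
print(deviation(stored, ["fold", "fold", "call", "fold", "raise", "fold"]))
print(deviation(stored, ["raise"] * 5 + ["call"]))

THRESHOLD = 0.3  # illustrative; would be calibrated per player
alert = deviation(stored, ["raise"] * 5 + ["call"]) > THRESHOLD
print(alert)
```

A sudden shift from a fold-heavy to a raise-heavy distribution pushes the distance past the threshold, which is the kind of deviation that would be flagged to the game server administrator.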
In biometric-based authentication, the biometric traits of a person are matched against his or her stored biometric profile, and access is granted if there is a sufficient match. However, there are other access scenarios that require the participation of multiple previously registered users for successful authentication or for an access grant to a certain entity. For instance, there are cryptographic constructs generally known as secret sharing schemes, in which a secret is split into shares and distributed among participants in such a way that it is reconstructed/revealed only when the necessary number of share holders come together. The revealed secret can then be used for encryption or authentication (if the revealed key is verified against the previously registered value). In this work we propose a method for biometric-based secret sharing. Instead of splitting a secret among participants, as is done in cryptography, a single biometric construct is created using the biometric traits of the participants. During authentication, a valid cryptographic key is released from the construct when the required number of genuine participants present their biometric traits.
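The cryptographic baseline the paper contrasts with, a (k, n) threshold secret sharing scheme, can be sketched with Shamir's classic construction; the proposed biometric construct replaces this explicit splitting step, so the code below illustrates the baseline rather than the paper's method.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over the field GF(P)

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: the secret is f(0) of a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
shares = make_shares(key, k=3, n=5)     # any 3 of 5 participants suffice
print(reconstruct(shares[:3]))
print(reconstruct(shares[2:]) == key)   # a different trio also works
```

Fewer than k shares reveal nothing about f(0); the paper's contribution is to obtain the same threshold behavior from biometric traits instead of distributed numeric shares.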
Different hand-derived biometric traits have been used for user authentication in many commercial systems. In this paper we investigate the possibility of using a new biometric trait, the knuckle, for user authentication. Knuckle regions are extracted from hand images, and correlation methods are used for verification. Experimental results on a data set of 125 people show that the knuckle is a viable biometric trait that can be used as an alternative to fingerprints and palmprints, or in conjunction with them.
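Correlation-based verification of the kind described can be sketched with zero-mean normalized cross-correlation; the toy intensity patches and decision thresholds below are illustrative, not drawn from the 125-person data set.

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two intensity patches."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# Toy 1-D intensity profiles standing in for flattened knuckle-region patches.
enrolled = [10, 60, 200, 180, 40, 20, 150, 90]
genuine  = [12, 58, 195, 185, 42, 18, 148, 95]   # same knuckle, slight noise
impostor = [90, 20, 40, 180, 200, 60, 10, 150]   # different knuckle

print(ncc(enrolled, genuine) > 0.95)   # high correlation for a genuine match
print(ncc(enrolled, impostor) < 0.5)   # low correlation for an impostor
```

Zero-mean normalization makes the comparison insensitive to uniform brightness changes between capture sessions, which is why correlation measures of this family are a common choice for patch-based verification.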
This paper proposes a multimodal biometric system for identity verification using four traits: face, fingerprint, iris and signature. The proposed system is designed for applications where the training database contains a face image, an iris image, two fingerprint images and/or one or two signature images for each individual. The final decision is made by fusion at the matching-score level, in which feature vectors are created independently for the query images and are then compared to the enrollment templates stored for each biometric trait during database preparation. Based on the proximity of the feature vector and the template, each subsystem computes its own matching score. These individual scores are finally combined into a total score, which is passed to the decision module. The multimodal system is developed through fusion of face, fingerprint, iris and signature recognition. The system is tested on the IITK database; its overall accuracy is found to be more than 97%, with an FAR of 2.46% and an FRR of 1.23%.
Recently, Lee et al. and Lin-Lai proposed fingerprint-based remote user authentication schemes using smart cards. We demonstrate that their schemes are vulnerable to attack and have practical pitfalls. Their schemes perform only unilateral authentication (client authentication only), with no mutual authentication between the user and the remote system, so they are susceptible to server spoofing attacks. To overcome these flaws, we present a strong remote user authentication scheme using fingerprint biometrics and smart cards. The proposed scheme is an extended and generalized form of ElGamal's signature scheme, whose security is based on the discrete logarithm problem, which has not yet been broken. The proposed scheme not only overcomes the drawbacks and problems of previous schemes, but also provides strong authentication of remote users over an insecure network. In addition, the computational cost and efficiency of the proposed scheme are better than those of other related schemes.
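A toy version of the underlying ElGamal signature scheme that the proposal extends can be sketched as follows. The parameters are deliberately small and insecure, purely for illustration, and the paper's fingerprint and smart-card binding of the private key is not shown.

```python
import math
import random

random.seed(7)

# Toy ElGamal signatures over a small prime group (real deployments use
# large, carefully chosen primes and hash the message before signing).
p = 2357            # prime modulus
g = 2               # group element used as the base

x = random.randrange(2, p - 1)   # private key (in the paper's setting, tied to the
y = pow(g, x, p)                 # user's fingerprint template and smart card)

def sign(m):
    """Sign message digest m (an integer mod p-1) with the private key x."""
    while True:
        k = random.randrange(2, p - 1)
        if math.gcd(k, p - 1) == 1:   # k must be invertible mod p-1
            break
    r = pow(g, k, p)
    s = (m - x * r) * pow(k, -1, p - 1) % (p - 1)
    return r, s

def verify(m, r, s):
    """Check the ElGamal verification equation g^m == y^r * r^s (mod p)."""
    return pow(g, m, p) == pow(y, r, p) * pow(r, s, p) % p

m = 1234                 # stand-in for the hash of the login message
r, s = sign(m)
print(verify(m, r, s))
print(verify(m + 1, r, s))   # a tampered message fails verification
```

Forging a signature without x requires solving the discrete logarithm of y, which is the hardness assumption the proposed remote authentication scheme rests on.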