Singular points are important global features of fingerprints, and singular point localization is a crucial step in biometric recognition. Moreover, the presence and position of the core point in a captured fingerprint sample can reflect whether the finger is placed properly on the sensor. Therefore, the displacement indicated by detected core points is investigated. We propose pattern-based filters to eliminate the false detections produced by state-of-the-art approaches. The experimental results show improvement across different databases. Based on the improved singular point localization algorithm, we explore and analyze the importance of singular points for biometric accuracy. The experiment is based on large-scale databases and is conducted by relating the measured quality of a fingerprint sample, given by the positions of core points, to the biometric performance. The experimental results show that the positions of core points do influence the comparison algorithms, but are not as relevant as other benchmarked quality metrics.
Finger knuckle print authentication has been researched not only as a supplemental authentication modality to fingerprint recognition but also as a method for logging into a PC or entering a building. However, in previous work, specific devices were necessary to capture a finger knuckle print, and users had to keep their fingers perfectly still during capture. In this paper, we propose a new on-the-fly finger knuckle print authentication system using a general web camera. In our proposed authentication system, users can input their finger knuckle prints without keeping their hand motionless during image capture. We also evaluate the authentication accuracy of the proposed system, achieving a 7% EER under the best conditions.
Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high-quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing use of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier, computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a presented sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach, additionally proposing three feature interpretation methods based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.
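The first tier of such a pipeline can be illustrated with a minimal Self-Organizing Map: each block-wise feature vector is quantized to the grid node (best-matching unit) that wins it, and the resulting discrete codes feed the downstream classifier. The sketch below is a generic numpy SOM with assumed grid size, learning rate, and epoch settings, not the authors' implementation:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr=0.5, seed=0):
    """Minimal Self-Organizing Map: every grid node holds a prototype
    vector that is pulled toward the samples it (or a neighbour) wins."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        sigma = max(1.0 * (1 - t / epochs), 0.3)   # shrinking neighbourhood
        alpha = lr * (1 - t / epochs)              # decaying learning rate
        for x in data:
            bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            influence = np.exp(-dist2 / (2 * sigma ** 2))
            nodes += alpha * influence[:, None] * (x - nodes)
    return nodes

def bmu_index(nodes, x):
    """Best-matching-unit index: the discrete ridge-pattern code."""
    return int(np.argmin(((nodes - np.asarray(x)) ** 2).sum(axis=1)))
```

In the two-tier scheme, the per-block BMU codes (or statistics over them) would then be the input features of the Random Forests quality classifier.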
Face recognition from a side profile view has recently received significant attention in the literature. Even though current face recognition systems have reached a certain level of maturity at angles of up to 30 degrees, their success is still limited at side profile angles. This paper presents an efficient technique for the fusion of face profile and ear biometrics. We propose to use Block-based Local Binary Patterns (LBP) to generate the features for recognition from face profile images and ear images. These feature distributions are then fused at the score level using the simple mean rule. Experimental results show that the proposed multimodal system can achieve 97.98% recognition performance, compared to 96.76% for the unimodal face profile biometric and 96.95% for the unimodal ear biometric; details are given in the Experimental Results section. Comparisons with other multimodal systems used in the literature, such as Principal Component Analysis (PCA), Full-space Linear Discriminant Analysis (FSLDA) and Kernel Fisher Discriminant Analysis (KFDA), are also presented in the Experimental Results section.
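Score-level fusion with the mean rule is straightforward to sketch: bring each modality's comparison scores onto a common scale, then average them. A minimal illustration follows; the min-max normalization step is an assumption, as the abstract does not specify how scores are made commensurable:

```python
import numpy as np

def min_max_normalize(scores):
    """Map one modality's comparison scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def mean_rule_fusion(face_scores, ear_scores):
    """Simple mean rule: average the normalized per-modality scores."""
    return (min_max_normalize(face_scores) + min_max_normalize(ear_scores)) / 2.0
```

The fused score is then thresholded exactly like a unimodal score; the mean rule needs no training, which is part of its appeal over learned fusion schemes.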
This paper describes a 3D face recognition algorithm based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, a feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data. Moreover, we apply Gabor and Gauss-Laguerre filter banks to the input image data, yielding 12 resulting feature vectors. Each representation is compared with its corresponding counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on the 13 comparison scores using a Support Vector Machine (SVM) classifier.
Iris recognition is increasingly being deployed on population-wide scales for important applications such as border security, social service administration, criminal identification and general population management. The error rates for this incredibly accurate form of biometric identification are established using well-known, laboratory-quality datasets. However, it has long been acknowledged in biometric theory that not all individuals have the same likelihood of being correctly serviced by a biometric system. Typically, techniques for identifying clients that are likely to experience a false non-match or a false match error operate on a per-subject basis. This research makes the novel hypothesis that certain ethnic groups are more or less likely to experience a biometric error. Through established statistical techniques, we demonstrate this hypothesis to be true and document the notable effect that the ethnicity of the client has on iris similarity scores. Understanding the expected impact of ethnic diversity on iris recognition accuracy is crucial to the future success of this technology as it is deployed in areas where the target population consists of clientele from a range of geographic backgrounds, such as border crossings and immigration check points.
This paper presents a template aging study of eye movement biometrics, considering three distinct biometric techniques on multiple stimuli and eye tracking systems. Short-to-midterm aging effects are examined over two weeks, on a high-resolution eye tracking system, and seven months, on a low-resolution eye tracking system. We find that, in all cases, aging effects are evident as early as two weeks after initial template collection, with an average 28% (±19%) increase in equal error rates and 34% (±12%) reduction in rank-1 identification rates. At seven months, we observe an average 18% (±8%) increase in equal error rates and 44% (±20%) reduction in rank-1 identification rates. The comparative results at two weeks and seven months suggest that there is little difference in aging effects between the two intervals; however, whether the rate of decay increases more drastically in the long term remains to be seen.
The interpretation of thermal imagery can be augmented with information derived from human thermal modeling to better infer human activity during, or prior to, data capture. This additional insight into human activity could prove useful in security and surveillance applications. We have implemented Tanabe's 65MN thermal comfort model to predict skin surface temperature under a wide variety of environmental, activity and body parameters. Here, humans are modeled as sixteen segments (head, chest, upper leg, etc.), where spherical geometry is assumed for the head and cylindrical geometry for all other segments. Each segment comprises four layers: core, muscle, fat, and skin. Clothing is modeled as an additional layer (or layers) of resistance. Users supply input parameters via our custom MATLAB graphical user interface, which includes a robust clothing database based on McCullough's A Database for Determining the Evaporative Resistance of Clothing; Tanabe's bioheat equations are then solved to predict the skin temperature of each body segment. As an initial step of model validation, we compared our computed thermal resistances with literature values. Our evaporative and dry resistances on a per-segment basis agreed with literature values; the dry resistance of each segment varied by no more than 0.03 m²·°C/W. Model validation will be extended to compare the results of our human subject trials (known body parameters, clothing, environmental factors and activity levels) to model outputs. Agreement would further substantiate propagating model-predicted skin temperatures through the thermal imager's transfer function to predict human heat signatures in thermal imagery.
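The role dry resistance plays in such a model can be illustrated with a one-segment, steady-state sketch: heat flowing from the core to the environment crosses the tissue resistance and the clothing/air-film resistance in series, which fixes the skin temperature. This is a drastic simplification of the sixteen-segment, four-layer bioheat model, and the resistance values in the example are hypothetical:

```python
def skin_temperature(t_core, t_ambient, r_tissue, r_clothing_air):
    """Steady-state 1D series-resistance sketch (temperatures in deg C,
    resistances in m^2*degC/W): the same heat flux q crosses tissue and
    clothing/air film, so the skin sits between core and ambient."""
    q = (t_core - t_ambient) / (r_tissue + r_clothing_air)  # W/m^2
    return t_core - q * r_tissue
```

For example, with a 37 °C core, 20 °C ambient, and illustrative resistances of 0.05 and 0.2 m²·°C/W, the sketch yields a skin temperature of 33.6 °C, showing how added clothing resistance raises the predicted skin temperature.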
In order to fulfill the potential of fingerprint templates as the basis for authentication schemes, one needs to design a hash function for fingerprints that achieves acceptable matching accuracy and simultaneously has provable security guarantees, especially for parameter regimes that are needed to match fingerprints in practice. While existing matching algorithms can achieve impressive matching accuracy, they have no security guarantees. On the other hand, provably secure hash functions have poor matching accuracy and/or do not guarantee security when parameters are set to practical values. In this work, we present a secure hash function that has the best known tradeoff between security guarantees and matching accuracy. At a high level, our hash function is simple: we apply an off-the-shelf hash function to certain collections of minutia points (in particular, triplets of minutia points, or "minutia triangles"). However, to realize the potential of this scheme, we have to overcome certain theoretical and practical hurdles. In addition to the novel idea of combining clustering ideas from matching algorithms with ideas from the provable security of hash functions, we also apply an intermediate translation-invariant but rotation-variant map to the minutia points before applying the hash function. This latter idea helps improve the tradeoff between matching accuracy and matching efficiency.
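The core idea can be sketched in a few lines: shift each minutia triplet so that one point sits at the origin (a translation-invariant but rotation-variant map), quantize, and hash. The grid-quantization step and the choice of SHA-256 are illustrative assumptions; the paper's clustering machinery and security analysis are omitted:

```python
import hashlib
from itertools import combinations

def triplet_hashes(minutiae, q=8):
    """Hash every minutia triplet after a translation-invariant,
    rotation-variant map: translate the triplet so its first point is
    at the origin, snap to a q-unit grid, then apply SHA-256."""
    digests = set()
    for tri in combinations(sorted(minutiae), 3):
        x0, y0 = tri[0]
        canon = tuple(((x - x0) // q, (y - y0) // q) for x, y in tri)
        digests.add(hashlib.sha256(repr(canon).encode()).hexdigest())
    return digests
```

Because only relative coordinates enter the hash, a uniformly translated copy of the fingerprint yields the same digest set, while quantization absorbs small placement noise; rotation, by design, changes the digests.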
Optical coherence tomography (OCT) has recently been proposed by a number of laboratories as a promising tool for fingerprint acquisition and for fake discrimination. Indeed, since OCT is a non-contact, non-destructive optical method that virtually sections the volume of biological tissues that strongly scatter light, applying it to fingerprints appears natural. Nevertheless, most OCT setups have to go through the long acquisition of a full 3D image to isolate an "en-face" image suitable for fingerprint analysis. A few "en-face" OCT approaches have been proposed that use either a complex 2D scanning setup and image processing, or full-field illumination with a camera and a spatially coherent source, which induces crosstalk and degrades image quality. We show here that Full-Field OCT (FFOCT) using a spatially incoherent source is able to provide "en-face" high-quality optical sectioning of the finger skin. Indeed, this approach offers a unique spatial resolution, revealing a number of morphological details of fingerprints that are not seen with competing OCT setups. In particular, the cellular structure of the stratum corneum and the epidermis-dermis interface appear clearly. We describe our high-resolution (1 micrometer, isotropic) setup and present our first design for obtaining a large field of view while keeping a good sectioning ability of about 3 micrometers. We display the results obtained using these two setups for fingerprint examination.
Age and gender of an individual, when available, can contribute to identification decisions provided by primary biometrics and help improve matching performance. In this paper, we propose a system which automatically infers age and gender from the fingerprint image. Current approaches for predicting age and gender generally exploit manually extracted features such as ridge count and white line count. Existing automated approaches have significant limitations in accuracy, especially when dealing with data pertaining to elderly females. The model proposed in this paper exploits image quality features synthesized from 40 different frequency bands, and image texture properties captured using the Local Binary Pattern (LBP) and Local Phase Quantization (LPQ) operators. We evaluate the performance of the proposed approach using fingerprint images collected from 500 users with an optical sensor. The approach achieves a prediction accuracy of 89.1% for age and 88.7% for gender.
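The basic LBP operator that such texture features build on can be sketched briefly: each pixel is encoded as an 8-bit pattern of threshold comparisons against its eight neighbours. The version below wraps at image borders for brevity (real implementations typically crop or pad) and is a generic sketch, not the paper's feature extractor:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern: each pixel gets one bit
    per neighbour, set when the neighbour is >= the centre value.
    Borders wrap (np.roll) purely for brevity."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        out |= (shifted >= img).astype(np.uint8) << bit
    return out
```

Histograms of these 8-bit codes over image blocks then serve as the texture descriptor fed to a classifier.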
Research on spectral imaging technology is becoming more extensive in the field of material-evidence examination, and near-infrared (NIR) spectral imaging is an important part of full-spectrum imaging technology. This paper reports experiments on a NIR spectral imaging method and an associated image acquisition system. The imaging experiments acquire sets of NIR spectral images and combine them into pseudo-color images that successfully reveal potential traces. Using the NIR spectrometer and the image acquisition system, we obtain extensive NIR spectral information for latent blood, stamp and smear fingerprints on common objects, and study their NIR spectral characteristics. The experiments explore the NIR reflectance spectra of a wide variety of object materials together with the corresponding imaging modalities. The results not only provide a reference for choosing NIR wavelengths to reveal potential traces on object surfaces, but also supply important data for the further development of NIR spectral imaging technology.
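Forming a pseudo-color image from a set of spectral images can be as simple as mapping three band images onto the RGB channels after contrast-stretching each one, so that faint differences between bands become visible color contrasts. The band-to-channel assignment below is illustrative, not the paper's specific processing chain:

```python
import numpy as np

def pseudo_color(band_r, band_g, band_b):
    """Map three spectral band images onto RGB channels, each band
    independently stretched to [0, 1] to maximize contrast."""
    def stretch(band):
        b = np.asarray(band, dtype=float)
        lo, hi = b.min(), b.max()
        return (b - lo) / (hi - lo) if hi > lo else np.zeros_like(b)
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])
```

A trace substance whose reflectance differs between the chosen bands then shows up as a distinct hue in the composite, even when each single-band image looks uniform.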
The main challenges for facial biometrics are robustness and the ability to adapt to changes in position, orientation, facial expression, and illumination. This research addresses the predominant deficiencies in this regard and systematically investigates a facial authentication system in the Euclidean domain. In the proposed method, Euclidean geometry in 2D vector space is used for feature extraction and authentication. In particular, each assigned point of a candidate's biometric features is treated as a 2D geometric coordinate in the Euclidean vector space. Algebraic shapes of the extracted candidate features are also computed and compared. The proposed authentication method is tested on images from the public Put Face Database. The performance of the proposed method is evaluated based on the Correct Recognition Rate (CRR), False Acceptance Rate (FAR), and False Rejection Rate (FRR). The theoretical foundation of the proposed method, along with the experimental results, is also presented in this paper. The experimental results demonstrate the effectiveness of the proposed method.
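Treating facial feature points as coordinates in the Euclidean plane suggests distance-based descriptors: the pairwise distances between landmarks form a shape vector that two faces can be compared on. This is a minimal sketch of the geometric idea; the paper's algebraic-shape comparison is richer:

```python
import math

def feature_distances(points):
    """Pairwise Euclidean distances between 2D landmark coordinates,
    forming a simple geometric feature vector for comparison."""
    return [math.dist(p, q)
            for i, p in enumerate(points)
            for q in points[i + 1:]]
```

Two such vectors can then be compared with any standard distance measure; note that raw pairwise distances are translation- and rotation-invariant but not scale-invariant, so normalization would be needed in practice.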
This paper presents a novel approach to remotely authenticating a user by applying the Vaulted Fingerprint Verification (VFV) protocol. It proposes an adaptation of the Vaulted Verification (VV) concept with a fingerprint minutia triangle representation. Over the past decade, triangle features have been used in multiple fingerprint algorithms. Triangles are constructed from three fingerprint minutiae and result in a feature vector that is translation and rotation invariant. In VFV, the user's minutia triangles are arranged into blocks; each block of triangles is paired with a chaff block. In turn, each real/chaff block is encrypted with a key that is known only to the user. These encrypted block pairs can be used to generate a "challenge" by swapping blocks according to a random bitstring and requiring the remote user to reproduce that exact string. For identity verification, the user creates a new triangle feature vector from his or her fingerprint. This feature vector is matched against each block, which allows the user to identify the "real" block in each pair and recover the bitstring. In this process, individual triangle matching rates are improved by approximate matching on the feature vectors, grouping several feature vectors together, and correcting errors in the final bitstring. This paper presents data on an optimal threshold for approximate matching, the accuracy of triangle matching, the distinguishability between a user's triangles and chaff triangles, and the accuracy of the VFV system.
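The challenge-response core of the scheme can be sketched abstractly: a secret random bit orders each real/chaff pair, and a legitimate user recovers each bit by recognizing which element of the pair matches his or her fresh fingerprint. Block encryption, approximate triangle matching, and error correction are all omitted from this sketch:

```python
import secrets

def make_challenge(real_blocks, chaff_blocks):
    """Build a VFV-style challenge: for each (real, chaff) pair, a
    secret random bit decides whether the pair's order is swapped."""
    bits = [secrets.randbits(1) for _ in real_blocks]
    pairs = [(c, r) if b else (r, c)
             for b, r, c in zip(bits, real_blocks, chaff_blocks)]
    return bits, pairs

def answer_challenge(pairs, matches_user):
    """Recover the bitstring: the bit is 1 exactly when the user's
    fresh biometric data matches the second element (swapped pair)."""
    return [1 if matches_user(second) else 0 for first, second in pairs]
```

Only someone whose fingerprint distinguishes real blocks from chaff can reproduce the bitstring, which is what turns the biometric into a remote authentication secret.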
Employing mobile sensor data to recognize user behavioral activities has been well studied in recent years. However, adopting such data as a biometric modality has rarely been explored. Existing methods either use the data to recognize gait, which is considered a distinguishing identity feature, or segment a specific kind of motion for user recognition, such as the phone pick-up motion. Since identity and motion gesture jointly affect the motion data, fixing the gesture (walking or phone pick-up) definitively simplifies the identity sensing problem. However, it also introduces the complexity of gesture detection, or requires a higher sample rate from the motion sensors, which may drain the battery quickly and affect the usability of the phone. In general, whether large-scale motion-based user authentication satisfies the accuracy requirements of a stand-alone biometric modality is still under investigation. In this paper, we propose a novel approach to using motion sensor readings for user identity sensing. Instead of decoupling the user identity from a gesture, we reasonably assume users have their own distinguishing phone usage habits and extract the identity from fuzzy activity patterns, represented by a combination of body movements, whose signal chains span a relatively low frequency spectrum, and hand movements, whose signals span a relatively high frequency spectrum. Bayesian rules are then applied to analyze the dependency of the different frequency components in the signals. During testing, a posterior probability of user identity given the observed chains can be computed for authentication. Tested on an accelerometer dataset with 347 users, our approach demonstrates promising results.
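The separation of low-frequency body movement from high-frequency hand movement can be sketched with a simple FFT band split; the cutoff frequency is an assumption, and the paper's Bayesian dependency analysis over the resulting components is not shown:

```python
import numpy as np

def split_bands(signal, fs, cutoff_hz):
    """Split a motion-sensor signal into a low-frequency component
    (body movement) and a high-frequency component (hand movement)
    by zeroing FFT bins on either side of the cutoff."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    low = spectrum.copy()
    low[freqs > cutoff_hz] = 0
    high = spectrum.copy()
    high[freqs <= cutoff_hz] = 0
    return np.fft.irfft(low, n=len(signal)), np.fft.irfft(high, n=len(signal))
```

Features extracted separately from the two components (energies, chain statistics) would then feed the posterior computation of user identity.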
This work introduces and evaluates a novel eye movement-driven biometric approach that employs eye fixation density maps for person identification. The proposed feature offers a dynamic representation of the biometric identity, storing rich information regarding the behavioral and physical eye movement characteristics of individuals. The innate ability of fixation density maps to capture the spatial layout of eye movements, in conjunction with their probabilistic nature, makes them a particularly suitable option as an eye movement biometric trait when free-viewing stimuli are presented. In order to demonstrate the effectiveness of the proposed approach, the method is evaluated on three different datasets containing a wide gamut of stimulus types, such as static images, video and text segments. The obtained results indicate a minimum Equal Error Rate (EER) of 18.3%, revealing the potential of fixation density maps as an enhancing biometric cue in identification scenarios in dynamic visual environments.
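A fixation density map is essentially a normalized 2D histogram of fixation locations; two maps can then be compared with any probabilistic similarity measure. In the sketch below, the grid size and the use of histogram intersection are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def fixation_density_map(fixations, shape=(32, 32), extent=(1.0, 1.0)):
    """Accumulate (x, y) fixation points, given in normalized screen
    coordinates, into a 2D histogram normalized to sum to 1."""
    h, w = shape
    grid = np.zeros(shape)
    for x, y in fixations:
        i = min(int(y / extent[1] * h), h - 1)
        j = min(int(x / extent[0] * w), w - 1)
        grid[i, j] += 1
    return grid / grid.sum()

def similarity(map_a, map_b):
    """Histogram intersection of two density maps (1.0 = identical)."""
    return float(np.minimum(map_a, map_b).sum())
```

For identification, a probe map would be scored against each enrolled subject's map and the highest-similarity identity returned; in practice the raw maps are usually smoothed (e.g. with a Gaussian kernel) before comparison.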