This PDF file contains the front matter associated with SPIE Proceedings Volume 9457, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Protecting data is a critical part of life in the modern world. The science of protecting data, known as cryptography,
makes use of secret keys to encrypt data in a format that is not easily decipherable. However, most modern cryptography
systems use passwords to perform user authentication. These passwords are a weak link in the security chain, as well as
a common point of attack on cryptographic schemes. One alternative to passwords is biometrics: using a person’s
physical characteristics to verify the person’s identity and unlock the data accordingly. This study provides a concrete
implementation of the Cambridge biometric cryptosystem. In addition, hardware acceleration has been performed on the
system in order to reduce system runtime and energy usage, which is compared with software-level code optimization.
The experiment takes place on a Xilinx Zynq-7000 All Programmable SoC. Software implementation is run on one of
the embedded ARM A9 cores while hardware implementation makes use of the programmable logic. This has resulted in
an algorithm with strong performance characteristics in both energy usage and runtime.
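The key-binding idea behind such a biometric cryptosystem can be sketched as a fuzzy commitment: a secret key is expanded with an error-correcting code and XOR-ed with the enrolled iris code, and a sufficiently similar fresh iris code unlocks the key. This is a minimal illustrative sketch only; the toy repetition code and all parameters are assumptions, not the Hadamard/Reed-Solomon construction of the actual Cambridge system, and it does not reflect the paper's hardware implementation.

```python
import random

REP = 7  # repetition factor (illustrative; the real system uses Hadamard/Reed-Solomon codes)

def encode(key_bits):
    """Expand each key bit into REP copies (a toy error-correcting code)."""
    return [b for b in key_bits for _ in range(REP)]

def decode(code_bits):
    """Majority-vote each block of REP bits back to one key bit."""
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def lock(key_bits, iris_code):
    """Bind the key to the iris code: lock = ECC(key) XOR iris."""
    return [c ^ i for c, i in zip(encode(key_bits), iris_code)]

def unlock(locked, fresh_iris_code):
    """Recover the key from a fresh (noisy) sample of the same iris."""
    return decode([l ^ i for l, i in zip(locked, fresh_iris_code)])

random.seed(0)
key = [random.randint(0, 1) for _ in range(16)]
iris = [random.randint(0, 1) for _ in range(16 * REP)]
# A fresh capture of the same iris, with a few flipped bits:
noisy = list(iris)
for pos in random.sample(range(len(iris)), 3):
    noisy[pos] ^= 1
recovered = unlock(lock(key, iris), noisy)
```

With at most 3 bit errors, each 7-bit block still decodes correctly by majority vote, so the exact key is recovered without the iris code itself ever being stored.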
We consider the problem of generating a biometric image from two different traits. Specifically, we focus on
generating an IrisPrint that inherits its structure from a fingerprint image and an iris image. To facilitate this,
the continuous phase of the fingerprint image, characterizing its ridge flow, is first extracted. Next, a scheme is
developed to extract “minutiae” from an iris image. Finally, an IrisPrint, that resembles a fingerprint, is created
by mixing the ridge flow of the fingerprint with the iris minutiae. Preliminary experiments suggest that the new
biometric image (i.e., IrisPrint) (a) can potentially be used for authentication by an existing fingerprint matcher,
and (b) can potentially conceal and preserve the privacy of the original fingerprint and iris images.
Digital currencies, such as Bitcoin, offer convenience and security to criminals operating in the black marketplace. Some Bitcoin marketplaces, such as Silk Road, even claim anonymity. This claim contradicts the findings in this work, where long-term transactional behavior is used to identify and verify account holders. Transaction timestamps and network properties observed over time contribute to this finding. The timestamp of each transaction is the result of many factors: the desire to purchase an item, daily schedule and activities, as well as hardware and network latency. Dynamic network properties of the transaction, such as coin flow and the number of edge outputs and inputs, contribute further to reveal account identity. In this paper, we propose a novel methodology for identifying and verifying Bitcoin users based on the observation of Bitcoin transactions over time. The behavior we attempt to quantify roughly occurs in the social band of Newell's time scale. A subset of the Blockchain 230686 is taken, selecting users that initiated between 100 and 1000 unique transactions per month for at least 6 different months. This dataset shows evidence of being nonrandom and nonlinear, so a dynamical systems approach is taken. Classification and authentication accuracies are obtained under various representations of the monthly Bitcoin samples: outgoing transactions, as well as both outgoing and incoming transactions, are considered, along with the timing and dynamic network properties of transaction sequences. The most appropriate representations of monthly Bitcoin samples are proposed. Results show an inherent lack of anonymity by exploiting patterns in long-term transactional behavior.
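The user-selection criterion described above (100 to 1000 unique transactions per month in at least 6 different months) can be sketched as follows; the (user, timestamp) record format is a hypothetical simplification of real Blockchain data.

```python
from collections import defaultdict
from datetime import datetime, timezone

def monthly_counts(transactions):
    """Map user -> {(year, month): number of transactions initiated}."""
    counts = defaultdict(lambda: defaultdict(int))
    for user, ts in transactions:  # assumed (user_id, unix_timestamp) pairs
        d = datetime.fromtimestamp(ts, tz=timezone.utc)
        counts[user][(d.year, d.month)] += 1
    return counts

def select_users(transactions, lo=100, hi=1000, min_months=6):
    """Keep users with lo..hi transactions/month in at least min_months months."""
    selected = []
    for user, months in monthly_counts(transactions).items():
        qualifying = sum(1 for n in months.values() if lo <= n <= hi)
        if qualifying >= min_months:
            selected.append(user)
    return selected
```

The timing and dynamic-network features the paper builds on top of this selection (inter-transaction intervals, coin flow, edge in/out counts) would then be computed per monthly sample.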
Cross-band facial recognition is a difficult task, even for the most robust matching algorithms. Inherent factors such as camera effects (blur, noise, and sampling), and variation in pose and illumination, are known to negatively affect algorithm performance. Because cross-band matching algorithms are in the infancy of development, it is currently unclear if their performance is superior to human observers performing this task. In this paper, we present findings from a pilot study aimed at analyzing the ability of an ensemble of human observers to perform the 1:N cross-band facial identification task on degraded facial images, where the probe and gallery images were captured in different spectral bands (visible, SWIR, MWIR and LWIR). Results from our 11-alternative forced choice perception study indicate that: 1) a group of observers familiar with even a subset of subjects in a gallery set are, on average, able to perform the task with higher probability (p > 0.15) than a group of observers with no prior exposure, and 2) task performance for both the familiar and unfamiliar groups increased 1.5-3.4% when matching multi-spectral probe images to galleries of 24-bit color facial images vs. 8-bit monochrome facial images. For the SWIR case, however, we observed a 9.1% increase in performance with 24-bit facial images vs. 8-bit facial images. Results from this study can be leveraged for future work directly comparing cross-band matching performance of humans vs. algorithms.
It has been proven that the Hamming distance score between frontal and off-angle iris images of the same eye differs in iris recognition systems. This difference in Hamming distance score is caused by many factors, such as image acquisition angle, occlusion, pupil dilation, and the limbus effect. In this paper, we first study the effect of angle variations between the iris plane and the image acquisition system. We present how the Hamming distance changes for different off-angle iris images even when they come from the same iris. We observe that an increase in the acquisition angle of the compared iris images causes an increase in the Hamming distance. Second, we propose a new technique for off-angle iris recognition that involves creating a gallery of iris images at different off-angles (such as 0, 10, 20, 30, 40, and 50 degrees) and comparing each probe image with these gallery images. We show the accuracy of this gallery approach for off-angle iris recognition.
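The gallery approach can be sketched as follows: a probe iris code is compared against enrolled codes captured at several acquisition angles, and the smallest fractional Hamming distance decides the match. The bit-list codes and tiny gallery are illustrative; real iris codes are much longer and also carry occlusion masks, which this sketch omits.

```python
def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two equal-length bit lists."""
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def match_against_gallery(probe, gallery):
    """Return (best_angle, best_distance) over a {angle: code} gallery."""
    return min(((angle, hamming_distance(probe, code))
                for angle, code in gallery.items()),
               key=lambda pair: pair[1])

# Toy gallery of the same iris enrolled at two acquisition angles:
gallery = {0:  [0, 1, 1, 0, 1, 0, 0, 1],
           30: [0, 1, 0, 0, 1, 1, 0, 1]}
probe = [0, 1, 0, 0, 1, 1, 0, 0]   # closest to the 30-degree template
angle, dist = match_against_gallery(probe, gallery)
```

Here the probe disagrees with the 30-degree template in 1 of 8 bits but with the frontal template in 3 of 8, so the gallery entry nearest in acquisition angle wins, mirroring the paper's observation that Hamming distance grows with the angle gap.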
In this work, we study the possibility of indexing color iris images. In the proposed approach, a clustering scheme on a training set of iris images is used to determine cluster centroids that capture the variations in chromaticity of the iris texture. An input iris image is indexed by comparing its pixels against these centroids and determining the dominant clusters - i.e., those clusters to which the majority of its pixels are assigned. The cluster indices serve as an index code for the input iris image and are used during the search process, when an input probe has to be compared with a gallery of irides. Experiments using multiple color spaces convey the efficacy of the scheme on good quality images, with hit rates close to 100% being achieved at low penetration rates.
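A minimal sketch of the indexing step, assuming precomputed chromaticity centroids: each pixel is assigned to its nearest centroid, and the indices of the clusters receiving the most pixels form the index code. The centroids, pixel values, and number of dominant clusters are illustrative assumptions.

```python
def nearest_centroid(pixel, centroids):
    """Index of the centroid closest (squared Euclidean) to the pixel."""
    return min(range(len(centroids)),
               key=lambda k: sum((p - c) ** 2
                                 for p, c in zip(pixel, centroids[k])))

def index_code(pixels, centroids, top=2):
    """Indices of the `top` clusters receiving the most pixels."""
    counts = [0] * len(centroids)
    for px in pixels:
        counts[nearest_centroid(px, centroids)] += 1
    order = sorted(range(len(centroids)), key=lambda k: -counts[k])
    return sorted(order[:top])

centroids = [(200, 50, 40), (120, 90, 60), (60, 60, 90)]  # assumed R,G,B centres
pixels = [(198, 55, 44), (118, 92, 58), (125, 88, 61), (205, 48, 39)]
code = index_code(pixels, centroids)
```

During search, only gallery irides whose index codes overlap the probe's would be compared in full, which is what keeps the penetration rate low.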
Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks, including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state of the art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain-specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing, as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human-level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices.
Such an "ImageNet for Face Recognition" would likely receive a warm welcome from researchers and practitioners alike.
Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.
The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
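A cancelable template of the transformation-based kind reviewed above can be sketched with a simple permutation-plus-XOR transform; this transform family is an illustrative assumption, not one proposed in the paper. Because the transform preserves Hamming distance, matching in the transformed domain behaves like matching the raw templates, and a compromised template can be re-issued by changing the seed.

```python
import random

def make_transform(seed, n):
    """Derive an application-specific permutation and XOR pad from a seed."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    pad = [rng.randint(0, 1) for _ in range(n)]
    return perm, pad

def transform(template, key):
    """Permute the binary template, then XOR with the pad."""
    perm, pad = key
    return [template[p] ^ b for p, b in zip(perm, pad)]

def hamming(a, b):
    """Number of disagreeing bits between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
fresh    = [1, 0, 1, 0, 0, 0, 1, 0]   # same user, one bit of sensor noise
key = make_transform("app-42", len(enrolled))  # hypothetical per-application seed
```

Only the transformed enrolled template is stored; a breach of one application reveals neither the raw template nor the templates issued to other applications under different seeds.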
A number of approaches for personal authentication using palmprint features have been proposed in the literature, the majority of which focus on improving matching performance. However, of late, preventing potential attacks on biometric systems has become a major concern as more and more biometric systems get deployed for a wide range of applications. Among various types of attacks, the sensor-level attack, commonly known as a spoof attack, has emerged as the most common attack due to the simplicity of its execution. In this paper, we present an approach for the detection of display- and print-based spoof attacks on palmprint verification systems. The approach is based on the analysis of acquired hand images for estimating surface reflectance. First- and higher-order statistical features computed from the distributions of pixel intensities and sub-band wavelet coefficients form the feature set. A trained binary classifier utilizes the discriminating information to determine if the acquired image is of a real hand or a fake one. Experiments are performed on a publicly available hand image dataset containing 1300 images corresponding to 230 subjects. Experimental results show that real hand biometric samples can be substituted by fake digital or print copies with an alarming spoof acceptance rate as high as 79.8%. Experimental results also show that the proposed spoof detection approach is very effective in discriminating between real and fake palmprint images. The proposed approach consistently achieves over 99% average 10-fold cross-validation classification accuracy in our experiments.
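The first-order part of the feature set described above (statistics of the pixel-intensity distribution) can be sketched as follows; the sub-band wavelet features and the trained binary classifier are omitted, and the population-moment formulas used here are a plain-Python assumption.

```python
def intensity_statistics(pixels):
    """Return (mean, variance, skewness, kurtosis) of an intensity list.

    Population moments: variance = E[(x - mean)^2], skewness and kurtosis
    are the standardized third and fourth moments.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    if std == 0:               # flat image: no higher-order structure
        return mean, 0.0, 0.0, 0.0
    skew = sum(((p - mean) / std) ** 3 for p in pixels) / n
    kurt = sum(((p - mean) / std) ** 4 for p in pixels) / n
    return mean, var, skew, kurt
```

The intuition is that display and print reproductions alter surface reflectance, shifting these distributional statistics enough for a binary classifier to separate real hands from fakes.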
The collection of data from human subjects for biometrics research in the United States requires the development of a data collection protocol that is reviewed by a Human Subjects Institutional Review Board (IRB). The IRB reviews the protocol for risks and approves it if it meets the criteria for approval specified in the relevant Federal regulations (45 CFR 46). Many other countries operate similar mechanisms for the protection of human subjects. IRBs review protocols for safety, confidentiality, and for minimization of risk associated with identity disclosure. Since biometric measurements are potentially identifying, IRB scrutiny of biometrics data collection protocols can be expected to be thorough. This paper discusses the intricacies of IRB best practices within the worldwide biometrics community. This is important because research decisions involving human subjects are made at a local level and do not set a precedent for decisions made by another IRB. In many cases, what one board approves is not approved by another board, resulting in significant inconsistencies that prove detrimental to both researchers and human subjects. Furthermore, the level of biometrics expertise on IRBs may be low, which can contribute to the unevenness of reviews. This publication will suggest possible best practices for designing and seeking IRB approval for human subjects research involving biometrics measurements. The views expressed are the opinions of the authors.
This paper is concerned with an important category of applications of open-set speaker identification in criminal investigation, which involves operating with speech of short and varied duration. The study presents investigations into the adverse effects of such an operating condition on the accuracy of open-set speaker identification, based on both GMM-UBM and i-vector approaches. The experiments are conducted using a protocol developed for the identification task, based on the NIST speaker recognition evaluation corpus of 2008. In order to closely cover the real-world operating conditions in the considered application area, the study includes experiments with various combinations of training and testing data duration. The paper details the characteristics of the experimental investigations conducted and provides a thorough analysis of the results obtained.