Describable visual attributes are a powerful way to label aspects of an image and, taken together, build a detailed representation of a scene's appearance. Attributes enable highly accurate approaches to a variety of tasks, including object recognition, face recognition, and image retrieval. An important consideration not previously addressed in the literature is the reliability of attribute classifiers as the quality of an image degrades. In this paper, we introduce a general framework for conducting reliability studies that assesses attribute classifier accuracy as a function of image degradation. This framework allows us to bound, in a probabilistic manner, the input imagery that is deemed acceptable for consideration by the attribute system, without requiring ground truth attribute labels. We introduce a novel differential probabilistic model for accuracy assessment that leverages a strong normalization procedure based on statistical extreme value theory. To demonstrate the utility of our framework, we present an extensive case study using 64 unique facial attributes, computed on data derived from the Labeled Faces in the Wild (LFW) data set. We also show that such reliability studies can yield significant compression benefits for mobile applications.
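The extreme-value-theory normalization mentioned above can be illustrated with a minimal sketch: fit a Weibull distribution to the tail of non-match classifier scores and map a raw score to a probability. This is a generic EVT (w-score-style) normalization, not the paper's actual model; all function and variable names here are illustrative assumptions.

```python
# Minimal EVT score-normalization sketch (assumed, not the paper's API).
# A Weibull distribution is fit to the extreme tail of non-match scores;
# the fitted CDF then maps a raw decision score into [0, 1].
import numpy as np
from scipy.stats import weibull_min

def evt_normalize(raw_score, tail_scores):
    """Normalize a raw classifier score using a Weibull fit to the
    top-k non-match (tail) scores."""
    shape, loc, scale = weibull_min.fit(tail_scores)
    return weibull_min.cdf(raw_score, shape, loc=loc, scale=scale)

rng = np.random.default_rng(0)
nonmatch_scores = rng.normal(0.0, 1.0, 500)
tail = np.sort(nonmatch_scores)[-20:]   # keep only the extreme tail

p = evt_normalize(2.5, tail)
print(0.0 <= p <= 1.0)
```

Because only the tail of the score distribution is modeled, the normalization is robust to the shape of the bulk of the non-match scores, which is the usual motivation for EVT-based normalization in recognition pipelines.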
As the use of biometrics becomes more widespread, the privacy concerns that stem from it are becoming more apparent. As the usage of mobile devices grows, so does the desire to implement biometric identification on such devices, the large majority of which are mobile phones. While work is being done to implement various biometric modalities on mobile phones, such as photo-based biometrics, voice is a more natural choice. The idea of voice as a biometric identifier has been around for a long time; one of the major concerns with using it is the instability of voice. We have developed a protocol that addresses this instability while preserving privacy. This paper describes a novel protocol that allows a user to authenticate using voice on a mobile/remote device without compromising their privacy. We first discuss the Vaulted Verification protocol, which has recently been introduced in the research literature, and describe its limitations. We then introduce a novel adaptation and extension of the Vaulted Verification protocol to voice, dubbed Vaulted Voice Verification (V3). Following that, we present a performance evaluation and conclude with a discussion of security and future work.
The issues of applying facial recognition at significant distances are non-trivial and often subtle. This paper summarizes 7 years of effort on face at a distance, which for us is far more than a fad. Our effort started under the DARPA Human Identification at a Distance (HID) program. Of all the programs under HID, only a few of the efforts demonstrated face recognition at greater than 25 ft, and only one, led by Dr. Boult, studied face recognition at distances greater than 50 meters. Two issues were explicitly studied. The first was atmospherics/weather, which can have a measurable impact at these distances. The second was sensor issues, including resolution, field-of-view, and dynamic range. This paper starts with a discussion and some of our results on sensor-related issues, including resolution, FOV, dynamic range, and lighting normalization. It then discusses the "Photohead" technique developed to analyze the impact of weather/imaging and atmospherics at medium distances. The paper presents experimental results showing the limitations of existing systems at significant distances and under non-ideal weather conditions, and offers some reasons for the weak performance. It ends with a discussion of our FASST™ (failure prediction from similarity surface theory) and RandomEyes™ approaches, combined into the FIINDER™ system, and how they improved FAAD.