Describable visual attributes are a powerful way to label aspects of an image and, taken together, they build a detailed representation of a scene's appearance. Attributes enable highly accurate approaches to a variety of tasks, including object recognition, face recognition, and image retrieval. An important consideration not previously addressed in the literature is the reliability of attribute classifiers as the quality of an image degrades. In this paper, we introduce a general framework for conducting reliability studies that assesses attribute classifier accuracy as a function of image degradation. This framework allows us to bound, in a probabilistic manner, the input imagery that is deemed acceptable for consideration by the attribute system, without requiring ground-truth attribute labels. We introduce a novel differential probabilistic model for accuracy assessment that leverages a strong normalization procedure based on statistical extreme value theory. To demonstrate the utility of our framework, we present an extensive case study using 64 unique facial attributes, computed on data derived from the Labeled Faces in the Wild (LFW) data set. We also show that such reliability studies can yield significant compression benefits for mobile applications.