We report methodologies for computing high-recall masks for document image content extraction, that is, the
location and segmentation of regions containing handwriting, machine-printed text, photographs, blank space,
etc. The resulting segmentation is pixel-accurate, which accommodates arbitrary zone shapes (not merely rectangles).
We describe experiments showing that iterated classifiers can increase recall of all content types, with
little loss of precision. We also introduce two methodological enhancements: (1) a multi-stage voting rule; and (2)
a scoring policy that treats blank pixels as a "don't care" class with respect to the other content classes. These enhancements
improve both recall and precision, achieving at least 89% recall and at least 87% precision across three content
types: machine-print, handwriting, and photo.
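The multi-stage voting rule mentioned above can be illustrated with a minimal sketch. The abstract does not specify the rule's details, so the following is a hypothetical per-pixel majority vote across classifier stages, with ties broken in favor of the latest stage; the function name `vote` and the tie-breaking policy are assumptions, not the authors' method.

```python
from collections import Counter

def vote(stage_labels):
    """Fuse per-pixel labels from several classifier stages.

    stage_labels: list of per-pixel label sequences, one per stage
    (earliest stage first). Returns the majority label per pixel;
    ties go to the label chosen by the latest stage that voted for
    a tied label. This tie-break is a hypothetical design choice.
    """
    n_pixels = len(stage_labels[0])
    fused = []
    for i in range(n_pixels):
        votes = [stage[i] for stage in stage_labels]
        counts = Counter(votes)
        top = counts.most_common(1)[0][1]
        winners = {lab for lab, c in counts.items() if c == top}
        # scan votes from the last stage backward to find a winner
        fused.append(next(lab for lab in reversed(votes) if lab in winners))
    return fused
```

For example, with three stages voting `print, print, photo` on one pixel and `hand, photo, photo` on another, the fused mask would label the pixels `print` and `photo` respectively.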
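The "don't care" scoring policy can likewise be sketched: when a ground-truth pixel is blank, any predicted label is neither rewarded nor penalized, so blank regions do not distort precision for the content classes. The function below is a plausible reading of that policy, not the paper's actual scoring code; the names `score` and `blank` are assumptions.

```python
def score(pred, truth, blank="blank"):
    """Per-class pixel precision and recall with blank as 'don't care'.

    pred, truth: equal-length sequences of per-pixel class labels.
    Ground-truth blank pixels are excluded from false-positive counts,
    so predicting content over blank space carries no penalty.
    Returns {class: (precision, recall)} for the non-blank classes.
    """
    classes = {t for t in truth} | {p for p in pred}
    classes.discard(blank)
    result = {}
    for c in classes:
        tp = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        # predictions over ground-truth blank are ignored, not fp
        fp = sum(1 for p, t in zip(pred, truth)
                 if p == c and t != c and t != blank)
        fn = sum(1 for p, t in zip(pred, truth) if p != c and t == c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        result[c] = (precision, recall)
    return result
```

Under this policy a classifier that labels a blank pixel as `print` keeps 100% print precision, whereas a conventional scorer would count it as a false positive.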