Much of the early work in robotics focused on developing guaranteed plans for accomplishing tasks specified at a high level. Such task specifications might be of the form `mesh these two gears', or `place part A inside region B'. It is not always possible, however, especially in the realm of assembly planning, to generate guaranteed plans. For example, tolerancing errors in the parts might render an assembly infeasible. The Error Detection and Recovery (EDR) framework of Donald was developed to address these inadequacies of the guaranteed planning framework. EDR strategies will either achieve a goal if it is recognizably reachable, or signal failure. Given a geometrically-specified goal region G, an EDR strategy involves computing a failure region H and a motion plan that will terminate recognizably either in G or H. The question addressed in this work is that of computing sensing strategies for distinguishing which of G and H has been attained. We propose a method for strengthening the guarantee of reaching G or H into a guarantee of recognizability. In particular, we show how to configure a sensor or set of sensors so that a target object in G or in H can be distinguished. Our approach assumes a general sensor model, and builds on algorithms for computing partial visibility maps based on point-to-point visibility between objects in an environment. We characterize recognizability and confusability regions, that is, sensor placement regions from which an object in G or in H can be distinguished, and regions from which attainment of G or H could be confused.
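As a minimal illustration of the recognizability/confusability distinction, the following sketch classifies candidate sensor placements under a simplified model: a binary visibility sensor, two point targets standing in for G and H, and line-segment obstacles. The geometry, the `classify` rule, and all names here are illustrative assumptions, not the algorithms developed in the paper.

```python
# Toy sketch (assumed model): a placement p is "recognizable" if a
# binary visibility sensor at p yields different readings depending on
# whether the target sits at g (in G) or at h (in H); it is
# "confusable" if both cases produce the same reading (nothing seen).

def orient(a, b, c):
    """Signed area of triangle abc (positive = counterclockwise turn)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def blocks(seg, p, q):
    """True if obstacle segment seg properly intersects sight line p-q."""
    a, b = seg
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1*d2 < 0 and d3*d4 < 0

def visible(p, target, obstacles):
    return not any(blocks(s, p, target) for s in obstacles)

def classify(p, g, h, obstacles):
    """If at least one of g, h is visible from p, the sensor's reading
    differs between the two cases, so p lies in the recognizability
    region; if neither is visible, both cases read 'nothing seen' and
    p lies in the confusability region."""
    if visible(p, g, obstacles) or visible(p, h, obstacles):
        return "recognizable"
    return "confusable"

# Two vertical walls form a pocket whose interior sees neither target.
walls = [((-1, -1), (-1, 1)), ((1, -1), (1, 1))]
g, h = (-3, 0), (3, 0)
print(classify((-4, 0), g, h, walls))  # recognizable: g is visible
print(classify((0, 0), g, h, walls))   # confusable: both walls occlude
```

A fuller treatment would replace the binary detector with the paper's general sensor model and compute the boundaries of these regions exactly from partial visibility maps, rather than testing placements one point at a time.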