The modern battlespace is populated with a variety of sensors and sensing modalities. The design and tasking
of a given sensor are therefore increasingly dependent on the performance of other sensors in the mix. The
volume of sensor data is also forcing an increased reliance on sensor data exploitation and content analysis
algorithms (e.g., detecting, labeling, and tracking objects). Effective development and use of interconnected
and algorithmic (i.e., limited human role) sensing processes depend on sensor performance models (e.g., for
offline optimization over design and employment options and for online sensor management and data fusion).
Such models exist in varying forms and fidelities. This paper develops a framework for defining model roles
and describes an assessment process for quantifying fidelity and related properties of models. A key element
of the framework is the explicit treatment of Operating Conditions (OCs - i.e., the target, environment, and
sensor properties that affect exploitation performance) that are available for model development, in testing data, and to model users.
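As a minimal sketch of what one OC record might look like, the fields and groupings below are illustrative assumptions, not the paper's schema:

```python
from dataclasses import dataclass

@dataclass
class OperatingConditions:
    """Illustrative OC record: target, environment, and sensor
    properties that affect exploitation performance."""
    # Target properties (hypothetical examples)
    target_type: str             # e.g., vehicle class
    target_aspect_deg: float     # aspect angle relative to the sensor
    # Environment properties
    obscuration_fraction: float  # 0.0 (clear) to 1.0 (fully obscured)
    clutter_level: str           # e.g., "low", "medium", "high"
    # Sensor properties
    resolution_m: float          # ground sample distance
    depression_angle_deg: float
```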
The assessment methodology compares model and reference performance, but the comparison is made
non-trivial by limited reference availability for the OC distributions of interest and by differences between
reference and model OC representations. A software design for the assessment process is also described. Future papers will
report assessment results for specific models.
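To make the model-versus-reference comparison concrete, the sketch below is one assumed form of such an assessment, not the paper's implementation: the function name `assess_fidelity`, the case weights, and the stand-in `model_pd`/`reference_pd` callables (predicted and measured probability of detection) are all hypothetical.

```python
import math

def assess_fidelity(cases, model_pd, reference_pd):
    """Weighted RMS discrepancy between model-predicted and
    reference performance over a set of OC cases.

    cases: iterable of (oc, weight) pairs, where `weight` reflects
           the OC distribution of interest and `oc` is any object
           both performance functions accept.
    model_pd, reference_pd: callables mapping an OC to a
           probability of detection in [0, 1].
    """
    weighted_sq_err = 0.0
    total_weight = 0.0
    for oc, weight in cases:
        err = model_pd(oc) - reference_pd(oc)
        weighted_sq_err += weight * err * err
        total_weight += weight
    return math.sqrt(weighted_sq_err / total_weight)

# Hypothetical usage: compare a constant-Pd model against a
# reference whose performance degrades with obscuration.
cases = [({"obscuration": x}, 1.0) for x in (0.0, 0.25, 0.5)]
model = lambda oc: 0.9
reference = lambda oc: 0.9 * (1.0 - oc["obscuration"])
print(assess_fidelity(cases, model, reference))
```

Under this framing, the reference limitations noted above appear as missing or sparsely weighted cases, and OC-representation mismatches appear as `oc` objects the two callables interpret differently.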