As the Air Force pushes toward reliance on autonomous systems for navigation, situational awareness, threat analysis, and target engagement, several requisite technologies must be developed. Key among these is `trust' in the autonomous system to perform its task, a term with many application-specific definitions. We propose that properly calibrated algorithm confidence is essential to establishing trust. To achieve properly calibrated confidence, we present a framework for assessing algorithm performance and estimating the confidence of a classifier's declaration. This framework has applications to improved algorithm trust, fusion, and diagnostics. We present a metric for comparing the quality of performance modeling and examine three different implementations of performance models on a synthetic dataset over a variety of operating conditions.
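To make the notion of calibrated confidence concrete, the following is an illustrative sketch (not the paper's framework or metric) of one standard way to quantify calibration: the Expected Calibration Error (ECE), which measures the gap between a classifier's stated confidence and its empirical accuracy, averaged over confidence bins. The function name and binning scheme here are assumptions for illustration only.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted mean |accuracy - confidence| over equal-width bins.

    confidences: per-sample predicted confidence in (0, 1].
    correct: per-sample 1/0 indicating whether the declaration was right.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # empirical accuracy in this bin
            conf = confidences[mask].mean()  # mean stated confidence
            ece += mask.mean() * abs(acc - conf)
    return ece

# Toy case of perfect calibration: 80% confidence, 80% of declarations correct.
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(expected_calibration_error(conf, corr))
```

A well-calibrated classifier drives this quantity toward zero; a large value signals over- or under-confidence, which is exactly the mismatch that undermines trust in an autonomous system's declarations.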