Last year at this conference we described initial results in the practical implementation of a unified, scientific approach to performance measurement for data fusion algorithms. The proposed approach is based on 'finite-set statistics' (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that 'information' can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what 'informative' means. In last year's paper we described scientific performance evaluation for Level 1 data fusion. In this follow-on paper we describe a generalization of the FISST approach to Level 4 data fusion, specifically sensor management. Our Level 4 measures of effectiveness (MoEs) are based on the fact that sensor management is a support function: its purpose is to redirect collection assets in order to improve the input data to, and therefore the output performance of, a Level 1 fusion algorithm. Accordingly, our basic MoE is 'excess information'. By using a sensor scheduler to simulate various sensor management algorithms, we established the effectiveness and intuitiveness of two different sensor management MoEs: the multitarget Kullback-Leibler information metric and the Hausdorff multitarget miss-distance metric.
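
To make the second MoE concrete: the Hausdorff miss-distance compares two finite point sets, such as the true target positions and a tracker's estimated positions, by taking the worst-case nearest-neighbor distance in either direction. The sketch below is illustrative only and is not taken from the paper; it assumes a Euclidean ground distance on planar positions, and the set names `truth` and `estimates` are hypothetical.

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite, nonempty point
    sets in the plane, each given as a list of (x, y) tuples."""
    def directed(X, Y):
        # For each point of X, find the distance to its nearest
        # neighbor in Y, then take the worst (largest) such distance.
        return max(min(math.dist(x, y) for y in Y) for x in X)
    # The symmetric distance is the larger of the two directed distances.
    return max(directed(A, B), directed(B, A))

# Hypothetical example: ground-truth target positions vs. estimates.
truth = [(0.0, 0.0), (10.0, 0.0)]
estimates = [(0.5, 0.0), (10.0, 1.0)]
print(hausdorff(truth, estimates))  # -> 1.0
```

A small miss-distance means every true target has a nearby estimate and every estimate corresponds to a nearby true target, which is why the metric penalizes both missed targets and spurious tracks.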