Tactical battlefield surveillance systems will require the transmission of compressed video to make efficient use of their limited communication bandwidth and data capacity. Any compression technique used will result in some loss of information, so it is important to assess the quality of the output video to determine its performance in aided target recognition applications. The traditional rate–distortion formula is shown by Mallat [S. Mallat, "Understanding wavelet image compression," Proc. SPIE Wavelet Apps. IV 3078, 74–93 (April 1997); "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Mach. Intell. 11, 674–693 (1989)] to be inappropriate for wavelet compression at high compression ratios. The reason is that at very low bit rates the histogram changes from a full gray-scale distribution to a singular concentration near the origin, so that the discrete approximation of the histogram's density function is no longer valid. Thus the distortion due to wavelet compression cannot be predicted theoretically. We therefore conduct an empirical investigation to evaluate the spatial and temporal effects of lossy wavelet compression and reconstruction on tactical infrared video. We quantify localized peak signal-to-noise ratio and feature persistence measurements, and we apply objective assessment techniques developed by the Institute for Telecommunication Sciences, U.S. Department of Commerce, to assess video impairment based on quality measurements. We thus measure video degradation rather than absolute video quality, which is difficult to quantify.
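A localized PSNR of the kind described above can be sketched as a per-block comparison between a reference frame and its compressed-and-reconstructed version. The following is a minimal illustration only, assuming 8-bit gray-scale frames held as NumPy arrays; the function name, block size, and MSE floor are illustrative assumptions, not the measurement procedure used in this work.

```python
import numpy as np

def local_psnr(reference, degraded, block=16, peak=255.0):
    """Illustrative per-block PSNR map (dB) between a reference frame
    and its reconstructed counterpart. Frames are 2-D arrays whose
    dimensions are assumed to be multiples of `block`."""
    ref = reference.astype(np.float64)
    deg = degraded.astype(np.float64)
    rows, cols = ref.shape[0] // block, ref.shape[1] // block
    psnr_map = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            r = ref[i * block:(i + 1) * block, j * block:(j + 1) * block]
            d = deg[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mse = np.mean((r - d) ** 2)
            # Identical blocks would give infinite PSNR; a tiny floor
            # keeps the map finite for plotting or thresholding.
            psnr_map[i, j] = 10.0 * np.log10(peak ** 2 / max(mse, 1e-12))
    return psnr_map
```

Mapping PSNR per block, rather than over the whole frame, exposes spatially localized compression artifacts (e.g., around small targets) that a single frame-wide figure would average away.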