Tactical battlefield surveillance systems will require the transmission of compressed video imagery to make efficient use of their limited communication bandwidth and data capacity. The compression techniques used will result in some loss of information, so it is important to assess the quality of the video output to determine its suitability for Aided Target Recognition applications. Mallat has shown the traditional rate-distortion formula to be inappropriate for wavelet compression at high compression ratios: at very low bit rates the histogram changes from spanning all gray scales to a concentrated singularity near the origin, so the discrete approximation of the histogram's density function is no longer valid. Thus we cannot theoretically predict the distortion due to wavelet compression. We therefore conduct an empirical investigation of the spatial and temporal effects of lossy wavelet compression and reconstruction on tactical IR video. We quantify the temporal ensemble of the local-variation curves obtained by comparing each transmitted video frame to the corresponding original frame, using the local peak signal-to-noise ratio and feature persistence measures developed by Szu et al. and the objective assessment techniques developed by the Institute for Telecommunication Sciences, US Department of Commerce, to assess video impairment. We thus measure video degradation rather than absolute video quality, which is difficult to quantify. We also present the comparison results in a movie using a split-screen presentation of the original and reconstructed video frames and their corresponding metric performance, to enhance visual inspection of motion cues and edge-texture maps.
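The local peak signal-to-noise ratio described above can be sketched as a blockwise comparison between an original and a reconstructed frame. This is a minimal illustration only: the block size, non-overlapping tiling, and 8-bit peak value are assumptions, not the specific windowing used by Szu et al., and the aggregation into a per-frame worst-case value is just one way to build the temporal ensemble.

```python
import numpy as np

def local_psnr(original, reconstructed, block=16, peak=255.0):
    """Blockwise PSNR map between two grayscale frames of equal shape.

    Returns a 2-D array with one PSNR value (in dB) per non-overlapping
    block; assumed block size and peak value are illustrative choices.
    """
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    h, w = original.shape
    rows, cols = h // block, w // block
    psnr_map = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            o = original[i * block:(i + 1) * block, j * block:(j + 1) * block]
            r = reconstructed[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mse = np.mean((o - r) ** 2)
            # A perfectly reconstructed block has zero error, hence infinite PSNR.
            psnr_map[i, j] = np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    return psnr_map

# Tracking a summary statistic (here the worst local PSNR) frame by frame
# gives one curve per frame; the collection over the sequence forms a
# temporal ensemble of local-variation measurements.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64)).astype(np.float64)
degraded = np.clip(frame + rng.normal(0, 2.0, frame.shape), 0, 255)
worst_local_psnr = local_psnr(frame, degraded, block=16).min()
```

Because each block is scored independently, spatially localized compression artifacts depress only the affected blocks rather than being averaged away in a whole-frame PSNR.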