Line-of-sight jitter in staring sensor data, coupled with scene content, can obscure information that is critical for change
analysis or target detection. Consequently, the jitter effects must be significantly reduced before the data are analyzed.
Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation;
however, PCA requires image frames that contain the jitter variation to be modeled. Because jitter is usually chaotic
and asymmetric, a data set containing all of that variation, yet free of the changes to be detected, is typically not available.
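
This limitation can be seen in a minimal sketch of the conventional PCA approach, assuming the frames are registered and stored as a NumPy array; the rank and variable names here are illustrative, not taken from any particular implementation.

    import numpy as np

    def pca_background(frames, n_components=5):
        """Estimate per-frame backgrounds from the leading principal
        components of a frame stack (shape: n_frames x rows x cols).
        The components span only the jitter actually present in
        `frames`, which is the limitation noted above."""
        n, rows, cols = frames.shape
        X = frames.reshape(n, rows * cols).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        # SVD of the centered stack; rows of Vt are the basis vectors.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:n_components]               # (k, pixels) basis
        scores = Xc @ V.T                   # project each frame onto the basis
        background = mean + scores @ V      # rank-k background estimate
        return background.reshape(n, rows, cols)
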
An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene and then computes the derivatives of that
image in the horizontal and vertical directions. The basis set for estimating the background and the jitter consists of
the image and its derivative factors. This approach has several advantages: 1) only a small number of images
are required to develop the model, 2) the model can estimate backgrounds with jitter different from the input training
images, 3) the method is particularly effective for sub-pixel jitter, and 4) the model can be developed from images acquired before
the change detection process begins. In addition, the scores from projecting the factors onto the background provide estimates of
the jitter magnitude and direction for registration of the images. In this paper we present the
theoretical basis for this technique, provide examples of its application, and discuss its limitations.
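
As an illustration of the derivative-factor model described above, the following is a minimal sketch assuming sub-pixel jitter, so that a jittered frame is well approximated by a first-order Taylor expansion of the reference scene; the function and variable names are illustrative.

    import numpy as np

    def fit_jitter(frame, reference):
        """Model a jittered frame as a*I + dx*dI/dx + dy*dI/dy,
        a first-order Taylor expansion valid for sub-pixel shifts.
        Returns the background estimate and the (dx, dy) jitter scores."""
        ref = reference.astype(float)
        gy, gx = np.gradient(ref)           # vertical, horizontal derivatives
        # Basis set: the scene image and its two derivative factors.
        A = np.column_stack([ref.ravel(), gx.ravel(), gy.ravel()])
        coef, *_ = np.linalg.lstsq(A, frame.ravel(), rcond=None)
        a, dx, dy = coef
        background = (A @ coef).reshape(frame.shape)
        return background, (dx, dy)

Under this reading, the (dx, dy) scores give the jitter magnitude and direction used to register the frames, and the residual frame - background is what a change-detection or target-detection algorithm would then examine.
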
We examine the effects of wavelet compression on target detection algorithms when the targets are single-pixel point
sources modulated by the point-spread function of an optical system. The experimental data combine frames collected from a
multispectral sensor with simulated targets based on an Airy function. We studied several types of wavelets
and found that the Daubechies 2 wavelet produced the best overall target detection and the fewest false alarms as
compression increased. Results show that wavelet compression may decrease pixel intensities even while increasing the
apparent target signal-to-noise ratio and reducing false detections; consequently, it may degrade target detection unless the detector is
designed to take the decreased intensity into account.
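
A minimal sketch of this kind of experiment follows, assuming the PyWavelets package for the Daubechies 2 ('db2') transform and SciPy's Bessel function for the Airy pattern; the compression level, target size, and parameter names are illustrative.

    import numpy as np
    import pywt
    from scipy.special import j1

    def airy_target(size=15, first_zero=3.0, amplitude=100.0):
        """Airy diffraction pattern (2*J1(r)/r)**2 with its first dark
        ring placed `first_zero` pixels from the peak."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        r = np.hypot(x, y) * (3.8317 / first_zero)  # 3.8317: first zero of J1
        r[half, half] = 1e-12                       # avoid 0/0 at the peak
        return amplitude * (2.0 * j1(r) / r) ** 2

    def compress_db2(image, keep=0.05):
        """Lossy compression: db2 decomposition, retain only the largest
        `keep` fraction of coefficients, then reconstruct."""
        coeffs = pywt.wavedec2(image, 'db2', level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr[np.abs(arr) < thresh] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
        rec = pywt.waverec2(coeffs, 'db2')
        return rec[:image.shape[0], :image.shape[1]]

Adding airy_target patches to real frames, compressing at increasing ratios, and comparing detections before and after is the style of experiment summarized above; comparing the peak of a compressed target against the original shows directly how compression decreases pixel intensities.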