Unconventional computer-aided imaging approaches, such as bispectrum or multi-frame blind deconvolution, have
been used for some time to obtain high-resolution images of space-based objects from the ground. This "looking up"
imaging scenario is characterized by Fried parameter (r0) values usually much smaller than the telescope aperture, and isoplanatic
angles on the same order as the object under observation. Air-to-ground imaging, on the other hand, is often
characterized by relatively large r0 values, usually about the same size as the imaging aperture, but very small
isoplanatic angles. In such a case, where the scene covers many isoplanatic patches, it has been shown that
improvements in image quality can be obtained by dividing the field of view into sub-regions and applying processing
algorithms to each sub-region independently. Because r0 is roughly the diameter of the aperture, the primary aberration over a sub-region
is tip-tilt. Block matching algorithms that correct only the local tip-tilt over the sub-regions have been
developed to run in near real time. Correcting higher-order local aberrations with a blind deconvolution algorithm
can lead to improved results but requires increased computation. In this presentation we compare air-to-ground
imagery processed at three levels: first, global tip-tilt correction only; second, local tip-tilt correction; and
finally, local tip-tilt plus higher-order correction. The goal is to determine whether the additional detail obtained at each step is
worth the increased processing complexity.
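The local tip-tilt correction described above amounts to estimating and removing a per-sub-region image shift relative to a reference. A minimal sketch of that idea, using FFT-based cross-correlation; the function names, the circular-shift model, and the integer-pixel precision are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_tip_tilt(reference, frame):
    """Estimate the local tip-tilt of `frame` relative to `reference` as a
    (row, column) pixel shift, via FFT-based cross-correlation.
    Assumes the sub-region is small enough that the aberration is well
    modeled as a pure shift (the regime where r0 ~ aperture diameter)."""
    # Cross-correlation computed in the Fourier domain.
    xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Convert the peak location to a signed shift (handle FFT wrap-around).
    shift = tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))
    return shift

def correct_tip_tilt(frame, shift):
    """Undo the estimated shift by rolling the sub-region back into place."""
    return np.roll(frame, shift, axis=(0, 1))
```

In a full pipeline, each short-exposure frame would be tiled into sub-regions, each tile registered this way against a reference and then averaged; sub-pixel refinement, apodization windows, and overlap blending between tiles are omitted here for brevity.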