Recently, a method referred to as aperture partitioning was proposed to improve imagery of space objects collected through strong turbulence. In these situations the ratio of the aperture diameter to the atmospheric coherence diameter is large, but the imagery spans only a single isoplanatic patch. In that regime, aperture partitioning has been shown to improve reconstruction quality in speckle imaging by reducing redundancy among imaging baselines. In this work, we explore the use of aperture partitioning in horizontal imaging scenarios, where the ratio of aperture to coherence cell size is small but the imagery is highly anisoplanatic.
Telescope images of astronomical objects and man-made satellites are frequently characterized by high dynamic range
and low SNR. We consider the problem of how to enhance these images, with the aim of making them visually useful
rather than radiometrically accurate. Standard contrast and histogram adjustment tends to strongly amplify noise in dark
regions of the image. Sophisticated techniques have been developed to address this problem in the context of natural
scenes. However, these techniques often misbehave when confronted with low-SNR scenes that are also mostly empty
space. We compare two classes of algorithms: contrast-limited adaptive histogram equalization, which achieves spatial
localization via a tiling of the image, and gradient-domain techniques, which perform localized contrast adjustment by
non-linearly remapping the gradient of the image in a content-dependent manner. We extend these to include a priori
knowledge of SNR and the processing (e.g., deconvolution) that was applied in the preparation of the image. The
methods will be illustrated with images of satellites from a ground-based telescope.
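To make the first class of algorithms concrete, a stripped-down tiled, contrast-limited histogram equalization can be sketched in a few lines of NumPy. This is an illustrative simplification, not the implementation evaluated above: the tile count and clip limit are arbitrary choices, and full CLAHE's inter-tile interpolation is omitted.

```python
import numpy as np

def clahe_lite(img, tiles=4, clip=0.02, nbins=256):
    """Per-tile contrast-limited histogram equalization (simplified:
    no inter-tile interpolation, unlike full CLAHE)."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-12)          # map to [0, 1]
    out = np.empty_like(norm)
    h, w = norm.shape
    for ty in np.array_split(np.arange(h), tiles):
        for tx in np.array_split(np.arange(w), tiles):
            tile = norm[np.ix_(ty, tx)]
            hist, edges = np.histogram(tile, bins=nbins, range=(0, 1))
            hist = hist / hist.sum()
            # Clip the histogram and redistribute the excess uniformly;
            # this is what bounds noise amplification in flat regions.
            excess = np.clip(hist - clip, 0, None)
            hist = np.minimum(hist, clip) + excess.sum() / nbins
            cdf = np.cumsum(hist)
            out[np.ix_(ty, tx)] = np.interp(tile, edges[:-1], cdf)
    return out
```

The clip limit directly trades local contrast against noise gain in dark, empty regions, which is exactly the failure mode described above.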
Many imaging modalities measure the magnitudes of the Fourier components of an object. Reconstructing an image from such data is especially challenging when the data are also noisy and sparse, as may occur in some forms of intensity interferometry, Fourier telescopy, and speckle imaging. In such measurements the Fourier magnitudes must be positive and, under the usual normalization in which the magnitude is one at zero spatial frequency, must also be less than one. The Cramér-Rao formalism is applied to single Fourier-magnitude measurements to ascertain whether these constraints permit a reduction in variance. An extension of the Cramér-Rao formalism is used to address the value of relatively general prior information. The impact of this knowledge is also shown for simulated image formation for a simple disk, with varying measurement SNR and sampling in the (u,v) plane.
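The intuition behind the constraint analysis can be checked with a quick Monte Carlo experiment (illustrative only, not the Cramér-Rao derivation above): since a normalized Fourier magnitude must lie in [0, 1], projecting a noisy measurement onto that interval can only move it closer to the truth, so the mean-square error cannot increase. The true magnitude and noise level below are arbitrary low-SNR choices.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mag = 0.05                     # hypothetical weak Fourier magnitude
sigma = 0.1                        # additive measurement noise (SNR = 0.5)

raw = true_mag + sigma * rng.standard_normal(200_000)
clipped = np.clip(raw, 0.0, 1.0)   # enforce the [0, 1] magnitude constraint

mse_raw = np.mean((raw - true_mag) ** 2)
mse_clip = np.mean((clipped - true_mag) ** 2)
```

The gain is largest when the true magnitude is near a constraint boundary and the SNR is low, consistent with where prior information matters most.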
Speckle imaging techniques make it possible to perform high-resolution imaging through the turbulent atmosphere by collecting
and processing a large number of short-exposure frames, each of which effectively freezes the atmosphere. In severe seeing
conditions, when the characteristic scale of atmospheric fluctuations is much smaller than the diameter of the telescope,
the reconstructed image is dominated by "turbulence noise" caused by redundant baselines in the pupil. I describe a
generalization of aperture masking interferometry that dramatically improves imaging performance in this regime. The
approach is to partition the aperture into annuli, form the bispectra of the focal plane images formed from each annulus,
and recombine them into a synthesized bispectrum from which the object may be retrieved. This may be implemented
using multiple cameras and special mirrors, or with a single camera and a suitable pupil phase mask. I report results from
simulations as well as experimental results using telescopes at the Air Force Research Lab's Maui Space Surveillance Site.
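The quantity being recombined here is the bispectrum, the triple product B(u1, u2) = F(u1) F(u2) F*(u1 + u2). Its key property, sketched below for a 1-D frame, is shift invariance: a translation multiplies F by a linear phase ramp that cancels in the triple product, which is why frame-averaged bispectra retain object phase despite random image motion. The frame size and frequency indices are arbitrary illustrative choices.

```python
import numpy as np

def bispectrum(frame, u1, u2):
    """B(u1, u2) = F(u1) F(u2) F*(u1 + u2) for a 1-D frame."""
    F = np.fft.fft(frame)
    return F[u1] * F[u2] * np.conj(F[u1 + u2])

rng = np.random.default_rng(2)
obj = rng.random(64)
shifted = np.roll(obj, 5)          # same object, translated

b1 = bispectrum(obj, 3, 7)
b2 = bispectrum(shifted, 3, 7)     # identical: the phase ramps cancel
```

In the partitioned scheme, each annulus contributes bispectrum samples over its own set of baselines, and the synthesized bispectrum is assembled from these less-redundant pieces before phase retrieval.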
An image reconstruction approach is developed that makes joint use of image sequences produced by a conventional imaging channel and a Shack-Hartmann (lenslet) channel. Iterative maximization techniques are used to determine the reconstructed object that is most consistent with both the conventional and Shack-Hartmann raw pixel-level data. The algorithm is analogous to phase diversity, but with the wavefront diversity provided by a lenslet array rather than a simple defocus. The log-likelihood cost function is matched to the Poisson statistics of the signal and Gaussian statistics of the detector noise. Addition of a cost term that encourages the estimated object to agree with a priori knowledge of an ensemble-averaged power spectrum regularizes the reconstruction. Techniques for modeling FPA sampling are developed that are convenient for performing both the forward simulation and the gradient calculations needed for the iterative maximization. The model is computationally efficient and accurately addresses all aspects of the Shack-Hartmann sensor, including subaperture cross-talk, FPA aliasing, and geometries in which the number of pixels across a subaperture is not an integer. The performance of this approach is compared with multi-frame blind deconvolution and phase diversity using simulations of image sequences produced by the visible band GEMINI sensor on the AMOS 1.6-meter telescope. It is demonstrated that wavefront information provided by the second channel improves image reconstruction by avoiding the wavefront ambiguities associated with multi-frame blind deconvolution and, to a lesser degree, phase diversity.
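The structure of such a two-channel joint estimate can be illustrated with a heavily simplified 1-D toy: a quadratic (Gaussian-noise) data misfit summed over both channels plus a quadratic prior, minimized by gradient descent. This is only a structural sketch; the shifted PSF standing in for the lenslet channel, the noiseless data, and the prior weight are all assumptions, and the actual cost above is matched to mixed Poisson/Gaussian statistics.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
obj = np.zeros(n); obj[20] = 1.0; obj[40] = 0.5
x = np.arange(n)

# Two channels observing the same object through different PSFs.
k1 = np.exp(-0.5 * (x - n // 2) ** 2 / 4.0); k1 /= k1.sum()
k2 = np.roll(k1, 3)                # stand-in for a diversity channel
H1 = np.fft.fft(np.fft.ifftshift(k1))
H2 = np.fft.fft(np.fft.ifftshift(k2))
O = np.fft.fft(obj)
d1 = np.fft.ifft(H1 * O).real      # conventional-channel data
d2 = np.fft.ifft(H2 * O).real      # second-channel data

est = np.zeros(n)
lam = 1e-3                         # quadratic prior weight (assumed)
for _ in range(500):
    E = np.fft.fft(est)
    r1 = np.fft.ifft(H1 * E).real - d1
    r2 = np.fft.ifft(H2 * E).real - d2
    # Gradient of the summed data misfits plus the prior term.
    grad = (np.fft.ifft(np.conj(H1) * np.fft.fft(r1)).real
            + np.fft.ifft(np.conj(H2) * np.fft.fft(r2)).real
            + lam * est)
    est = est - 0.5 * grad
```

Both channels constrain the same object estimate, which is the mechanism by which the second channel removes ambiguities that a single blurred sequence cannot resolve.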
We explore the problem of reconstructing a 3-D model of a convex object from unresolved time-series photometric measurements (i.e., lightcurves). The problem is broken into three steps. First, the lightcurves are used to recover the albedo-area density of the object as a function of the surface normal. The ill-posedness of this inversion is considered and a suitable regularization scheme proposed. Second, the albedo and area contributions are separated using either transits or additional measurements at different wavelengths. Finally, the Minkowski problem is solved to produce the 3-D shape corresponding to the area density.
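The first step can be sketched as a discrete, regularized non-negative least-squares problem: with the surface-normal sphere discretized into candidate facets, each brightness sample is linear in the non-negative albedo-area density. Everything below is illustrative; the random geometry matrix stands in for an actual visibility/foreshortening kernel, and the Tikhonov weight is an arbitrary choice standing in for the regularization scheme discussed above.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_obs, n_facets = 40, 10

# Hypothetical geometry matrix: entry (i, j) is the foreshortened,
# illuminated contribution of facet j to brightness sample i.
G = np.clip(rng.standard_normal((n_obs, n_facets)), 0, None)
f_true = np.zeros(n_facets); f_true[[2, 7]] = [1.0, 0.6]
b = G @ f_true                     # noiseless lightcurve samples

# Tikhonov-regularized NNLS: append sqrt(lam)*I rows to the system.
lam = 1e-3
G_aug = np.vstack([G, np.sqrt(lam) * np.eye(n_facets)])
b_aug = np.concatenate([b, np.zeros(n_facets)])
f_est, _ = nnls(G_aug, b_aug)
```

The non-negativity of the albedo-area density is what makes NNLS (rather than plain least squares) the natural solver, and the regularization term addresses the ill-posedness noted above.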
Atmospheric turbulence effects greatly reduce the resolution that can be obtained by systems that must form images through the atmosphere. Postdetection image reconstruction techniques, such as speckle imaging and deconvolution from wavefront sensing, provide a means of overcoming some of these effects by postprocessing sets of short-exposure image measurements. Previous work has shown that using image quality metrics to select the best subset of an ensemble of measured images to process can yield better results than processing all the measured data. In this paper we extend this idea to select a subset of frames using metrics derived from wavefront sensor (WFS) measurements made simultaneously with the image measurements. This approach to using WFS data may reduce the amount of data that must be saved, or may automate the process of sifting the data for the best subsets to process. Our results indicate that the WFS-based metrics are consistent with the image-quality-based metrics.
Image restoration algorithms compensate for blur-induced attenuation of the frequency components that correspond to fine-scale image features. However, for Fourier spatial-frequency components with low signal-to-noise ratio (SNR), noise amplification outweighs the benefit of compensation, and regularization methods are required. This paper investigates a generalization of the Wiener filter approach, developed as a maximum a posteriori estimator based on statistical expectations of the object power spectrum. The estimate is also required to agree with physical properties of the system, specifically object positivity and Poisson noise statistics. These additional requirements preclude a closed-form expression; instead, the solution is determined by an iterative approach. Incorporation of the additional constraints results in significant improvement in the mean-square error and in visual interpretability. Equally important, it is shown that the performance has weak sensitivity to the weight of the prior over a large range of SNR values, blur strengths, and object morphologies, greatly facilitating practical use in an operational environment.
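The shape of such an iterative scheme can be sketched in 1-D: start from the closed-form Wiener estimate, then take projected gradient steps that enforce positivity at every iteration. This sketch uses a quadratic (Gaussian) data term as a stand-in for the Poisson statistics above, and the SNR weight, step size, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def map_deconv_positive(d, h, snr2=100.0, iters=200, step=0.4):
    """Wiener-initialized deconvolution with positivity enforced by
    projected gradient descent on a quadratic MAP cost (Gaussian
    stand-in for the Poisson likelihood)."""
    H = np.fft.fft(np.fft.ifftshift(h))
    D = np.fft.fft(d)
    # Closed-form Wiener/MAP estimate (no positivity yet).
    wiener = np.fft.ifft(np.conj(H) * D / (np.abs(H) ** 2 + 1.0 / snr2)).real
    est = np.clip(wiener, 0, None)
    for _ in range(iters):
        r = np.fft.ifft(H * np.fft.fft(est)).real - d
        grad = np.fft.ifft(np.conj(H) * np.fft.fft(r)).real + est / snr2
        est = np.clip(est - step * grad, 0, None)   # positivity projection
    return est
```

Because the projection suppresses the negative ringing of the unconstrained Wiener solution, the positive constraint buys error reduction beyond what the prior weight alone provides, consistent with the weak sensitivity to that weight reported above.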
It is well known that positivity constraints improve the performance of image reconstruction procedures such as deconvolution. However, their impact on the recovered image is more difficult to characterize than linear constraints such as support. For the problem of deconvolution in the presence of additive Gaussian noise, we derive an approximation to the bias and variance of the maximum likelihood estimator and compare the improvement in mean-square error due to positivity with the gain derived from support constraints. Then we propose a generalized Bayes estimator and demonstrate that it has lower mean-square error in most cases than the maximum likelihood estimator. The degree to which it outperforms maximum likelihood is especially dramatic when SNR is low or blurring is strong.
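A scalar instance shows the kind of bias expression involved. For data y ~ N(x, σ²) with x ≥ 0, the positivity-projected estimate max(y, 0) has the closed-form bias E[max(y, 0)] − x = σφ(x/σ) − x(1 − Φ(x/σ)), which a Monte Carlo run confirms; the particular x and σ below are arbitrary illustrative values, and this is a one-pixel analogue of, not the derivation in, the analysis above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x, sigma = 0.5, 1.0
t = x / sigma

# Closed-form bias of the positivity-projected estimate max(y, 0).
bias_theory = sigma * norm.pdf(t) - x * (1.0 - norm.cdf(t))

# Monte Carlo estimate of the same bias.
y = x + sigma * rng.standard_normal(1_000_000)
bias_mc = np.mean(np.clip(y, 0, None)) - x
```

The bias is largest when x is small relative to σ, i.e., exactly the low-SNR regime where the positivity constraint changes the estimator most.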
Ideally, phase diversity determines the object and wavefront that are consistent with two images taken identically except that the wavefront of the diversity channel is perturbed by a known additive aberration. In practice, other differences may occur, such as image rotation, magnification, changes in detector response, and non-common image motion. This paper develops a mathematical forward model for addressing magnification changes and a corresponding maximum-likelihood implementation of phase diversity. Performance using this physically correct forward model is compared with the simpler approach of resampling the data of the diversity channel.
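A minimal 1-D sketch of such a forward model, under assumed conditions (a known magnification α, linear interpolation for the resampling, and blur applied before magnification), looks like the following; the physical ordering matters because it places the resampling inside the model rather than applying it to the measured data afterward.

```python
import numpy as np

def forward(obj, h, alpha):
    """Forward model for a magnified channel: blur the object, then
    resample the blurred image onto coordinates scaled about the
    center by the (assumed known) magnification alpha."""
    H = np.fft.fft(np.fft.ifftshift(h))
    blurred = np.fft.ifft(H * np.fft.fft(obj)).real
    n = obj.size
    x = np.arange(n)
    xc = n // 2
    # Magnification by alpha: sample the blurred image at demagnified
    # coordinates, via linear interpolation.
    return np.interp((x - xc) / alpha + xc, x, blurred)
```

In a maximum-likelihood fit, this model (with α either known or estimated jointly) replaces the nominal diversity-channel model, so the data themselves are never resampled.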