A problem of blind deconvolution arises when attempting to restore a short-exposure image that has been degraded by random atmospheric turbulence. We attack the problem by using two short-exposure images as data inputs. The Fourier transform of each is taken, and the two are divided. The unknown object spectrum cancels. What remains is the quotient of the two unknown transfer functions that formed the images. These are expressed, via the sampling theorem, as Fourier series in the corresponding PSFs, the unknowns of the problem. Cross-multiplying the division equation gives an equation that is linear in the unknowns. However, the problem is rank deficient in the absence of prior knowledge. We use the prior knowledge that the object and the PSFs have finite support extensions, and also are positive. The linear problem is least-squares solved many times over, assuming different support values and enforcing positivity. The two support values that minimize the rms image-data inconsistency define the final solution. This regularizes the solution in the presence of 4-15 percent additive noise of detection.
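The key algebraic step, the cancellation of the object spectrum and the resulting cross-multiplied relation that is linear in the PSFs, can be verified numerically. This is only a sketch of that identity under a circular-convolution imaging model (a 1-D toy example, not the paper's full least-squares solver; all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
obj = rng.random(n)                      # unknown object (1-D for simplicity)
h1, h2 = rng.random(8), rng.random(8)    # two unknown short-exposure PSFs

# form the two image spectra: G_k = O * H_k (circular convolution model)
O = np.fft.fft(obj)
H1, H2 = np.fft.fft(h1, n), np.fft.fft(h2, n)
G1, G2 = O * H1, O * H2

# dividing the two spectra cancels O; cross-multiplying the division
# equation gives G1*H2 = G2*H1, which is linear in the (sampled) PSFs
residual = np.max(np.abs(G1 * H2 - G2 * H1))
print(residual)   # near machine precision
```

In the paper this linear relation is least-squares solved for the PSF samples under positivity and trial support constraints; the sketch above only demonstrates why the object drops out of the data equation.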
The use of maximum entropy image restoration has, heretofore, required recursive, or iterative, search routines. This paper, by contrast, describes a one-pass, closed-form maximum entropy algorithm. The approach derives from minimization of a Log-L2 error norm between object and reconstruction. The resulting output has the form of the exponential of a Wiener-type filtering of the image data. The logarithmic nature of the norm gives rise to a tendency toward constant relative error across the output field.
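The structure of the one-pass estimate, a Wiener-type filtering of the data followed by a pointwise exponential, can be sketched as follows. The exact filter constants and norm weighting used in the paper are not reproduced here; `maxent_closed_form`, `otf`, and `snr` are illustrative names under a simple frequency-domain degradation model:

```python
import numpy as np

def maxent_closed_form(image, otf, snr):
    # Wiener-type filter: regularized inverse of the optical transfer function
    W = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
    filtered = np.fft.ifft2(W * np.fft.fft2(image)).real
    # exponentiating the Wiener-filtered data yields a strictly positive
    # estimate, the hallmark of the maximum entropy form
    return np.exp(filtered)

out = maxent_closed_form(np.zeros((8, 8)), np.ones((8, 8)), 100.0)
```

Note how positivity of the output is automatic, no iterative search is needed, and the logarithmic norm reappears as the exponential in the final step.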
The nonlinear wave equation can be derived from a principle of extreme physical information (EPI) K. This is for a scenario where a probe electron moves through a medium in a weak magnetic field. The field is caused by a probabilistic line current source. Assume that the probability current density S of the electron is approximately constant, and directed parallel to the current source. Both the source probability amplitudes ρ and the electron probability amplitudes φ are unknowns (called 'modes') of the problem. The net physical information K here consists of two components: functional K1[φ] due to modes φ and K2[ρ] due to modes ρ, respectively. To form K1[φ], the Fisher information functional I1[φ] for the electron modes is first constructed. This is of a fixed mathematical form. Then, a unitary transformation on φ to a physical space is sought that leaves I1 invariant, as form J1. This is, of course, the Fourier transformation, where the transform coordinates are momenta and I1 is essentially the mean-square electron momentum. Information K1[φ] is then defined as (I1 - J1). Information K2 is formed similarly. The total information K is formed as the sum of the two components K1[φ] and K2[ρ], by the additivity of Fisher information, and is then extremized in both φ and ρ. Extremizing first in ρ gives a Taylor series in powers of φn*φn, which is cut off at the quadratic term. Back-substituting this into the total Lagrangian gives one that is quadratic in φn*φn. Now varying φ* gives the required cubic wave equation in φ.
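The information bookkeeping described above can be summarized schematically (constants, coordinates, and normalizations are omitted here; the precise forms are those fixed in the paper):

```latex
I_1[\phi] = 4\int dx \sum_n |\nabla \phi_n|^2
\;\xrightarrow{\ \text{unitary (Fourier)}\ }\;
J_1 \propto \langle \mu^2 \rangle ,
\qquad K_1 = I_1 - J_1 ,
```
```latex
K = K_1[\phi] + K_2[\rho] , \qquad
\delta K = 0 \ \text{in both } \phi \text{ and } \rho .
```

Extremizing first in ρ, truncating at the quadratic term, and back-substituting leaves a Lagrangian quadratic in φn*φn, whose variation in φ* produces the cubic nonlinearity.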
Consider the evolution of a temporal signal X(t) that is an intrinsic random field. In the sense of a certain measurement-estimation experiment, the state of disorder of X(t) should increase toward an equilibrium state. The disorder of X(t) is measured by its `physical information' I, and the equilibrium state is determined by the condition that I be an extremum. The equilibrium state is shown to have a power spectrum S(ω) of the form ω^(-α), 1 ≤ α ≤ 2, that of 1/f noise.
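A signal with the equilibrium spectrum S(ω) ~ ω^(-α), 1 ≤ α ≤ 2, can be synthesized by a standard spectral-shaping recipe. This is not the paper's derivation, only a sketch of what such a process looks like (function and parameter names are illustrative):

```python
import numpy as np

def one_over_f_noise(n, alpha, seed=0):
    """Shape white Gaussian noise so its power spectrum falls as f**(-alpha),
    1 <= alpha <= 2 (alpha = 1 is classic 1/f 'flicker' noise)."""
    rng = np.random.default_rng(seed)
    white = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                           # avoid dividing by zero at DC
    shaped = white * f ** (-alpha / 2.0)  # amplitude ~ f^(-a/2) => power ~ f^(-a)
    return np.fft.irfft(shaped, n)

x = one_over_f_noise(4096, 1.0)   # sample path with a 1/f-type spectrum
```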
An information divergence, such as Shannon mutual information, measures the `distance' between two probability density functions (or images). A wide class of such measures, called α-divergences, with desirable properties such as convexity over all space, has been defined by Amari. Renyi's information D(α) is an α-divergence. Because of its convexity property, minimization of D(α) is easily attained. Minimization accomplishes minimum distance (maximum resemblance) between an unknown image and a known, reference image. Such a biasing effect permits complex images, such as occur in ISAR imaging, to be well reconstructed. There, the bias image may be constructed as a smooth version of the linear Fourier reconstruction of the data. Examples on simulated complex image data, with and without noise, indicate that the Renyi reconstruction approach permits super-resolution in low-noise cases, and higher fidelity over ordinary, linear reconstructions in higher-noise cases.
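For concreteness, the Renyi divergence between two discrete densities has the standard closed form below. This is a generic sketch of the measure itself, not the paper's reconstruction algorithm, and the paper's α-divergence normalization may differ by constants:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(p || q) between two discrete densities;
    it tends to the Kullback-Leibler divergence as alpha -> 1, and is
    zero exactly when p and q coincide."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

# identical densities have zero 'distance'; the divergence grows
# as an unknown image departs from the reference (bias) image
print(renyi_divergence([1, 1, 2], [1, 1, 2], 0.5))  # 0.0
```

In the reconstruction setting, `q` plays the role of the smooth bias image and `p` the candidate reconstruction; minimizing D(α) pulls the estimate toward the reference.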
Proc. SPIE 2029, Digital Image Recovery and Synthesis II
KEYWORDS: Signal to noise ratio, Point spread functions, Digital image processing, Image processing, Image restoration, Fourier transforms, Digital imaging, Turbulence, Atmospheric turbulence, Stochastic processes
Consider a scenario of imaging through atmospheric turbulence. A digital approach for processing out the turbulence from degraded images is under development. In contrast to past approaches, only two short-exposure images are needed as inputs. Here we test the benefits to be gained by inserting object power spectrum information into the algorithm.
Familiar information measures, such as those due to Shannon, Kullback, etc., may be related to one basic information measure called the characteristic information I. The latter is the trace of the Fisher information matrix. The relation is a Poisson equation with I as the driving force. Thus, for small-uncertainty cases, given I, all these other information measures can be generated as solutions to a Poisson equation. If one of the parameters of the system is time-like, the Poisson equation becomes a wave equation, and the information may be said to `flow.'
The concept of Fisher information I is introduced. Smoothness properties of I, and its relation to entropy, disorder, and uncertainty, are explored. Information I is generalized to N-component problems, and is expressed in both direct and Fourier spaces. Applications to ISAR radar imaging and to the derivation of physical laws are discussed.
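The basic single-component, direct-space form of Fisher information, I = ∫ (p')²/p dx, is easy to evaluate numerically. A minimal sketch, checked against the known exact value 1/σ² for a Gaussian of standard deviation σ (a textbook fact, not a result of this paper):

```python
import numpy as np

def fisher_information(p, dx):
    """Fisher information I = integral of (p')**2 / p for a sampled density."""
    dp = np.gradient(p, dx)          # central-difference derivative p'
    return np.sum(dp ** 2 / p) * dx  # Riemann-sum approximation of the integral

# Gaussian test case: I should equal 1/sigma**2
sigma = 2.0
x = np.linspace(-20, 20, 4001)
p = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
I_num = fisher_information(p, x[1] - x[0])
print(I_num)   # close to 0.25 = 1/sigma**2
```

A broad (smooth, high-entropy) density gives small I and a narrow (sharply structured) one gives large I, which is the sense in which I measures disorder and uncertainty.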
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must first be accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
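The burst derivative measure itself is not defined in this abstract; for context, here is a sketch of the traditional entropy focus measure it is benchmarked against, in a common normalized-image-power form (the exact normalization used in ISAR autofocus implementations is an assumption here):

```python
import numpy as np

def image_entropy(img):
    """Entropy focus measure: treat normalized pixel power as a density
    and compute its Shannon entropy. Lower entropy means the energy is
    more concentrated, i.e. a better-focused image."""
    power = np.abs(img) ** 2
    p = power / power.sum()
    p = p[p > 0]                  # 0 * log(0) contributes nothing
    return -np.sum(p * np.log(p))

# a single bright point (well focused) scores lower than a smeared image
focused = np.zeros((8, 8)); focused[4, 4] = 1.0
smeared = np.ones((8, 8))
print(image_entropy(focused) < image_entropy(smeared))  # True
```

Autofocus algorithms search over candidate motion-compensation parameters for the setting that minimizes such a measure; the cost of evaluating the measure at each trial is what the burst derivative approach is reported to reduce.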
Phase contrast microscopy is famous for allowing direct visualization of a phase object specimen. That is, the output intensity image is linear in the conjugate phase object. However, it is well known that this ideal result is only an approximation that works well for small phase values.1
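The breakdown of the linear result for large phases is just the familiar weak-phase approximation e^(iφ) ≈ 1 + iφ, whose error grows roughly as φ²/2. A minimal numerical check (illustrating the mathematical approximation only, not the full microscope model):

```python
import numpy as np

# compare the exact phase factor with its weak-phase linearization
for phi in (0.05, 0.5, 1.5):
    exact = np.exp(1j * phi)
    linear = 1.0 + 1j * phi       # valid only for small phi
    print(phi, abs(exact - linear))
```

The error is negligible at φ = 0.05 rad but of order unity by φ = 1.5 rad, which is why the intensity image stops being linear in the phase for strong specimens.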