Speckle intensity correlations are used to study polarized coherent
light propagating through scattering media. In particular, second- and third-order intensity correlations with frequency are formed from speckle patterns measured as a function of frequency, and then employed to determine the co-polarized and cross-polarized temporal impulse responses. The polarized impulse response provides information on the scattering medium that could aid in its characterization. Determining the temporal response from intensity-only data is especially convenient in the optical domain.
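The second-order correlation with frequency can be sketched numerically as below; the simulated complex-Gaussian speckle fields, the array sizes, and the `c2` helper are illustrative assumptions, not the paper's actual data or processing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for measured co-polarized speckle: complex Gaussian
# fields at many spatial points over a grid of optical frequencies.
n_points, n_freq = 2000, 128
field = rng.normal(size=(n_points, n_freq)) + 1j * rng.normal(size=(n_points, n_freq))
intensity = np.abs(field) ** 2

def c2(I):
    """Second-order intensity correlation versus frequency shift:
    C2(dv) = <I(v) I(v+dv)> / (<I(v)> <I(v+dv)>) - 1, averaged over points."""
    n = I.shape[1]
    out = np.empty(n)
    for dv in range(n):
        a, b = I[:, : n - dv], I[:, dv:]
        out[dv] = np.mean(a * b) / (np.mean(a) * np.mean(b)) - 1.0
    return out

corr = c2(intensity)
# For fully developed speckle, C2(0) equals the squared contrast (about 1);
# the decay of C2 with dv encodes the width of the temporal impulse response.
```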
A new image reconstruction algorithm is used to remove the effect of atmospheric turbulence on motion-compensated frame averaged data collected by a laser illuminated 2-D imaging system. The algorithm simultaneously computes a high resolution image and Fried's seeing parameter via a MAP estimation technique. This blind deconvolution algorithm differs from other techniques in that it parameterizes the unknown component of the impulse response as an average short-exposure point spread function. The utility of the approach lies in its application to laser illuminated imaging where laser speckle and turbulence effects dominate other sources of error and the field of view of the sensor greatly exceeds the isoplanatic angle.
For over forty years, attempts have been made to realize so-called superresolving pupil functions that permit smaller spot sizes than conventional resolution limits might initially suggest. This was usually achieved by manipulating the diffracted field's zero distribution and was accompanied by huge sidelobes and diminished intensity in the region of interest. By carefully manipulating the interference pattern generated by two optical vortices of different order, one can generate beams of light with dimensions that are spatially superresolved and of controllable cross-sectional shapes. The reason for this is that interfering vortices can possess a much higher density of zeros than one would expect. Recent results employing non-integer vortex beam interference can generate superresolved spots with an intensity close to the theoretical limit for the associated Strehl ratio.
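A numerical sketch of the vortex-interference idea follows, using a toy Laguerre-Gauss-like model; the charges (1 and 3), waist, and relative weights are arbitrary illustrative choices, not those of the cited results.

```python
import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
phi = np.arctan2(Y, X)

def vortex(charge, waist=0.5):
    """Toy vortex of topological charge l: r^|l| exp(-r^2/w^2) exp(i l phi)."""
    return (r ** abs(charge)) * np.exp(-((r / waist) ** 2)) * np.exp(1j * charge * phi)

# Superpose a charge-1 and a charge-3 vortex.
field = vortex(1) + 0.5 * vortex(3)
intensity = np.abs(field) ** 2
# Both constituents vanish on axis, so the superposition keeps a dark core,
# while the azimuthal cross term (~cos(2*phi)) shapes the bright ring:
# extra zeros appear where the two vortices interfere destructively.
```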
The relative effects of spectral amplitude and phase errors on reconstructed images are studied in terms of the expected mean-square error in the image. An appropriate mean-square error appears to be that between reconstructed and original images scaled to have the same energy; such a metric appears to reflect the overall perceived quality of the images. Approximate relationships between spectral amplitude and phase errors that give rise to the same image mean-square error are derived. For large amplitude errors, saturation becomes significant; this regime, and the derived relationships, are studied and illustrated by simulation. The relationship to phase dominance is discussed.
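The energy-matched error metric described above can be sketched as follows; the function name and the toy image are illustrative, not the paper's notation.

```python
import numpy as np

def energy_normalized_mse(recon, original):
    """MSE between images after scaling the reconstruction so that its
    energy matches the original's (a sketch of the metric discussed above)."""
    recon = np.asarray(recon, dtype=float)
    original = np.asarray(original, dtype=float)
    scale = np.sqrt(np.sum(original ** 2) / np.sum(recon ** 2))
    return np.mean((scale * recon - original) ** 2)

img = np.array([[1.0, 2.0], [3.0, 4.0]])
# A reconstruction differing only by a global gain incurs zero error,
# while a genuine distortion does not:
err_gain = energy_normalized_mse(2.0 * img, img)
err_noise = energy_normalized_mse(img + 0.1, img)
```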
The problem of signal recovery from incomplete data is
investigated in the context of phase-space tomography. Particular
emphasis is given to the case where only a limited number of
intensity measurements can be performed, which corresponds to
partial coverage of the ambiguity function of the signal. Based on
numerical simulations the impact of incomplete knowledge of the
ambiguity function on the performance of phase-space tomography is
illustrated. Several schemes to address the limited data problem
are evaluated. This includes the use of prior information about
the phase retrieval problem. In addition, the redundancy of
phase-space representations is investigated as the means to
recover the signal from partial knowledge of phase space. A
generalization of deterministic phase retrieval is introduced
which allows one to obtain a model based phase estimate for
bandlimited functions. This allows one to use prior information
for improving the phase estimate in the presence of noise.
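The ambiguity function at the heart of phase-space tomography can be computed directly for a discrete 1-D signal; the sketch below uses a circular-shift discretization and a chirp test signal, both illustrative choices.

```python
import numpy as np

def ambiguity(signal):
    """Discrete ambiguity function of a 1-D signal (illustrative sketch):
    row tau of A is the FFT over t of s[t + tau] * conj(s[t - tau]),
    with circular indexing."""
    n = len(signal)
    s = np.asarray(signal, dtype=complex)
    A = np.empty((n, n), dtype=complex)
    for k, tau in enumerate(range(-n // 2, n - n // 2)):
        prod = np.roll(s, -tau) * np.conj(np.roll(s, tau))
        A[k] = np.fft.fftshift(np.fft.fft(prod))
    return A

# A chirp has an ambiguity function concentrated along a line in the
# (tau, nu) plane, which is why rotation-based (tomographic) coverage of
# phase space can recover it from intensity measurements.
t = np.arange(64)
chirp = np.exp(1j * 0.01 * t ** 2)
A = ambiguity(chirp)
```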
Recently, we looked at applying our wide field-of-view tip-tilt turbulence visualization method to the turbulent wake behind a jet aircraft. We have previously described successful results, in telescopically derived images of the moon's surface and in horizontal surveillance imaging, in which small regions of interest (ROIs) within a turbulence-distorted image are registered to a prototype image. Unfortunately, when applied to a fast jet wake the method did not produce useful results. This failure was traced to the background, which forms the reference image when the wake is absent, being heavily blurred when seen through the wake by higher-order wavefront distortions. The blurring instead suggested applying a Wiener filter between corresponding ROIs of the turbulence-distorted image and the reference image. This paper describes a new approach to registration that uses a Wiener filter within a scanned ROI to detect a local, space-varying point spread function (PSF). The new approach provides more robust shift information than our previously used cross-correlation for describing the random wobble in the image sequence, and also provides new information on the shape of the position-dependent blur PSF.
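The Wiener-filter PSF detection can be sketched as below; the random reference ROI, the two-tap "blur", and the noise-to-signal constant are stand-ins for the paper's real imagery and tuning.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic reference ROI and a distorted ROI made by blurring/shifting it.
roi = rng.normal(size=(64, 64))
true_psf = np.zeros((64, 64))
true_psf[0, 0] = 0.6                 # slight blur with a one-pixel tail
true_psf[0, 1] = 0.4
blurred = np.real(np.fft.ifft2(np.fft.fft2(roi) * np.fft.fft2(true_psf)))

def wiener_psf(blurred_roi, reference_roi, nsr=1e-3):
    """Estimate the local PSF relating two ROIs by Wiener filtering in the
    Fourier domain (a sketch; nsr is an assumed noise-to-signal level)."""
    R = np.fft.fft2(reference_roi)
    B = np.fft.fft2(blurred_roi)
    H = B * np.conj(R) / (np.abs(R) ** 2 + nsr * np.mean(np.abs(R) ** 2))
    return np.real(np.fft.ifft2(H))

psf_est = wiener_psf(blurred, roi)
peak = np.unravel_index(np.argmax(psf_est), psf_est.shape)
# The peak location gives the local shift; the spread around it describes
# the position-dependent blur.
```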
The resolution of images captured through ground-based telescopes is generally limited by blurring effects due to atmospheric turbulence. We have developed a method to estimate both the original objects and the blurring function from a sequence of noisy blurred images, simultaneously collected at different wavelengths (wavelength diversity). The assumption of common path-length errors across the diversity channels allows for a parallel deconvolution procedure that exploits this coupling. In contrast with previous work, no a priori assumptions about the object’s intensity distribution are required. The method is described, and preliminary results with real images collected with a bench-scale imaging system are presented, demonstrating the promise of the algorithm.
The possibility of obtaining spatial frequency information normally excluded by an aperture has been surmised, experimentally obtained in the laboratory, and observed in processed real world imagery. This opportunity arises through the intervention of a turbulent mass between the stationary wide-area object of interest and the short exposure, imaging instrument, but the frequency information is aliased, and must be de-aliased to render it useful. We present evidence of super-resolution in real-world surveillance imagery that is processed by hierarchical registration algorithms. These algorithms have been enhanced over those we previously reported. We discuss these enhancements and give examples of the use of the algorithm to gain information about the turbulence. To further reinforce the presence of super-resolution we present two methods for creating imagery warped by Kolmogorov turbulent phase screens, so that the results can be confirmed against true images.
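One standard way to create Kolmogorov-warped imagery for validation is the FFT phase-screen method sketched below; the grid size, `r0`, and sample spacing are illustrative parameters, and this is only one of several screen-generation approaches.

```python
import numpy as np

rng = np.random.default_rng(2)

def kolmogorov_phase_screen(n=256, r0=0.1, dx=0.01):
    """FFT-based Kolmogorov phase screen (a common sketch): filter complex
    white noise with the Kolmogorov spectrum
    Phi(f) = 0.023 * r0**(-5/3) * f**(-11/3)."""
    df = 1.0 / (n * dx)
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    f = np.hypot(FX, FY)
    f[0, 0] = np.inf                   # suppress the undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.real(np.fft.ifft2(noise * np.sqrt(psd) * df)) * n * n

screen = kolmogorov_phase_screen()
# Screens like this can warp truth imagery so that super-resolution results
# can be checked against the undistorted originals.
```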
X-ray computerized tomography (CT) and acoustic CT are two main medical imaging modalities based on intrinsically different physical phenomena. X-ray CT is based on the attenuation of x rays as they pass through a medium, and it is well known that the Radon transform provides the imaging theory for x-ray CT. Photoacoustic CT is a type of acoustic CT based on differences in electromagnetic energy absorption among media. In 1998 a new 3-D reconstruction concept, the P-transform, was proposed to serve as the imaging theory for photoacoustic CT. In this paper it is rigorously proved that both x-ray CT and photoacoustic CT are governed by a unified imaging theory, and that 3-D data acquisition can be completed over a 2π solid angle. This imaging theory realizes, in part, the long-held belief of physicists, including Albert Einstein, that our world is ultimately governed by a few simple rules.
In this paper we discuss image reconstruction algorithms for super-short-scan fan-beam and cone-beam computed tomography (CT). We propose a new fan-beam filtered back-projection algorithm that obtains exact region-of-interest (ROI) reconstruction if and only if every line passing through the ROI intersects the source trajectory, even when the scanning range is smaller than a half-scan. We also prove that the algorithm is approximate when the projections are truncated. Furthermore, we extend the algorithm to cone-beam reconstruction. We then simulate the algorithm and evaluate its noise properties against other algorithms. Numerical results suggest that the new algorithm is generally less susceptible to data noise and produces fewer artifacts than previous algorithms. In particular, it extends easily and successfully to cone-beam tomography when the source trajectory is a short arc on a single circle or on a helix.
A method first employed for face recognition has been employed to analyse a set of chest x-ray images. After marking certain common features on the images, they are registered by means of an affine transformation. The differences between each registered image and the mean of all images in the set are computed and the first K principal components are found, where K is less than or equal to the number of images in the set. These form eigenimages (we have coined the term 'eigenchests') from which an approximation to any one of the original images can be reconstructed. Since the method effectively treats each pixel as a dimension in a hyperspace, the matrices concerned are huge; we employ the method developed by Turk and Pentland for face recognition to make the computations tractable. The K coefficients for the eigenimages encode the variation between images
and form the basis for discriminating normal from abnormal. Preliminary results have been obtained for a set of eigenimages formed from a set of normal chests and tested on separate sets of normals and patients with pneumonia. The distributions of coefficients have been observed to be different for the two test sets and work is continuing to determine the most sensitive method for detecting the differences.
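The Turk-Pentland trick mentioned above can be sketched numerically: for K images of P pixels (P much larger than K), the eigenvectors of the small K-by-K matrix AᵀA yield the eigenimages of the huge P-by-P covariance AAᵀ without ever forming it. Random data stands in for the registered chest images, and the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

K, P = 10, 5000
images = rng.normal(size=(K, P))
mean_image = images.mean(axis=0)
A = (images - mean_image).T          # P x K matrix of mean-subtracted images

vals, vecs = np.linalg.eigh(A.T @ A)  # K x K eigenproblem instead of P x P
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
keep = vals > 1e-10 * vals[0]         # drop the numerically-zero mode(s)
# ||A v|| = sqrt(lambda), so this normalization makes the columns orthonormal:
eigenimages = (A @ vecs[:, keep]) / np.sqrt(vals[keep])

# Coefficients of one image in the eigenimage basis, and its reconstruction:
coeffs = eigenimages.T @ (images[0] - mean_image)
recon = mean_image + eigenimages @ coeffs
```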
An automated image analysis system for determination of myosin filament orientations in electron micrographs of muscle cross-sections is described. Analysis of the distribution of the orientations is important in studies of muscle structure, particularly for interpretation of x-ray diffraction data. Filament positions are determined using h-dome extraction and image filtering, based on grayscale reconstruction. Erroneous locations are eliminated based on lattice regularity. Filament orientations are determined by correlation with a template that incorporates the salient filament characteristics and classified using a Gaussian mixture model. Application to a number of micrographs and comparison with manual classifications of orientations shows that the system is effective in many cases.
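The h-dome step above can be sketched with grayscale reconstruction by geodesic dilation (here a plain 3x3 maximum filter; image sizes and peak heights are made up). The residue image minus reconstruction caps every peak's dome at height h, so thresholding near h keeps only peaks at least h above their surroundings.

```python
import numpy as np

def dilate3(a):
    """3x3 grayscale dilation (maximum filter) with edge padding."""
    p = np.pad(a, 1, mode="edge")
    rows, cols = a.shape
    return np.maximum.reduce(
        [p[i : i + rows, j : j + cols] for i in range(3) for j in range(3)]
    )

def h_dome(image, h):
    """Vincent-style h-dome: reconstruct (image - h) under image by
    iterated geodesic dilation, then subtract."""
    marker = image - h
    while True:
        dilated = np.minimum(dilate3(marker), image)  # geodesic dilation
        if np.array_equal(dilated, marker):
            return image - dilated
        marker = dilated

img = np.zeros((32, 32))
img[8, 8] = 5.0     # tall peak (filament-like candidate)
img[20, 20] = 1.0   # shallow bump
domes = h_dome(img, 2.0)
# domes reaches the full cap h = 2 at the tall peak; the shallow bump stays below.
```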
We are aiming at using EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.
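The coarse first step can be illustrated with a generic sparse-approximation pass (here plain orthogonal matching pursuit, not necessarily the authors' algorithm) on a linear model y = L @ x; the lead-field matrix, sizes, and sources below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

n_electrodes, n_regions = 64, 100
L = rng.normal(size=(n_electrodes, n_regions))  # hypothetical lead-field matrix
x = np.zeros(n_regions)
x[7], x[42] = 3.0, -2.5                         # two active coarse regions
y = L @ x                                       # noiseless "EEG" measurements

def omp(L, y, n_active):
    """Orthogonal matching pursuit: greedily pick the columns most
    correlated with the residual, re-fitting coefficients at each step."""
    residual, support = y.copy(), []
    for _ in range(n_active):
        corr = np.abs(L.T @ residual)
        corr[support] = 0.0                     # don't re-pick chosen atoms
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(L[:, support], y, rcond=None)
        residual = y - L[:, support] @ coef
    return sorted(support)

active = omp(L, y, 2)
# The fine-discretization second step would then run only inside `active`.
```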
Electrical Impedance Tomography (EIT) is a relatively new medical imaging technique in which electrodes are placed on the surface of the body, current is applied on the electrodes, the resulting voltage is measured on the electrodes, and an image is formed from the reconstructed conductivity distribution. One application is the real-time imaging of heart and lung function. In this case, data is collected on electrodes placed around the circumference of the patient's torso, and the 2-D inverse conductivity problem is solved numerically to form a cross-sectional image of the patient's chest. This research focuses on the further development of the D-bar reconstruction algorithm for 2-D EIT. The algorithm is based on the uniqueness proof by Nachman [Ann. of Math. 143 (1996)] for the 2-D inverse conductivity problem and uses the D-bar method of inverse scattering to solve the full nonlinear inverse problem. An important function arising in this method is the scattering transform. This function, while not physically measurable in experiments, is computed directly from the data, and is a key element of the D-bar equation that must be solved to obtain the conductivity. This paper describes two approaches for computing a regularized approximation to the scattering transform. The approaches are tested on experimental data
collected on a saline-filled tank containing agar heart, lungs, spine and aorta, simulating a cross-section of a human chest.
A 3-D linearization-based reconstruction algorithm for Electrical Impedance Tomography suitable for breast cancer detection using data collected on a rectangular array was introduced by Mueller et al. [IEEE Biomed. Eng., 46(11), 1999]. By treating the scenario as an electrostatic problem, it is possible to model the electrodes with various charges, facilitating the use of the Fast Multipole Method (FMM) for calculating particle interactions and also supporting the use of different electrode models. In this paper the use of the FMM is explained, and results in the form of reconstructed images from experimental data show that the method is an improvement.
In this paper we present some variational functionals for the regularization of Magnetic Resonance (MR) images, which are usually corrupted by noise and artifacts. The mathematical problem has a Tikhonov-like formulation in which the regularization term is a nonlinear variational functional. The problem is solved numerically as an optimization problem with a quasi-Newton algorithm. The algorithm has been applied to MR images corrupted by noise and to dynamic MR images corrupted by truncation artifacts due to limited resolution. Results on test problems obtained from simulated and real data are presented. The functionals effectively reduce noise and artifacts, provided that a good regularization parameter is used.
In this paper we describe an iterative algorithm, called Descent-TCG, based on truncated Conjugate Gradient iterations, to compute Tikhonov regularized solutions of linear ill-posed problems. Suitable termination criteria are built up to define an inner-outer iteration scheme for the computation of a regularized solution. Numerical experiments compare the algorithm with other well-established regularization methods. The best Descent-TCG results occur for highly noisy data, and the method always yields fairly reliable solutions, preventing the dangerous error growth that often appears in other well-established regularization methods. Finally, the Descent-TCG method is computationally advantageous, especially for large problems.
The usefulness of support constraints to achieve noise reduction in images is analyzed here using an algorithm-independent Cramer-Rao bound approach. Recently, it has been shown that the amount of noise reduction achievable using support as a constraint is a function of the image-domain noise correlation properties. For image-domain delta-correlated noise sources (such as Poisson and CCD read noise), applying a support constraint does not reduce noise in the absence of deconvolution due to the lack of spatial correlation. However, when deconvolution is included in the image processing algorithm, the situation changes significantly because the deconvolution operation imposes correlations in the measurement noise. Here we present results for an invertible system blurring function showing how noise reduction occurs with support and deconvolution. In particular, we show that and explain why noise reduction preferentially occurs at the edges of the support constraint.
In the last two decades a variety of super-resolution (SR) methods have been proposed. These methods usually address the problem of fusing a set of monochromatic images to produce a single monochromatic image with higher spatial resolution. In this paper we address the dynamic and color SR problems of reconstructing a high-quality set of colored super-resolved images from low-quality mosaiced frames. Our approach is a hybrid method for simultaneous SR and demosaicing, thereby taking into account the practical color measurements encountered in video sequences. For the case of translational motion and common space-invariant blur, the proposed method is based on a very fast and memory-efficient approximation of the Kalman filter. Experimental results on both simulated and real data are supplied, demonstrating the presented algorithm and its strengths.
Superresolution of images by data inversion is defined as extrapolating measured Fourier data into regions of Fourier space where no measurements have been taken. This type of superresolution can only occur by data inversion. There exist two camps of thought regarding the efficacy of this type of superresolution: the first is that meaningful superresolution is unachievable due to signal-to-noise limitations, and the second is that meaningful superresolution is possible. Here we present a framework for describing superresolution in a way that accommodates both points of view. In particular, we define the twin concepts of primary and secondary superresolution and show that the first camp is referring to primary superresolution while the second group is referring to secondary superresolution. We discuss the implications of both types of superresolution on the ability of data inversion to achieve meaningful superresolution.
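The data-inversion route to extrapolating unmeasured Fourier data can be sketched with a Gerchberg-Papoulis-style alternation between the measured low-frequency data and a known object support; the sizes, band, and support below are arbitrary choices, and noise sharply limits how far such extrapolation goes in practice.

```python
import numpy as np

n = 128
x = np.zeros(n)
x[60:68] = 1.0                             # object with known compact support
support = x > 0
spectrum = np.fft.fft(x)
band = np.abs(np.fft.fftfreq(n)) < 0.15    # only these frequencies measured
measured = spectrum * band

est = np.real(np.fft.ifft(measured))
err0 = np.linalg.norm(est - x) / np.linalg.norm(x)   # band-limited baseline
for _ in range(500):
    est = np.where(support, est, 0.0)      # impose the support constraint
    F = np.fft.fft(est)
    F[band] = measured[band]               # reimpose the measured data
    est = np.real(np.fft.ifft(F))
err = np.linalg.norm(est - x) / np.linalg.norm(x)
# err falls below the band-limited baseline err0: some out-of-band Fourier
# data has been recovered purely by inversion with the support constraint.
```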
We discuss an approach to solving the inverse scattering problem using homomorphic filtering, and the difficulties that have been experienced in the past in trying to implement it in practice. Solving this problem has important consequences for a number of imaging and remote sensing problems, as well as structure-synthesis problems. We show that the problem reduces to one of preprocessing the measured data so that the nonlinear filtering succeeds and gives meaningful reconstructions. We discuss the steps that have to be taken to achieve this and show that a sufficient condition for obtaining a solution is that the data-derived function to be filtered is made close to a minimum-phase function. The minimum-phase property is well understood in one-dimensional problems but less so in two or more dimensions. Another significant practical issue is that for inverse scattering problems, in contrast to inverse synthesis problems, only limited noisy data are available from which to compute the structure. These factors are discussed, and we note that solving the inverse scattering problem immediately provides a solution to the inverse synthesis problem.
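The one-dimensional minimum-phase construction central to this approach can be sketched via the real cepstrum (the standard homomorphic route); the decaying-exponential test signal is an illustrative choice.

```python
import numpy as np

def minimum_phase(magnitude):
    """Given FFT magnitudes (length n), return the minimum-phase signal
    with (approximately) that magnitude spectrum, via cepstral folding."""
    n = len(magnitude)
    log_mag = np.log(np.maximum(magnitude, 1e-12))
    cepstrum = np.real(np.fft.ifft(log_mag))
    # Fold the even cepstrum into a causal (one-sided) complex cepstrum:
    window = np.zeros(n)
    window[0] = 1.0
    window[1 : n // 2] = 2.0
    if n % 2 == 0:
        window[n // 2] = 1.0
    min_phase_spec = np.exp(np.fft.fft(cepstrum * window))
    return np.real(np.fft.ifft(min_phase_spec))

# A decaying exponential a^k (|a| < 1) is already minimum phase, so it is
# recovered (up to numerical error) from its Fourier magnitude alone.
n = 256
sig = 0.5 ** np.arange(n)
mag = np.abs(np.fft.fft(sig))
recovered = minimum_phase(mag)
```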
Designed to retrieve near-surface winds over the ocean, the SeaWinds scatterometer makes 13.4 GHz Ku-band measurements of the normalized radar backscatter of the Earth's surface from which the near-surface vector (speed and direction) wind is estimated. Conventional processing of the backscatter measurements results in 25 km resolution winds. However, by applying reconstruction algorithms the backscatter can be estimated at much finer resolution, albeit at reduced accuracy. This innovative application of reconstruction theory has a number of novel elements including irregular sampling, a two dimensional vector signal, multiplicative and additive noise, and a nonlinear transfer function between the measurements and the signal. This paper describes the high resolution wind retrieval problem, the solution approaches adopted, and sample results.
The penetrating nature and atomic-scale wavelength of X-ray radiation
makes the possibility of an X-ray microscope a very exciting prospect.
Unfortunately, existing X-ray optics are far less efficient than
their visible light counterparts. An attractive alternative to optics
is computational inversion of the far-field coherent X-ray diffraction (CXD), which can be measured using modern X-ray sources.
Thus we seek to defeat the so-called phase problem by iteratively seeking a set of phases consistent with the CXD measurement and some physical real-space constraints. We have found the behavior of fitting algorithms to be qualitatively different for simulated diffraction patterns and measured coherent X-ray intensity patterns. We will compare the convergence of the inversion of CXD patterns from simply shaped metal crystals with simulation.
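The iterative phase-seeking idea can be sketched with error-reduction (ER) iterations on a noiseless simulated pattern with a known support; real CXD inversions typically also use HIO and other constraints, and the object, sizes, and iteration count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 64
obj = np.zeros((n, n))
obj[28:36, 26:38] = 1.0                  # simple compact "crystal"
support = obj > 0
measured_mag = np.abs(np.fft.fft2(obj))  # diffraction magnitudes (phases lost)

# Start from the measured magnitudes with random phases:
est = np.real(np.fft.ifft2(measured_mag * np.exp(2j * np.pi * rng.random((n, n)))))
for _ in range(500):
    F = np.fft.fft2(est)
    F = measured_mag * np.exp(1j * np.angle(F))    # impose measured magnitudes
    est = np.real(np.fft.ifft2(F))
    est = np.where(support & (est > 0), est, 0.0)  # impose support + positivity

# Residual mismatch between the estimate's and the measured magnitudes:
fourier_err = (np.linalg.norm(np.abs(np.fft.fft2(est)) - measured_mag)
               / np.linalg.norm(measured_mag))
```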
Airflow over mountainous terrain can produce atmospheric waves in the lee of the mountains that have large vertical air velocities. These waves are used as sources of lift by sailplane pilots. Methods are developed for inverting flight data of airspeed and GPS-derived position to obtain estimates of the vector windspeed in mountain waves. Data from flight path segments with significantly different ground velocities within a region of constant windspeed give a well-determined solution for the windspeed. The methods are applied to flight data from a Perlan Project flight in lee waves of the Sierra Nevada Mountains in California.
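The core of the inversion can be sketched as follows: within a region of constant wind, each segment satisfies |gᵢ − w| = V (ground velocity minus wind has the known airspeed magnitude), and differencing pairs of segments removes the quadratic term, leaving a linear system for the wind vector w. The numbers below are made up for illustration and the 2-D horizontal model is a simplification.

```python
import numpy as np

V = 30.0                                   # true airspeed, m/s (assumed known)
w_true = np.array([8.0, -3.0])             # wind vector to recover
headings = np.deg2rad([0.0, 90.0, 200.0, 310.0])
# Ground velocity of each segment = wind + airspeed along the heading:
g = np.stack([w_true + V * np.array([np.cos(h), np.sin(h)]) for h in headings])

# |g_i|^2 - 2 g_i.w + |w|^2 = V^2 for every i; subtracting the first
# equation from the rest eliminates |w|^2 and V^2:
A = 2.0 * (g[1:] - g[0])
b = np.sum(g[1:] ** 2, axis=1) - np.sum(g[0] ** 2)
w_est, *_ = np.linalg.lstsq(A, b, rcond=None)

# The airspeed can then be recovered as a consistency check:
V_est = np.linalg.norm(g[0] - w_est)
```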