It is often the case in tomography that a scanner is unable to collect a full set of projection data. Reconstruction
algorithms that are not set up to handle this type of problem can lead to artifacts in the reconstructed images
because the assumptions regarding the size of the image space and/or data space are violated. In this study,
we apply two recently developed geometry-independent methods to fully 3D multi-slice spiral CT image reconstruction.
The methods build upon an existing statistical iterative reconstruction algorithm developed by our
group. The first method reconstructs images without the missing data, and the second method seeks to jointly
estimate the missing data and attenuation image. We extend the existing results for the 2D fan-beam geometry
to multi-slice spiral CT in an effort to investigate some challenges in 3D, such as the long-object problem. In contrast to
the original formulation of the reconstruction algorithms, in this work we add a regularization term to the objective
function. To handle the large number of computations required by fully 3D reconstructions, we
have developed an optimized parallel implementation of our iterative reconstruction algorithm. Using simulated
and clinical datasets, we demonstrate the effectiveness of the missing data approaches in improving the quality
of slices that have experienced truncation in either the transverse or longitudinal direction.
Algorithms based on alternating minimization (AM) have recently been derived for computing maximum-likelihood images in transmission CT, incorporating accurate models of the transmission-imaging process. In this work we report the first fully three-dimensional implementation of these algorithms, intended for use with multi-row detector spiral CT systems. The most demanding portion of the computations, the three-dimensional projections and backprojections, is calculated using a precomputed lookup table containing a discretized version of the point-spread function that maps between the measurement and image spaces. This table accounts for the details of the scanner. A cylindrical phantom with cylindrical and spherical inserts of known attenuation was scanned with a Siemens Sensation 16, which was employed in a rapid, spiral acquisition mode with 16 active detector rows. These data were downsampled and reconstructed using a monoenergetic version of our AM algorithm. The estimated attenuation coefficients closely match the known coefficients for the cylinder and the embedded objects. We are investigating methods for further accelerating these computations by combining techniques that reduce the time required for each iteration with techniques that accelerate the convergence of the log-likelihood from iteration to iteration.
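As a rough illustration of the table-driven projector, the discretized point-spread-function table can be viewed as a sparse system matrix mapping voxels to measurements. The following Python sketch is hypothetical (the table layout and function names are not the paper's implementation), but it shows why forward projection and backprojection form an adjoint pair over the same table:

```python
import numpy as np

def forward_project(image, psf_table):
    """image: 1D array of voxel values (flattened volume).
    psf_table: dict voxel_index -> list of (measurement_index, weight),
    a discretized point-spread function for the scanner geometry
    (illustrative layout, not the paper's data structure)."""
    n_meas = 1 + max(m for entries in psf_table.values() for m, _ in entries)
    sinogram = np.zeros(n_meas)
    for voxel, entries in psf_table.items():
        for meas, weight in entries:
            sinogram[meas] += weight * image[voxel]
    return sinogram

def back_project(sinogram, psf_table, n_voxels):
    """Adjoint of forward_project: accumulate weighted measurements per voxel."""
    image = np.zeros(n_voxels)
    for voxel, entries in psf_table.items():
        for meas, weight in entries:
            image[voxel] += weight * sinogram[meas]
    return image
```

Because both operations read the same weights, the pair satisfies the adjoint identity ⟨Ax, y⟩ = ⟨x, Aᵀy⟩, which iterative algorithms such as AM rely on.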
We have developed a model for transmission tomography that views the detected data as being Poisson-distributed photon counts. From this model, we derive an alternating minimization (AM) algorithm for the purpose of image reconstruction. This algorithm, which seeks to minimize an objective function (the I-divergence between the measured data and the estimated data), is particularly useful when high-density objects are present in soft tissue and standard image reconstruction algorithms fail. The approach incorporates inequality constraints on the pixel values and seeks to exploit known information about the high-density objects or other priors on the data. Because of the ill-posed nature of this problem, however, the noise and streaking artifacts in the images are not completely mitigated, even under the most ideal conditions, and some form of
regularization is required. We describe a sieve-based approach,
which constrains the image estimate to reside in a subset of the
image space in which all images have been smoothed with a Gaussian kernel. The kernel is spatially varying and does not smooth across known boundaries in the image. Preliminary results show effective reduction of the noise and streak artifacts, but indicate that more work is needed to suppress edge overshoots.
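For concreteness, the I-divergence objective and a simplified Gaussian sieve can be sketched as follows. This is only an illustration: the kernel here is spatially uniform, whereas the sieve described above is spatially varying and does not smooth across known boundaries.

```python
import numpy as np

def i_divergence(d, q):
    """I-divergence between measured counts d and model means q (both > 0)."""
    d = np.asarray(d, float)
    q = np.asarray(q, float)
    return float(np.sum(d * np.log(d / q) - d + q))

def gaussian_sieve(image, sigma):
    """Project an image estimate into the sieve of Gaussian-smoothed images
    by separable 1D convolution along each axis (uniform sigma; the paper's
    kernel is spatially varying and edge-preserving)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = image.astype(float)
    for axis in range(out.ndim):
        out = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), axis, out)
    return out
```

The I-divergence is zero exactly when the estimated means match the measured data, and positive otherwise, which is why it serves as the objective to be minimized.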
We propose an alternating minimization (AM) image estimation algorithm for iteratively reconstructing transmission tomography images. The algorithm is based on a model that accounts for much of the underlying physics, including Poisson noise in the measured data, beam hardening of polyenergetic radiation, energy dependence of the attenuation coefficients and scatter. It is well-known that these nonlinear phenomena can cause severe artifacts throughout the image when high-density objects are present in soft tissue, especially when using the conventional technique of filtered back projection (FBP). If we assume no prior knowledge of the high-density object(s), our proposed algorithm yields much improved images in comparison to FBP, but retains significant streaking between the high-density regions. When we incorporate the knowledge of the attenuation and pose parameters of the high-density objects into the algorithm, our simulations yield images with greatly reduced artifacts. To accomplish this, we adapted the algorithm to perform a search at each iteration (or after every n iterations) to find the optimal pose of the object before updating the image. The final iteration returns pose values within 0.1 millimeters and 0.01 degrees of the actual location of the high-density structures.
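The per-iteration pose search can be conveyed with a simple coordinate-descent sketch. This is a hypothetical stand-in, not the search strategy used in the work above; `objective` is assumed to score a candidate pose of the high-density object against the current image estimate (lower is better).

```python
import numpy as np

def refine_pose(objective, pose0, steps, passes=8):
    """Toy coordinate-descent pose search: repeatedly perturb each pose
    parameter by +/- the current step, keep any improvement, then halve
    the steps. All names here are illustrative."""
    pose = np.array(pose0, float)
    step = np.array(steps, float)
    best = objective(pose)
    for _ in range(passes):
        improved = True
        while improved:
            improved = False
            for i in range(pose.size):
                for sign in (1.0, -1.0):
                    cand = pose.copy()
                    cand[i] += sign * step[i]
                    val = objective(cand)
                    if val < best - 1e-12:
                        pose, best = cand, val
                        improved = True
        step *= 0.5  # refine the search grid each pass
    return pose, best
```

Halving the steps each pass narrows the search around the current best pose, mirroring how sub-millimeter and sub-degree accuracy can be reached from a coarse initial grid.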
In our earlier work, we focused on pose estimation of ground-based targets as viewed via forward-looking passive infrared (FLIR) systems and laser radar (LADAR) imaging sensors. In this paper, we will study individual and joint sensor performance to provide a more complete understanding of our sensor suite. We will also study the addition of a high range-resolution radar (HRR). Data from these three sensors are simulated using CAD models for the targets of interest in conjunction with XPATCH range radar simulation software, Silicon Graphics workstations and the PRISM infrared simulation package. Using a Lie group representation of the orientation space and a Bayesian estimation framework, we quantitatively examine both pose-dependent variations in performance, and the relative performance of the aforementioned sensors via mean squared error analysis. Using the Hilbert-Schmidt norm as an error metric, the minimum mean squared error (MMSE) estimator is reviewed and mean squared error (MSE) performance analysis is presented. Results of simulations are presented and discussed. In our simulations, FLIR and HRR sensitivities were characterized by their respective signal-to-noise ratios (SNRs) and the LADAR by its carrier-to-noise ratio (CNR). These figures-of-merit can, in turn, be related to the sensor, atmosphere, and target parameters for scenarios of interest.
Our work focuses on pose estimation of ground-based targets viewed via multiple sensors including forward-looking infrared (FLIR) systems and laser radar (LADAR) range imagers. Data from these two sensors are simulated using CAD models for the targets of interest in conjunction with Silicon Graphics workstations, the PRISM infrared simulation package, and the statistical model for LADAR described by Green and Shapiro. Using a Bayesian estimation framework, we quantitatively examine both pose-dependent variations in performance, and the relative performance of the aforementioned sensors when their data are used separately or optimally fused together. Using the Hilbert-Schmidt norm as an error metric, the minimum mean squared error (MMSE) estimator is reviewed and its mean squared error (MSE) performance analysis is presented. Results of simulations are presented and discussed.
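The Hilbert-Schmidt MMSE orientation estimator averages the rotation matrices under the posterior and projects the mean back onto the rotation group. A minimal sketch for planar rotations follows (the work above concerns 3D orientations; function names are illustrative):

```python
import numpy as np

def rot2(theta):
    """2x2 rotation matrix for angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def hs_mmse_rotation(thetas, posterior):
    """Hilbert-Schmidt MMSE orientation estimate for planar rotations:
    average the rotation matrices under the (discrete) posterior, then
    project the mean onto SO(2) in the Frobenius norm via SVD."""
    posterior = np.asarray(posterior, float)
    posterior = posterior / posterior.sum()
    M = sum(w * rot2(t) for w, t in zip(posterior, thetas))
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # keep a proper rotation, not a reflection
        U[:, -1] *= -1
        R = U @ Vt
    return float(np.arctan2(R[1, 0], R[0, 0]))
```

Averaging matrices rather than angles avoids the wrap-around ambiguity of circular quantities, which is the reason the Hilbert-Schmidt metric is used in place of naive angular MSE.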
Two different phase-estimation methods that have been developed for computing the optical path-length (OPL) distribution of a specimen from DIC images are compared. The first is a filtering-based method. The second phase-estimation method is based on the conjugate-gradient optimization method and estimates the OPL distribution using rotational-diversity DIC images; i.e. multiple DIC images obtained by rotating the specimen. For this study, 24 different DIC images of a single bovine spermatozoon head were acquired by rotating the cell by approximately 15 degrees between images. The images were registered and aligned using fiducial marks, and then processed with both methods. Results obtained with the filtering method were found to be dependent on the orientation of the cell with respect to the shear direction. Comparison of the integrated optical path length (IOPL) computed with the filtering method and the rotational-diversity method using two, four and eight DIC images at different rotation angles showed that the IOPL estimated with the rotational-diversity method is less dependent on the rotation angle, even when only two images separated by a 90-degree cell rotation are used for the phase estimation. Our results show that the use of rotational-diversity images in the determination of the OPL distribution is very beneficial because it overcomes the directional dependence of DIC imaging.
Differential-interference-contrast (DIC) microscopy is a powerful technique for the visualization of unstained transparent specimens, thereby allowing in vivo observations. Quantitative interpretation of DIC images is difficult because the measured intensity is nonlinearly related to the gradient of a specimen's optical-path-length distribution along the shear direction. The recent development of reconstruction methods for DIC microscopy permits the calculation of a specimen's optical-path-length distribution or phase function and provides a new measurement technique for biological applications. In this paper we present a summary of our work on quantitative imaging with a DIC microscope. The focus of our efforts has been in two areas: (1) model development and testing for 3D DIC imaging; and (2) development of a phase-estimation method based on this model. Our method estimates a specimen's phase function using rotational-diversity DIC images, i.e. multiple DIC images obtained by rotating the specimen. Test objects were viewed with a conventional DIC microscope using monochromatic light, and images were recorded using a cooled CCD camera. Comparison of the images to model predictions shows good qualitative and quantitative agreement. Results obtained from testing the phase-estimation method with 2D simulations and with measured DIC images demonstrate that an estimate of an object's phase function can be obtained even from a single DIC image and that the estimated phase becomes quantitatively better as the number of rotational-diversity DIC images increases.
Infrared scenes are modeled as consisting of two kinds of targets: flexible 2-D models for simple shapes and rigid 3-D faceted models for detailed targets. The flexible models permit rapid saccadic detection of targets and accommodate 'clutter' objects not present in the target library. The rigid model library contains specific vehicles or other objects we wish to discriminate. A likelihood model based on sensor statistics is combined with a prior distribution on possible scenes to form a posterior distribution for Bayesian inference. Nuisance parameters associated with the radiant intensities of the background and object facets are adaptively estimated as the inference proceeds. A general Metropolis-Hastings acceptance/rejection algorithm for sampling from the posterior distribution is proposed.
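A generic random-walk Metropolis-Hastings sampler illustrates the acceptance/rejection step. The proposal distribution in the work above is tailored to the scene posterior, so this one-dimensional sketch is only illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis-Hastings: propose a Gaussian perturbation,
    accept with probability min(1, posterior ratio), else keep the current
    state. 'log_post' is the unnormalized log posterior (illustrative)."""
    x = float(x0)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        cand = x + step * rng.standard_normal()
        lp_cand = log_post(cand)
        # Accept/reject using log probabilities for numerical stability.
        if np.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return np.array(samples)
```

Because only posterior ratios appear in the acceptance test, the normalizing constant of the posterior is never needed, which is what makes sampling from a complicated scene posterior tractable.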
The derivation of an iterative method for phase reconstruction from differential-interference-contrast (DIC) images is presented here. Because DIC imaging is direction sensitive, in our approach we estimate a specimen's phase function using multiple DIC images obtained by rotating the specimen. Results obtained from testing the method via two-dimensional simulations demonstrate that the use of multiple DIC images at different specimen rotations yields phase reconstructions that more closely resemble the phase function of phantoms than the unprocessed DIC images do. Improvement in resolution was also achieved: two points separated by half the Rayleigh resolution limit for coherent illumination, not resolved in the unprocessed DIC image, were successfully resolved in the reconstructed phase images. Our results show that phase reconstructions are quantitatively better and resolution is improved when two or more DIC images are used in the reconstruction.
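Since each DIC image approximately measures the directional derivative of the phase along its shear direction, a toy Fourier-domain least-squares solver conveys why rotational diversity removes the directional blind spot. This sketch assumes a linearized model and periodic boundaries, and is a stand-in for the iterative method itself:

```python
import numpy as np

def phase_from_rotated_gradients(gradients, angles):
    """Least-squares phase reconstruction from directional-derivative
    images g_k ~ d(phi)/ds_k taken at shear directions angles[k], solved
    per spatial frequency in the Fourier domain (illustrative only)."""
    ny, nx = gradients[0].shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    num = np.zeros((ny, nx), complex)
    den = np.zeros((ny, nx))
    for g, a in zip(gradients, angles):
        # Transfer function of the derivative along direction a.
        H = 2j * np.pi * (fx * np.cos(a) + fy * np.sin(a))
        num += np.conj(H) * np.fft.fft2(g)
        den += np.abs(H) ** 2
    den[den == 0] = 1.0  # the mean (DC) phase is unobservable
    phi = np.fft.ifft2(num / den).real
    return phi - phi.mean()
```

A single direction leaves frequencies perpendicular to the shear unconstrained (H = 0 along that line); adding a second, rotated image fills in those frequencies, which mirrors the resolution gains reported above.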
A system is proposed for joint tracking and recognition of airborne targets from the observations of radar sensors. It is assumed that the data available for the estimation of target orientation and recognition include sequences of range profiles from a high-resolution radar. Inference is performed using the posterior distribution on the complete parameter space, which includes the number of targets as well as their positions, orientations, and target types. The algorithm is critically dependent on appropriate sensor and target models, in the form of a likelihood for the range profiles given the target orientations, and a prior on the orientations determined by the target dynamics. Deterministic and stochastic models for high-resolution radar data are presented, and the likelihood function under the deterministic model is examined. The viability of our approach is demonstrated through simulations that address two simplified recognition scenarios. The first simulation investigates joint tracking and recognition of a single maneuvering target from the simulated observations of both a cross-array tracking radar and a high-resolution radar. In the second simulation, orientation estimation and recognition are performed for a single target approaching an airborne radar platform. Performance results from these simulations are presented.
An estimation-based method for accommodating the nonuniform flat-field response of a focal-plane array is described. This method employs image data directly for performing the flat-field correction and does not rely on a separate flat-field calibration measurement. This is accomplished by dithering the camera so that the object's focal-plane images acquired in a series of snapshots appear in different positions against the fixed-pattern artifacts caused by nonuniformity of the focal-plane array.
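A toy version of the estimation idea, assuming a purely additive fixed pattern and periodic integer-pixel dithers (the real problem also involves gain nonuniformity and non-ideal shifts), can be written as an alternating scene/pattern estimation:

```python
import numpy as np

def estimate_pattern(frames, shifts, n_iter=50):
    """Alternating estimation of scene and fixed-pattern offset from
    dithered frames, under the toy model frame_k = roll(scene, shift_k)
    + pattern with periodic shifts. All names are illustrative."""
    pattern = np.zeros_like(frames[0], dtype=float)
    scene = np.zeros_like(pattern)
    for _ in range(n_iter):
        # Scene estimate: average of pattern-corrected, de-shifted frames.
        scene = np.mean([np.roll(f - pattern, (-sy, -sx), axis=(0, 1))
                         for f, (sy, sx) in zip(frames, shifts)], axis=0)
        # Pattern estimate: average residual after re-shifting the scene.
        pattern = np.mean([f - np.roll(scene, (sy, sx), axis=(0, 1))
                           for f, (sy, sx) in zip(frames, shifts)], axis=0)
        pattern -= pattern.mean()  # fix the scene/pattern constant ambiguity
    return pattern, scene
```

Because the scene moves between snapshots while the fixed pattern does not, the two components are separable (up to a constant), which is the essence of using image data in place of a calibration measurement.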
Grenander's pattern theory offers a unified approach to characterizing variability in complex systems. Automatic target recognition systems for forward-looking infrared sensors must be robust to three kinds of variability: (1) Geometric variability--Target appearances vary with their orientations and positions; (2) Image variability--Target appearances vary with their thermodynamic state, and natural backgrounds consist of widely varying textures; (3) Complexity/scene variability--The number of targets encountered will not be known in advance, and targets may enter or leave the scene at random times. Pattern theoretic algorithms based on jump-diffusion processes which accommodate variabilities (1) and (3) have been proposed. The diffusions account for (1) by estimating positions and orientations, and the jumps account for (3) by adding and removing hypothesized targets and changing target types. Here we extend the work to better accommodate (2) by summarizing the thermodynamic state of targets with a parsimonious set of variables which become nuisance parameters in the Grenander/Bayesian formulation.
Nomarski Differential-Interference-Contrast (DIC) microscopy is a widely used method for imaging transparent specimens that are not visible with ordinary light microscopy. DIC microscopy enhances contrast in the images of such specimens by converting differential phase changes to intensity variations via the method of light interference. These phase changes are introduced in light as it passes through regions of different refractive index within a specimen. In this paper, the development of an imaging model that describes 3D DIC imaging under partially-coherent illumination is presented. Our approach in deriving the model involves the derivation of a 2D model and its extension to three dimensions, assuming weak optical interactions within the specimen. The coherent limit of our 2D model coincides with existing DIC models. Model predictions generated with the coherent limit of the 3D model are compared to real DIC images acquired from imaging phantom specimens. It is shown that the model predictions resemble the real images obtained with the condenser aperture closed better than the images obtained with the aperture open. This result confirms the need for the general model that we have derived.
Our pattern theoretic approach to the automated understanding of complex scenes brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. Concentrating on an air-to-ground scenario, we postulate data likelihood models for a low-resolution, wide field-of-view millimeter wave radar (for detection) and a high-resolution, narrow field-of-view forward-looking infrared sensor (for recognition). The interaction between the sensors is governed by a jump-diffusion process which provides a mathematical foundation for saccadic detection and computationally efficient target hypothesizing during recognition. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. The methodology outlined may be applied to any scenario involving the fusion of low-resolution and high-resolution sensor data.
Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. An hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.
We take a pattern theoretic approach to recognizing and tracking ground-based targets in sequences of forward-looking infrared images acquired from an airborne platform. A rich set of transformations on objects represented by 3D faceted models are formulated to accommodate the variability found in FLIR imagery. An hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. A jump-diffusion process empirically generates the posterior distribution. The jumps accommodate the discrete aspects of the estimation problem, such as adding and removing hypothesized targets and changing target types. Between jumps, a diffusion process refines the hypothesis by following the gradient of the posterior. Since the likelihood function may include likelihoods from other sensors and may be defined over past and current times, interframe processing and sensor fusion are natural consequences of the pattern theoretic approach.
A model for the generation of reflectivity profiles is presented for use in a radar target recognition system. The data are assumed to come from two sensors: a high range-resolution radar and a tracking radar. The object is simultaneously tracked and identified using estimation theoretic methods by comparing a sequence of received range profiles to range profiles generated from surface templates. The tracking data are used to form priors on the position and orientation of the object. The templates consist of surface descriptions composed of electromagnetically large patches tiling the entire object. The predicted return is computed from several quantities. First, the reflectivity range profile is computed from the patches incorporating a shading function. The physical optics approximation is that patches not directly illuminated by the transmitted signal do not contribute to the return signal. Second, the reflected signal is approximated by the convolution of the transmitted signal with the range profile. Third, the receiver design yields the actual I-Q data available for processing.
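The first two steps (binning illuminated patch reflectivities into a range profile, then convolving with the transmitted signal) can be sketched as follows; the function names and the simple binning scheme are illustrative, not the model's actual discretization:

```python
import numpy as np

def range_profile(patch_ranges, patch_reflectivities, illuminated, n_bins, bin_size):
    """Accumulate patch reflectivities into discrete range bins, zeroing
    patches that the physical-optics shading rule marks as shadowed."""
    profile = np.zeros(n_bins)
    for r, a, lit in zip(patch_ranges, patch_reflectivities, illuminated):
        if lit:
            profile[int(r / bin_size)] += a
    return profile

def predicted_return(transmitted, profile):
    """Approximate the received signal as the convolution of the
    transmitted waveform with the target's reflectivity range profile."""
    return np.convolve(transmitted, profile)
```

Each illuminated patch thus contributes a delayed, scaled copy of the transmitted waveform, and the superposition of those copies is exactly what the convolution computes.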
New algorithms are summarized for recovering an object's intensity distribution from the second- or third-order autocorrelation function, or equivalently, the Fourier magnitude or bispectrum, of the intensity.