Samples a few micrometers in total size offer a challenge to both x-ray and electron tomography. X-ray tomography originated in imaging the human body at millimeter resolution, but synchrotron sources and Fresnel zone plates have since improved the resolution by more than four orders of magnitude, to about 20 nm in favorable cases. Further progress may require phase retrieval. Electron tomography originated with very thin samples (perhaps 100 nm thick), but samples more than 1 micrometer thick have recently been studied with conventional instruments. The study of thicker samples requires understanding tomography in the multiple-scattering regime.
Marine seismic imaging involves reconstructing subsurface reflectivity from scattered acoustic data, generally recorded near the ocean surface. The procedure can be framed as a linearized inverse scattering problem and is often called least-squares migration (LSM). LSM has been shown to be effective in optimizing the reconstruction of subsurface reflectivity, particularly in cases of missing or undersampled data or uneven subsurface illumination.
In standard LSM, the reflectivity model parameters are usually defined as a grid of point scatterers over the area or volume to be migrated. We propose an approach to pre-stack LSM using the Dual Tree Complex Wavelet Transform (DT-CWT) as a basis for the reflectivity.
Wavelet bases have a reputation for decorrelating, or approximately diagonalizing, a wide range of non-stationary signals. In LSM, diagonalization of the model space affords a more accurate yet practical representation of prior information about the subsurface reflectivity model parameters. The DT-CWT is chosen for its key advantages over other wavelet transforms: shift invariance, directional selectivity, perfect reconstruction, limited redundancy, and efficient computation.
A complex wavelet based LSM algorithm, derived in a Bayesian framework, is presented. Minimization of the least-squares cost function is performed in the wavelet domain rather than the standard reflectivity model domain.
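As a toy illustration of minimizing a least-squares cost function in a wavelet coefficient domain, the NumPy sketch below substitutes an orthonormal Haar transform for the DT-CWT and a simple band-limited convolution for the seismic modeling (demigration) operator; all names and parameters are illustrative assumptions, not the authors' implementation, and the Bayesian prior is omitted.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix (a simple stand-in for the DT-CWT)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                # coarse (scaling) part, built recursively
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-scale detail coefficients
    return np.vstack([top, bot]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
n = 64
W = haar_matrix(n)                              # rows form an orthonormal wavelet basis

# Toy linear modeling ("demigration") operator: convolution with a band-limited wavelet.
w = np.exp(-0.5 * (np.arange(-6, 7) / 1.5) ** 2)
w /= w.sum()
A = np.array([np.convolve(e, w, mode="same") for e in np.eye(n)]).T

m_true = np.zeros(n)
m_true[[20, 35, 36, 50]] = [1.0, -0.7, 0.7, 0.5]            # reflectivity spikes
d = A @ m_true + 0.01 * rng.standard_normal(n)              # "recorded" data

# Least-squares migration in the wavelet domain: minimize ||d - A W^T c||^2 over the
# wavelet coefficients c by gradient descent.
c = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(300):
    r = d - A @ (W.T @ c)                       # data residual
    c += step * (W @ (A.T @ r))                 # gradient step on the coefficients
m_hat = W.T @ c                                 # back to the reflectivity model domain
print("relative model error:", np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))
```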
The visualization of settled solid layers in vessels has many applications; of interest here is facilitating the efficient retrieval of high-level radioactive waste (HLW) from underground storage tanks at Department of Energy sites. The interface between the solids and the opaque liquid above cannot be visualized by ordinary optical imaging methods, hence our interest in using Electrical Resistance Tomography (ERT). The ideal arrangement for 3-D ERT imaging inside tanks is a multiple-ring electrode system, which is complex and expensive. This research describes ERT imaging with a single linear electrode array as a benchmark study to assess its viability for imaging the interface. Experiments focused on systematic analysis of many ERT tomograms of two simple settled solids layers (horizontal and inclined at 30°) formed from pulverized kaolin clay (approximately 10 μm particle diameter) and water. Visualization was done using commercial ERT software. Injection current and electrode orientation were the two system parameters varied and analyzed. The reproducibility, accuracy, and reliability of this ERT system will be presented.
In contrast to X-rays, ultrasound propagates along a curved path due to spatial variations in the refractive index of the medium. Thus, for ultrasonic time-of-flight (TOF) tomography, the propagation path of the ultrasound must be known in order to reconstruct the slice image correctly. In this paper, we propose a new path determination algorithm, which is essentially a numerical solution of the eikonal equation viewed as a boundary value problem. Because of the curved propagation paths, the image reconstruction step takes an algebraic approach such as ART or SART. Note that image reconstruction requires the propagation paths, yet the paths can be determined only if the image is known. We therefore take an iterative approach to resolve this apparent dilemma. First, the slice image is reconstructed assuming straight propagation paths. The paths are then computed from the most recently reconstructed image using our path determination algorithm and used to update the reconstructed image. Image reconstruction and path determination alternate until convergence. This approach is tested using both simulated data and a real concrete structure scanned by a mechanical scanner.
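The following sketch illustrates only the path-determination idea, under stated assumptions: first-arrival travel times are computed with a Dijkstra-style grid solver (a discrete stand-in for an eikonal boundary-value solver), and the bent ray is backtraced by descending the travel-time field. In the full algorithm these rays would feed an ART/SART update, which is not shown.

```python
import heapq
import numpy as np

def travel_times(slowness, src):
    """First-arrival travel times from src on an 8-connected grid
    (a discrete stand-in for an eikonal solver)."""
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    steps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    while heap:
        ti, (y, x) = heapq.heappop(heap)
        if ti > t[y, x]:
            continue
        for dy, dx in steps:
            ny_, nx_ = y + dy, x + dx
            if 0 <= ny_ < ny and 0 <= nx_ < nx:
                dist = np.hypot(dy, dx)
                tn = ti + dist * 0.5 * (slowness[y, x] + slowness[ny_, nx_])
                if tn < t[ny_, nx_]:
                    t[ny_, nx_] = tn
                    heapq.heappush(heap, (tn, (ny_, nx_)))
    return t

def trace_ray(t, rcv, step=0.5, max_steps=600):
    """Backtrace the bent ray from the receiver toward the source by descending
    the travel-time field (rays are orthogonal to the wavefronts)."""
    gy, gx = np.gradient(t)
    p = np.array(rcv, dtype=float)
    path = [p.copy()]
    for _ in range(max_steps):
        iy, ix = int(round(p[0])), int(round(p[1]))
        g = np.array([gy[iy, ix], gx[iy, ix]])
        if t[iy, ix] <= step or np.linalg.norm(g) == 0.0:
            break
        p = p - step * g / np.linalg.norm(g)
        p = np.clip(p, 0.0, np.array(t.shape, dtype=float) - 1.0)
        path.append(p.copy())
    return np.array(path)

# Toy medium: a slow (low-velocity) inclusion bends the rays around it.
slowness = np.ones((60, 60))
slowness[20:40, 20:40] = 2.0
t = travel_times(slowness, (5, 5))
ray = trace_ray(t, (55, 55))
print("travel time at receiver:", round(float(t[55, 55]), 2), "ray samples:", len(ray))
```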
Since event-related components in MEG (magnetoencephalography) studies are often buried in background brain activity and in environmental and sensor noise, a standard noise-reduction technique is to average over multiple stimulus-locked responses, or "epochs". However, this also removes event-related changes in oscillatory activity that are not phase-locked to the stimulus. To overcome this problem, we combine time-frequency analysis of individual epochs with cortically constrained imaging to produce dynamic images of brain activity on the cerebral cortex in multiple time-frequency bands. While the SNR in individual epochs is too low to see any but the strongest components, we average signal power across epochs to find event-related components on the cerebral cortex in each frequency band. To determine which of these components are statistically significant within an individual subject, we threshold the cortical images to control for false positives. This involves testing thousands of hypotheses (one per surface element and time-frequency band) for significant experimental effects. To control the number of false positives over all tests, we apply multiplicity adjustments by controlling the familywise error rate, i.e., the probability of one or more false positive detections across the entire cortex. Applying this test to each frequency band produces a set of cortical images showing significant event-related activity in each band of interest. We demonstrate this method in applications to high-density MEG studies of visual attention.
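A minimal sketch of the statistical step on synthetic data: band-limited power is averaged across epochs, and a sign-flip permutation test with the maximum statistic over all surface elements controls the familywise error rate. The cortically constrained imaging and the authors' specific time-frequency decomposition are not reproduced; all parameters below are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
fs, n_epochs, n_elem, n_samp = 250, 60, 500, 250       # sampling rate, epochs, surface elements, samples
data = rng.standard_normal((n_epochs, n_elem, n_samp))

# Inject a non-phase-locked 20 Hz power increase at a few "cortical" elements.
tvec = np.arange(n_samp) / fs
for ep in range(n_epochs):
    phase = rng.uniform(0, 2 * np.pi)
    data[ep, :10, n_samp // 2:] += 1.5 * np.sin(2 * np.pi * 20 * tvec[n_samp // 2:] + phase)

# Band-limited power per epoch (beta band, 15-30 Hz), post-stimulus minus baseline.
b, a = butter(4, [15 / (fs / 2), 30 / (fs / 2)], btype="band")
band = filtfilt(b, a, data, axis=-1)
post = (band[:, :, n_samp // 2:] ** 2).mean(axis=-1)
base = (band[:, :, : n_samp // 2] ** 2).mean(axis=-1)
diff = post - base                                      # epochs x elements

# Max-statistic sign-flip permutation test to control the familywise error rate.
t_obs = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n_epochs))
n_perm = 1000
max_null = np.empty(n_perm)
for p in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_epochs, 1))
    d = diff * signs
    max_null[p] = np.max(d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_epochs)))
threshold = np.quantile(max_null, 0.95)
print("FWER-corrected threshold:", threshold)
print("significant elements:", np.flatnonzero(t_obs > threshold))
```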
Diffuse optical tomography can be modelled as an optimization problem: find the absorption and scattering coefficients that minimize the error between the measured photon density function and the one computed from the coefficients. The problem comprises two steps: a forward solver that computes the photon density function and its Jacobian (with respect to the coefficients), and an inverse solver that updates the coefficients based on the photon density function and Jacobian obtained from the forward solver. The resulting problem is nonlinear and highly ill-posed, and it requires a large amount of computation to produce a high-quality image. For real-time application, it is therefore highly desirable to reduce the amount of computation needed. In this paper, a domain decomposition method is adopted to decrease the computational complexity of the problem. A two-level multiplicative overlapping domain decomposition method is used to compute the photon density function and its Jacobian in the inner loop, and is extended to compute the estimated changes in the coefficients in the outer loop. Local convergence of the two-level space decomposition for the outer loop is shown for the case in which the variance of the coefficients is small.
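As a minimal illustration of the inner-loop idea, the sketch below solves a 1-D diffusion-like model problem with an overlapping multiplicative Schwarz iteration over two subdomains; the two-level (coarse) correction, the Jacobian computation, and the outer coefficient update are omitted, and all values are illustrative.

```python
import numpy as np

# 1-D model diffusion problem: -(D u')' + mu_a u = q on [0,1], u(0)=u(1)=0,
# discretized with second-order finite differences.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
D, mu_a = 0.05, 1.0
q = np.exp(-((x - 0.3) ** 2) / 0.005)                   # source term

A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = -D / h ** 2
    A[i, i] = 2 * D / h ** 2 + mu_a
A[0, 0] = A[-1, -1] = 1.0                               # Dirichlet boundaries
b = q.copy()
b[0] = b[-1] = 0.0

# Two overlapping subdomains (multiplicative Schwarz): solve locally on each
# subdomain in turn, using the latest values of the other subdomain as boundary data.
left = np.arange(1, 60)                                 # interior indices of subdomain 1
right = np.arange(45, n - 1)                            # interior indices of subdomain 2 (overlap 45..59)
u = np.zeros(n)
for sweep in range(30):
    for idx in (left, right):
        comp = np.setdiff1d(np.arange(n), idx)          # everything outside the current subdomain
        rhs = b[idx] - A[np.ix_(idx, comp)] @ u[comp]
        u[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)

u_direct = np.linalg.solve(A, b)
print("Schwarz vs direct solve, max error:", np.abs(u - u_direct).max())
```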
Can we recover a signal f ∈ R^N from a small number of linear measurements? A series of recent papers has developed a collection of results showing that it is, surprisingly, possible to reconstruct certain types of signals accurately from limited measurements. In a nutshell, suppose that f is compressible in the sense that it is well approximated by a linear combination of M vectors taken from a known basis Ψ. Then, without knowing anything about the signal in advance, f can (very nearly) be recovered from only about M log N generic nonadaptive measurements. The recovery procedure is concrete and consists of solving a simple convex optimization program.
In this paper, we show that these ideas are of practical significance. Inspired by these theoretical developments, we propose a series of practical recovery procedures and test them on signals and images that are known to be well approximated in wavelet bases. We demonstrate empirically that an object can be recovered from about 3M to 5M projections onto generically chosen vectors with an accuracy as good as that of the ideal M-term wavelet approximation. We briefly discuss possible implications for data compression and medical imaging.
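A minimal sketch of the recovery program, assuming a signal that is sparse in the canonical basis rather than a wavelet basis: an M-sparse vector is recovered from a few hundred random Gaussian measurements by l1-regularized least squares solved with FISTA, a simple numerical stand-in for the convex program discussed above. Sizes and the regularization weight are illustrative.

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(3)
N, M, K = 512, 20, 160                      # signal length, sparsity, number of measurements
f = np.zeros(N)
f[rng.choice(N, M, replace=False)] = rng.standard_normal(M)    # M-sparse signal

Phi = rng.standard_normal((K, N)) / np.sqrt(K)                 # generic nonadaptive measurements
y = Phi @ f

# FISTA for min_x 0.5*||y - Phi x||^2 + lam*||x||_1 (basis pursuit corresponds to lam -> 0).
lam = 0.005
L = np.linalg.norm(Phi, 2) ** 2                                # Lipschitz constant of the gradient
x = np.zeros(N)
z, t_k = x.copy(), 1.0
for _ in range(500):
    x_new = soft(z + (Phi.T @ (y - Phi @ z)) / L, lam / L)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2)) / 2.0
    z = x_new + ((t_k - 1.0) / t_new) * (x_new - x)
    x, t_k = x_new, t_new

print("relative recovery error:", np.linalg.norm(x - f) / np.linalg.norm(f))
```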
In this paper we present a source path for the purpose of exact cone-beam reconstruction using a C-arm X-ray imaging system. The proposed path consists of two intersecting segments, each of which is a short-scan. Any C-arm capable of a short-scan sweep can thus be used to obtain data on our proposed source path as well, since it only requires an additional sweep on a tilted plane. This tilt can be achieved by either using the propeller axis of mobile C-arms, or the vertical axis of ceiling mounted C-arms. While the individual segments are only capable of exact reconstruction in their mid-plane, we show that the combined path is capable of exact reconstruction within an entire volumetric region. In fact, we show that the largest sphere that can be captured in the field of view of the C-arm can be exactly reconstructed if the tilt between the planes is at least equal to the cone-angle of the system. For the purpose of cone-beam inversion we use a generalized cone-beam filtered backprojection algorithm (CB-FBP). The exactness of this method relies on the design of a set of redundancy weights, which we explicitly evaluate for the proposed dual short-scan source path.
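The redundancy weights derived in the paper for the dual short-scan path are not reproduced here. As a concrete illustration of what redundancy weighting looks like for a single short scan, the sketch below evaluates the classical Parker weights for fan-beam data acquired over [0, π + 2Γ]; the geometry parameters are arbitrary.

```python
import numpy as np

def parker_weights(betas, gammas, gamma_max):
    """Classical Parker redundancy weights for a fan-beam short scan covering
    beta in [0, pi + 2*gamma_max] (illustration only; not the dual short-scan
    weights derived in the paper)."""
    B, G = np.meshgrid(betas, gammas, indexing="ij")
    w = np.ones_like(B)
    start = B < 2.0 * (gamma_max - G)                 # rays measured again at the end of the scan
    end = B > np.pi - 2.0 * G                         # rays already measured at the start of the scan
    w[start] = np.sin(np.pi / 4.0 * B[start] / (gamma_max - G[start])) ** 2
    w[end] = np.sin(np.pi / 4.0 * (np.pi + 2.0 * gamma_max - B[end]) / (gamma_max + G[end])) ** 2
    return w

gamma_max = np.deg2rad(15.0)                          # half fan-angle
betas = np.linspace(0.0, np.pi + 2.0 * gamma_max, 360)   # source angles of the short scan
gammas = np.linspace(-gamma_max, gamma_max, 64)          # ray (fan) angles
w = parker_weights(betas, gammas, gamma_max)
# Complementary rays measured twice should receive weights that sum to one.
print(w.shape, float(w.min()), float(w.max()))
```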
We present a new approach to pattern matching that is applicable to very high-dimensional features. The approach is based on maximizing a novel non-linear measure of "mutual information" constructed from the k-nearest-neighbor graph over the feature vector set.
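The specific non-linear measure is not given in this abstract, so the sketch below only illustrates the underlying ingredient: a power-weighted edge-length functional of the k-nearest-neighbor graph over joint feature vectors, which serves as an entropy estimate and hence as a matching score (a shorter joint graph indicates stronger dependence). All parameters and data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph_length(Z, k=5, gamma=0.5):
    """Total power-weighted edge length of the k-nearest-neighbor graph.
    Such length functionals estimate Renyi entropy of the samples (up to
    constants), which underlies kNN-graph matching criteria."""
    tree = cKDTree(Z)
    dists, _ = tree.query(Z, k=k + 1)      # the first neighbor is the point itself
    return float(np.sum(dists[:, 1:] ** gamma))

rng = np.random.default_rng(5)
n, d = 2000, 8                              # samples and (high) feature dimension
X = rng.standard_normal((n, d))             # reference image features
Y_aligned = X + 0.1 * rng.standard_normal((n, d))       # dependent (aligned) features
Y_misaligned = rng.standard_normal((n, d))              # independent (misaligned) features

# Shorter joint kNN-graph length -> more dependence between the feature sets.
for name, Y in [("aligned", Y_aligned), ("misaligned", Y_misaligned)]:
    Z = np.hstack([X, Y])
    print(name, "joint kNN graph length:", knn_graph_length(Z))
```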
In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed which consist of systematically modifying the input such that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ the regularization framework to ensure that the resulting masks are close-to-binary as well as simple and easy to fabricate.
Finally, we provide insight into two additional applications of pre-warping techniques. The first is 'e-beam lithography', used for fabricating nano-scale structures, and the second is 'electronic visual prosthesis', which aims at providing limited vision to the blind by means of a retinally implanted prosthetic chip capable of electrically stimulating retinal neurons.
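A minimal inverse-lithography sketch under stated assumptions: the aerial image is modeled as a Gaussian convolution, the high-contrast recording as a smooth sigmoid threshold, and the pixel mask is updated by gradient descent on the squared output error plus a quadratic penalty pulling pixels toward 0/1. The kernel, the sigmoid, and the penalty are stand-ins, not the paper's optical model or regularizer.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

SIGMA, SLOPE, THRESH = 2.0, 10.0, 0.5

def printed(mask):
    """Forward model: aerial image (convolution) followed by a smooth threshold
    standing in for the high-contrast recording step."""
    aerial = gaussian_filter(mask, SIGMA)
    return 1.0 / (1.0 + np.exp(-SLOPE * (aerial - THRESH)))

# Desired binary pattern: a small cross.
n = 64
target = np.zeros((n, n))
target[28:36, 10:54] = 1.0
target[10:54, 28:36] = 1.0

mask = target.copy()                        # initialize the mask with the desired pattern
step, lam = 0.5, 0.02
for _ in range(300):
    out = printed(mask)
    err = out - target
    # Gradient of ||out - target||^2 through the sigmoid and the (self-adjoint) blur.
    grad = gaussian_filter(2.0 * err * SLOPE * out * (1.0 - out), SIGMA)
    # Quadratic penalty m^2 (1-m)^2 pulling pixel values toward 0 or 1 (close-to-binary mask).
    grad += lam * 2.0 * mask * (1.0 - mask) * (1.0 - 2.0 * mask)
    mask = np.clip(mask - step * grad, 0.0, 1.0)

print("pattern error with plain mask :", np.abs(printed(target) - target).sum())
print("pattern error with warped mask:", np.abs(printed(mask) - target).sum())
```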
There are many significant applications of Fourier Shape Descriptor characterization of region boundaries in images. Whenever it is desirable to compare two shapes independent of rotation, starting point, or magnification, Fourier Shape Descriptors (FSDs) have merit. FSDs have been proposed for the automatic assessment of packaging, for checking the alignment of objects in automation, for characterizing visual objects in video coding, and for comparing regions in biomedical images. This paper presents a technique for parameterizing the boundary of a region of interest (ROI) by casting rays from the center of mass of the ROI outward to points in the image that lie on the edge of the ROI. This is essentially another technique for obtaining the R-S parameterization. At each step, the process uses sections of the boundary whose radii are a simple function of theta. It then merges these simple boundary sections to create a periodic complex-valued function of the boundary, parameterized by a parameter s that is not required to be a function of theta. Once the complex periodic sequence is obtained, its Fourier transform is taken, yielding the corresponding Fourier Shape Descriptors. Since the technique seeks the intersection of a known ray with the boundary (it is not a boundary-following method), its worst-case behavior is easily calculated, making it suitable for real-time applications. The technique is robust to incomplete object boundaries and can readily be extended to three-dimensional datasets (spherical harmonics). A simpler version of the technique is currently being used for automatic selection of the axis of symmetry in magnetic resonance images of the brain, and we will demonstrate the technique on these types of datasets, although it has general application.
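For a star-shaped ROI, the ray-casting parameterization and the descriptors can be sketched as follows (the merging of simple boundary sections described above is not reproduced): cast rays from the centroid, keep the outermost boundary crossing per ray, Fourier transform the resulting complex sequence, and normalize the descriptors for translation, scale, starting point, and rotation.

```python
import numpy as np

def ray_cast_boundary(roi, n_rays=128):
    """Parameterize the ROI boundary by casting rays from its centroid and
    keeping the outermost ROI pixel along each ray (star-shaped ROIs)."""
    ys, xs = np.nonzero(roi)
    cy, cx = ys.mean(), xs.mean()
    r_max = np.hypot(roi.shape[0], roi.shape[1])
    pts = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        radii = np.linspace(0.0, r_max, 512)
        y = np.round(cy + radii * np.sin(theta)).astype(int)
        x = np.round(cx + radii * np.cos(theta)).astype(int)
        ok = (y >= 0) & (y < roi.shape[0]) & (x >= 0) & (x < roi.shape[1])
        inside = np.zeros_like(radii, dtype=bool)
        inside[ok] = roi[y[ok], x[ok]] > 0
        r = radii[inside].max()                       # outermost boundary crossing
        pts.append((cx + r * np.cos(theta)) + 1j * (cy + r * np.sin(theta)))
    return np.array(pts)

def fourier_shape_descriptors(boundary):
    """FFT of the complex boundary sequence, normalized for translation
    (drop DC), scale (divide by |F[1]|), and starting point/rotation (magnitudes)."""
    F = np.fft.fft(boundary)
    F[0] = 0.0
    return np.abs(F) / np.abs(F[1])

# Toy ROI: a filled ellipse.
yy, xx = np.mgrid[0:128, 0:128]
roi = (((xx - 64) / 40.0) ** 2 + ((yy - 64) / 25.0) ** 2) <= 1.0
fsd = fourier_shape_descriptors(ray_cast_boundary(roi))
print("first few descriptors:", np.round(fsd[:6], 3))
```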
In general, image restoration problems are ill posed and need to be regularized. For applications such as real-time video, fast restoration is also needed to keep up with the frame rate. Restoration based on 2-D FFTs provides a fast implementation, assuming a constant regularization term over the image. Unfortunately, this assumption creates significant ringing artifacts as well as blurred edges in the restored image. Shift-variant regularization, on the other hand, reduces edge artifacts and provides better quality, but it destroys the structure that makes use of the 2-D FFT possible, so the computational efficiency of the FFT is lost. In this paper, we use a Bayesian approach, maximum a posteriori (MAP) estimation, to compute an estimate of the original image given the blurred image. To avoid smoothing the edges, shift-variant regularization must be used, and the Huber-Markov random field model is applied to preserve discontinuities at edges. For fast minimization of the resulting cost function, a new algorithm based on the Sherman-Morrison matrix inversion lemma is developed. This results in a restored image with good edge preservation and less computation. Experiments show restored images with sharper edges. Convergence is fast, and the computational speed can be improved considerably by breaking the image into subimages.
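A minimal sketch of MAP restoration with a Huber-Markov prior, assuming a symmetric Gaussian blur and plain gradient descent; the Sherman-Morrison-based fast solver and the subimage decomposition are not shown, and all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def huber_grad(t, T):
    """Derivative of the Huber function: quadratic below T, linear above."""
    return np.where(np.abs(t) <= T, 2.0 * t, 2.0 * T * np.sign(t))

rng = np.random.default_rng(7)
x_true = np.zeros((96, 96))
x_true[24:72, 24:72] = 1.0                                   # piecewise-constant scene
y = gaussian_filter(x_true, 1.5) + 0.02 * rng.standard_normal(x_true.shape)

lam, T, step = 0.05, 0.05, 0.4
x = y.copy()
for _ in range(150):
    # Data term gradient: H^T (H x - y), with H a symmetric Gaussian blur.
    g = gaussian_filter(gaussian_filter(x, 1.5) - y, 1.5)
    # Huber-MRF prior on horizontal and vertical first differences
    # (edge-preserving: large differences are penalized only linearly).
    dx = np.diff(x, axis=1)
    dy = np.diff(x, axis=0)
    g[:, :-1] -= lam * huber_grad(dx, T)
    g[:, 1:]  += lam * huber_grad(dx, T)
    g[:-1, :] -= lam * huber_grad(dy, T)
    g[1:, :]  += lam * huber_grad(dy, T)
    x -= step * g

print("RMSE blurred :", np.sqrt(np.mean((y - x_true) ** 2)))
print("RMSE restored:", np.sqrt(np.mean((x - x_true) ** 2)))
```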
The sharpness of a printed image may suffer due to the presence of material layers above and below the dye layers. These layers contribute to scattering and surface reflections that make the degradation in sharpness density-dependent. We present data that illustrate this effect and model the phenomenon numerically. A digital non-linear sharpening filter is proposed to compensate for this density-dependent blurring. The support and shape of this filter are constrained to lie in a space spanned by a set of basis filters that can be computed efficiently. Burt and Adelson's Laplacian pyramid is used to develop an efficient scale-recursive algorithm in which the image is decomposed into high-pass basis images in a fine-to-coarse scale sweep, and the sharpened image, along with a local density image, is subsequently synthesized in a coarse-to-fine scale sweep using these basis images. The local density image is employed, in combination with a scale-dependent gain function, to modulate the high-pass basis images in a space-varying fashion. A robust method is proposed for estimating the gain functions directly from measured data. Experimental results demonstrate that the proposed algorithm successfully compensates for media-related, density-dependent blurring.
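A rough sketch of the pyramid machinery, with hand-picked gains rather than gains estimated from measured data: Gaussian/Laplacian pyramids are built fine-to-coarse, and the image is resynthesized coarse-to-fine with each band-pass level boosted by a gain that grows with the local density (approximated here by the upsampled coarser level). All constants are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_pyramids(img, levels=4):
    """Gaussian and Laplacian (band-pass) pyramids, fine-to-coarse."""
    gauss = [img]
    for _ in range(levels):
        gauss.append(zoom(gaussian_filter(gauss[-1], 1.0), 0.5, order=1))
    lap = [gauss[i] - zoom(gauss[i + 1], 2.0, order=1)[: gauss[i].shape[0], : gauss[i].shape[1]]
           for i in range(levels)]
    return gauss, lap

def sharpen(img, levels=4, base_gain=0.6):
    gauss, lap = build_pyramids(img, levels)
    out = gauss[levels]                                   # coarsest level = local density estimate
    for i in reversed(range(levels)):
        up = zoom(out, 2.0, order=1)[: lap[i].shape[0], : lap[i].shape[1]]
        # Density-dependent, scale-dependent gain: boost high-pass detail more
        # where the local density (the upsampled coarser level) is high.
        gain = 1.0 + base_gain * (levels - i) * up
        out = up + gain * lap[i]
    return out

rng = np.random.default_rng(11)
img = gaussian_filter(rng.random((128, 128)), 3.0)        # smooth synthetic print density
print("input std :", img.std())
print("output std:", sharpen(img).std())
```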
We present a novel local image registration method based on adaptive filtering techniques. The proposed method utilizes an adaptive filter to track smoothly varying, local changes in the motion field between the images. Image pixels are traversed in a scanning order established by Hilbert curves to preserve contiguity in the 2-D image plane. We have performed experiments using both simulated images and real images captured by a digital camera. Experimental results show that the proposed adaptive filtering framework gives superior performance compared to global 2-D parametric registration and the Lucas-Kanade optical flow technique when the image motion is mostly translational. The simulation experiments show that the proposed image registration technique can also handle small amounts of rotation, scale change, and perspective change in the motion field.
This paper presents a novel multi-channel image restoration algorithm. The main idea is to develop practical approaches to reducing optical blur in noisy observations produced by the sensor of a camera phone. An iterative deconvolution is applied separately to each color channel directly on the raw data obtained from the camera sensor. We use a modified iterative Landweber algorithm combined with an adaptive denoising technique. The adaptive denoising is based on Local Polynomial Approximation (LPA) operating on data windows selected by the rule of Intersection of Confidence Intervals (ICI). In order to avoid false coloring due to independent per-channel filtering in RGB space, we have integrated a novel regularization mechanism that smoothly attenuates the high-pass filtering near saturated regions. Through simulations, it is shown that the proposed filtering is robust with respect to errors in the point-spread function (PSF) and in the assumed noise model. Experimental results show that the proposed processing technique produces a significant improvement in perceived image resolution.
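A per-channel sketch of the iterative structure, assuming a symmetric Gaussian PSF and using plain Gaussian smoothing as a crude stand-in for the LPA-ICI adaptive denoiser; the saturation-aware regularization is not included, and all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def landweber_restore(y, psf_sigma, beta=1.0, iters=30, denoise_sigma=0.4, denoise_every=10):
    """Landweber iteration x <- x + beta * H^T (y - H x) for a symmetric Gaussian
    PSF, with a mild smoothing pass every few iterations standing in for the
    adaptive (LPA-ICI) denoiser."""
    x = y.copy()
    for k in range(1, iters + 1):
        residual = y - gaussian_filter(x, psf_sigma)            # y - Hx
        x = x + beta * gaussian_filter(residual, psf_sigma)     # H^T equals H for a symmetric blur
        if k % denoise_every == 0:
            x = gaussian_filter(x, denoise_sigma)               # crude regularizing denoising step
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(13)
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 0.8
truth[28:36, 28:36] = 0.2
raw = {ch: gaussian_filter(truth, 1.2) + 0.01 * rng.standard_normal(truth.shape)
       for ch in ("R", "G", "B")}                               # raw data, one array per color channel

restored = {ch: landweber_restore(y, psf_sigma=1.2) for ch, y in raw.items()}
print("blurred  RMSE:", np.sqrt(np.mean((raw["G"] - truth) ** 2)))
print("restored RMSE:", np.sqrt(np.mean((restored["G"] - truth) ** 2)))
```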
Studies in experimental neuroscience have found evidence that the shape of the human cortical surface may be related to neural function. This paper presents a morphological study of cortical surfaces. The work consists of four major elements. First, we collect a sufficient number of 3D MRI datasets of brains belonging to different categories of people. Second, we extract the cortical surfaces from the 3D MRI datasets. Third, we apply statistical analysis to characterize the morphological features of the cortical surfaces. The last component is 3D visualization to illustrate the shapes and characteristics of the cortical surfaces in an interactive environment.
In this paper, we propose a new region-based method for detecting mass tumors in digital mammograms. Our method uses principal component analysis (PCA) to project the image data into a subspace of significantly reduced dimensionality using an optimal linear transformation. After the transformation, classification in the subspace is performed using a nearest neighbor classifier. We consider the detection of only mass abnormalities in this study; microcalcifications, spiculated lesions, and other abnormalities are not considered. We implemented our method and achieved a 93% correct detection rate for mass abnormalities in our tests.
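A minimal sketch of the PCA-plus-nearest-neighbor pipeline on synthetic ROI patches (a bright blob standing in for a mass); the mammogram preprocessing and region extraction are not shown, and the data are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(17)

def make_patch(has_mass):
    """Synthetic 16x16 ROI: smooth background, plus a bright blob for a 'mass'."""
    yy, xx = np.mgrid[0:16, 0:16]
    patch = 0.3 * rng.random((16, 16))
    if has_mass:
        cy, cx = rng.uniform(5, 11, size=2)
        patch += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
    return patch.ravel()

X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)
train, test = np.arange(300), np.arange(300, 400)

# PCA: project onto the leading principal components of the training patches.
mean = X[train].mean(0)
U, S, Vt = np.linalg.svd(X[train] - mean, full_matrices=False)
P = Vt[:8]                                   # linear transform to an 8-D subspace
Z = (X - mean) @ P.T

# Nearest-neighbor classification in the reduced subspace.
pred = []
for i in test:
    j = train[np.argmin(np.linalg.norm(Z[train] - Z[i], axis=1))]
    pred.append(y[j])
print("correct detection rate:", np.mean(np.array(pred) == y[test]))
```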
Obtaining high-quality ultrasound images at high frame rates is of great medical importance, especially in applications where tissue motion is significant (e.g., the beating heart). Dynamic focusing and dynamic apodization can improve image quality significantly, and they have been implemented on the receive beam in state-of-the-art medical ultrasound systems. However, implementing dynamic focusing and dynamic apodization on the transmit beam compromises frame rate. We present a novel transmit apodization scheme in which a continuum of focal points is obtained in a single transmission, and uniform sensitivity and a uniform point spread function are achieved over a very large range without reducing frame rate. Preliminary simulations demonstrate the significant promise of the new technique.
Cardiac magnetic resonance studies have led to a greater understanding of the pathophysiology of ischemic heart disease. Manual segmentation of myocardial borders, a major task in the data analysis of these studies, is a tedious and time-consuming process subject to observer bias. Automated segmentation reduces the time needed to process studies and removes observer bias. We propose an automated segmentation algorithm that uses an active surface to capture the endo- and epicardial borders of the left ventricle in a mouse heart. The surface is initialized as the ellipsoid with the maximal gradient inverse coefficient of variation (GICOV) value. The GICOV is the mean divided by the normalized standard deviation of the image intensity gradient in the outward normal direction along the surface. The GICOV is maximal when the surface lies along strong, constant gradients. The surface is then evolved until it maximizes the GICOV value subject to shape constraints. The problem is formulated in a Bayesian framework and is implemented using a Markov chain Monte Carlo technique.
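The GICOV itself can be sketched as follows for a circular contour on a synthetic image: sample the contour, evaluate the image gradient in the outward normal direction at each sample, and divide its mean by its normalized standard deviation. The surface evolution, shape constraints, and MCMC sampling are not shown, and the image is illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def gicov(image, cy, cx, a, b, n_pts=100):
    """GICOV of an ellipse contour: mean of the outward-normal intensity gradient
    along the contour divided by its normalized standard deviation."""
    gy, gx = np.gradient(image)
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    ys, xs = cy + b * np.sin(t), cx + a * np.cos(t)
    # Outward normal of the ellipse (normalized gradient of its implicit equation).
    ny, nx = np.sin(t) / b, np.cos(t) / a
    norm = np.hypot(ny, nx)
    ny, nx = ny / norm, nx / norm
    dg = (map_coordinates(gy, [ys, xs], order=1) * ny +
          map_coordinates(gx, [ys, xs], order=1) * nx)
    return dg.mean() / (dg.std(ddof=1) / np.sqrt(n_pts))

# Synthetic image: a dark disc of radius 20 on a bright background, plus noise.
rng = np.random.default_rng(29)
yy, xx = np.mgrid[0:128, 0:128]
disc = ((yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2)
img = gaussian_filter(1.0 - disc.astype(float), 2.0) + 0.01 * rng.standard_normal((128, 128))

# The GICOV is maximal when the contour lies on strong, constant gradients (radius 20).
for radius in (14, 20, 26):
    print("radius", radius, "GICOV:", round(gicov(img, 64, 64, radius, radius), 2))
```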
The spectral power distribution (SPD) of the light reflected from a matte surface patch in a three-dimensional complex scene depends not only on the surface reflectance of the patch but also on the SPD of the light incident on the patch. When there are multiple light sources in the scene that differ in location, SPD, and spatial extent, the SPD of the incident light depends on the location and the orientation of the patch. Recently, we have examined how well observers can recover surface color in rendered, binocularly-viewed scenes with more than one light source. To recover intrinsic surface color, observers must solve an inverse problem, effectively estimating the light sources present in the scene and the light from each that reaches the surface patch. We will formulate the forward and inverse problems for surface color perception in three-dimensional scenes and present experimental evidence that human observers can solve such problems [1-3]. We will also discuss how human observers estimate the spatial distribution of light sources and their chromaticities from the scene itself.
[1] Boyaci, Doerschner, and Maloney (2004), Journal of Vision, 4, 664-679.
[2] Doerschner, Boyaci, and Maloney (2004), Journal of Vision, 4, 92-105.
[3] Boyaci, Doerschner, and Maloney (2004), AIC'05, submitted.
Binocular reconstruction of a 3D shape is an ill-conditioned inverse problem: in the presence of visual and oculomotor noise, reconstructions based solely on visual data are very unstable. A question therefore arises about the nature of the a priori constraints that would lead to accurate and stable solutions. Our previous work showed that planarity of contours, symmetry of an object, and minimum variance of angles are useful priors in binocular reconstruction of polyhedra. Specifically, our algorithm begins by producing a 3D reconstruction from one retinal image by applying the priors. The second image (binocular disparity) is then used to correct the monocular reconstruction. In our current study, we performed psychophysical experiments to test the importance of these priors. Subjects were asked to recognize shapes of 3D polyhedra from unfamiliar views. Hidden edges of the polyhedra were removed. Recognition performance, measured by the detectability measure d′, was high when shapes satisfied the regularity constraints and low otherwise. Furthermore, binocular recognition performance was highly correlated with monocular performance. The main aspects of our model will be illustrated by a demo in which binocular disparity and monocular priors are put in conflict.
We investigate design and estimation issues in using the standard color management profile architecture for general custom image enhancement. Color management profiles are a flexible architecture for describing a mapping from an original colorspace to a new colorspace. We investigate the use of this same architecture for describing color enhancements that could be defined by a non-technical user from samples of the mapping, just as color management is based on samples of a mapping between an original colorspace and a new colorspace. As an example enhancement, we work with photos of the 24-patch Macbeth color chart under different illuminations, with the goal of defining transformations that would take, for example, a studio D65 image and reproduce it as though it had been taken during a particular sunset. The color management profile architecture includes a look-up table and interpolation. We concentrate on estimating the look-up-table points from a minimal number of color enhancement samples (comparing interpolative and extrapolative statistical learning techniques), and we evaluate the feasibility of using the color management architecture for custom enhancement definitions.
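A minimal sketch of populating profile look-up-table nodes from a handful of (source, target) color samples: an affine color map is fit by least squares (one simple interpolative learner; the paper compares several) and then evaluated on a regular grid of RGB nodes. The "Macbeth" samples here are synthesized, not measured.

```python
import numpy as np

rng = np.random.default_rng(19)

# 24 "Macbeth patch" RGB pairs: source colors and their appearance under the
# target illumination (synthesized here by a hidden reference transform plus noise).
src = rng.random((24, 3))
M_ref = np.array([[1.1, 0.05, 0.0], [0.0, 0.9, 0.05], [0.02, 0.0, 0.7]])
dst = np.clip(src @ M_ref.T + np.array([0.02, 0.0, 0.05])
              + 0.01 * rng.standard_normal((24, 3)), 0.0, 1.0)

# Fit an affine color map dst ~ A*src + b by least squares from the samples.
X = np.hstack([src, np.ones((24, 1))])
coef, *_ = np.linalg.lstsq(X, dst, rcond=None)

# Populate a small 3D LUT (the profile table) by evaluating the fitted map on a
# regular grid of RGB nodes; the profile's interpolation stage does the rest.
n_nodes = 9
grid = np.linspace(0.0, 1.0, n_nodes)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
nodes = np.stack([r, g, b], axis=-1).reshape(-1, 3)
lut = np.clip(np.hstack([nodes, np.ones((nodes.shape[0], 1))]) @ coef, 0.0, 1.0)
lut = lut.reshape(n_nodes, n_nodes, n_nodes, 3)

print("LUT shape:", lut.shape)
print("fit residual (RMS):", np.sqrt(np.mean((X @ coef - dst) ** 2)))
```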
Digital still cameras typically use a single optical sensor overlaid with RGB color filters to acquire a scene. Only one of the three primary colors is observed at each pixel and the full color image must be reconstructed (demosaicked) from available data. We consider the problem of demosaicking for images sampled in the commonly used Bayer pattern.
The full-color image is obtained from the sampled data as a MAP estimate. To exploit the higher sampling rate of the green channel in defining the presence of edges in the blue and red channels, a Gaussian MRF model that considers the presence of edges in all three color channels is used to define the prior. Pixel values and edge estimates are computed iteratively using an algorithm based on Besag's iterated conditional modes (ICM) algorithm. The reconstruction alternates between edge detection and spatial smoothing. The proposed algorithm is applied to a variety of test images, and its performance is quantified using the CIELAB delta E measure.
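A reduced sketch of the ICM idea, assuming a Gaussian MRF without the edge process: each unobserved pixel in a channel is repeatedly set to the mean of its neighbors (the conditional mode under a quadratic smoothness prior) while the observed Bayer samples stay clamped. The green-informed edge prior and the CIELAB evaluation are not included.

```python
import numpy as np

def bayer_masks(shape):
    """Boolean sampling masks for an RGGB Bayer pattern."""
    m = {c: np.zeros(shape, dtype=bool) for c in "rgb"}
    m["r"][0::2, 0::2] = True
    m["g"][0::2, 1::2] = True
    m["g"][1::2, 0::2] = True
    m["b"][1::2, 1::2] = True
    return m

def icm_channel(obs, mask, iters=50):
    """ICM for a Gaussian MRF: each unobserved pixel is set to the average of
    its 4-neighbors; observed pixels are clamped to the sensor values."""
    x = np.where(mask, obs, obs[mask].mean())
    for _ in range(iters):
        padded = np.pad(x, 1, mode="edge")
        nb_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        x = np.where(mask, obs, nb_mean)
    return x

h, w = 64, 64
truth = np.stack([np.linspace(0, 1, w)[None, :].repeat(h, 0),
                  np.linspace(0, 1, h)[:, None].repeat(w, 1),
                  0.5 * np.ones((h, w))], axis=-1)          # smooth full-color test image

masks = bayer_masks((h, w))
mosaic = np.zeros((h, w))
for i, c in enumerate("rgb"):
    mosaic[masks[c]] = truth[..., i][masks[c]]              # one sample per pixel (Bayer CFA)

demosaicked = np.stack([icm_channel(np.where(masks[c], mosaic, 0.0), masks[c])
                        for c in "rgb"], axis=-1)
print("demosaicking RMSE:", np.sqrt(np.mean((demosaicked - truth) ** 2)))
```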