In this paper, we present a general study of the kinematics of deformable non-singular manifolds of codimension 1 evolving according to a first-order dynamics within a d-dimensional space, in terms of their intrinsic geometric properties. We formulate the local equations which describe instantaneous variations of their main differential and integral characteristics. In particular, a physical interpretation of curvature evolution in terms of reaction-diffusion-propagation processes is developed. Delocalizing these equations within the time domain leads to describing local evolution along the streamlines of the deformation field. Within this framework, a local ergodicity property of curvature processes is underlined. Integrating further within the space domain leads to global evolution theorems. These results are then applied to the kinematic study of 2D and 3D active models of the inhomogeneous membrane/thin-plate-under-pressure type (g-snakes) when their optimization is performed via a purely dissipative Lagrangian deformation process. They yield a complete mathematical characterization of the instantaneous behavior of snake-like models.
The thin-plate spline, originally introduced as a technique for surface interpolation, serves as a very useful image warping tool for maps driven by a small set of landmarks (discrete geometric points that correspond biologically between forms). Earlier work extended this formalism to incorporate information about the correspondence of edge-directions, or edgels, at the landmarks. The constrained maps are singular perturbations of a spline on the assigned landmarks, corresponding to its augmentation by other landmarks at indeterminate, ultimately infinitesimal separation. The present manuscript recasts that earlier analysis in a new notation that greatly eases the extension to arbitrary linear (and linearizable) constraints on the derivatives of the warping function. We sketch the varieties of elementary warps to which these constraints lead and show some of their combinations. The algebra into which we have cast this extension seems capable of leading us even further beyond landmarks, to incorporate information from derivatives of higher order than the first. This generalization may enrich the 'multiscale' approach to medical image analysis and may provide a bridge between two current approaches to the deformable-template problem--that of low-dimensional, relatively global features and that of parameters distributed locally on a grid--which are not currently linked by any effective formalism.
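As a hedged illustration (not the authors' implementation), the unconstrained thin-plate spline map on landmarks alone can be sketched as follows; the function names and the kernel normalization U(r) = r^2 log r^2 are our own choices:

```python
import numpy as np

def tps_warp_coeffs(src, dst):
    """Solve for 2-D thin-plate spline coefficients mapping src landmarks to dst.

    src, dst: (n, 2) arrays of corresponding landmarks.
    Returns (w, a): radial-basis weights (n, 2) and affine part (3, 2).
    """
    n = len(src)
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d**2 + 1e-300), 0.0)  # U(r) = r^2 log r^2
    P = np.hstack([np.ones((n, 1)), src])                   # affine monomials 1, x, y
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]

def tps_apply(pts, src, w, a):
    """Evaluate the fitted spline map at query points pts (m, 2)."""
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    U = np.where(d > 0, d**2 * np.log(d**2 + 1e-300), 0.0)
    return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
```

The edgel-constrained warps of the abstract arise as singular perturbations of this system; the sketch shows only the base landmark interpolant.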
Our topic is the construction of a detailed parametric average of three-dimensional surface form from a sample of specimens labelled in accordance with the features of that average. The labelling, which takes the conceptual and topological form of a smooth wire mesh, is a hybrid of biological information and linearized geometric schemes applying to a variety of geometric elements: points, curves, and surface patches. We do not 'smooth' the individual specimens of a sample; instead, we carefully restrict the scope of differentiations of empirical geometric elements so as to apply only after specimens have been averaged. Likewise, while we carry out all averaging in a single Cartesian coordinate system, we carefully 'unwarp' the largest-scale aspects of biological variability within this system before proceeding with the averaging of features at smaller scales. The resulting algorithms underlie useful visualizations of 'typical' or 'normative' anatomy and its variability for consideration in formal computations of statistical atypicality and optimization of interventions and should be crucial to future approaches to image analysis via the automatic deformation of geometrically extended templates.
A system to measure the surface shape of the human body has been constructed. The system uses a fringe pattern generated by projection of multi-stripe structured light. The optical methodology used is fully described and the algorithms used to process acquired digital images are outlined. The system has been applied to the measurement of the shape of the human back in scoliosis.
We are developing noninvasive methods to evaluate bone structure in osteoporosis as demonstrated on conventional radiographs of the spine. One of these methods involves estimating the fractal dimension of vertebral bodies in osteoporotic patients with fracture(s) elsewhere in the spine, compared to patients without spine fracture. Fractal dimension was estimated using a surface 'area' method based on pixel gray-level 'heights'. Analysis of the data by this method suggested a multifractal model of bone structure yielding two fractal dimensions for each case. The ability of these fractal dimensions to distinguish cases with fracture elsewhere in the spine from those with no spine fracture was evaluated using receiver operating characteristic (ROC) curve analysis. An Az value of 0.87 using one of these fractal dimensions was significantly better than the Az of 0.60 using bone mass measurements for the same patients. The results suggest the potential value of a fractal-dimension method for improved prediction of fracture risk in osteoporosis patients.
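The surface-'area' estimate can be sketched roughly as follows; this is a generic illustration of the area-versus-scale slope idea, not the authors' exact procedure, and the block-averaging scheme and scale set are our assumptions:

```python
import numpy as np

def fractal_dimension_area(img, scales=(1, 2, 4, 8)):
    """Estimate fractal dimension from the slope of log(surface area) vs log(scale).

    At each scale s the gray-level 'height' surface is coarsened by block
    averaging and its area approximated from first differences; for a fractal
    surface area(s) ~ s**(2 - D), so D = 2 - slope.
    """
    z = img.astype(float)
    areas = []
    for s in scales:
        h, w = (z.shape[0] // s) * s, (z.shape[1] // s) * s
        b = z[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        dx = np.abs(np.diff(b, axis=0))
        dy = np.abs(np.diff(b, axis=1))
        area = s * s * b.size + s * (dx.sum() + dy.sum())
        areas.append(area)
    slope = np.polyfit(np.log(scales), np.log(areas), 1)[0]
    return 2.0 - slope
```

A smooth surface gives D near 2, while rough (noisy) gray-level surfaces give D above 2; the multifractal model of the abstract corresponds to fitting separate slopes over different scale ranges.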
We consider the issue of matching 3D objects extracted from medical images. We show that crest lines computed on the object surfaces correspond to meaningful anatomical features, and that they are stable with respect to rigid transformations. We present the current chain of algorithmic modules which automatically extract the major crest lines in 3D CT-scan images, and then use differential invariants on these lines to register the 3D images together with high precision. The extraction of the crest lines is done by computing up to third-order derivatives of the image intensity function with appropriate 3D filtering of the volumetric images, and by the 'marching lines' algorithm. The recovered lines are then approximated by spline curves, to compute a number of differential invariants at each point. Matching is finally performed by a new geometric hashing method. The whole chain is now completely automatic, and provides extremely robust and accurate results, even in the presence of severe occlusions. In this paper, we briefly describe the whole chain of processes and evaluate the accuracy of the approach on a pair of CT-scan images of a skull containing external markers.
A methodology for task-sensitive pixel classification is defined based on multiscale Gaussian derivatives and statistical pattern recognition methods. Multiscale Gaussian derivatives are approximated by Gaussian and offset-Gaussian filters to decrease computational requirements. A method is devised for computing a discriminant vector between classes based on class isolation and compactness. The optimal discriminant vector is converted back into image form and applied to the image to determine whether a 1-D feature space is adequate to separate the classes.
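A discriminant vector trading off class isolation against compactness is, in its most common form, the Fisher linear discriminant; the following sketch assumes that form (the paper's exact criterion may differ), applied to per-pixel feature vectors such as multiscale Gaussian-derivative responses:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Discriminant vector maximizing between-class separation (isolation)
    over within-class scatter (compactness): w proportional to Sw^-1 (m1 - m0).

    X0, X1: (n_i, d) feature matrices for the two pixel classes.
    """
    m0, m1 = X0.mean(0), X1.mean(0)
    # Pooled within-class scatter matrix.
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    w = np.linalg.solve(Sw + 1e-9 * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting each pixel's feature vector onto w yields the 1-D feature image whose adequacy for separating the classes the abstract proposes to test.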
A segmentation algorithm using both global and local image analysis techniques in combination with anatomical information is proposed to separate grey matter from white matter in 2-dimensional (2D) axial magnetic resonance (MR) brain images. The global technique used is a variant of the fuzzy c-means (FCM) algorithm. This variant relies only on pixel brightness for its clustering operation. The local techniques include methods of morphological image processing and edge detection. The anatomical information is limited to the expected order of tissues from the scalp on the exterior to the ventricles in the center. The results have proved useful in a study of regional biochemistry of the human brain using spectroscopic MRI.
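A brightness-only FCM along the lines described can be sketched as follows; this is a generic fuzzy c-means on scalar intensities, with fuzziness exponent and iteration count chosen by us:

```python
import numpy as np

def fcm_1d(x, c=3, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on scalar pixel intensities x (n,).

    Returns cluster centers (c,) and the fuzzy membership matrix u (n, c).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

In the full algorithm of the abstract, the fuzzy tissue labels would then be refined by the morphological and edge-based local techniques.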
Analyzing bone micromechanics has become increasingly necessary for understanding how bone structure adapts in response to mechanical stimulus. The complex geometry of bone microstructure makes it difficult to use traditional finite element techniques for analysis. A finite element modeling procedure for analyzing bone microstructure, based on direct mesh generation from digital images and an Element-by-Element Preconditioned Conjugate Gradient solution technique, is presented. The results from digitized meshes were compared to smooth-mesh results for microstructures of regular geometry to assess the digitized solutions. The digitized solutions were found to match the smooth-mesh solutions well except near material boundaries, where the digitized solutions exhibited numerical oscillations. A smoothing procedure utilizing Gaussian filter techniques was investigated for smoothing the solution oscillations, but the results indicated that the filtering did not significantly smooth the oscillations. In conclusion, digital image based finite element analysis can produce reasonably accurate estimates of stress and strain within microstructures, but smoothing techniques for numerical oscillations at material boundaries need to be developed.
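The preconditioned conjugate gradient iteration at the core of such a solver can be sketched generically; in a true element-by-element scheme the product A @ p would be accumulated element-wise without ever assembling A, which we stand in for here with a dense matrix and a Jacobi (diagonal) preconditioner:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, maxit=200):
    """Preconditioned conjugate gradient for SPD A x = b.

    M_inv_diag: inverse of the diagonal preconditioner, applied entrywise.
    In an element-by-element formulation, A @ p is replaced by a sum of
    small per-element stiffness products.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The appeal for image-based meshes is that every voxel element has the same stiffness matrix, so the matrix-vector product is cheap and the global system never needs to be stored.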
A stochastic algorithm to improve the visual appearance of blood-vessel images is presented. Each pixel value in the output image represents the probability that the pixel belongs to a blood vessel. The algorithm incorporates a Metropolis sampler that approximates a posterior distribution. We first describe this algorithm and present some results. In the second part, we focus on methods to assess the sampler's convergence. In the first method, several versions of the sampler algorithm are executed in parallel. We propose a convergence measure based on the deviations between the parallel versions. We compare this measure with one based on the analysis of the underlying Markov chains, by applying both measures to Ising model simulations. We also examine whether the parallel samplers can be used to accelerate the algorithm.
There is increasing interest in image motion analysis, driven by several application fields ranging from dynamic scene analysis to image coding for transmission purposes. In the context of scene analysis, image motion information is of crucial help for segmentation and qualitative interpretation. In the case of unstructured, inhomogeneous images, motion information is carried by the spatio-temporal variations of the light intensity function. The apparent velocity distribution inferred from this information is a dense vector field, called optical flow. Assuming spatial continuity of the field to be estimated, a local determination of optical flow is possible. The main difficulty lies in the handling of motion discontinuities. Usually, the localization of motion frontiers is treated as a binary problem, which leads to instabilities during the estimation process. We propose an incremental process that evaluates the optical flow from a sequence of images using temporal Kalman filtering. Our approach is based on a continuous handling of motion frontiers. The evolution model acts as a temporal low-pass filter on the estimated field. To cope with motion discontinuities, the filter is continuously adapted according to local motion homogeneity, via the covariance of the model noise. The result is a progressive cancellation of temporal regularization in the neighborhood of motion frontiers, allowing better convergence of the filter at such discontinuities. This continuous handling of motion frontiers gives the estimation process great robustness.
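The adaptive-filtering idea can be sketched per pixel as a scalar Kalman step whose model-noise covariance grows with local motion inhomogeneity; the specific adaptation law (q0 + k * inhomogeneity) is our assumption, not the paper's:

```python
import numpy as np

def kalman_flow_update(x, P, z, R, inhomogeneity, q0=1e-3, k=1.0):
    """One temporal Kalman step for a (per-pixel) flow estimate.

    x, P: prior flow estimate and its variance; z, R: new local flow
    measurement and its variance; inhomogeneity: local motion-homogeneity
    measure, large near motion frontiers. Raising the model noise Q there
    progressively cancels temporal regularization at discontinuities.
    """
    Q = q0 + k * inhomogeneity          # adaptive model-noise covariance
    P_pred = P + Q                      # predict (random-walk evolution model)
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x + K * (z - x)             # update toward the new measurement
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

With small inhomogeneity the filter smooths over time (small gain); near a frontier the gain approaches 1 and the estimate follows the new measurement almost immediately.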
Edges in blurred data are modeled as the separatrices of the gradient dynamical system of the Laplacian of the input image. Such edges are detected by an algorithm which integrates the dynamical system with an omni-directional tracker. Features are extracted by a heuristic that is based on the topological behavior of the image class of the application. This approach is applied to delineate the walls of left ventricles in cardiac Thallium tomograms.
Many biological objects are elongated. This research addresses the issue of recognizing elongated objects from both 2D intensity images and 3D volumes. A mathematical model, called tube model, is developed for this class of objects and is effectively utilized in two stages of recognition. The explicit relationships between geometrical surface features and the object model parameters are quantitatively exploited to automatically locate seeds for recognition. Invariant surface features are used to constrain or hypothesize the objects of interest. The verification of a hypothesis is performed by correlating a matched filter, dynamically generated based on the hypothesis, with the sensor data. The tubes identified in such a local recognition process serve as the seeds from which the global recognition process is initiated. Each seed is swept along the trajectory where the best-fit is found. A smooth sweep is controlled by a set of adaptive constraints computed dynamically from an on-line sweeping history. We apply the proposed method to real world data from different application domains. Experimental results are presented and discussed.
The problem of edge-preserving tomographic reconstruction from Gaussian data is considered. The problem is formulated in a Bayesian framework, where the image is modeled as a pair of Markov Random Fields: a continuous-valued intensity process and a binary line process. The solution, defined as the maximizer of the posterior probability, is obtained using a Generalized Expectation-Maximization (GEM) algorithm in which both the intensity and the line processes are iteratively updated. The simulation results show that when suitable priors are assumed for the line configurations, the reconstructed images are better than those obtained without a line process, even when the number of observed data is lower. A comparison between the GEM algorithm and an algorithm based on mixed-annealing is made.
Patient motion during data acquisition in tomographic radionuclide imaging can cause severe artifacts in the reconstructed images. Assuming that patient motion can be measured and recorded using a motion monitoring device, we consider the problem of reducing the image artifacts by incorporating the motion information in the reconstruction process. The feasibility of the concept is tested using mathematical phantoms. Emission data are simulated while the phantom undergoes motion within a plane, both abrupt, from one position to another, and gradual. The simulated emission data are reconstructed using iterative statistical reconstruction methods incorporating the motion information. Severe artifacts occur when the data are reconstructed without motion correction. They are more pronounced for sudden gross motions than for continuous motions. Improved reconstructions are obtained when the motion correction is applied. The correction is faster and more accurate for the sudden motions.
We develop and apply several straightforward statistical tools--analyses of rigid body motions, linear deformations, and morphological operations (histogram equalization, dilation, medial axis transform)--to single photon emission computed tomography (SPECT) images arising in the clinical investigation of brain lesions. Examples derive from a simulation of relevant clinical features using a physical head phantom and with paired SPECT images of the same patient recorded over time.
We describe detectors capable of locating small tumors of variable size in the highly textured anatomic backgrounds typical of gamma-ray images. The problem of inhomogeneous background noise is solved using a spatially adaptive statistical scaling operation, which effectively pre-whitens the data and leads to a very simple form of adaptive matched filter. Detecting tumors of variable size is accomplished by processing the images formed in a Laplacian pyramid, each of which contains a narrower range of tumor scales. We compare the performance of this pyramid technique with our earlier nonlinear detector, which detects small tumors according to their signature in curvature feature space, where 'curvature' is the local curvature of the image data when viewed as a relief map. Computed curvature values are mapped to a normalized significance space using a windowed t-statistic. The resulting test statistic is thresholded at a chosen level of significance to give a positive detection. Nonuniform anatomic background activity is effectively suppressed. This curvature detector works quite well over a large range of tumor scales, although not as well as the pyramid/adaptive matched filter scheme. None of the multiscale techniques tested perform at the level of the fixed-scale detectors. Tests are performed using simulated tumors superimposed on clinical gamma-ray images.
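The spatially adaptive statistical scaling can be sketched as local mean/variance normalization over a sliding window; the window size and implementation details here are our assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_scale(img, w=7):
    """Spatially adaptive statistical scaling.

    Each pixel is normalized by the mean and standard deviation of its
    (2w+1)x(2w+1) neighborhood, which approximately pre-whitens an
    inhomogeneous background before matched filtering.
    """
    pad = np.pad(img.astype(float), w, mode='reflect')
    win = sliding_window_view(pad, (2 * w + 1, 2 * w + 1))
    mu = win.mean(axis=(-1, -2))
    sd = win.std(axis=(-1, -2)) + 1e-9   # guard against flat regions
    return (img - mu) / sd
```

After this scaling, a bright compact anomaly stands out in units of local noise standard deviations, so a fixed detection threshold behaves uniformly across the image.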
The suitability of the wavelet transform was studied for the analysis of glucose utilization differences between subject groups as displayed in PET images. To strengthen statistical inference, it was of particular interest to investigate the tradeoff between signal localization and image decomposition into uncorrelated components. This tradeoff is shown to be controlled by wavelet regularity, with the optimal compromise attained by third-order orthogonal spline wavelets. Testing of the resulting wavelet coefficients identified only about 1.5% as statistically different (p < .05) from noise; these then served to resynthesize the difference images by the inverse wavelet transform. The resulting images displayed relatively uniform, noise-free regions of significant differences with very few reconstruction artifacts, owing to the good localization maintained by the wavelets.
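The analyze-test-resynthesize pipeline can be sketched with a one-level Haar transform standing in for the paper's third-order spline wavelets; the z-threshold and the robust noise estimate are our choices:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar analysis: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2.0   # row averages
    d = (x[0::2] - x[1::2]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.zeros((LL.shape[0], 2 * LL.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.zeros((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def denoise_difference(img, z=1.96):
    """Keep only detail coefficients significantly different from noise
    (two-sided z-test at p < .05), then resynthesize."""
    LL, LH, HL, HH = haar2(img)
    sigma = np.median(np.abs(HH)) / 0.6745 + 1e-12  # robust noise estimate
    keep = lambda c: np.where(np.abs(c) > z * sigma, c, 0.0)
    return ihaar2(LL, keep(LH), keep(HL), keep(HH))
```

With a pure-noise difference image, almost all detail coefficients fail the test and the resynthesized image is markedly quieter than the input.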
In this paper we present a method for reconstructing a function f : R^2 → R from limited-angle Fourier data. This problem is motivated by the limited-angle tomography problem, which arises when physical limitations on the system prohibit the gathering of tomographic data at certain angles. In the first portion of this paper we decompose the limited-angle operator into a tensor product form which enables the computation of its inverse. The second portion of the paper uses a multiresolution analysis to mollify the problem and provide the inverse solution. It is shown that the proposed algorithm significantly outperforms the singular value decomposition of the operator.
In Time-resolved Optical Absorption and Scattering Tomography (TOAST) the imaging problem is to reconstruct the coefficients of absorption μa and scattering μs of light in tissue, given the time-dependent photon flux at the surface of the subject resulting from ultrafast laser input pulses. This inverse problem is mathematically similar to the Electrical Impedance Tomography (EIT) problem but presents some unique features. In particular, the necessity of searching in two solution spaces requires the use of multiple data types that are maximally uncorrelated with respect to the solution spaces. We have developed an algorithm for TOAST that uses an iterative non-linear gradient descent method to minimize an appropriate error norm. The algorithm can work on multiple types of data, and an important topic is the choice of the best data format to use. The usual choices are integrated intensity and mean time-of-flight for the temporal domain data. In this paper we compare these data types with the use of higher-order moments of the temporal distribution (variance, skew, kurtosis). We show that reliable results must take detailed account of the confidence limits on each data point. We demonstrate how the probability distribution function for photon propagation can be calculated so that the variance of any given measurement type can be derived.
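The candidate data types are simply moments of the measured temporal point-spread function; a sketch (the Riemann-sum discretization over a uniform time grid is ours):

```python
import numpy as np

def temporal_moments(t, flux):
    """Data types from one time-resolved measurement.

    t: uniform time grid; flux: photon flux samples on that grid.
    Returns integrated intensity, mean time-of-flight, and the central
    moments variance, skewness, and excess kurtosis of the arrival-time
    distribution.
    """
    dt = t[1] - t[0]
    E = flux.sum() * dt                       # integrated intensity
    p = flux / E                              # normalized arrival-time density
    mean = (t * p).sum() * dt                 # mean time-of-flight
    var = ((t - mean) ** 2 * p).sum() * dt
    skew = ((t - mean) ** 3 * p).sum() * dt / var ** 1.5
    kurt = ((t - mean) ** 4 * p).sum() * dt / var ** 2 - 3.0
    return E, mean, var, skew, kurt
```

The abstract's point is that each of these statistics carries a different confidence limit, which must enter the error norm that the gradient descent minimizes.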
A tomographical imaging system capable of reproducing electron density distributions of an irradiated object plane via measurement of the Compton scattered radiation is presented as a potential contender for portal imaging. The method, which unlike most scanning methods imposes minimal constraints on the measurement apparatus, is capable of density reconstruction from limited projections of scattered photon energy spectra acquired under conditions of wide-angle source and detector collimation. As such, density reconstruction becomes a nonlinear inverse problem. This problem is presented along with a discussion of potential solution strategies to confine the impact of solution instability.
A method to reconstruct vessel lumens, based on constrained reconstruction of serial cross-sections from two digital angiographic projections, is proposed. Each cross-section is reconstructed as a binary matrix from its two densitometric projections, with ambiguities in the reconstruction removed by a priori knowledge. A probabilistic approach, in which properties of the expected solution are described through a Markov Random Field (MRF) model, was chosen to facilitate incorporation of a priori information on the vessel segment to be reconstructed. The best solution amongst all possible ones is obtained by an optimization algorithm based on Simulated Annealing. An initial configuration consisting of the ellipse of best fit is imposed to guarantee rapid convergence to the optimal solution. This initial configuration is then deformed to make it consistent with the projection data while constraining it to a connected, realistic shape. The MRF model parameters have been estimated on 2D synthetic slices using systematic reconstruction-quality measurements. The method provides a good reconstruction of complex shapes, and can be applied to single pathologic vessels as well as to branchings. The method thus far has only been validated on peripheral arteries and bifurcations.
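The optimization step can be sketched for a simplified case: a binary slice with two orthogonal projections (plain row/column sums stand in for the densitometric angiographic projections, and an Ising-type smoothness term stands in for the full MRF prior; the random initialization replaces the best-fit ellipse):

```python
import numpy as np

def anneal_cross_section(proj_rows, proj_cols, beta=0.5, T0=1.0, sweeps=120, seed=0):
    """Reconstruct a binary cross-section from row/column projections by
    simulated annealing of a data-fit plus MRF-smoothness energy."""
    rng = np.random.default_rng(seed)
    n, m = len(proj_rows), len(proj_cols)
    x = rng.random((n, m)) < 0.5          # random start (the paper uses an ellipse)

    def energy(x):
        data = ((x.sum(1) - proj_rows) ** 2).sum() + ((x.sum(0) - proj_cols) ** 2).sum()
        smooth = (x[1:] != x[:-1]).sum() + (x[:, 1:] != x[:, :-1]).sum()
        return data + beta * smooth

    E = energy(x)
    for s in range(sweeps):
        T = T0 * 0.95 ** s                # geometric cooling schedule
        for _ in range(n * m):
            i, j = rng.integers(n), rng.integers(m)
            x[i, j] ^= True               # propose a single-pixel flip
            E2 = energy(x)
            if E2 <= E or rng.random() < np.exp(-(E2 - E) / T):
                E = E2                    # accept
            else:
                x[i, j] ^= True           # reject: undo the flip
    return x
```

Starting from the best-fit ellipse, as the paper does, mainly shortens the anneal; the energy terms play the same roles either way.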
A new supplementary a priori constraint, the slow evolution from the boundary (SEB) constraint, sharply reduces noise contamination in a large class of space-invariant image deblurring problems that occur in medical, industrial, surveillance, environmental, and astronomical applications. The noise-suppressing properties of SEB restoration can be proved mathematically, on the basis of rigorous error bounds for the reconstruction as a function of the noise level in the blurred image data. This analysis proceeds by reformulating the image deblurring problem into an equivalent ill-posed problem for a time-reversed diffusion equation. The SEB constraint does not require smoothness of the image. An effective, fast, non-iterative procedure, based on FFT algorithms, may be used to compute SEB restorations. For a 512 × 512 image, the procedure requires about 45 seconds of CPU time on a Sun/sparc2. A documented deblurring experiment, on an image with significant high-frequency content, illustrates the computational significance of the SEB constraint by comparing SEB and Tikhonov-Miller reconstructions using optimal values of the regularization parameters.
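The Tikhonov-Miller comparison baseline mentioned in the abstract is itself a fast, non-iterative FFT procedure; a sketch follows (SEB itself is not reproduced here, and alpha is a hand-picked regularization parameter):

```python
import numpy as np

def tikhonov_deblur(blurred, psf, alpha=1e-8):
    """Tikhonov-regularized inverse filtering for space-invariant blur.

    blurred: degraded image; psf: point-spread function, centered, same
    shape as the image. The restoration is a single regularized division
    in the frequency domain (circular boundary conditions assumed).
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))          # blur transfer function
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + alpha)   # regularized inverse filter
    return np.real(np.fft.ifft2(F))
```

Like the SEB procedure described above, the whole restoration costs only a few FFTs, which is what makes the timing comparison on a 512 × 512 image meaningful.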
High resolution anatomical images provided by magnetic resonance imaging or X-ray computed tomography can be registered with functional images obtained from emission tomography, to obtain accurate anatomical localization of the functional images. The anatomical data provide prior knowledge which is used to infer radionuclide concentration in the organ of interest. In this paper, a maximum a posteriori estimation method is presented which uses this information to obtain improved image reconstructions. The a priori probability distribution of the radionuclide concentration is modelled by a Gibbs distribution. The contribution here is the introduction of a new 'potential function' that facilitates coding of the prior anatomical information in the reconstruction process. Computer simulations are presented which indicate marked improvements can be achieved in reconstructions when the prior information is used.
Approaches to surface fitting can be classified according to the nature of the raw data (visible surfaces or extracted surfaces, variously cross-classified by aspects of the case or of the conditions of observation), context of measurement (are queries qualitative or quantitative? are we in the operating room or producing an atlas of normal variation?), geometrical model (topology, position, derivatives, statistics), biometrical model (landmarks, curves, or neither), and criterion of fit (a-priori parameters, geometric or other 'distances,' or a more general 'equilibrium'). After reviewing the current approaches under these rubrics I conclude that the topic of surface fitting badly needs a shared mathematical formalism, and I modestly suggest a composite that might work: a combination of the thin-plate spline with two separate diffusion algebras, Pizer's for features of single grey-scale images and Grenander's for deformations as 'patterns.'