In recent years, the number and utility of 3-D rendering frameworks have grown substantially. A quantitative and
qualitative evaluation of the capabilities of a subset of these systems is important to determine the applicability
of these methods to typical medical visualization tasks. The libraries evaluated in this paper include the Java3D
Application Programming Interface (API), Java OpenGL (Jogl) API, a multi-histogram software-based rendering
method, and the WildMagic API. Volume renderer implementations using each of these frameworks were
developed using the platform-independent Java programming language. Quantitative performance measurements
(frames per second, memory usage) were used to evaluate the strengths and weaknesses of each implementation.
This paper describes a new system for semi-automatically segmenting the background, subcutaneous fat, interstitial fat,
muscle, bone, and bone marrow from magnetic resonance images (MRIs) of volunteers for a new osteoarthritis study.
Our system first creates separate right and left thigh images from a single MR image containing both legs. The
subcutaneous fat boundary is very difficult to detect in these images and is therefore defined interactively with a single
user-specified boundary. The volume within this boundary is then automatically processed with a series of clustering and
morphological operations designed to identify and classify the different tissue types required for this study. Once the
tissues have been identified, the volume of each tissue is determined and a single, false colored, segmented image
results. We quantitatively compare the segmentation in three different ways. In our first method we simply compare
the tissue volumes of the resulting segmentations performed independently on both the left and right thigh. A second
quantification method compares our results temporally with three image sets of the same volunteer made one month
apart including a month of leg disuse. Our final quantification methodology compares the volumes of different tissues
detected with our system to the results of a manual segmentation performed by a trained expert. The segmented image
results of four different volunteers using images acquired at three different times suggest that the system described in
this paper provides more consistent results than the manually segmented set. Furthermore, measurements of the left and
right thigh and temporal results for both segmentation methods follow the anticipated trend of increasing fat and
decreasing muscle over the period of disuse.
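The clustering step of the pipeline above can be sketched in miniature. This is an illustrative 1-D k-means on voxel intensities, not the authors' actual implementation; the class count, toy intensity values, and voxel dimensions are all assumptions for the example.

```python
def kmeans_1d(values, centers, iters=20):
    """Cluster scalar intensities around k evolving centers."""
    centers = list(centers)
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            # assign each voxel to its nearest center
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # move each center to the mean of its group (keep it if empty)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    labels = [min(range(len(centers)), key=lambda i: abs(v - centers[i]))
              for v in values]
    return centers, labels

# Toy "thigh" intensities: dark background, mid-grey muscle, bright fat.
voxels = [5, 8, 6, 90, 95, 100, 92, 200, 210, 205]
centers, labels = kmeans_1d(voxels, centers=[0, 100, 255])

voxel_volume_mm3 = 1.5 * 1.5 * 3.0          # assumed scan resolution
fat_volume = labels.count(2) * voxel_volume_mm3
```

Once every voxel carries a class label, the per-tissue volume is just a label count scaled by the voxel size, which is how the left/right and temporal comparisons in the study could be quantified.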
In Radio Frequency Ablation (RFA) procedures, hepatic tumor tissue is heated to a temperature where necrosis is ensured. Unfortunately, recent results suggest that heating tumor tissue to necrosis is complicated because nearby major blood vessels provide a cooling effect. Therefore, it is fundamentally important for physicians to perform a careful analysis of the spatial relationship of diseased tissue to larger liver blood vessels. The liver contains many of these large vessels, which affect the RFA ablation shape and size. There are many sophisticated vasculature detection and segmentation techniques reported in the literature that track continuous vessels as their diameter changes and they pass through many bifurcation levels. However, the larger blood vessels near the treatment area are the only vessels required for proper RFA treatment plan formulation and analysis. With physician guidance and interaction, our system can segment those vessels which are most likely to affect the RFA ablations. We have found that our system provides the physician with therapeutic, geometric and spatial information necessary to accurately plan treatment of tumors near large blood vessels. The segmented liver vessels near the treatment region are also necessary for computing isolevel heating profiles used to evaluate different proposed treatment configurations.
It is fundamentally important that all cancerous cells be adequately destroyed during Radio Frequency Ablation (RFA) procedures. To help achieve this goal, probe manufacturers advise physicians to increase the treatment region by one centimeter (1 cm) in all directions around the diseased tissue. This enlarged treatment region provides a buffer to ensure that cancer cells that migrated into surrounding tissue are adequately treated and necrose. Even though RFA is a minimally invasive, image-guided procedure, it is difficult for physicians to confidently follow the specified treatment protocol. In this paper we visually assess an RFA treatment by comparing a registered image set containing the untreated tumor, including the 1 cm safety boundary, to that of an image set containing the treated region acquired one month after surgery. For this study, we used Computerized Tomography images as both the tumor and treated regions are visible. To align the image sets of the abdomen, we investigate three different registration techniques: an affine transform that minimizes the correlation ratio, a point (or landmark) based 3D thin-plate spline approach, and a nonlinear B-spline elastic registration methodology. We found the affine registration technique simple and easy to use because it is fully automatic. Unfortunately, this method resulted in the largest visible discrepancy in the liver between the fused images. The thin-plate spline technique required the physician to identify corresponding landmarks in both image sets, but resulted in better visual accuracy in the fused images. Finally, the non-linear, B-spline, elastic registration technique used the registration results of the thin-plate spline method as a starting point and then required a significant amount of computation to determine its transformation, but also provided the most visually accurate fused image set.
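The correlation-ratio measure driving the affine registration above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it scores how well the intensities of one image are predicted by intensity bins of the other, with the bin count chosen arbitrarily for the example.

```python
def correlation_ratio(x_vals, y_vals, n_bins=8):
    """1 - (mean within-bin variance of y) / (total variance of y)."""
    lo, hi = min(x_vals), max(x_vals)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for x, y in zip(x_vals, y_vals):
        b = min(int((x - lo) / width), n_bins - 1)
        bins.setdefault(b, []).append(y)
    n = len(y_vals)
    mean_y = sum(y_vals) / n
    total_var = sum((y - mean_y) ** 2 for y in y_vals) / n
    if total_var == 0:
        return 1.0
    # within-bin variance, weighted by bin population
    within = 0.0
    for ys in bins.values():
        m = sum(ys) / len(ys)
        within += sum((y - m) ** 2 for y in ys)
    within /= n
    return 1.0 - within / total_var
```

When one image is a deterministic function of the other (perfect functional dependence), the within-bin variance vanishes and the ratio reaches 1; a registration search varies the transform to optimize this score over the overlapping voxels.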
In this paper we present and evaluate the Multiscale Vessel Enhancement Filtering algorithm that has previously been reported in the literature. We evaluate the algorithm on both 2D and 3D images, measuring its sensitivity to rotation and scale. We demonstrate our implementation of the algorithm on simulated and real Computed Tomography data of the liver. We find that the algorithm is relatively insensitive to vessel orientation and scale when the tube is isolated on a dark background. Specifically, our implementation correctly detected the scale and orientation of simulated tubes. However, we found that many artifacts were created when the algorithm was applied to a segmented liver that contained contrast-enhanced vessels.
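The vesselness idea behind the filter can be illustrated in 2-D: on a bright tube, one eigenvalue of the image Hessian is near zero (along the tube) and one is strongly negative (across it). The sketch below is a simplified single-scale 2-D version; the parameters beta and c are illustrative, and the paper's 3-D multiscale machinery is omitted.

```python
import math

def hessian_2d(img, y, x):
    """Central-difference Hessian at an interior pixel."""
    dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return dxx, dxy, dyy

def vesselness(img, y, x, beta=0.5, c=15.0):
    dxx, dxy, dyy = hessian_2d(img, y, x)
    # closed-form eigenvalues of the symmetric 2x2 Hessian
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = sorted([tr / 2 - disc, tr / 2 + disc], key=abs)
    if l2 >= 0:                       # not a bright ridge
        return 0.0
    rb = abs(l1) / abs(l2)            # blob-vs-tube measure
    s = math.sqrt(l1 * l1 + l2 * l2)  # second-order structure strength
    return math.exp(-rb * rb / (2 * beta * beta)) * \
           (1 - math.exp(-s * s / (2 * c * c)))

# Toy image: a bright horizontal tube on a dark background.
img = [[0] * 7 for _ in range(7)]
for x in range(7):
    img[3][x] = 100
```

On the tube's center line the response approaches 1, while flat background pixels return 0, which matches the qualitative behavior the evaluation describes for isolated tubes.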
Analytic determination of the 3D PSF of a tomosynthetic image volume
allows us to solve the ill-posed inverse problem of recovering an
image volume from a nonspecific orientation of projection views.
To restore these inherently blurred images from tuned-aperture
reconstructed volumes, we consider three approaches: direct inversion
via 3D Wiener filter restoration, regularized iterative 3D conjugate
gradient least squares, and regularized nonlinear iterative 3D
modified residual norm steepest descent with nonnegativity
constraints. From these tests we infer that all three methods
produce adequate restorations, while the nonlinear, nonnegatively constrained iterative algorithm appears to produce especially
good restorations for this problem in an efficient and stable way.
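The direct-inversion approach can be illustrated in one dimension. This sketch replaces the 3-D tomosynthetic PSF with an assumed toy 3-tap blur and uses a naive O(n^2) DFT for clarity; the regularizer k stands in for the noise-to-signal ratio of a proper Wiener filter.

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(n^2), fine for tiny signals)."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * f * t / n)
               for t in range(n)) for f in range(n)]
    return [v / n for v in out] if inverse else out

def wiener_restore(blurred, psf, k=1e-3):
    b, h = dft(blurred), dft(psf)
    # H* / (|H|^2 + k): inverse filter, damped where the PSF is weak
    restored = [bf * hf.conjugate() / (abs(hf) ** 2 + k)
                for bf, hf in zip(b, h)]
    return [v.real for v in dft(restored, inverse=True)]

# A delta "object" blurred by circular convolution with the PSF.
signal = [0.0] * 8
signal[4] = 1.0
psf = [0.5, 0.25, 0, 0, 0, 0, 0, 0.25]   # 3-tap blur, wrapped for the DFT
blurred = [sum(signal[m] * psf[(t - m) % 8] for m in range(8))
           for t in range(8)]
restored = wiener_restore(blurred, psf)
```

The restored peak is substantially sharper than the blurred one; frequencies where the PSF response vanishes are simply damped rather than amplified, which is the stability property that motivates the regularized iterative alternatives for harder problems.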
This paper describes a unique system for constructing a three-dimensional volume from a set of two-dimensional (2D) x-ray projection images based on optical aperture theory. This proprietary system known as Tuned-Aperture Computed Tomography (TACT) is novel in that only a small number of projections acquired from arbitrary or
task-specific projection angles is required for the reconstruction process. We used TACT to reconstruct a simulated phantom from seven 2D projections made with the x-ray source positioned within 30 degrees of perpendicular to a detector array. The distance from the x-ray source was also varied to change the amount of perspective distortion in each projection. Finally, we determined the
reconstruction accuracy of TACT and compared it to that of a
conventional tomosynthesis system. We found the reconstructed volumetric data sets computed with TACT to be geometrically accurate and contain significantly less visible blurring than a similar data set computed with the control technique.
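The shift-and-add principle underlying this kind of tomosynthetic reconstruction can be shown with a toy 1-D model (this is an illustration, not the proprietary TACT code): each projection is shifted in proportion to the depth of the plane being synthesized, so features in that plane reinforce while features at other depths spread out as blur.

```python
WIDTH = 32
shifts = [-2, -1, 0, 1, 2]          # assumed per-projection source offsets

def project(points, s):
    """Project (position, depth, intensity) points for source offset s."""
    row = [0.0] * WIDTH
    for pos, depth, val in points:
        row[pos + s * depth] += val
    return row

def shift_add(projections, depth):
    """Synthesize the focal plane at the given depth."""
    plane = [0.0] * WIDTH
    for s, proj in zip(shifts, projections):
        for x in range(WIDTH):
            plane[x] += proj[(x + s * depth) % WIDTH]
    return [v / len(projections) for v in plane]

# Two point features at different depths.
points = [(10, 1, 1.0), (20, 3, 1.0)]
projections = [project(points, s) for s in shifts]
plane1 = shift_add(projections, 1)   # depth-1 feature in focus
plane3 = shift_add(projections, 3)   # depth-3 feature in focus
```

In each synthesized plane the in-focus point recovers its full intensity while the out-of-plane point is diluted across several pixels, which is the cross-sectional blur that the restoration and nonlinear-operator papers in this collection attack.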
Three-dimensional angiographic reconstruction has emerged as an alternative to the traditional depiction of aneurysm angioarchitecture provided by 2-D perspective projections acquired by digital subtraction angiography (DSA) and fluoroscopy. One clinical application of research involving 3-D angiographic reconstruction is intraoperative localization and visualization during aneurysm embolization procedures. For this procedure, reconstruction quality is important for the 3-D reconstruction of anatomy as well as for the reconstruction of intraaneurysm coils imaged endovascularly and subsequently rendered within an existing 3-D anatomic representation. Rotational angiography involves the acquisition of a series of 2-D, cone-beam projections of intracranial anatomy by a rotating x-ray gantry following a single injection of contrast media. Our investigation focuses on the practicality of using methods that employ algebraic reconstruction techniques (ART) to reconstruct 3-D data from 2-D cone-beam projections acquired using rotational angiography during embolization procedures. Important to our investigation are issues that arise within the implementation of the projection, correction and backprojection steps of the reconstruction algorithm that affect reconstruction quality. Several methods are discussed to perform accurate voxel grid projection and backprojection. Various parameters of the reconstruction algorithm implementation are also investigated. Preliminary results demonstrating quality 3-D reconstructions from 2-D projections of synthetic volumes are presented. Further modifications to our implementation hold the promise of achieving accurate reconstruction results with a lower computation cost than the algorithm implementation used for this study. We have concluded that methods to extend the traditional ART algorithm for cone-beam projection acquisition produce quality 3-D reconstructions.
In this paper we provide a rigorous mathematical foundation for Tuned-Aperture Computed Tomography (TACT), a generalization of standard tomosynthesis that provides a significantly more flexible diagnostic tool. We also describe how the general TACT algorithm simplifies in important special cases, and we investigate the possibility of optimizing the algorithm by reducing the number of fiducial reference points. The key theoretical problem is how to use information within an x-ray image to discover, after the fact, what the relative positions of the x-ray source, the patient, and the x-ray detector were when the x-ray image was created.
The advent of large (40 cm X 40 cm) flat panel x-ray detection devices has ushered in a new era of synthetically generating tomographic image sets for many diagnostic applications. Tomosynthetic image sets correspond to focal planes passing through an imaged object and are commonly generated by algebraically reconstructing (backprojecting) a set of two-dimensional (2D) projections. Tomosynthetic image sets typically contain a significant amount of cross-sectional blur. This paper describes a system for modeling the tomosynthesis process and proposes a methodology for quantitatively and qualitatively evaluating erroneous intensity values in reconstructed three-dimensional (3D) tomosynthetic image sets.
Conventional localization schemes for brachytherapy seed implants using biplane or stereoscopic projection radiographs can suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking and dosimetric inaccuracies. This paper proposes an alternative method for improving the visualization and thus, localization, of radiotherapy implants by synthesizing, from as few as two radiographic projections, a 3D image free of divergence artifacts. The result produces more accurate seed localization leading to improved dosimetric accuracy. Two alternative approaches are compared. The first uses orthogonal merging. The second employs the technique of tuned-aperture computed tomography (TACT), whereby 3D reconstruction is performed by shifting and adding of well-sampled projections relative to a fiducial reference system. Phantom results using nonlinear visualization methods demonstrate the applicability of localizing individual seeds for both approaches. Geometric errors are eliminated by a calibration scheme derived from the fiducial pattern that is imaged concurrently with the subject. Both merging and TACT approaches enhance seed localization by improving visualization of the seed distribution over biplanar radiographs. Unlike current methods, both alternatives demonstrate continuous one-to-one source tracking in 3D, although elimination of scaling artifacts requires more than two projections when using the merging method.
This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
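The per-pixel combining rules being compared can be sketched directly. This toy example (a made-up stack of registered 1x3 projections, with the iterative deconvolution variant omitted) shows conventional averaging alongside the nonlinear minimum, maximum, and median operators.

```python
import statistics

def combine(stack, op):
    """stack: list of registered projections (2-D lists of equal size)."""
    ops = {"mean": lambda vs: sum(vs) / len(vs),
           "min": min, "max": max, "median": statistics.median}
    f = ops[op]
    rows, cols = len(stack[0]), len(stack[0][0])
    return [[f([p[r][c] for p in stack]) for c in range(cols)]
            for r in range(rows)]

# Three registered "projections"; one carries a bright out-of-plane artifact.
stack = [[[10, 10, 10]], [[10, 90, 10]], [[10, 10, 10]]]
mean_img = combine(stack, "mean")
min_img = combine(stack, "min")        # suppresses the bright artifact
median_img = combine(stack, "median")
```

The example makes the study's headline result plausible: a bright feature present in only some component projections pulls the average up, while the minimum (and here the median) rejects it, trading sensitivity for artifact suppression.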
Three-dimensional (3D) visualization of tomosynthetically generated focal planes computed with conventional techniques is limited by distortions caused from nonuniform sampling and projection magnification. These distortions are inherent to diagnostic systems with fixed, off-axis sampling geometry using a proximal source of radiation. This paper describes a technique for significantly reducing these distortions by merging independently generated sets of orthogonally oriented tomosynthetic slices. Precise registration of corresponding points of the slice volumes is required by the merging process. This is achieved by compensating projective transformations of the respective slice stacks to correct for slice-specific variation in image magnification. The result is a relatively distortion-free 3D image comprised of isotropic (cubic) voxels.
This investigation defines and tests a simple, nonlinear, task-specific method for rapid tomosynthetic reconstruction of radiographic images designed to allow an increase in specificity at the expense of sensitivity. Representative lumpectomy specimens containing cancer from human breasts were radiographed with a digital mammographic machine. Resulting projective data were processed to yield a series of tomosynthetic slices distributed throughout the breast. Five board-certified radiologists compared tomographic displays of these tissues processed both linearly (control) and nonlinearly (test) and ranked them in terms of their perceived interpretability. In another task, a different set of nine observers estimated the relative depths of six holes bored in a solid Lucite block as perceived when observed in three dimensions as a tomosynthesized series of test and control slices. All participants preferred the nonlinearly generated tomosynthetic mammograms to those produced conventionally, with or without subsequent deblurring by means of iterative deconvolution. The result was similar (p < 0.015) when the hole-depth experiment was performed objectively. We therefore conclude that, for certain tasks that are unduly compromised by tomosynthetic blurring, the nonlinear tomosynthetic reconstruction method described here may improve diagnostic performance with a negligible increase in cost or complexity.
The advent of spiral computed tomography (CT) has created the potential to image continuous anatomical volumes during a single breath-hold. The ability to reconstruct overlapping spiral CT images has improved through-plane resolution and contributed to improved diagnostic accuracy. When spiral CT is used to image organ systems such as the colon or airways, it is common to generate up to 500 CT images. We have developed a virtual endoscopy (VE) software system that couples computer-assisted diagnosis capabilities with volume visualization techniques to aid in the analysis of these large datasets. Despite its potential to assist in disease diagnosis, VE faces several important technical and nontechnical challenges that must be addressed before it becomes a clinical reality.
Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to error of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.
This paper presents four methods for enhancing the visual appearance of surface-rendered anatomical objects while preserving the original geometry and topology of the underlying model. The methods consist of generating normal vectors for each vertex in a surface mesh by image gradient and cross-product operations. The normal vectors are then modified to alter lighting effects and make the surfaces appear smoother. This is accomplished using various filtering techniques that replace each original vertex normal by a weighted value. Normal vectors affect the visualization of the surface mesh because lighting models utilize the vertex normals when generating an image from a given viewpoint. Our methods alter the normal vectors in order to smooth the surfaces for easier diagnosis using medical images.
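One such filtering technique can be sketched as follows: replace each vertex normal by a weighted blend of itself and its neighbors' normals, then renormalize, so lighting smooths a crease without moving any vertex. The weighting scheme and tiny mesh below are illustrative assumptions, not one of the paper's four specific methods.

```python
import math

def smooth_normals(normals, neighbors, self_weight=0.5):
    out = []
    for i, n in enumerate(normals):
        nbrs = neighbors[i]
        w = (1.0 - self_weight) / len(nbrs)
        # weighted average of the vertex normal and its neighbors' normals
        v = [self_weight * n[k] + w * sum(normals[j][k] for j in nbrs)
             for k in range(3)]
        length = math.sqrt(sum(c * c for c in v))
        out.append(tuple(c / length for c in v))
    return out

# A crease: the middle vertex's normal disagrees with its neighbors.
normals = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
smoothed = smooth_normals(normals, neighbors)
```

After filtering, adjacent normals point in intermediate directions, so a Phong-style lighting model shades the crease gradually; the vertex positions, and hence the mesh geometry and topology, are untouched.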
Node volume analysis is very important medically. An automatic method of segmenting the node in spiral CT x-ray images is needed to produce accurate, consistent, and efficient volume measurements. The method of active contours (snakes) is proposed here as a good solution to the node segmentation problem. Optimum parameterization and search strategies for using a two-dimensional snake to find node cross-sections are described, and an energy normalization scheme which preserves important spatial variations in energy is introduced. Three-dimensional segmentation is achieved without additional operator interaction by propagating the 2D results to adjacent slices. The method gives promising segmentation results on both simulated and real node images.
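The discrete energy that such a 2-D snake minimizes can be sketched as an internal term (penalizing stretching and bending) plus an image term (attracting the contour to strong edges). The weights and the toy edge map below are illustrative; the paper's parameterization, search strategy, and normalization scheme are omitted.

```python
def snake_energy(contour, edge_map, alpha=1.0, beta=1.0, gamma=2.0):
    """Closed contour of (x, y) points; edge_map[y][x] is edge strength."""
    n = len(contour)
    e = 0.0
    for i in range(n):
        (x0, y0) = contour[i - 1]
        (x1, y1) = contour[i]
        (x2, y2) = contour[(i + 1) % n]
        stretch = (x1 - x0) ** 2 + (y1 - y0) ** 2              # continuity
        bend = (x0 - 2 * x1 + x2) ** 2 + (y0 - 2 * y1 + y2) ** 2  # curvature
        e += alpha * stretch + beta * bend - gamma * edge_map[y1][x1]
    return e

# Edge strength concentrated at four boundary points of a toy "node".
edge_map = [[0.0] * 5 for _ in range(5)]
for (px, py) in [(1, 1), (3, 1), (3, 3), (1, 3)]:
    edge_map[py][px] = 1.0

on_contour = [(1, 1), (3, 1), (3, 3), (1, 3)]    # sits on the edges
off_contour = [(1, 0), (3, 0), (3, 2), (1, 2)]   # same shape, shifted off
e_on = snake_energy(on_contour, edge_map)
e_off = snake_energy(off_contour, edge_map)
```

A search strategy (greedy moves or dynamic programming) would iteratively perturb contour points to reduce this energy, and propagating the converged 2-D contour to the adjacent slice as its initialization gives the 3-D segmentation the abstract describes.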
In this paper we quantitatively evaluate the amount of residual registration error for different settings of an internal parameter of our surface-based multimodality registration system. The investigated parameter controls the accuracy in representing one of the surfaces to be registered. Computational complexity of our registration system is also directly related to the surface representation accuracy for a given data set. We measured registration error using a previously developed reference data set, which accurately determines registration error at many locations throughout the brain. We found a smaller registration error when the registered surface was represented more accurately. The difference in error was not as large as anticipated, suggesting that less accurate representations can be used by our system.
All retrospective image registration methods have attached to them some intrinsic estimate of registration error. However, this estimate of accuracy may not always be a good indicator of the distance between actual and estimated positions of targets within the cranial cavity. This paper describes a project whose principal goal is to use a prospective method based on fiducial markers as a 'gold standard' to perform an objective, blinded evaluation of the accuracy of several retrospective image-to-image registration techniques. Image volumes of three modalities -- CT, MR, and PET -- were taken of patients undergoing neurosurgery at Vanderbilt University Medical Center. These volumes had all traces of the fiducial markers removed, and were provided to project collaborators outside Vanderbilt, who then performed retrospective registrations on the volumes, calculating transformations from CT to MR and/or from PET to MR, and communicated their transformations to Vanderbilt where the accuracy of each registration was evaluated. In this evaluation the accuracy is measured at multiple 'regions of interest,' i.e. areas in the brain which would commonly be areas of neurological interest. A region is defined in the MR image and its centroid C is determined. Then the prospective registration is used to obtain the corresponding point C' in CT or PET. To this point the retrospective registration is then applied, producing C'' in MR. Statistics are gathered on the target registration error (TRE), which is the disparity between the original point C and its corresponding point C''. A second goal of the project is to evaluate the importance of correcting geometrical distortion in MR images, by comparing the retrospective TRE in the rectified images, i.e., those which have had the distortion correction applied, with that of the same images before rectification.
This paper presents preliminary results of this study along with a brief description of each registration technique and an estimate of both preparation and execution time needed to perform the registration.
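The TRE computation described above reduces to mapping a region centroid through the gold-standard transform, then through the retrospective transform back, and measuring the distance to the starting point. The 4x4 matrices below are illustrative rigid transforms, not actual study data.

```python
def apply_h(m, p):
    """Apply a 4x4 homogeneous transform to a 3-D point."""
    x, y, z = p
    v = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    return (v[0] / v[3], v[1] / v[3], v[2] / v[3])

def tre(gold_mr_to_ct, retro_ct_to_mr, centroid):
    c_prime = apply_h(gold_mr_to_ct, centroid)        # C' in CT (or PET)
    c_double = apply_h(retro_ct_to_mr, c_prime)       # C'' back in MR
    return sum((a - b) ** 2 for a, b in zip(centroid, c_double)) ** 0.5

# Gold standard: translation by (5, 0, 0); the retrospective estimate
# is off by 1 unit along x, so the TRE should be exactly 1.
gold = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
retro = [[1, 0, 0, -6], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
err = tre(gold, retro, (10.0, 20.0, 30.0))
```

Gathering this scalar over many regions of interest per patient yields the TRE statistics on which each retrospective technique is judged.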
We present a method to reduce the geometric distortion in fast spin echo (FSE) images to levels appropriate for guiding stereotactic pallidotomy. We quantify the accuracy of FSE-guided target localization before and after correction using a cadaver head model. These results strongly suggest that stereotactic pallidotomy can be performed using MR and stimulation responses alone, thus avoiding the morbidity associated with pallidotomy and the costs of x-ray apparatus and personnel during surgery. This method, however, must be tested further and validated clinically before it can be applied.
Grey value correlation is generally considered not to be applicable to matching of images of different modalities. In this paper we will demonstrate that, with a simple preprocessing step for the Computed Tomography (CT) images, grey value correlation can be used for matching of Magnetic Resonance Imaging (MRI) with CT images. Two simple schemes are presented for automated 3D matching of MRI and CT neuroradiological images. Both schemes involve grey value correlation of the images in order to determine the matching transformation. In both schemes the preprocessing consists of a simple intensity mapping of the original CT image only. It will be shown that the results are insensitive to considerable changes in the parameters that determine the intensity mapping. Whichever preprocessing step is chosen, the correlation method is robust and accurate. Results, compared with a skin marker-based matching technique, are shown for brain images. Additionally, results are shown for an entirely new application: matching of the cervical spine.
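The scheme above can be sketched in two steps: remap the CT intensities, then score MR-CT alignment with plain normalized grey-value correlation. The piecewise threshold map below is an illustrative assumption standing in for the paper's intensity mapping, and the intensity lists are toy data.

```python
import math

def remap_ct(values, threshold=200):
    # illustrative mapping: suppress intensities below a chosen threshold
    return [0 if v < threshold else v for v in values]

def correlation(a, b):
    """Pearson (normalized grey-value) correlation of two intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

# Toy co-located intensities: CT (with bright bone) vs. MR.
ct = [50, 80, 900, 1000, 60]
mr = [10, 12, 200, 210, 11]
score = correlation(remap_ct(ct), mr)
```

A registration search would vary the spatial transform and keep the one that maximizes this score over the overlapping voxels; the paper's point is that after a simple CT remapping, this single-modality measure becomes usable across modalities.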
A system for identifying complex space objects from a sequence of wideband radar images is presented in this paper. The system is referred to as the Complex Space Object Recognition System (CSORS) and uses a data-driven approach to object recognition. The input to the system is a time sequence of range/Doppler radar images of an object in orbit as the object passes overhead. The system first processes the individual images to improve the signal-to-noise ratio and then further processes each image to derive a set of features. The sequence of feature sets for each pass of the object is then processed to produce a three-dimensional wireframe of the object. Finally, a symbolic model representing the object is generated from the wireframe. This model is matched against a library of models and appended to the library if a match is not found.