Calculating digitally reconstructed radiographs (DRRs) is an important step in intensity-based fluoroscopy-to-CT image registration methods. Unfortunately, the standard techniques to generate DRRs involve ray casting and run in time O(n<sup>3</sup>), where we assume that n is approximately the size (in voxels) of one side of the DRR as well as one side of the CT volume. Because of this, generation of DRRs is typically the rate-limiting step in the execution time of intensity-based fluoroscopy-to-CT registration algorithms. We address this issue by extending light field rendering techniques from the computer graphics community to generate DRRs instead of conventional rendered images. Using light fields allows most of the computation to be performed in a preprocessing step; after this precomputation step, very accurate DRRs can be generated in time O(n<sup>2</sup>). Using a light field generated from 1,024 DRRs of resolution 256×256, we can create new DRRs that appear visually identical to ones generated by conventional ray casting. Importantly, the DRRs generated using the light field are computed over 300 times faster than DRRs generated using conventional ray casting (50 vs. 17,000 ms on a PC with a 2 GHz Intel Pentium 4 processor).
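As a rough illustration of why conventional DRR generation is O(n<sup>3</sup>), consider the simplest case of a parallel-beam DRR: each detector pixel is the line integral of CT attenuation along one ray, so an n×n detector over an n×n×n volume touches every voxel once. This is a minimal sketch with a toy volume, not the paper's ray caster or light field implementation:

```python
import numpy as np

def drr_parallel(ct_volume, axis=0):
    """Parallel-beam DRR: sum attenuation coefficients along one axis.

    Each of the n*n detector pixels integrates over n voxels, so one
    projection costs O(n^3) -- the cost the light field approach moves
    into a one-time precomputation step.
    """
    return ct_volume.sum(axis=axis)

rng = np.random.default_rng(0)
ct = rng.random((64, 64, 64))   # toy CT volume of attenuation values, n = 64
drr = drr_parallel(ct)          # one O(n^3) projection
assert drr.shape == (64, 64)
```

A light field, by contrast, caches precomputed rays so that synthesizing a new DRR is only an O(n<sup>2</sup>) lookup-and-interpolate pass over the detector pixels.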
Segmentation of fluoroscopy images is useful for fluoroscopy-to-CT image registration. However, it is impossible to assign a unique tissue type to each pixel. Rather, each pixel corresponds to an entire path of tissue types encountered along a ray from the X-ray source to the detector plate. Furthermore, there is an inherent many-to-one mapping between paths and pixel values. We address these issues by assigning to each pixel not a scalar value but a fuzzy vector of tissue probabilities. We perform this segmentation in a probabilistic way by first learning the typical distributions of bone, air, and soft tissue that correspond to certain fluoroscopy image values, and then assigning each value a probability distribution over its most likely generating paths. We then evaluate this segmentation on ground-truth patient data.
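The per-pixel probability vector described above can be sketched with a simple Bayesian inversion. The path classes, Gaussian likelihood parameters, and priors below are all made-up placeholders; in the actual method they would be learned from training data:

```python
import numpy as np

# Hypothetical path classes a ray might traverse, with assumed (not learned)
# Gaussian intensity models p(value | path) and path priors p(path).
PATHS  = ["air", "soft tissue", "soft tissue + bone"]
MEANS  = np.array([0.9, 0.5, 0.2])    # assumed mean detector intensities
STDS   = np.array([0.05, 0.1, 0.08])  # assumed intensity spreads
PRIORS = np.array([0.3, 0.5, 0.2])    # assumed prior over paths

def path_posterior(pixel_value):
    """Bayes' rule: p(path | value) proportional to p(value | path) p(path).

    Returns the fuzzy probability vector over generating paths that the
    abstract assigns to each pixel instead of a single tissue label.
    """
    likelihood = np.exp(-0.5 * ((pixel_value - MEANS) / STDS) ** 2) / STDS
    posterior = likelihood * PRIORS
    return posterior / posterior.sum()

p = path_posterior(0.55)
assert abs(p.sum() - 1.0) < 1e-9   # a proper probability vector
assert p.argmax() == 1             # 0.55 is most consistent with soft tissue
```

The key point is the output type: a normalized vector over paths per pixel, which keeps the many-to-one ambiguity explicit rather than collapsing it to a hard label.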
Registration of 2-D projection images and 3-D volume images is still a largely unsolved problem. In order to register a pre-operative CT image to an intra-operative 2-D X-ray image, one typically computes simulated X-ray images from the attenuation coefficients in the CT image (digitally reconstructed radiographs, DRRs). The simulated images are then compared to the actual image using intensity-based similarity measures to quantify the correctness of the current relative pose. However, the spatial information present in the CT is lost in the process of computing projections. This paper first introduces a probabilistic extension to the computation of DRRs that preserves much of the spatial separability of tissues along the simulated rays. In order to handle the resulting non-scalar data in intensity-based registration, we propose a way of computing entropy-based similarity measures such as mutual information (MI) from probabilistic images. We give an initial evaluation of the feasibility of our novel image similarity measure for 2-D to 3-D registration by registering a probabilistic DRR to a deterministic DRR computed from patient data used in frameless stereotactic radiosurgery.
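For reference, the standard (scalar-image) form of mutual information used in intensity-based registration can be computed from a joint histogram as sketched below. This is the conventional baseline, not the paper's probabilistic extension, which would replace the hard bin counts with expected counts derived from each pixel's probability vector:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI between two scalar images via their joint intensity histogram.

    MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) ), estimated from
    histogram2d bin frequencies; zero bins are skipped.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal over image B
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
a = rng.random((128, 128))
mi_self = mutual_information(a, a)                      # identical images
mi_rand = mutual_information(a, rng.random((128, 128))) # unrelated images
assert mi_self > mi_rand
```

Aligned (here, identical) images yield a much higher MI than unrelated ones, which is what makes MI usable as a registration objective; the probabilistic variant aims to keep this property while exploiting the preserved per-ray tissue information.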