The diagnosis of CSF leak using MR images alone is difficult due to the inherently poor bony information on MR
images. While CT images show bones exquisitely, they lack the soft tissue contrast that is important for detecting CSF
leak. For these reasons, CT cisternography has been the preferred modality for CSF leak diagnosis despite its
invasiveness. We propose a method to fuse the CT and MR images to combine the complementary information from
each modality, which we believe will help with the diagnosis and surgical planning for patients with CSF leak, and
potentially reduce or even replace the use of CT cisternography. In the first step, the user identifies three roughly corresponding
points on both the CT and MR images. A GUI was designed that allows the user to quickly navigate through the images
by reslicing the volumes interactively. After finding the CT and MR slices at approximately the same anatomical
position, the user places three markers to represent the same spatial location. In the second step, a generalized Procrustes
transform is used to compute an initial transformation that aligns the CT and MR, which is then optimized using mutual
information maximization. The CT is registered to the MR using the optimal transformation, and bone masks obtained by
thresholding the CT intensities are blended with the MR images. Initial results suggest that CT/MR
fusion images are superior to unprocessed CT and MR images in diagnosing CSF leak, and a formal clinical evaluation
is being planned to assess the efficacy of fusion images.
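The landmark-based initialization above (three user-placed point pairs on CT and MR, aligned by a Procrustes fit before mutual information refinement) can be sketched in two dimensions, where the optimal rigid alignment has a closed form. This is an illustrative stand-in only: the actual method operates on 3-D points, where an SVD-based (Kabsch) solution is typically used, and the function names here are our own.

```python
import math

def rigid_procrustes_2d(src, dst):
    """Least-squares rigid (rotation + translation) alignment mapping the
    source landmarks onto the destination landmarks. 2-D stand-in for the
    3-D Procrustes initialization (3-D uses an SVD/Kabsch solution)."""
    n = len(src)
    # Centroids of the two point sets.
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Cross-covariance accumulators of the centered points.
    a = b = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        a += xs * xd + ys * yd   # "dot" term
        b += xs * yd - ys * xd   # "cross" term
    theta = math.atan2(b, a)     # closed-form optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

def apply_rigid_2d(theta, t, p):
    """Apply the recovered rotation and translation to a single point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

Because the three landmarks are only "roughly corresponding", this closed-form fit is used solely as a starting point; the mutual information optimization then refines the transformation.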
While there are many publicly available software packages for medical image processing, making them available to end
users in clinical and research labs remains non-trivial. An even more challenging task is to combine these packages into
pipelines that seamlessly meet specific needs, because each piece of software usually has its own input/output formats,
parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image
Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources.
The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to
connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for
WHIPPET, describing its input/output formats, parameters, ROI description methods, scripting support, and extensibility, and
classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension
level, or source code level. We then identify components that can be connected in a pipeline directly via image format
conversion. We set up a TWiki server for web-based collaboration, so that component analysis, task requests, project
tracking, knowledge base management, and technical support can all be performed online. Currently WHIPPET
includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is
expanding. Users have identified several needed task modules and we report on their implementation.
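The glue-script idea can be illustrated with a toy sketch: each wrapped package declares the image format it consumes and produces, and the pipeline driver inserts a conversion step whenever adjacent components disagree. All names here (Component, convert, the format strings) are hypothetical illustrations, not WHIPPET's actual API.

```python
class Component:
    """A wrapped external package: its expected input format, produced
    output format, and a callable standing in for invoking the real tool."""
    def __init__(self, name, in_fmt, out_fmt, run):
        self.name, self.in_fmt, self.out_fmt, self.run = name, in_fmt, out_fmt, run

def convert(image, dst_fmt):
    # Stand-in for a real image file-format converter (e.g. ANALYZE -> NIfTI).
    converted = dict(image)
    converted["fmt"] = dst_fmt
    return converted

def run_pipeline(components, image):
    """Chain components at the image-file level, converting formats
    between stages as needed."""
    for comp in components:
        if image["fmt"] != comp.in_fmt:
            image = convert(image, comp.in_fmt)
        image = comp.run(image)
        image["fmt"] = comp.out_fmt
    return image
```

For example, a skull-stripping stage that reads NIfTI can be chained directly in front of a segmentation stage that reads ANALYZE; the driver performs the conversion between them, which is exactly the "image file level" compatibility class described above.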
We report an image segmentation and registration method for studying joint morphology and kinematics from <i>in vivo</i> MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral position and seven dynamic positions, including maximal flexion, rotation, and inversion/eversion. A segmentation method combining graph cuts and level sets was developed that allows a user to interactively delineate 14 bones in the neutral-position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid body transformation for each bone is obtained by registering the neutral-position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, comprising three normal and three pathological feet. For validation, our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.
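The volume overlap ratio used for validation can be computed as below. The abstract does not state the exact definition, so this sketch assumes the common Dice coefficient over voxel sets; a Jaccard-style definition would give different numbers.

```python
def overlap_ratio(mask_a, mask_b):
    """Dice overlap between two binary segmentations given as sets of
    voxel coordinates: 1.0 for identical masks, 0.0 for disjoint ones.
    (Assumes the 'volume overlap ratio' is Dice-like; this is our
    interpretation, not stated in the abstract.)"""
    if not mask_a and not mask_b:
        return 1.0
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))
```

A per-bone Dice of roughly 0.92, as reported, indicates that the two segmentations agree on the large majority of each bone's voxels, with disagreement confined to a thin boundary shell.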
CT-Myelography (CTM) is routinely used for planning surgery for degenerative disease of the spine, but its invasive nature, significant potential morbidity, and high costs make a noninvasive substitute desirable. We report our work on evaluating CT and MR image fusion as an alternative to CTM. Because the spine is only piecewise rigid, a multi-rigid approach to the registration of spinal CT and MR images was developed (SPIE 2004), in which the spine on CT images is first segmented into separate vertebrae, each of which is then rigidly registered with the corresponding vertebra on MR images. The results are then blended to obtain fusion images. Since they contain information from both modalities, we hypothesized that fusion images would be equivalent to CTM. To test this we selected 34 patients who had undergone MRI and CTM for degenerative disease of the cervical spine, and used the multi-rigid approach to produce fused images. A clinical vignette for each patient was created and presented along with either CT/MR fusion images or CTM images. A group of spine surgeons is asked to formulate detailed surgical plans based on each set of images, and the surgical plans are compared. A similar study assessing diagnostic agreement is being performed with neuroradiologists, who also assess the accuracy of registration. Our work to date has demonstrated the feasibility of segmentation and multi-rigid fusion in clinical cases and the acceptability of the questionnaire to physicians. Preliminary analysis of one surgeon's and one neuroradiologist's evaluations has been performed.
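The per-vertebra rigid registrations in this work are driven by mutual information maximization (as the companion abstract notes). A minimal joint-histogram estimate of mutual information over paired voxel intensities might look like the following sketch; the bin count and intensity range are arbitrary illustrative choices.

```python
import math
from collections import Counter

def mutual_information(a, b, bins=32, lo=0.0, hi=256.0):
    """Joint-histogram estimate (in nats) of the mutual information
    between two equal-length lists of paired voxel intensities."""
    def bin_of(v):
        i = int((v - lo) / (hi - lo) * bins)
        return min(max(i, 0), bins - 1)  # clamp out-of-range intensities
    n = len(a)
    # Joint histogram of (CT bin, MR bin) pairs.
    joint = Counter((bin_of(x), bin_of(y)) for x, y in zip(a, b))
    # Marginal histograms derived from the joint one.
    pa, pb = Counter(), Counter()
    for (i, j), c in joint.items():
        pa[i] += c
        pb[j] += c
    mi = 0.0
    for (i, j), c in joint.items():
        p_ij = c / n
        mi += p_ij * math.log(p_ij / ((pa[i] / n) * (pb[j] / n)))
    return mi
```

The registration optimizer perturbs each vertebra's rigid transform and keeps the pose that maximizes this quantity: MI is high when one image's intensities predict the other's, without assuming any linear relationship between CT and MR intensities.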
We present our work on fusion of MR and CT images of the cervical spine. To achieve the required registration accuracy of approximately 1 mm, the spine is treated as a collection of rigid vertebrae, and a separate rigid body transformation is applied to each
(Hawkes). This in turn requires segmentation of the CT datasets into separate vertebral images, which is difficult because the narrow planes separating adjacent vertebrae are parallel to the axial plane of the CT scans. We solve this problem by evolving all the vertebral contours simultaneously using a level set method, and use contour competition to estimate the position of the vertebral edges when a clean separation between adjacent vertebrae is not seen. Contour competition is based on the vertical scan principle: no part of a given vertebra is vertically below any part of an inferior vertebra. Once segmentation is complete, the individual rigid body transforms are estimated using mutual information maximization, and the CT images of the vertebrae are superimposed on the MR scans. The resultant fused images contain the bony detail of CT and the soft tissue discrimination of MR, and appear to be diagnostically equivalent or superior to CT myelograms. A formal test of these conclusions is planned for the next phase of our work.
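The vertical scan principle can be expressed as a simple check over two candidate segmentations. In this sketch each vertebra is a set of (x, y, z) voxels with z increasing toward the head, and the principle is read per vertical scan line (same x, y column); both conventions are our interpretation of the prose statement above, not a specification from the abstract.

```python
def violates_vertical_scan(superior, inferior):
    """True if any voxel of the superior vertebra lies below a voxel of
    the inferior vertebra in the same vertical column. Voxels are
    (x, y, z) tuples with z increasing upward (an assumed convention;
    many scanners index z the other way)."""
    # Highest inferior-vertebra voxel in each (x, y) column.
    top_of_inferior = {}
    for x, y, z in inferior:
        key = (x, y)
        top_of_inferior[key] = max(z, top_of_inferior.get(key, z))
    return any((x, y) in top_of_inferior and z < top_of_inferior[(x, y)]
               for x, y, z in superior)
```

During contour competition, a constraint of this kind lets the evolving level set fronts claim ambiguous voxels consistently: a voxel below the inferior vertebra's surface in its column can be denied to the superior vertebra even when the intervertebral gap is invisible in the axial CT slices.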