Multimodality medical image fusion: probabilistic quantification, segmentation, and registration (26 June 1998)
Multimodality medical image fusion is becoming increasingly important in clinical applications; it involves the information processing, registration, and visualization of interventional and/or diagnostic images obtained from different modalities. This work develops a multimodality medical image fusion technique through probabilistic quantification, segmentation, and registration, based on statistical data mapping, multiple-feature correlation, and probabilistic mean ergodic theorems. The goal of image fusion is to geometrically align two or more image areas/volumes so that pixels/voxels representing the same underlying anatomical structure can be superimposed meaningfully. Three steps are involved. To accurately extract the regions of interest, we developed a model-supported Bayesian relaxation labeling algorithm together with integrated edge-detection and region-growing algorithms that segment the images into objects. After identifying the shift-invariant features (i.e., edge and region information), we provide an accurate and robust registration technique based on matching multiple binary feature images through site-model-based image re-projection. The image is first segmented into a specified number of regions; a rough contour is then obtained by delineating and merging some of the segmented regions, and region growing and morphological filtering are applied to extract the contour and remove disconnected residual pixels left over from segmentation. The matching algorithm proceeds as follows: (1) the centroids of the PET/CT and MR images are computed and translated to the centers of both images; (2) a preliminary registration is performed to determine an initial range of scaling factors and rotation angles, and the MR image is resampled according to the specified parameters; (3) the total binary difference between the corresponding binary maps of the two images is computed for the candidate registration parameters, and the final registration is achieved when the minimum number of mismatched pixels identifies the optimal scaling factor and rotation angle. Cross-modality quantification is then performed by combining the probabilistic pixel memberships from one modality (e.g., MRI) with the functional activities from another modality (e.g., PET or CT) within any given region of interest. The consistency of the comparative clinical studies shows that our matching technique yields a robust and accurate fusion of MRI and PET/CT scans.
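The centroid alignment and exhaustive search over scaling factors and rotation angles in steps (1)-(3) can be sketched as follows. This is a minimal 2-D illustration under stated assumptions: all array names and search grids are illustrative, and plain nearest-neighbor resampling stands in for the paper's site-model-based image re-projection.

```python
# Minimal sketch of the binary-map matching loop; names and search grids
# are illustrative assumptions, not the authors' implementation.
import numpy as np

def center_map(img):
    """Shift a binary map so its centroid sits at the image center.
    Assumes the object lies away from the border (np.roll wraps around)."""
    ys, xs = np.nonzero(img)
    h, w = img.shape
    dy = int(round((h - 1) / 2 - ys.mean()))
    dx = int(round((w - 1) / 2 - xs.mean()))
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def resample(img, scale, angle_deg):
    """Nearest-neighbor resampling of img under a scaling and a rotation
    about the image center (inverse mapping from output to source pixels)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    sy = (np.cos(a) * dy - np.sin(a) * dx) / scale + cy
    sx = (np.sin(a) * dy + np.cos(a) * dx) / scale + cx
    syi, sxi = np.rint(sy).astype(int), np.rint(sx).astype(int)
    valid = (syi >= 0) & (syi < h) & (sxi >= 0) & (sxi < w)
    out = np.zeros_like(img)
    out[valid] = img[syi[valid], sxi[valid]]
    return out

def register(fixed, moving, scales, angles):
    """Exhaustive search: resample the moving binary map for every
    (scale, angle) pair and keep the one with the fewest mismatched pixels."""
    f, m = center_map(fixed), center_map(moving)
    best = (None, None, f.size + 1)
    for s in scales:
        for a in angles:
            mismatch = int(np.count_nonzero(f != resample(m, s, a)))
            if mismatch < best[2]:
                best = (s, a, mismatch)
    return best
```

The mismatch count is simply the number of pixels at which the two binary maps disagree, so the search minimizes the total binary difference in the sense of step (3).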
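The cross-modality quantification step, weighting each voxel's functional activity by its probabilistic tissue membership from the other modality, amounts to a membership-weighted regional mean. A minimal sketch with assumed names (`memberships` for the per-class probability maps from the MR segmentation, `activity` for the co-registered PET/CT image):

```python
# Membership-weighted regional quantification; names are illustrative
# assumptions (`memberships`: one probability map per tissue class from the
# MR segmentation; `activity`: the co-registered PET or CT image).
import numpy as np

def regional_activity(memberships, activity):
    """Mean functional activity per tissue class, weighting every voxel by
    its probabilistic class membership instead of a hard 0/1 label."""
    means = []
    for p in memberships:
        weight = p.sum()
        means.append(float((p * activity).sum() / weight) if weight > 0 else 0.0)
    return means
```

Because the memberships are probabilities rather than hard labels, a partial-volume voxel contributes fractionally to every class it overlaps instead of being forced into a single region.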
© 1998 Society of Photo-Optical Instrumentation Engineers (SPIE).
Yue Joseph Wang, Matthew T. Freedman, Jian Hua Xuan, Qinfen Zheng, Seong Ki Mun, "Multimodality medical image fusion: probabilistic quantification, segmentation, and registration", Proc. SPIE 3335, Medical Imaging 1998: Image Display, (26 June 1998); https://doi.org/10.1117/12.312497
