Automatic spleen segmentation on CT is challenging due to the complexity of abdominal structures. Multi-atlas
segmentation (MAS) has been shown to be a promising approach for spleen segmentation. To cope with the
substantial registration errors between heterogeneous abdominal CT images, the context learning for
performance level estimation (CLSIMPLE) method was previously proposed. The context learning method
generates a probability map for a target image using a Gaussian mixture model (GMM) as the prior in a Bayesian
framework. However, CLSIMPLE typically trains a single GMM from the entire heterogeneous training atlas
set. Therefore, the estimated spatial prior maps may not represent specific target images accurately. Rather than
using all training atlases, we propose an adaptive GMM-based context learning technique (AGMMCL) that trains the
GMM on subsets of the training data, with each subset tailored to a specific target image. Training subsets are
selected based on the similarity between the atlases and the target image in cranio-caudal spleen length, which is
measured manually on the target image. To validate the proposed method, a heterogeneous dataset with a
large variation of spleen sizes (100 cc to 9000 cc) is used. We designate a size metric to differentiate the groups
of spleens, with 0 to 100 cc as small, 200 to 500 cc as medium, 500 to 1000 cc as large, 1000 to 2000 cc as XL, and
2000 cc and above as XXL. The results show that AGMMCL achieves more accurate spleen segmentations by training
GMMs adaptively for different target images.
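The two core ideas above, selecting an atlas subset by cranio-caudal length similarity and fitting a GMM spatial prior on that subset, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the function names (`select_atlases`, `fit_gmm`, `prior_map`), the subset size `k`, the number of mixture components, and the plain EM routine are all assumptions, and a real pipeline would operate on registered 3D voxel coordinates pooled from the selected atlas label maps.

```python
import numpy as np

def select_atlases(atlas_lengths, target_length, k=5):
    # Pick the k atlases whose cranio-caudal spleen length is closest
    # to the manually measured length of the target image.
    diffs = np.abs(np.asarray(atlas_lengths, dtype=float) - target_length)
    return np.argsort(diffs)[:k]

def gauss_pdf(x, mu, cov):
    # Multivariate normal density evaluated at each row of x.
    d = x.shape[1]
    diff = x - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)) / norm

def fit_gmm(points, n_components=2, n_iter=50, seed=0):
    # Plain EM over spleen voxel coordinates pooled from the selected atlases.
    rng = np.random.default_rng(seed)
    n, d = points.shape
    mu = points[rng.choice(n, n_components, replace=False)].copy()
    cov = np.stack([np.cov(points.T) + 1e-6 * np.eye(d)] * n_components)
    pi = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel.
        resp = np.stack(
            [pi[j] * gauss_pdf(points, mu[j], cov[j]) for j in range(n_components)],
            axis=1,
        )
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and covariances.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp.T @ points) / nk[:, None]
        for j in range(n_components):
            diff = points - mu[j]
            cov[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return pi, mu, cov

def prior_map(points, pi, mu, cov):
    # Spatial prior probability of "spleen" at each target voxel coordinate;
    # this plays the role of the GMM prior in the Bayesian framework.
    return sum(pi[j] * gauss_pdf(points, mu[j], cov[j]) for j in range(len(pi)))
```

In this sketch the adaptivity lives entirely in `select_atlases`: each target image gets its own GMM, fit only on atlases of similar cranio-caudal extent, so the resulting prior map tracks the target's spleen size rather than an average over the whole heterogeneous atlas set.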
Jiaqi Liu, Yuankai Huo, Zhoubing Xu, Albert Assad, Richard G. Abramson, and Bennett A. Landman, "Multi-atlas spleen segmentation on CT using adaptive context learning," Proc. SPIE 10133, Medical Imaging 2017: Image Processing, 1013309 (Presented at SPIE Medical Imaging: February 12, 2017; Published: 24 February 2017); https://doi.org/10.1117/12.2254437.