In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation from T1-weighted images. The pipeline is built upon a segmentation algorithm called multi-atlas likelihood-fusion (MALF), which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels: the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. MALF was designed to estimate brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate these six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask, to which we apply morphological smoothing to create the final mask for brain extraction. For computational efficiency, all input images to MALF are down-sampled by a factor of two, and small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence the term "fast MALF". Skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study of Alzheimer's dementia. Quantitative error analysis is carried out on 36 scans to evaluate the accuracy of the pipeline in segmenting the lateral ventricles. The volumes of the automated lateral ventricle segmentations obtained from the proposed pipeline are compared across three clinical groups, and the ventricle volumes are found to be sensitive to diagnosis.
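To illustrate the mask-construction step (grouping four of the six labels into a binary brain mask and smoothing it morphologically), here is a minimal Python sketch using `scipy.ndimage`. The label codes and the particular smoothing operators (binary closing followed by hole filling) are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy import ndimage

# Hypothetical integer codes for the six whole-brain labels; the actual
# values used by the MALF pipeline are not specified in the abstract.
BACKGROUND, GM, WM, CSF, VENTRICLES, SKULL = 0, 1, 2, 3, 4, 5

def brain_mask_from_labels(label_map, closing_iters=2):
    """Group GM, WM, CSF, and the lateral ventricles into a binary
    brain mask, then apply morphological smoothing (here: binary
    closing followed by hole filling, one plausible choice)."""
    mask = np.isin(label_map, [GM, WM, CSF, VENTRICLES])
    mask = ndimage.binary_closing(mask, iterations=closing_iters)
    mask = ndimage.binary_fill_holes(mask)
    return mask
```

Applying the resulting mask to the T1-weighted volume yields the skull-stripped brain; the skull and background labels are excluded by construction.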
In this paper, we propose a method to automatically segment the corticospinal tract (CST) in diffusion tensor imaging (DTI) data by incorporating anatomical features from the multi-modality images derived from DTI, using multiple DTI atlases. Both the test subject to be segmented and each atlas comprise images of different modalities: the mean diffusivity, the fractional anisotropy, and the images representing the three elements of the primary eigenvector. Each atlas has a paired image containing manually delineated segmentations of three regions of interest: the left CST, the right CST, and the background surrounding the CST. We solve the segmentation problem via maximum a posteriori estimation using generative models. Each modality image is modeled as a conditional Gaussian mixture random field, conditioned on the atlas-label pair and the local change of coordinates for each label. The expectation-maximization algorithm is used to alternately estimate the locally optimal diffeomorphisms for each label and the maximizing segmentations. The algorithm is evaluated on six subjects with a wide range of pathology. We compare the proposed method with two state-of-the-art multi-atlas label-fusion methods, against which it displays a high level of accuracy.
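The expectation-maximization alternation over label posteriors can be sketched as follows. This toy version models each modality channel as an independent Gaussian per label and omits the per-label diffeomorphism updates entirely, so the function name, array layouts, and all simplifications are illustrative assumptions rather than the paper's actual algorithm.

```python
import numpy as np

def em_label_posteriors(features, means, variances, priors, n_iter=10):
    """Toy E-step/M-step sketch (hypothetical, simplified):
    features:  (n_voxels, n_channels) multi-modality intensities
    means:     (n_labels, n_channels) per-label channel means
    variances: (n_labels, n_channels) per-label channel variances
    priors:    (n_voxels, n_labels) atlas-derived label priors"""
    for _ in range(n_iter):
        # E-step: per-voxel label responsibilities from Gaussian
        # log-likelihoods summed over the independent channels
        ll = -0.5 * (((features[:, None, :] - means[None]) ** 2 / variances[None])
                     + np.log(2 * np.pi * variances[None])).sum(axis=2)
        log_post = ll + np.log(priors + 1e-12)
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate per-label Gaussian parameters from the
        # responsibilities (the diffeomorphism update is omitted here)
        w = resp.sum(axis=0)
        means = (resp.T @ features) / w[:, None]
        variances = (resp.T @ features ** 2) / w[:, None] - means ** 2
        variances = np.maximum(variances, 1e-6)
    return resp.argmax(axis=1), resp
```

In the full method, each E/M cycle would also re-estimate the local change of coordinates for each label before the segmentation is updated.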