Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue, requires laborious training on manually labeled subjects. In this work, the performance of kNN-based segmentation of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) using manual training is compared with a new method in which training is automated using an atlas. From 12 subjects, standard T2 and PD scans and a high-resolution, high-contrast scan (Siemens T1-weighted HASTE sequence with reverse contrast) were used as feature sets. For the conventional kNN method, manual segmentations were used for training, and classifications were evaluated in a leave-one-out study. The performance was studied as a function of k and of the number of samples per tissue. For fully automated training, scans were registered to a probabilistic brain atlas. Initial training samples were randomly selected per tissue based on a threshold on the tissue probability; these initial samples were then processed to retain only the most reliable ones. The performance of this method was studied for varying thresholds on the tissue probability. Classification results of both methods were validated by measuring the percentage overlap (similarity index, SI). For conventional kNN classification, varying the number of training samples did not result in significant differences, while increasing k gave significantly better results. In the method using automated training, GM is overestimated at the expense of CSF at higher thresholds on the tissue probability maps. For all tissue types, the difference between the conventional method (k=45) and the observers was not significantly larger than the inter-observer variability. The automated method performed slightly worse: it equaled the observers for WM, but performed worse for CSF and GM.
From these results it can be concluded that conventional kNN classification may replace manual segmentation, and that atlas-based kNN segmentation has strong potential for fully automated segmentation, without the need for laborious manual training.
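The two building blocks described above can be sketched in a few lines of NumPy: majority-vote kNN classification in a multi-scan feature space, and the percentage overlap (SI, equivalent to the Dice coefficient) used for validation. The Euclidean distance metric and the toy data in the usage note are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def knn_classify(train_X, train_y, test_X, k=45):
    """Label each test sample by majority vote among its k nearest
    training samples (Euclidean distance in feature space)."""
    train_X = np.asarray(train_X, float)
    train_y = np.asarray(train_y)
    labels = []
    for x in np.asarray(test_X, float):
        dist = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dist)[:k]]
        values, counts = np.unique(nearest, return_counts=True)
        labels.append(values[np.argmax(counts)])
    return np.array(labels)

def similarity_index(mask_a, mask_b):
    """Percentage overlap SI = 2|A ∩ B| / (|A| + |B|), i.e. the Dice coefficient."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

With well-separated toy clusters standing in for GM, WM and CSF feature distributions and a small k, the vote recovers the cluster labels, and SI is 1.0 for identical masks.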
Segmentation of the left myocardium in four-dimensional (space-time)
cardiac MRI data sets is a prerequisite of many diagnostic tasks.
We propose a fully automatic method based on global minimization of an
energy functional by means of the graph-cut algorithm.
Starting from automatically obtained segmentations of the left and
right ventricles and a cardiac region of interest, a spatial model is
constructed using simple and plausible assumptions.
This model is used to learn the appearance of different tissue types
by non-parametric robust estimation.
Our method does not require previously trained shape or appearance
models. Processing takes 30-40 s on current hardware.
We evaluated our method on 11 clinical cardiac MRI data sets acquired
using cine balanced fast field echo. Linear regression of the
automatically segmented myocardium volume against manual segmentations
(performed by a radiologist) showed an RMS error of about 12 ml.
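The global energy minimization by graph cut can be illustrated on a toy 1D problem. The sketch below builds the standard s-t graph for a two-label energy (quadratic data terms around hypothetical tissue means plus a Potts smoothness term) and minimizes it exactly with Edmonds-Karp max-flow; the energy form and parameters are illustrative assumptions, not the paper's actual functional.

```python
from collections import deque

def max_flow_reachable(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix; returns the set of
    nodes reachable from s in the final residual graph (source side of a min cut)."""
    n = len(cap)
    flow = [[0.0] * n for _ in range(n)]
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:       # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        bottleneck, v = float("inf"), t    # find the path's residual capacity
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t                              # push flow along the path
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
    reach, q = {s}, deque([s])             # residual reachability from s
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reach and cap[u][v] - flow[u][v] > 1e-12:
                reach.add(v)
                q.append(v)
    return reach

def graph_cut_segment(signal, mu_bg, mu_fg, lam):
    """Binary segmentation of a 1D signal by exact minimization of
    E(x) = sum_i (v_i - mu_{x_i})^2 + lam * sum_i [x_i != x_{i+1}]."""
    n = len(signal)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, v in enumerate(signal):
        cap[s][i] = (v - mu_bg) ** 2       # paid if i ends up labeled background
        cap[i][t] = (v - mu_fg) ** 2       # paid if i ends up labeled foreground
    for i in range(n - 1):                 # smoothness: penalize label changes
        cap[i][i + 1] = cap[i + 1][i] = lam
    reach = max_flow_reachable(cap, s, t)
    return [1 if i in reach else 0 for i in range(n)]
```

In 1D the pairwise term only couples neighbors along a chain; the same graph construction extends unchanged to 2D/3D grids, where graph cuts remain globally optimal for this class of two-label energies.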
The automatic segmentation of the heart's two ventricles from dynamic
("cine") cardiac anatomical images, such as 3D+time short-axis MRI, is of significant clinical importance. Previously published automated
methods have various disadvantages for routine clinical use. This work reports a novel automatic segmentation method that is very fast and robust against anatomical variability and image contrast variations. The method is mostly image-driven: it fully exploits the information provided by modern 4D (3D+time) balanced Fast Field Echo (bFFE) cardiac anatomical MRI, and makes only a few plausible assumptions about the images and the imaged heart. Specifically, the method needs neither geometrical shape models nor complex gray-level appearance models. It simply uses the two ventricles' contraction-expansion cycle, as well as the ventricles' spatial coherence along the time dimension. The performance of the cardiac ventricle segmentation method was demonstrated through a qualitative visual validation on 32 clinical exams: no gross failures were found for the left ventricle (right ventricle) in 32 (30) of the exams. In addition, the resulting quantitative cardiac functional parameters were clinically validated against a manual quantification of 18 exams; the automatically computed Ejection Fraction (EF) correlated well with the manually computed one: linear regression with RMS=3.7% (RMS expressed in EF units).
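For concreteness, the EF computation and the regression-based validation described above can be sketched as follows; the volume and EF numbers used in testing are hypothetical, not the study's data.

```python
import numpy as np

def ejection_fraction(edv, esv):
    """EF (%) = 100 * (EDV - ESV) / EDV, from end-diastolic and
    end-systolic ventricular volumes (same units, e.g. ml)."""
    return 100.0 * (edv - esv) / edv

def regression_rms(auto_ef, manual_ef):
    """Fit manual = a * auto + b by least squares and return the RMS
    residual, expressed in the same units as EF."""
    auto_ef = np.asarray(auto_ef, float)
    manual_ef = np.asarray(manual_ef, float)
    a, b = np.polyfit(auto_ef, manual_ef, 1)
    residuals = manual_ef - (a * auto_ef + b)
    return float(np.sqrt(np.mean(residuals ** 2)))
```

An RMS of 3.7 EF units from such a regression means the automatic EF tracks the manual EF to within a few percentage points after the best linear calibration.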
Proc. SPIE 5370, Medical Imaging 2004: Image Processing
KEYWORDS: Data modeling, Tissues, Magnetic resonance imaging, Image segmentation, Heart, 3D modeling, Monte Carlo methods, Motion models, Expectation maximization algorithms, Cardiovascular magnetic resonance imaging
The quantitative analysis of cardiac cine MRI sequences requires automated, robust, and fast image processing algorithms for the 4D (3D + time) segmentation of the heart chambers. The use of shape models has proven efficient in extracting the cardiac volumes for single phases, but less attention has been focused on incorporating prior knowledge about the cardiac motion. To explicitly address the temporal aspect of the segmentation problem, this paper proposes a full Bayesian model, where the prior information is represented by a cardiac shape and motion model. In this framework, the solution of the segmentation is defined by means of a probability distribution over the parameters of the space-time problem. The computed solution, obtained by means of sequential Monte Carlo techniques, has the advantage of being both spatially and temporally coherent. Furthermore, the method does not require any particular representation of the shape or of the motion model; it is therefore generic and highly flexible.
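A bootstrap particle filter is the simplest sequential Monte Carlo scheme of the kind referred to above. The toy sketch below tracks a single scalar state (standing in for one shape/motion parameter) over time; the random-walk motion prior and the Gaussian observation likelihood are illustrative assumptions, not the paper's actual cardiac shape and motion model.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, process_std=0.3, obs_std=0.5):
    """Bootstrap (sequential importance resampling) particle filter:
    propagate particles with a random-walk motion prior, weight them by a
    Gaussian observation likelihood, resample, and report the posterior mean."""
    particles = rng.normal(0.0, 1.0, n_particles)      # broad initial prior
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, process_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)                # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)                    # resample
        particles = particles[idx]
        estimates.append(particles.mean())             # posterior mean estimate
    return np.array(estimates)
```

Because each time step conditions on both the motion prior and the current observation, the estimated trajectory is temporally coherent rather than a sequence of independent per-frame fits, which is the property the abstract emphasizes.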
Intensity-based registration algorithms have proved to be accurate and robust for 3D-3D registration tasks. However, these methods utilise the information content within an image, and their performance is therefore hindered for sparse image data. This is the case for the registration of a single image slice to a 3D image volume. Some important applications could benefit from improved slice-to-volume registration, for example the planning of magnetic resonance (MR) scans or cardiac MR imaging, where images are acquired as stacks of single slices. We have developed and validated an information-based slice-to-volume registration algorithm that uses vector-valued probabilistic images of tissue classification derived from the original intensity images. We believe that such methods inherently incorporate more information about the images into the registration framework, especially for images containing severe partial volume artifacts. Initial experimental results indicate that the suggested method can achieve a more robust registration than standard intensity-based methods for the rigid registration of a single thick brain MR slice, containing severe partial volume artifacts in the through-plane direction, to a complete 3D MR brain volume.
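The idea of matching vector-valued tissue-probability images rather than raw intensities can be sketched on a toy 1D problem: each sample is soft-classified from Gaussian intensity models, and candidate poses (reduced here to candidate volume rows) are scored by the squared difference between probability vectors. The Gaussian class model, uniform class priors, and SSD cost are illustrative assumptions, not the algorithm's actual similarity measure.

```python
import numpy as np

def tissue_probabilities(intensities, class_means, class_std=1.0):
    """Soft tissue classification: per-sample posterior over classes from
    Gaussian intensity models with uniform priors. The result is the
    vector-valued probability image (one probability per class per sample)."""
    intensities = np.asarray(intensities, float)
    means = np.asarray(class_means, float)
    likelihood = np.exp(-0.5 * ((intensities[:, None] - means[None, :]) / class_std) ** 2)
    return likelihood / likelihood.sum(axis=1, keepdims=True)

def register_slice(volume_rows, slice_profile, class_means):
    """Exhaustive search over candidate volume rows: return the index of the
    row whose tissue-probability vectors best match the slice's
    (smallest summed squared difference)."""
    p_slice = tissue_probabilities(slice_profile, class_means)
    best_row, best_cost = None, np.inf
    for r, row in enumerate(volume_rows):
        p_row = tissue_probabilities(np.asarray(row, float), class_means)
        cost = np.sum((p_row - p_slice) ** 2)
        if cost < best_cost:
            best_row, best_cost = r, cost
    return best_row
```

The soft classification is what makes the scheme tolerant of partial volume artifacts: a mixed voxel contributes a graded probability vector instead of a single misleading intensity.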