Automated interpretation of CT scans is an important, clinically relevant area, as the number of such scans is increasing rapidly and their interpretation is time-consuming. Anatomy localization is an important prerequisite for any such interpretation task. It can be achieved by image-to-atlas registration, where the atlas serves as a reference space for annotations such as organ probability maps. Tissue-type-based atlases allow fast and robust processing of arbitrary CT scans. Here we present two methods which significantly improve organ localization based on tissue types. A first problem is the definition of the tissue types, which until now has been done heuristically, based on experience. We present a method to determine suitable tissue types from sample images automatically. A second problem is the restriction of the transformation space: all prior approaches use global affine maps. We present a hierarchical strategy to refine this global affine map: for each organ or region of interest, a localized tissue-type atlas is computed and used for a subsequent local affine registration step. A three-fold cross-validation on 311 CT images with different fields of view demonstrates a reduction of the organ localization error by 33%.
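The abstract leaves open how the tissue types are determined from the sample images. As a minimal sketch only, assuming that tissue types correspond to clusters of Hounsfield-unit (HU) values, a deterministic 1-D k-means over sampled intensities could look as follows (the function name, the quantile initialisation and the derived thresholds are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

def determine_tissue_types(hu_samples, n_types=5, n_iter=50):
    """Cluster HU samples into tissue-type intervals via 1-D k-means.

    Illustrative stand-in: the abstract states that tissue types are
    derived automatically from sample images but does not give the
    procedure, so a simple intensity clustering is assumed here.
    """
    hu_samples = np.asarray(hu_samples, dtype=float)
    # deterministic initialisation: spread centers over intensity quantiles
    centers = np.quantile(hu_samples, (np.arange(n_types) + 0.5) / n_types)
    for _ in range(n_iter):
        # assign each sample to its nearest center, then update the means
        labels = np.argmin(np.abs(hu_samples[:, None] - centers[None, :]), axis=1)
        for k in range(n_types):
            members = hu_samples[labels == k]
            if members.size:
                centers[k] = members.mean()
    centers.sort()
    # midpoints between adjacent centers define tissue-type boundaries
    thresholds = (centers[:-1] + centers[1:]) / 2.0
    return centers, thresholds
```

With three well-separated intensity populations (roughly air, soft tissue, bone), the three cluster centers recover the population means.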
A fully automatic method for generating a whole-body atlas from CT images is presented. The atlas serves as a reference space for annotations. It is built from a large collection of partially overlapping medical images by a dedicated registration scheme. The atlas itself consists of probabilistic tissue-type maps and can represent anatomical variations. The registration scheme is based on an entropy-like measure of these maps and is robust with respect to field-of-view variations. In contrast to other atlas generation methods, which typically rely on a sufficiently large set of annotations on training cases, the presented method requires only the images. An iterative refinement strategy is used to automatically stitch the images together to build the atlas. Affine registration of unseen CT images to the probabilistic atlas can then be used to transfer reference annotations, e.g. organ models for segmentation initialization or reference bounding boxes for field-of-view selection. The robustness and generality of the method are shown using a three-fold cross-validation of the registration on a set of 316 CT images of unknown content and large anatomical variability. As an example, 17 organs are annotated in the atlas reference space and their localization in the test images is evaluated. The method yields a recall (sensitivity), specificity and precision of at least 96% and thus performs excellently in comparison to competing methods.
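The abstract names only an "entropy-like measure" of the probabilistic tissue-type maps without giving its formula. One natural candidate, shown purely as a hypothetical sketch, is the negative log-likelihood of the tissue labels observed in a transformed test image under the atlas probabilities:

```python
import numpy as np

def entropy_similarity(atlas_probs, label_image, eps=1e-6):
    """Entropy-like registration score (hypothetical reconstruction).

    atlas_probs: (K, H, W) probability map per tissue type.
    label_image: (H, W) integer tissue-type labels of the transformed
    test image. The score is low when the observed labels fall where
    the atlas assigns them high probability; the paper's actual measure
    is not specified in the abstract and may differ.
    """
    # probability the atlas assigns to each voxel's observed tissue type
    picked = np.take_along_axis(atlas_probs, label_image[None], axis=0)[0]
    return float(-np.mean(np.log(picked + eps)))
```

During registration, this score would be minimised over the transformation parameters.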
Segmentation of intracranial tumors in Magnetic Resonance (MR) data sets and classification of the tumor
tissue into vital, necrotic, and perifocal edematous areas is required in a variety of clinical applications. Manual
delineation of the tumor tissue boundaries is a tedious and error-prone task, and reproducibility is problematic.
Furthermore, tissue classification mostly requires information from several MR protocols and contrasts. Here we
present a nearly automatic segmentation and classification algorithm for intracranial tumor tissue working on a
combination of T1 weighted contrast enhanced (T1CE) and fluid attenuated inversion recovery (FLAIR) data
sets. Both data types are included in MR intracranial tumor protocols that are used in clinical routine. The
algorithm is based on a region growing technique. The main required user interaction is a mouse click to provide
the starting point. The region growing thresholds are automatically adapted to the given
data sets. If the segmentation result is not fully satisfactory, the user can adapt the algorithmic parameters
for final fine-tuning. We developed a user interface, where the data sets can be loaded, the segmentation can be
started by a mouse click, the parameters can be amended, and the segmentation results can be saved. With this
user interface, our segmentation tool can be used in the hospital on an image processing workstation or even
directly on the MR scanner. This enables an extensive validation study. On the 20 clinical test cases of human
intracranial tumors investigated so far, the results were satisfactory in 85% of the cases.
Segmentation of brain tumours in Magnetic Resonance (MR) images and classification of the tumour tissue into
vital, necrotic, and perifocal edematous areas is required in a variety of clinical applications. Manual delineation of
the tumour tissue boundaries is a tedious and error-prone task, and the results are not reproducible. Furthermore,
tissue classification mostly requires information from several MR protocols and contrasts. Here we present a nearly
automatic segmentation and classification algorithm for brain tumour tissue working on a combination of T1
weighted contrast enhanced (T1CE) images and fluid attenuated inversion recovery (FLAIR) images. Both
image types are included in MR brain tumour protocols that are used in clinical routine. The algorithm is
based on a region growing technique, hence it is fast (ten seconds on a standard personal computer). The only
required user interaction is a mouse click for providing the starting point. The region growing parameters are
automatically adapted in the course of growing, and if a new maximum image intensity is found, the region
growing is restarted. This makes the algorithm robust: within a certain capture range, the result is
independent of the chosen starting point. Furthermore, we use a lossless coarse-to-fine approach, which, together with the automatic
adaptation of the parameters, can avoid leakage of the region growing procedure. We tested our algorithm on
20 cases of human glioblastoma and meningioma. In the majority of the test cases we obtained satisfactory results.
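The restart behaviour described above can be sketched for a 2-D image with 4-connectivity; the relative threshold `rel_low` and all names are our illustrative assumptions, not the published parameterisation:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, rel_low=0.7):
    """Region growing with automatic restart (simplified sketch).

    Grows from `seed`, accepting neighbours whose intensity lies above a
    threshold derived from the brightest voxel seen so far. If growing
    reveals a new maximum intensity, the threshold is updated and the
    growth is restarted from that voxel, approximating the restart
    strategy described in the abstract.
    """
    seed = tuple(seed)
    while True:
        max_val = img[seed]
        thr = rel_low * max_val          # threshold adapted to current maximum
        mask = np.zeros(img.shape, dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        restarted = False
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not mask[ny, nx] and img[ny, nx] >= thr):
                    if img[ny, nx] > max_val:   # new maximum found: restart
                        seed = (ny, nx)
                        restarted = True
                        break
                    mask[ny, nx] = True
                    queue.append((ny, nx))
            if restarted:
                break
        if not restarted:
            return mask
```

Termination is guaranteed because each restart strictly increases the maximum intensity, which is bounded by the image maximum.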
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the
quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are
visible in images from modern multi-slice CT scanners, their contrast in the CT image often does not separate the lobes completely.
This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses
general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which
even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of
the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a
special fissure feature image, and a performance evaluation over a test data set showing an average segmentation
accuracy of 1 to 3 mm.
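The "special fissure feature image" is not specified in the abstract. A common choice for enhancing thin, sheet-like structures such as fissures is a plate-ness measure derived from the eigenvalues of the image Hessian; the following numpy-only sketch (smoothing omitted for brevity) is a hypothetical stand-in, not the authors' actual filter:

```python
import numpy as np

def fissure_feature(volume):
    """Plate-ness feature image (hypothetical fissure enhancement).

    Fissures appear as thin bright sheets: one strongly negative Hessian
    eigenvalue across the sheet and two near-zero eigenvalues along it.
    """
    vol = volume.astype(float)
    grads = np.gradient(vol)
    # assemble the Hessian per voxel from second-order central differences
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = second[j]
    eig = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
    # plate-ness: strong |lowest| eigenvalue, weak remaining eigenvalues
    feat = np.abs(eig[..., 0]) - np.abs(eig[..., 1]) - np.abs(eig[..., 2])
    return np.clip(feat, 0.0, None)
```

For a synthetic bright plane embedded in a dark volume, the response peaks on the plane and vanishes away from it.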
An active area of medical image processing is the development of fast and accurate algorithms for image
registration. By combining multi-modal images, we are able to compensate for the disadvantages of one imaging modality
with the advantages of another modality. For instance, a Computed Tomography (CT) image containing the
anatomy can be combined with metabolic information of a Positron Emission Tomography (PET) image. It is
quite likely that a patient will not have the same position in both imaging systems. Furthermore, some
regions, for instance in the abdomen, can vary in shape and position due to different filling of the rectum. Hence, a
multi-modal image registration is needed to calculate a deformation field for one image in order to maximize the
similarity between the two images, described by a so-called distance measure.
In this work, we present a method to adapt a multi-modal distance measure, here mutual information (MI),
with weighting masks. These masks are used to enhance relevant image structures and suppress image regions
which otherwise would disturb the registration process. The performance of our method is tested on phantom
data and real medical images.
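A straightforward way to realise mutual information with weighting masks, assumed here purely for illustration, is to weight the joint intensity histogram by the mask values so that suppressed regions contribute nothing to the measure:

```python
import numpy as np

def masked_mutual_information(a, b, weights, bins=32):
    """Mutual information with a voxel weighting mask (sketch).

    `weights` emphasises relevant structures and suppresses disturbing
    regions, as described in the abstract; a weighted joint histogram is
    one straightforward realisation and is assumed here.
    """
    # joint histogram where each voxel pair contributes its mask weight
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                             bins=bins, weights=weights.ravel())
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)    # marginal of image a
    py = p.sum(axis=0, keepdims=True)    # marginal of image b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

An image compared with itself yields a much higher score than a comparison with an unrelated image, as expected of mutual information.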
Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth, they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region of interest of clinical relevance. Using artificially but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region of interest allows the analysis of the registration accuracy to be restricted to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used to identify the best strategy for the initial placement of the control points.
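The proposed quality measure, the displacement error averaged over all pixels or over a region of interest, can be sketched directly (the array layout and names are our assumptions):

```python
import numpy as np

def mean_displacement_error(u_true, u_est, roi=None):
    """Displacement error averaged over all pixels or a region of interest.

    u_true, u_est: displacement fields of shape (2, H, W), holding the
    per-pixel (dy, dx) components; roi: optional boolean mask restricting
    the evaluation to a clinically relevant area.
    """
    # per-pixel Euclidean distance between ground-truth and estimate
    err = np.sqrt(((u_true - u_est) ** 2).sum(axis=0))
    if roi is not None:
        err = err[roi]
    return float(err.mean())
```

For artificially deformed images the ground-truth field `u_true` is known by construction, so this measure can be evaluated everywhere, not just at landmarks.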
Dynamic contrast enhanced (DCE) MRI mammography is currently receiving much interest in clinical research. It has the potential to discriminate between benign and malignant lesions by analysis of the lesion's contrast uptake. However, registration of the individual images of a contrast-uptake series is crucial in order to avoid motion artefacts in the uptake curves, which could affect the diagnosis. On the other hand, it is well known from the registration literature that a registration using a standard similarity measure (e.g. mean sum of squared differences, cross-correlation) may itself cause artefacts if contrast agent is taken up between the images to be registered. Thus we propose a registration based on an application-specific similarity measure that explicitly uses features of the contrast uptake. We report initial results using this registration method.
Registration of images is a crucial part of many medical imaging tasks. The problem is to find a transformation which aligns two given images. The resulting displacement fields may, for example, be described as a linear combination of pre-selected basis functions (parametric approach), or, as in our case, they may be computed as the solution of an associated partial differential equation (non-parametric approach). Here, the underlying functional consists of a
smoothness term ensuring that the transformation is anatomically
meaningful and a distance term describing the similarity between the two images. To be successful, the registration scheme has to be tuned for the problem under consideration. One way of incorporating user
knowledge is to employ weighting masks in the distance
measure, thereby enhancing or hiding dedicated image parts. In
general, these masks are based on a given segmentation of both images. We present a method which generates a weighting mask for the second image, given the mask for the first image. The scheme is based on active contours and makes use of a gradient vector flow method.
As an example application, we consider the registration of abdominal
computed tomography (CT) images used for radiation therapy. The reference image is acquired well ahead of time and is used for setting up the radiation plan. The second image is taken just before the treatment, and its processing is time-critical. We show that the proposed automatic mask generation scheme yields results similar to those of the approach based on a pre-segmentation of both images. Hence, for time-critical applications such as intra-operative registration, we are able to significantly speed up the computation by avoiding a
pre-segmentation of the second image.
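A minimal sketch of the gradient vector flow (GVF) component: GVF diffuses the gradient of an edge map into homogeneous regions, giving an active contour a force field with a large capture range (parameter values here are illustrative, not those used in the paper):

```python
import numpy as np

def gradient_vector_flow(edge_map, mu=0.2, dt=0.1, n_iter=200):
    """Iteratively diffuse the edge-map gradient field (u, v).

    Gradient descent on mu*|grad u|^2 + |grad f|^2*|u - grad f|^2: near
    strong edges the field stays close to grad f, elsewhere it is
    smoothly extended, pulling a contour towards distant structures.
    """
    f = edge_map.astype(float)
    fy, fx = np.gradient(f)
    mag2 = fx ** 2 + fy ** 2          # edge strength anchors the field
    u, v = fx.copy(), fy.copy()

    def lap(a):                       # 5-point Laplacian (periodic borders)
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

    for _ in range(n_iter):
        u += dt * (mu * lap(u) - mag2 * (u - fx))
        v += dt * (mu * lap(v) - mag2 * (v - fy))
    return u, v
```

Away from edges the raw gradient is zero, while the GVF field remains nonzero there, which is what extends the capture range of the active contour.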
Registration of medical images, i.e. the integration of two or more images into a common geometrical system of reference so that corresponding image structures correctly align, is an active field of current research. Registration algorithms are in general composed of three main building blocks: a <i>geometrical transformation</i> is applied in order to transform the images into the geometrical system of reference, a <i>similarity measure</i> puts the comparison of the images into quantifiable terms, and an <i>optimization algorithm</i> searches for the transformation that leads to optimal similarity between the images. Whereas the literature typically investigates fixed configurations of registration algorithms, here we present a modular toolbox containing several similarity measures, transformation classes and optimization strategies. Derivative-free optimization is applicable to any similarity measure, but is often not fast enough for clinical practice. Hence we also consider the much faster derivative-based Gauss-Newton and Levenberg-Marquardt optimization algorithms, which can be used in conjunction with frequently needed similarity measures for which derivatives can easily be obtained. The implemented similarity measures, geometrical transformations and optimization methods can be freely combined in order to configure a registration algorithm matching the requirements of a particular clinical application. Test examples show that particular algorithm configurations from this toolbox allow, e.g., for improved lesion identification and localization in PET-CT or MR registration applications.
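The modular idea, freely combining transformation, similarity measure and optimizer, can be sketched as follows; the component names and the 1-D example are illustrative and do not reflect the toolbox's actual interface:

```python
import numpy as np

def register(fixed, moving, transform, similarity, optimizer, theta0):
    """Modular registration driver: any combination of transformation
    class, similarity measure and optimization strategy can be plugged in."""
    objective = lambda theta: similarity(fixed, transform(moving, theta))
    return optimizer(objective, theta0)

# Example components: 1-D integer translation, SSD similarity, and an
# exhaustive search as a (derivative-free) optimization strategy.
def shift_1d(signal, theta):
    """Apply a circular integer shift as the geometrical transformation."""
    return np.roll(signal, int(theta[0]))

def ssd(a, b):
    """Sum of squared differences similarity measure (lower is better)."""
    return float(((a - b) ** 2).sum())

def grid_search(objective, theta0, lo=-5, hi=5):
    """Derivative-free optimizer: evaluate all candidate shifts."""
    candidates = [np.array([s]) for s in range(lo, hi + 1)]
    return min(candidates, key=objective)
```

Swapping in another similarity measure or optimizer only requires passing a different callable, which mirrors the free combinability described above.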