Diffusion-weighted imaging (DWI) derived apparent diffusion coefficient (ADC) values are known to correlate
inversely with tumour cellularity in brain tumours. The average ADC value increases after successful chemotherapy,
radiotherapy, or a combination of both, and can therefore be used as a surrogate marker for treatment response.
Moreover, areas of high and low malignancy can be distinguished. The main purpose of our project was to develop
a software platform that enables the automated delineation and ADC quantification of different tumour sections
in a fast, objective, user-independent manner. Moreover, the software platform allows for an analysis of the
probability density of the ADC in areas of high and low malignancy, using ROIs drawn on conventional imaging to
create a ground truth. We tested an Expectation Maximization algorithm with a Gaussian mixture model to
objectively determine tumour heterogeneity in gliomas, because the ADC values in the different areas approximately
follow Gaussian distributions. Furthermore, the algorithm was initialized by seed points within the gross tumour
volume, and the data indicated that an automatic initialization should be possible. Thus, automated clustering of
areas of high and low malignancy and subsequent ADC determination within these areas is feasible, yielding reproducible ADC
measurements within heterogeneous gliomas.
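The two-class clustering step can be sketched with a minimal, numpy-only EM fit of a Gaussian mixture to simulated ADC values; the cluster parameters, seed means, and iteration count below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated ADC values (x1e-3 mm^2/s): a high-ADC (low malignancy) and a
# low-ADC (high malignancy) population; means/sds are illustrative only
adc = np.concatenate([rng.normal(1.6, 0.15, 300), rng.normal(0.9, 0.10, 200)])

# Seed-point initialization: rough means from two hypothetical ROIs
mu = np.array([1.5, 1.0])
sigma = np.array([0.2, 0.2])
pi = np.array([0.5, 0.5])

for _ in range(50):                                    # EM iterations
    # E-step: responsibility of each Gaussian for each ADC sample
    pdf = (pi / (sigma * np.sqrt(2 * np.pi)) *
           np.exp(-0.5 * ((adc[:, None] - mu) / sigma) ** 2))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture parameters from the responsibilities
    n = r.sum(axis=0)
    mu = (r * adc[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (adc[:, None] - mu) ** 2).sum(axis=0) / n)
    pi = n / len(adc)

print(mu)   # per-class mean ADC after convergence
```

The seed-based initialization mirrors the abstract's use of seed points in the gross tumour volume; replacing it with an automatic initialization only changes the first `mu` estimate.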
Quantification of diffusion tensor imaging (DTI) parameters has come to play an important role in the neuroimaging, neurosurgical, and neurological communities as a method to identify major white matter tracts afflicted by pathology, or tracts at risk for a given surgical approach. We introduce a novel framework for a reliable and robust quantification of DTI parameters, which overcomes problems of existing techniques introduced by necessary user inputs. In a first step, a hybrid clustering method is proposed that allows for extracting specific fiber bundles in a robust way. Compared to previous methods, our approach considers only local proximities of fibers and is insensitive to their global geometry. This is very useful in cases where a fiber tracking of the whole brain is not available. Our technique determines the overall number of clusters iteratively, using an eigenvalue thresholding technique to detect disjoint clusters of independent fiber bundles. Afterwards, possible finer substructures are determined within each bundle based on an eigenvalue regression. In a second step, a quantification of DTI parameters of the extracted bundle is performed. We propose a method that automatically determines a 3D image in which the voxel values encode the minimum distance to a reconstructed fiber. This image allows for calculating a 3D mask where each voxel within the mask lies inside an isosurface around the fibers. The mask is used for an automatic classification of tissue classes (fiber, background, and partial volume), so that the quantification can be performed on one or more of these classes. This can be done per slice, or a single DTI parameter can be determined for the whole volume covered by the isosurface. Our experimental tests confirm that major white matter fiber tracts can be robustly determined and quantified automatically.
A great advantage of our framework is its easy integration into existing quantification applications, so that uncertainties can be reduced and higher intra- as well as inter-rater reliabilities can be achieved.
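The distance-image and mask idea can be illustrated with a brute-force sketch on a tiny grid; the fiber polyline, isosurface radius, and class thresholds are illustrative assumptions, not the published parameters.

```python
import numpy as np

# Hypothetical reconstructed fiber: a polyline of 3D points inside an 8x8x8 volume
fiber = np.array([[1.0, 4.0, z] for z in np.linspace(1, 6, 20)])

# 3D image where each voxel value encodes the minimum distance to the fiber
grid = np.stack(np.meshgrid(*[np.arange(8)] * 3, indexing="ij"), axis=-1).astype(float)
dist = np.linalg.norm(grid[..., None, :] - fiber, axis=-1).min(axis=-1)

# Isosurface mask and a simple three-class labelling (thresholds are assumptions)
fiber_class = dist <= 1.0                     # voxels treated as pure fiber tissue
partial = (dist > 1.0) & (dist <= 2.0)        # partial-volume shell around the fiber
background = dist > 2.0                       # everything outside the isosurface

print(fiber_class.sum(), partial.sum(), background.sum())
```

A quantification per class then amounts to averaging a DTI parameter map over the voxels selected by the corresponding boolean mask, either slice-wise or over the whole masked volume.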
Computer assistance in image-based diagnosis and therapy is a continuously growing field that has gained
importance in several medical disciplines. Today, various free and commercial tools are available. However, only
few are routinely applied in clinical practice. Especially tools that provide full support of the whole design
process, from development and evaluation to the actual deployment in a clinical environment, are missing.
In this work, we introduce a categorization of the design process into different types and fields of application.
To this end, we propose a novel framework that allows the development of software assistants that can be
integrated into the design process of new algorithms and systems. We focus on the specific features of software
prototypes that are valuable for engineers and clinicians, rather than on product development. An important
aspect in this work is the categorization of the software design process into different components. Furthermore, we
examine the interaction between these categories based on a new knowledge flow model. Finally, an encapsulation
of these tasks within an application framework is proposed. We discuss general requirements and present a layered
architecture. Several components for data- and workflow-management provide a generic functionality that can
be customized on the developer and the user level. A flexible handling of these components is offered through the
use of a visual programming and rapid prototyping platform. Currently, the framework is used in 15 software
prototypes and as the basis of commercial products. More than 90 clinical partners all over the world work with these tools.
Accurate and robust assessment of quantitative parameters is a key issue in many fields of medical image
analysis, and can have a direct impact on diagnosis and treatment monitoring. Especially for the analysis of
small structures such as focal lesions in patients with multiple sclerosis (MS), the finite spatial resolution of imaging devices is often
a limiting factor that results in a mixture of different tissue types.
We propose a new method that allows an accurate quantification of medical image data, focusing on a
dedicated model for partial volume (PV) artifacts. Today, a widely accepted model assumption is that of a
uniformly distributed linear mixture of pure tissues. However, several publications have clearly shown that this
is not an appropriate choice in many cases. We propose a generalization of current PV models based on the Beta
distribution, yielding a more accurate quantification. Furthermore, we present a new classification scheme. Prior
knowledge obtained from a set of training data allows a robust initial estimate of the proper model parameters,
even in cases of objects with predominant PV artifacts. A maximum likelihood based clustering algorithm
is employed, resulting in a robust volume estimate. Experiments are carried out on more than 100 stylized
software phantoms as well as on realistic phantom data sets. A comparison with current mixture models shows
the capabilities of our approach.
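The difference between the uniform and the Beta mixing assumption can be shown with a small numpy simulation of partial-volume voxel intensities; the tissue means, noise level, and Beta parameters are illustrative assumptions, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_a, mu_b, noise = 100.0, 200.0, 5.0   # hypothetical pure-tissue means and noise sd

def pv_intensities(alpha):
    """Intensity of PV voxels: linear mixture with a per-voxel mixing fraction."""
    return alpha * mu_a + (1 - alpha) * mu_b + rng.normal(0, noise, alpha.size)

# Classical PV model: mixing fraction uniformly distributed in [0, 1]
uniform_pv = pv_intensities(rng.uniform(0, 1, 10000))

# Generalized model: the fraction follows a Beta distribution, here skewed
# toward tissue B (mean fraction 2/7), as in objects dominated by PV artifacts
beta_pv = pv_intensities(rng.beta(2.0, 5.0, 10000))

print(uniform_pv.mean(), beta_pv.mean())
```

The uniform model forces the PV intensity histogram to be flat between the two pure-tissue means, while the Beta model can reproduce the skewed PV histograms reported for small structures, which is what makes the volume estimate more accurate.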
Physical and software phantom data sets have become an integral tool during the design, implementation, and utilization of new algorithms. Unfortunately, a common research resource has not been established until now for many applications. We propose a general software assistant for the development of realistic software phantoms. Our aim is an easy to use tool with an intuitive user interface. Furthermore, we provide a publicly available software for researchers including a common basis of reference data, which facilitates a standardized and objective validation of performance and limitations of own developments as well as the comparison of different methods.
Brain tumor segmentation and quantification from MR images is a challenging task. The boundary of a tumor
and its volume are important parameters that can have direct impact on surgical treatment, radiation therapy,
or on quantitative measurements of tumor regression rates. Although a wide range of different methods has
already been proposed, a commonly accepted approach is not yet established. Today, the gold standard at many
institutions still consists of a manual tumor outlining, which is potentially subjective as well as time consuming.
We propose a new method that allows for fast multispectral segmentation of brain tumors. An efficient initialization
of the segmentation is obtained using a novel probabilistic intensity model, followed by an iterative
refinement of the initial segmentation. A progressive region growing that combines probability and distance
information provides a new, flexible tumor segmentation. In order to derive a robust model for brain tumors
that can be easily applied to a new dataset, we retain information not on the anatomical variability, but on the global
cross-subject intensity variability. Therefore, a set of multispectral histograms from different patient datasets
is registered onto a reference histogram using global affine and non-rigid registration methods. The probability
model is then generated from manual expert segmentations that are transferred to the histogram feature domain.
A forward and backward transformation of a manual segmentation between histogram and image domain allows
for a statistical analysis of the accuracy and robustness of the selected features. Experiments are carried out on
patient datasets with different tumor shapes, sizes, locations, and internal texture.
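A region growing that combines probability and distance information can be sketched in 2D; the probability map, seed, and acceptance rule below are illustrative assumptions, not the published model.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)
# Hypothetical per-voxel tumor probability map: a bright disc in noisy background,
# standing in for probabilities derived from the intensity model
yy, xx = np.mgrid[0:32, 0:32]
prob = np.clip(0.9 * ((yy - 16) ** 2 + (xx - 16) ** 2 < 64)
               + rng.uniform(0, 0.2, (32, 32)), 0, 1)

seed = (16, 16)
seg = np.zeros_like(prob, dtype=bool)
seg[seed] = True
queue = deque([seed])

while queue:                                  # breadth-first region growing
    y, x = queue.popleft()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if not (0 <= ny < 32 and 0 <= nx < 32) or seg[ny, nx]:
            continue
        d = np.hypot(ny - seed[0], nx - seed[1])
        # Accept a voxel if its probability, discounted with the distance to the
        # seed, exceeds a threshold (discount and threshold are assumptions)
        if prob[ny, nx] * np.exp(-d / 20.0) > 0.4:
            seg[ny, nx] = True
            queue.append((ny, nx))

print(seg.sum())   # number of segmented voxels
```

The distance discount keeps the growing from leaking through thin bridges of spuriously high probability far from the seed, which is the role the combined probability/distance criterion plays in the abstract.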
We introduce novel data structures and algorithms for clustering white matter fiber tracts to improve accuracy
and robustness of existing techniques. Our novel fiber grid combined with a new randomized soft-division
algorithm allows for defining the fiber similarity more precisely and efficiently than a feature space. A fine-tuning
of several parameters to a particular fiber set - as is often required when using a feature space - becomes obsolete.
The idea is to utilize a 3D grid where each fiber point is assigned to cells with a certain weight. From this grid, an
affinity matrix representing the fiber similarity can be calculated very efficiently, in O(n) time in the average case,
where n denotes the number of fibers. This is superior to feature space methods, which need O(n²) time. Our novel
eigenvalue regression is capable of determining a reasonable number of clusters as it accounts for inter-cluster
connectivity. It performs a linear regression of the eigenvalues of the affinity matrix to find the point of maximum
curvature in a list of descending order. This allows for identifying inner clusters within coarse structures, which
automatically and drastically reduces the a-priori knowledge required for achieving plausible clustering results.
Our extended multiple eigenvector clustering exhibits a drastically improved robustness compared to the well-known
elongated clustering, which also includes an automatic detection of the number of clusters. We present
several examples of artificial and real fiber sets clustered by our approach to support the clinical suitability and
robustness of the proposed techniques.
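The eigenvalue regression idea, finding the point of maximum curvature in the descending eigenvalue list of the affinity matrix, can be sketched as follows; the block-structured affinity matrix is a synthetic assumption standing in for the grid-derived fiber similarities.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic affinity matrix for three fiber clusters: block structure plus noise
sizes = [12, 9, 7]
A = np.zeros((sum(sizes), sum(sizes)))
offset = 0
for s in sizes:
    A[offset:offset + s, offset:offset + s] = 0.9   # high within-cluster affinity
    offset += s
A += rng.uniform(0, 0.05, A.shape)
A = (A + A.T) / 2                                   # affinities are symmetric

# Eigenvalues in descending order
ev = np.sort(np.linalg.eigvalsh(A))[::-1]

# Estimate the number of clusters as the point of maximum curvature, here taken
# as the peak of the discrete second difference of the descending eigenvalue list
k = int(np.argmax(np.diff(ev, 2))) + 1
print(k)
```

With well-separated blocks the first few eigenvalues scale with the cluster sizes and the rest stay near zero, so the sharpest bend in the sorted list sits exactly at the true number of clusters; no a-priori cluster count is needed.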
Visualization and image processing of medical datasets have become essential tasks for clinical diagnosis support as well as for treatment planning. In order to enable a physician to use and evaluate algorithms within a clinical setting, easily applicable software prototypes with a dedicated user interface are essential. However, substantial programming knowledge is still required today when using powerful open source libraries such as the Visualization Toolkit (VTK) or the Insight Toolkit (ITK). Moreover, these toolkits provide only limited graphical user interface functionality. In this paper, we present the visual programming and rapid prototyping platform MeVisLab, which provides flexible and simple handling of visualization and image processing algorithms of VTK/ITK, Open Inventor, and the MeVis Image Library by modular visual programming. No programming knowledge is required to set up image processing and visualization pipelines. Complete applications including user interfaces can be easily built within a general framework. In addition to the VTK/ITK features, MeVisLab provides a full integration of the Open Inventor library and offers a state-of-the-art integrated volume renderer. The integration of VTK/ITK algorithms is performed automatically: an XML structure is created from the toolkits' source code, followed by an automatic module generation from this XML description. Thus, MeVisLab offers a one-stop solution integrating VTK/ITK as modules and is suited for rapid prototyping as well as for teaching medical visualization and image analysis. The VTK/ITK integration is available as a package of the free version of MeVisLab.
For risk analysis prior to interventional treatment of brain tumors, it is important to identify the functional brain areas affected by the tumor and to estimate their connectivity. Fiber Tracking (FT) on Diffusion Tensor (DT) data has the potential to facilitate this task. Our work is organized in two parts. First, we derive a relationship between diffusion anisotropy and orientation uncertainty of the DT by considering image noise. In order to assess a given FT algorithm with respect to the reconstruction of locally disturbed fiber bundles, this relationship is used for the simulation of white matter lesions in DT data. Then, a deflection-based FT algorithm is assessed with our software phantom. The FT algorithm is modified and its parameters are adjusted in order to obtain a fiber bundle reconstruction that is robust to local fiber disturbance. Thus, it is demonstrated how to evaluate and improve FT algorithms with respect to the reconstruction of locally disturbed fiber bundles on the basis of phantom data with known ground truth. This is expected to improve functional and structural risk analysis for the interventional treatment of brain tumors.
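The qualitative relationship between anisotropy and orientation uncertainty under image noise can be illustrated with a small Monte-Carlo sketch; the tensor eigenvalues and noise level are illustrative assumptions, not the derived analytic relationship.

```python
import numpy as np

rng = np.random.default_rng(4)

def angular_error(l1, noise_sd, trials=2000):
    """Mean deviation (degrees) of the principal eigenvector under additive noise."""
    D = np.diag([l1, 1.0, 1.0])            # prolate tensor; l1 > 1 sets the anisotropy
    errs = []
    for _ in range(trials):
        N = rng.normal(0, noise_sd, (3, 3))
        Dn = D + (N + N.T) / 2             # symmetric noise on the tensor components
        v = np.linalg.eigh(Dn)[1][:, -1]   # principal eigenvector of the noisy tensor
        # Angle between the noisy and the true principal direction (the x-axis)
        errs.append(np.degrees(np.arccos(min(1.0, abs(v[0])))))
    return float(np.mean(errs))

# At fixed noise, orientation uncertainty shrinks as anisotropy grows
low_fa, high_fa = angular_error(1.5, 0.1), angular_error(4.0, 0.1)
print(low_fa, high_fa)
```

The same mechanism, run in reverse, is what allows simulated lesions to be injected into DT data: lowering the anisotropy locally raises the orientation uncertainty a tracking algorithm has to cope with.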