Multiplex brightfield immunohistochemistry (IHC) offers the potential to analyze multiple biomarkers simultaneously in order to, for example, determine T-cell numbers and phenotypes in a patient’s immune response to cancer. This paper presents a fully automatic image-analysis framework that utilizes multiplex assays to identify and count stained cells of interest; it was validated by comparison with multiple “gold standard” 3,3'-Diaminobenzidine (DAB) singleplex assays. Both multiplex and singleplex assays were digitized using an RGB slide scanner. The proposed image-analysis algorithms consist of 1) a novel color-deconvolution method, 2) cell candidate detection, 3) feature extraction, and 4) cell classification based on supervised machine learning. Fully automated cell counts on the singleplex images were first rigorously verified against experts’ ground-truth counts: a total of 72,076 CD3-, 34,133 CD8-, and 2,615 FoxP3-positive T-cells were used in this singleplex algorithm validation. Concordance correlation coefficients (CCC) of the singleplex algorithm-to-observer agreements were 0.945, 0.965, and 0.997, respectively. Then, the singleplex slides were registered to the adjacent multiplex slides and the automated cell counts for each were compared. For this validation of the multiplex assay cell counts, the CCC values were 0.914, 0.943, and 0.877 for 12,828, 2,545, and 1,647 cells, respectively; we observed good slide-to-slide agreement between multiplex and singleplex. We conclude that the proposed fully-automated image analysis can be a useful and reliable tool to assess multiplex IHC assays.
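The concordance correlation coefficient used for the algorithm-to-observer comparisons above follows Lin's standard definition and can be reproduced in a few lines; the per-slide counts below are hypothetical, not data from the study:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired count vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical per-slide cell counts (algorithm vs. pathologist)
algo = [120, 340, 560, 80, 900]
obs  = [118, 352, 541, 85, 910]
print(round(concordance_ccc(algo, obs), 3))
```

Unlike the plain Pearson correlation, the CCC penalizes both location and scale shifts between the two raters, which is why it is preferred for agreement studies of this kind.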
Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can
support therapeutic targeting of carcinoma-associated fibroblasts (CAFs). This paper presents an automated digital-pathology
solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with
an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage,
the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection
and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using
fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained
from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm
reported the distance to the nearest neighbor in the populations of tumor cells and activated stromal fibroblasts as a whole-slide
measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian,
and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were
verified by a pathologist with good agreement (R<sup>2</sup>=0.98) to ground-truth count. For the area occupied by FAP-positive
cells, the inter-observer agreement between two sets of ground-truth measurements was R<sup>2</sup>=0.93 whereas the algorithm
reproduced the pathologists’ areas with R<sup>2</sup>=0.96. The proposed methodology enables automated image analysis to
measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show that an automated
algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a
cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.
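The nearest-neighbor readout described above can be reproduced with a brute-force pairwise distance computation (a spatial index would be used at whole-slide scale); the centroids below are hypothetical:

```python
import numpy as np

# Hypothetical cell centroids in slide coordinates (micrometers)
tumor_cells = np.array([[0.0, 0.0], [10.0, 5.0], [50.0, 50.0]])
caf_cells   = np.array([[3.0, 4.0], [60.0, 50.0]])

# For each tumor cell: Euclidean distance to the nearest FAP-positive fibroblast
pairwise = np.linalg.norm(tumor_cells[:, None, :] - caf_cells[None, :, :], axis=2)
nn_dist = pairwise.min(axis=1)
summary = nn_dist.mean()   # one possible whole-slide summary statistic
```

The distribution of `nn_dist` (rather than only its mean) is what a cutoff determination as discussed above would operate on.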
Automatic whole slide (WS) tissue image segmentation is an important problem in digital pathology. A conventional classification-based method (referred to as CCb method) to tackle this problem is to train a classifier on a pre-built training database (pre-built DB) obtained from a set of training WS images, and use it to classify all image pixels or image patches (test samples) in the test WS image into different tissue types. This method suffers from a major challenge in WS image analysis: the strong inter-slide tissue variability (ISTV), i.e., the variability of tissue appearance from slide to slide. Due to this ISTV, the test samples are usually very different from the training data, which is the source of misclassification. To address the ISTV, we propose a novel method, called slide-adapted classification (SAC), to extend the CCb method. We assume that in the test WS image, besides regions with high variation from the pre-built DB, there are regions with lower variation from this DB. Hence, the SAC method performs a two-stage classification: it first classifies all test samples in a WS image (as done in the CCb method) and computes their classification confidence scores. Next, the samples classified with high confidence scores (samples being reliably classified due to their low variation from the pre-built DB) are combined with the pre-built DB to generate an adaptive training DB to reclassify the low confidence samples. The method is motivated by the large size of the test WS image (a large number of high confidence samples are obtained), and the lower variability between the low and high confidence samples (both belonging to the same WS image) compared to the ISTV. Using the proposed SAC method to segment a large dataset of 24 WS images, we improve the accuracy over the CCb method.
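A toy sketch of the two-stage SAC idea, using a nearest-centroid classifier with a margin-based confidence score (the paper does not prescribe a specific classifier; everything here is illustrative):

```python
import numpy as np

def nearest_centroid_predict(X, centroids):
    """Predict by nearest centroid; confidence = margin between the best two."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.sort(d, axis=1)
    return d.argmin(axis=1), order[:, 1] - order[:, 0]

# Stage 1: classify all samples against the pre-built DB (two class centroids)
prebuilt = np.array([[0.0, 0.0], [10.0, 10.0]])
X = np.array([[0.5, 0.2], [9.5, 9.8], [5.2, 5.1]])
labels, conf = nearest_centroid_predict(X, prebuilt)

# Stage 2: high-confidence samples extend the DB into an adaptive DB,
# which is then used to reclassify the low-confidence samples
hi = conf > 3.0
adaptive = prebuilt.copy()
for k in range(len(prebuilt)):
    sel = hi & (labels == k)
    if sel.any():
        adaptive[k] = np.vstack([prebuilt[k:k + 1], X[sel]]).mean(axis=0)
labels[~hi] = nearest_centroid_predict(X[~hi], adaptive)[0]
```

The confidence threshold (3.0 here) is arbitrary; in the paper the point is that abundant high-confidence samples from the same slide carry less inter-slide variability than the pre-built DB alone.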
Blood-brain-barrier (BBB) breakdown is a hypothesized mechanism for hemorrhagic transformation in acute stroke. The
Patlak analysis of a Perfusion Computed Tomography (PCT) scan measures the BBB permeability, but the method
yields higher estimates when applied to the first pass of the contrast bolus compared to a delayed phase. We present a
numerical phantom that simulates vascular and parenchymal time-attenuation curves to determine the validity of
permeability measurements obtained with different acquisition protocols. A network of tubes represents the major
cerebral arteries ipsi- and contralateral to an ischemic event. These tubes branch off into smaller segments that represent
capillary beds. Blood flow in the phantom is freely defined and simulated as non-Newtonian tubular flow. Diffusion of
contrast in the vessels and permeation through vessel walls is part of the simulation. The phantom allows us to compare
the results of a permeability measurement to the simulated vessel wall status. A Patlak analysis reliably detects areas
with BBB breakdown for acquisitions of 240s duration, whereas results obtained from the first pass are biased in areas of
reduced blood flow. Compensating for differences in contrast arrival times reduces this bias and gives good estimates of
BBB permeability for PCT acquisitions of 90-150s duration.
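The Patlak analysis referred to above fits a line to the tissue curve normalized by the arterial input; a minimal sketch on synthetic noise-free curves (uniform sampling and an idealized constant arterial input are assumptions, not part of the phantom):

```python
import numpy as np

def patlak_slope(c_art, c_tis, dt):
    """Estimate the Patlak permeability slope K and intercept v0
    from arterial and tissue time-attenuation curves."""
    x = np.cumsum(c_art) * dt / c_art   # normalized integrated input
    y = c_tis / c_art                   # normalized tissue enhancement
    K, v0 = np.polyfit(x, y, 1)         # linear Patlak-plot fit
    return K, v0

t = np.arange(1, 241, 1.0)              # 240 s acquisition, 1 s sampling
c_art = np.full_like(t, 100.0)          # idealized constant arterial input
K_true, v0_true = 0.002, 0.05
c_tis = v0_true * c_art + K_true * np.cumsum(c_art) * 1.0
K, v0 = patlak_slope(c_art, c_tis, 1.0)
```

On real first-pass data the early, non-equilibrium part of the curve biases this slope, which is exactly the effect the phantom study quantifies.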
For assessment of cerebrovascular diseases, it is beneficial to obtain three-dimensional (3D) information on vessel
morphology and hemodynamics. Rotational angiography is routinely used to determine 3D geometry, and we
recently outlined a method to estimate the blood flow waveform and mean volumetric flow rate from images
acquired using rotational angiography.
Our method uses a model of contrast agent dispersion to estimate the flow parameters from the spatial
and temporal progression of the contrast agent concentration, represented by a flow map. Artifacts due to the
rotation of the C-arm are overcome by using a reliability map. An attenuation calibration can be used to support
our method, but it might not be available in clinical practice. In this paper, we analyze the influence of the
attenuation calibration on our method. Furthermore, we concentrate on the validation of the proposed algorithm,
with particular emphasis on the influence of parameters such as the length of the analyzed vessel segment, the
frame rate of the acquisition, and the duration of the injection on accuracy.
For the validation, rotational angiographic image sequences from a computer simulation and from a phantom
experiment were used. With a mean error of about 10% for the mean volumetric flow rate and about 13% for
the blood flow waveform from the phantom experiments, we conclude that the method has the potential to give
quantitative estimates of blood flow parameters during cerebrovascular interventions which are accurate enough
to be clinically useful.
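In its simplest form, estimating a mean volumetric flow rate from the spatial and temporal progression of contrast reduces to a slope fit on the flow map; the sketch below uses hypothetical arrival times and an assumed cross-sectional area, not the authors' dispersion model:

```python
import numpy as np

# Hypothetical flow map: arrival time (s) of the contrast front at
# sample positions (mm) along a vessel segment
positions = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
arrival_t = np.array([0.00, 0.21, 0.39, 0.61, 0.80])

velocity = np.polyfit(arrival_t, positions, 1)[0]   # mm/s, slope of the front
area_mm2 = 7.0                                      # assumed vessel cross-section
flow = velocity * area_mm2 / 1000.0                 # ml/s (1 ml = 1000 mm^3)
```

The paper's reported sensitivity to segment length and frame rate corresponds, in this reduced picture, to how many (positions, arrival_t) samples constrain the slope.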
A number of image analysis tasks in the heart region have to cope
with both respiration and heart contraction. While
the heart contraction status can be estimated based on the ECG,
respiration status estimation must be based on the images themselves, unless additional devices for respiration measurements
are used. Since diaphragm motion is closely linked to respiration,
we describe a method to detect and track the diaphragm in x-ray
projections. We model the diaphragm boundary as being approximately
circular. Diaphragm detection is then based on edge detection
followed by a Hough transform for circles. To prevent the
detection algorithm from being misled by high-frequency image content, we
first apply a morphological multi-scale top hat operator. A Canny
edge detector is then applied to the top hat filtered images. In the
edge images, the circle corresponding to the diaphragm boundary is
found by the Hough transform. To restrict the search in the 3D Hough
parameter space (parameters are circle center coordinates and
radius), prior anatomical knowledge about position and size of the
diaphragm for the given image acquisition geometry is taken into
account. In subsequent frames, diaphragm position and size are
predicted from previous detection and tracking results. For each
detection result, a confidence measure is computed by analyzing the
Hough parameter space with respect to the goodness of the peak
giving the circle parameters and by analyzing the coefficient of
variation of the pixels that form the circle described by the maximum
in Hough parameter space. If the confidence is not sufficiently high
-- indicating a poor fit between the Hough circle and true diaphragm
boundary -- the detection result is optionally refined by an active contour model.
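The circle voting underlying the Hough transform step can be sketched directly in NumPy (a minimal accumulator over candidate radii; a real implementation restricts the parameter ranges using the anatomical priors described above):

```python
import numpy as np

def hough_circles(edge_points, radii, shape, n_theta=90):
    """Vote for circle centers: each edge point votes on a circle of radius r
    around itself, so the true center accumulates votes from all edge points."""
    acc = np.zeros((len(radii),) + shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    for ri, r in enumerate(radii):
        cr = np.round(edge_points[:, 0:1] - r * np.sin(thetas)).astype(int)
        cc = np.round(edge_points[:, 1:2] - r * np.cos(thetas)).astype(int)
        ok = (cr >= 0) & (cr < shape[0]) & (cc >= 0) & (cc < shape[1])
        np.add.at(acc[ri], (cr[ok], cc[ok]), 1)
    return acc

# Synthetic edge points on a circle of radius 10 centered at (20, 25)
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.round(np.stack([20 + 10 * np.sin(t), 25 + 10 * np.cos(t)], axis=1))
acc = hough_circles(pts, [8, 10, 12], (50, 50))
ri, r0, c0 = np.unravel_index(acc.argmax(), acc.shape)
```

The sharpness of the accumulator peak relative to its surroundings is one natural basis for the confidence measure mentioned above.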
The fusion of information in medical imaging relies on accurate registration of image content that often comes from different sources. One of the strongest influences on the movement of organs is the patient’s respiration. It is known that respiration status can be measured by comparing the projection images of the chest. Since the diaphragm compresses the soft tissue above it, the level of similarity to a reference projection image acquired in an extremely inhaled or exhaled state gives an indication of the patient’s respiration status. If the images to be registered are generated under different conditions, the similarity with a common reference image is calculated on different scales and therefore cannot be compared directly. The proposed solution uses two reference images acquired in extremely inhaled and exhaled positions. By comparing the images with two references and by combining the similarity results, changes in respiration depth between acquisitions can be detected. With normal breathing, the similarity to one of the reference images increases while the similarity to the other one decreases over time, or vice versa. If the patient’s respiration exceeds the respiration span of the reference images, the similarity to both reference images decreases. By using not only the similarity values but also their derivatives over time, changes in respiration depth can therefore be detected, and the image fusion algorithm can act accordingly, e.g., by removing images that exceed the valid respiration span.
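The dual-reference logic can be sketched with any image similarity measure; here normalized cross-correlation on 1-D stand-in "images" is used, and the decision rule on jointly decreasing similarities is a simplified illustration:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two flattened projection images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def respiration_check(frame, ref_inhaled, ref_exhaled, prev=None):
    """Compare a frame with both references; if similarity to BOTH dropped
    relative to the previous frame, the valid respiration span is exceeded."""
    s_in, s_ex = ncc(frame, ref_inhaled), ncc(frame, ref_exhaled)
    if prev is not None and s_in < prev[0] and s_ex < prev[1]:
        return 'outside reference span', (s_in, s_ex)
    return 'within span', (s_in, s_ex)

r_in = np.arange(10.0)        # stand-ins for inhaled/exhaled reference images
r_ex = r_in[::-1].copy()
state, sims = respiration_check(r_in, r_in, r_ex)
```

In practice the derivatives would be smoothed over several frames rather than taken from a single previous sample.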
Percutaneous Transluminal Coronary Angioplasty is currently the preferred method for coronary artery disease treatment. Angiograms depict residual lumen, but lack information about plaque characteristics and exact geometry. During instrument positioning, intracoronary characterization at the current instrument location is desirable. By pulling back an intravascular ultrasound (IVUS) probe through a stenosis, cross-sections of the artery are acquired. These images can provide the desired characterization if they are properly registered to diagnostic angiograms or interventional fluoroscopies. The method we propose acquires fluoroscopy frames at the beginning, end, and optionally during a constant speed pullback. The IVUS probe is localized and registered to previously acquired angiograms using a compensation algorithm for heartbeat and respiration. Then, for each heart phase, the pullback path is interpolated and the corresponding IVUS frames are positioned. During the intervention the instrument is localized and registered onto the pullback path. Thus, each IVUS frame can be registered with a position on an angiogram or to an instrument location and during subsequent steps of the intervention the appropriate IVUS frames can be displayed as if an IVUS probe were present at the instrument position. The method was tested using a phantom featuring respiratory and contraction movement and an automatic pullback with constant speed. The IVUS acquisition was replaced by fibre optics and the phantom was imaged in angiographic and fluoroscopic modes. The study showed that for the phantom case it is indeed possible to register the IVUS cross-section to the interventional instrument positions to an accuracy of less than 2mm.
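Positioning IVUS frames along the pullback path for one heart phase amounts to arc-length interpolation under the constant-speed assumption; the 2-D centerline, speed, and timestamps below are hypothetical:

```python
import numpy as np

# Hypothetical pullback path (2-D projected centerline) for one heart phase
path = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
arc = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length (mm)

speed = 0.5                                       # pullback speed, mm/s
frame_t = np.arange(0.0, 40.0, 2.0)               # IVUS frame timestamps (s)
s = np.clip(frame_t * speed, 0.0, arc[-1])        # position along path per frame

# Interpolate x and y independently over arc length
frame_xy = np.stack([np.interp(s, arc, path[:, k]) for k in (0, 1)], axis=1)
```

During the intervention, localizing the instrument on the same path then selects the IVUS frame whose arc-length position is closest.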
Coronary angiograms are pre-interventionally recorded moving X-ray images of a patient's beating heart, in which the coronary arteries are made visible by a contrast medium. They serve to diagnose, e.g., stenoses, and as roadmaps during the intervention itself. Covering about three to four heart cycles, coronary angiograms consist of three underlying states: inflow, when the contrast medium flows into the vessels; filled state, when the whole vessel tree is visible; and outflow, when the contrast medium is washed out. Obviously, only that part of the sequence showing the full vessel tree is useful as a roadmap. We therefore describe methods for automatic identification of these frames. To this end, a vessel map with enhanced vessels and compressed background is first computed. Vessel enhancement is based on the observation that vessels are the locally darkest oriented structures with significant motion. The vessel maps can be regarded as containing two classes, viz. (bright) vessels and (dark) background. From a histogram analysis of each vessel map image, a time-dependent feature curve is computed in which the states inflow, filled state, and outflow can already be distinguished visually. We then describe two approaches to segment the feature curve into these states: the first method models the observations in each state by a polynomial, and seeks the segmentation which allows the best fit of three polynomials as measured by a Maximum-Likelihood criterion. The second method models the state sequence by a Hidden Markov Model, and estimates it using the Maximum a Posteriori (MAP) criterion. We will
present results for a number of angiograms recorded in clinical routine.
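The first segmentation approach — three per-state polynomial fits selected by a Maximum-Likelihood criterion — can be sketched as an exhaustive search over two change points (under Gaussian noise, maximizing the likelihood is equivalent to minimizing the summed squared residuals):

```python
import numpy as np

def best_three_segments(y, deg=1, min_len=3):
    """Exhaustively find two change points minimizing the total squared
    residual of one polynomial fit per segment."""
    n = len(y)
    t = np.arange(n)
    def sse(a, b):
        c = np.polyfit(t[a:b], y[a:b], deg)
        r = y[a:b] - np.polyval(c, t[a:b])
        return float(r @ r)
    best = (None, np.inf)
    for i in range(min_len, n - 2 * min_len):
        for j in range(i + min_len, n - min_len):
            cost = sse(0, i) + sse(i, j) + sse(j, n)
            if cost < best[1]:
                best = ((i, j), cost)
    return best[0]

# Synthetic feature curve: inflow (rising), filled (flat), outflow (falling)
y = np.concatenate([np.linspace(0, 1, 10), np.full(10, 1.0), np.linspace(1, 0, 10)])
cp = best_three_segments(y)
```

The polynomial degree and minimum segment length are illustrative choices; the HMM variant replaces the exhaustive search with Viterbi-style decoding.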
Minimally-invasive interventions are an important domain of medical real-time imaging modalities. Image processing algorithms that enhance interventional images must run within hard real-time and latency constraints because of the hand-eye coordination required of the physicians who perform the intervention. To support research activities,
we present a flexible software architecture that allows image enhancement algorithms to be transferred from research to clinical validation. The software architecture pays particular regard to multimodality interventional scenarios where an intervention runs in close succession to the acquisition of diagnostic data. Including the additional information of such diagnostic acquisitions enables content-based image enhancement. The proposed software
architecture administers threads for a graphical user interface, data acquisition, offline preparation of diagnostic data, and the context-based real-time enhancement itself. Using this architecture, it is possible to run arbitrarily complex content-based image analysis in real-time with only 9% computational overhead during the latency-introducing algorithm run time. The proposed architecture is exemplified with an application for navigation support in cardiac CathLab interventions where diagnostic exposure acquisitions and interventional fluoroscopy can alternate in close succession.
In coronary x-ray angiographies, the vessels supplying the heart are imaged in a number of states uniquely determined
by a combination of the respiratory intake and the heart contraction of the patient. The angiographic frames of one
sequence do not represent all possible combinations of respiration and heart contraction. Several applications require a
continuous and dense sampling of the state-space given by the two axes 'respiration' and 'contraction', e.g. background
removal or motion-compensated catheter navigation. We present a novel method of interpolating over this two-dimensional
phase-space based on pairs of angiographic frames with similar contraction, but different respiration status.
First a hypothetical model of the respiration motion is formulated, e.g. rigid transformation or rigid translation. Then the
parameters that transform a single frame into another one with similar contraction status are calculated for a number of
frames. An iterative approach is used to reconstruct the generalized transformation function from the transformation
parameters of frame pairs. Using this function, angiographic frames of arbitrary respiration status can be generated. It is
shown that the synthesized angiographies closely match real angiographies acquired at the same combination of
contraction and respiration status.
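Reconstructing a generalized transformation function from frame-pair parameters can be sketched, for the simplest rigid-translation model, as a fit of shift against respiration state; states and shifts below are hypothetical:

```python
import numpy as np

# Hypothetical respiration states and measured translations (pixels) between
# frame pairs with matching contraction status but differing respiration
resp_state = np.array([0.0, 0.3, 0.6, 1.0])     # 0 = exhaled, 1 = inhaled
shift_px   = np.array([0.0, 2.9, 6.1, 10.0])    # craniocaudal shift vs. state 0

# Fit a generalized transformation function (here: linear in respiration state)
coef = np.polyfit(resp_state, shift_px, 1)

def synthesize_shift(resp):
    """Translation to apply to the reference frame for a given respiration state."""
    return float(np.polyval(coef, resp))
```

Evaluating `synthesize_shift` at an unseen respiration state and translating a reference frame accordingly is the synthesis step the abstract describes; richer transformation models replace the scalar shift with full rigid parameters.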
An overlay of diagnostic angiograms and interventional fluoroscopy during minimally invasive CathLab interventions can support navigation but suffers from artifacts due to mismatch of vessels and interventional devices. Here, weak image features and strict real-time constraints do not allow for standard multi-modality registration
techniques. In the presented method, diagnostic angiograms are filtered to extract the imaged vessel structure. A distance-transform of the extracted vessels allows for fast matching with interventionally imaged devices which are extracted with fast local filters only. Competing vessel and object filters are tested on 10 diagnostic angiograms and 25 fluoroscopic frames showing a guidewire. Their performance is tested in comparison to manual segmentations. A newly presented directional stamping-filter based on anisotropic diffusion of local image patches offers the best results for vessel extraction and also improves the guidewire detection. Using these filters, the device-to-vessel match succeeds in 92% of the tested frames. This rate decreases to 75% for an initial mismatch
of 16 pixels.
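The distance-transform matching step can be sketched on a toy mask; the brute-force distance map below stands in for the fast transform that would be used in practice:

```python
import numpy as np

def distance_map(mask):
    """Brute-force Euclidean distance transform of a small binary vessel mask
    (a stand-in for a fast chamfer/EDT in a real implementation)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    gy, gx = np.indices(mask.shape)
    grid = np.stack([gy, gx], axis=-1).astype(float)
    return np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1).min(-1)

def match_cost(dist_map, device_pts):
    """Mean vessel-distance of extracted device pixels; low means a good match."""
    return float(dist_map[device_pts[:, 0], device_pts[:, 1]].mean())

vessels = np.zeros((8, 8), dtype=bool)
vessels[4, 1:7] = True                       # hypothetical extracted vessel
dm = distance_map(vessels)
guidewire_on  = np.array([[4, 2], [4, 3], [4, 5]])   # lies on the vessel
guidewire_off = np.array([[1, 2], [1, 3], [1, 5]])   # offset by 3 pixels
```

Because the distance map is computed once per angiogram, evaluating candidate device positions at runtime is only a lookup and a mean, which is what makes the matching real-time capable.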
The widely used DICOM 3.0 imaging protocol specifies optional tags to store specific information on modality and body region within the header: Body Part Examined and Anatomic Structure. We investigate whether this information can be used for the automated categorization of medical images, as this is an important first step for medical image retrieval. Our survey examines the headers generated by four digital image modalities (2 CTs, 2 MRIs) in clinical routine at the Aachen University Hospital within a period of four months. The manufacturing dates of the modalities range from 1995 to 1999, with software revisions from 1999 and 2000. Only one modality sets the DICOM tag Body Part Examined. 90 out of 580 images (15.5%) contained false tag entries causing a wrong categorization. This result was verified during a second evaluation period of one month one year later (562 images, 15.3% error rate). The main reason is the dependency of the tag on the examination protocol of the modality, which controls all relevant parameters of the imaging process. In clinical routine, the personnel often apply an examination protocol outside its normal context to improve the imaging quality. This is, however, done without manually adjusting the categorization-specific tag values. The values specified by DICOM for the tag Body Part Examined are insufficient to encode the anatomic region precisely. Thus, an automated categorization relying on DICOM tags alone is impossible.
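A tag-based categorization check of the kind evaluated above can be sketched without committing to a specific DICOM toolkit; headers are represented as plain dicts here (in practice a library such as pydicom would supply them), and the set of accepted values is only an illustrative subset of the defined terms:

```python
# Illustrative subset of defined terms for Body Part Examined
# (the real DICOM list is considerably longer)
VALID_BODY_PARTS = {'HEAD', 'NECK', 'CHEST', 'ABDOMEN', 'PELVIS'}

def categorize(header):
    """Return a coarse category from the header tag, or None if unusable."""
    part = header.get('BodyPartExamined', '').strip().upper()
    if part in VALID_BODY_PARTS:
        return part
    return None   # tag absent, or a value outside the accepted terms

scans = [
    {'Modality': 'CT', 'BodyPartExamined': 'HEAD'},
    {'Modality': 'MR'},                                     # tag not set
    {'Modality': 'CT', 'BodyPartExamined': 'SPECIALPROTOCOL'},  # protocol misuse
]
categories = [categorize(h) for h in scans]
```

The two `None` outcomes correspond exactly to the failure modes the survey quantifies: missing tags and protocol-dependent misuse.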
Segmentation is fundamental for automated analysis of medical images. However, a unified approach for evaluation does not yet exist. Gold standards are often inapplicable because they require invasive preparations or tissue extraction. Empirical evaluations only reflect the conformity of a segmentation with the subjective visual expectation of users, which is subject to inter- as well as intra-observer variability. This paper presents a consistent approach to create synthetic but realistic images with a-priori known object boundaries (silver standards), which are suitable for optimization and evaluation of various segmentation algorithms. Rectangular example patches are collected for each tissue (interior, exterior, and a contour zone). Fourier amplitude and phase images are stored together with the mean gray value. For silver standard generation, a reference contour is either manually given or automatically extracted from real data applying the algorithm under evaluation. For each class of tissue, the amplitude of one patch is randomly combined with the perturbed phase of another. A randomly chosen mean from the same class is superimposed on the inverse Fourier transform. Numerous silver standards are obtained from only a few texture patches of each tissue. Based on microscopy, CT, and functional MRI data, the applicability of silver standards is proven in two, three, and four dimensions. They are analyzed with respect to systematic deviations. Minor deviations occur for two-dimensional images while those for three or four dimensions are larger but still acceptable.
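The core silver-standard step — combining the Fourier amplitude of one patch with the perturbed phase of another and superimposing a class mean — can be sketched directly; patch contents and the jitter magnitude are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_patch(amp_patch, phase_patch, mean_gray, jitter=0.1):
    """Combine the Fourier amplitude of one texture patch with the (perturbed)
    phase of another, then superimpose a class-specific mean gray value."""
    amp = np.abs(np.fft.fft2(amp_patch))
    phase = np.angle(np.fft.fft2(phase_patch))
    phase = phase + rng.uniform(-jitter, jitter, phase.shape)
    synth = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    return synth - synth.mean() + mean_gray

a = rng.random((16, 16))   # stand-in texture patches of one tissue class
b = rng.random((16, 16))
patch = synthesize_patch(a, b, mean_gray=128.0)
```

Because amplitude carries the texture statistics and phase carries the spatial arrangement, mixing them yields patches that are statistically alike but never identical — which is what lets a few examples generate many silver standards.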
We describe a 'learning-from-examples' method to automatically adjust parameters for a balloon model. Our goal is to segment arbitrarily shaped objects in medical images with as little human interaction as possible. For our model, we identified six significant parameters that are adjusted with respect to certain applications. These parameters are computed from one manual segmentation drawn by a physician. (1) The maximal edge length is derived from a polygon approximation of the manual segmentation. (2) The size of the image subset that exerts external influences on edges is set according to the scale of gradients normal to the contour. (3) The offset of the assignment from graylevels to image potentials is adjusted such that the propulsive pressure overcomes image potentials in homogeneous parts of the image. (4) The gain of this assignment is tuned to stop the contour at the border of objects of interest. (5) The strength of the deformation force is computed to balance the contour at edges with ambiguous image information. (6) These parameters are computed for both positive and negative pressure. The variation that gives the best segmentation result is chosen. The analytically derived adjustments are optimized with a genetic algorithm that evolutionarily reduces the number of misdetected pixels. The method is used on a series of histochemically stained cells. Similar segmentation quality is obtained with both manual and automatic parameter settings. We further use the method on laryngoscopic color image sequences, where, even for experts, the manual adjustment of parameters is not feasible.
Medical imaging modalities often provide image material in more than two dimensions. However, the analysis of voxel data sets or image sequences is usually performed using only two-dimensional methods. Furthermore, four-dimensional medical image material (sequences of stacks of images) is already available for clinical diagnoses. In contrast, four-dimensional image processing methods are almost unknown. We present an active contour model based on balloon models that allows a coherent segmentation of image material of any desired dimension. Our model is based on linear finite elements and combines a shape representation with an iterative segmentation algorithm. Additionally, we present a novel definition for the computation of external influences to deform the model. The appearance of relevant edges in the image is defined by image potentials and a filter kernel function. The filter kernel is applied with respect to the location and orientation of the finite elements. The model moves under the influence of internal and external forces and avoids collisions of finite elements in this movement. Exemplarily, we present segmentation results in 2D (radiographs), 3D (video sequence of the mouth), and 4D (synthetic image material) and compare our results with propagation methods. The new formalism for external influences allows the model to act on graylevel as well as color images without pre-filtering.
An essential part of the IRMA-project (Image Retrieval in Medical Applications) is the categorization of digitized images into predefined classes using a combination of different independent features. To obtain an automated and content-based categorization, the following features are extracted from the image data: Fourier coefficients of normalized projections are computed to supply a scale- and translation-invariant description. Furthermore, histogram information and Co-occurrence matrices are calculated to supply information about the gray value distribution and textural information. But the key part of the feature extraction is the shape information of the objects represented by an Active Shape Model. The Active Shape Model supports various form variations given by a representative training set; we use one particular Active Shape Model for each image class. These different Active Shape Models are matched on preprocessed image data with a simulated annealing optimization. The different extracted features were chosen with regard to the different characteristics of the image content. They give a comprehensive description of image content using only few different features. Using this combination of different features for categorization results in a robust classification of image data, which is a basic step towards medical archives that allow retrieval results for queries of diagnostic relevance.
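The co-occurrence part of the feature set can be sketched in a few lines (one displacement only, with contrast and energy as example Haralick-style features; the quantization scheme is an assumption):

```python
import numpy as np

def cooccurrence(img, levels=4):
    """Normalized gray-level co-occurrence matrix for the (0, 1) displacement."""
    q = np.minimum((img.astype(float) / (img.max() + 1e-9) * levels).astype(int),
                   levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return glcm / glcm.sum()

def texture_features(glcm):
    i, j = np.indices(glcm.shape)
    return {'contrast': float((glcm * (i - j) ** 2).sum()),
            'energy':   float((glcm ** 2).sum())}

img = np.tile(np.array([0, 255]), (8, 4))   # alternating columns: high contrast
feats = texture_features(cooccurrence(img))
```

In a full categorization system these texture features would be concatenated with the histogram, projection-Fourier, and Active Shape Model descriptors described above.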
Image retrieval in medical applications (IRMA) requires the cooperation of experts in the fields of medicine, image analysis, feature analysis, and systems engineering. A distributed development platform was implemented to support the progress of the IRMA system. As the concept for this system strictly separates the steps of medical image retrieval, its components can be developed separately by work groups in different departments. The development platform provides location and access transparency for its resources. These resources are images and extracted features as well as methods, all of which are distributed automatically between the work groups. Replications are created to avoid repeated network transfers. All resources are administered in one central database. Computationally expensive feature extraction tasks are also distributed automatically and processed concurrently on workstations of the different work groups. The development platform intensifies and simplifies the cooperation of the interdisciplinary IRMA development team by providing fast and automated delivery of components from software developers to physicians for evaluation.
In the past few years, immense improvements have been achieved in the field of content-based image retrieval. Nevertheless, existing systems still fail when applied to medical image databases. Simple feature-extraction algorithms that operate on the entire image to characterize color, texture, or shape cannot be related to the descriptive semantics of medical knowledge that is extracted from images by human experts.
In clinical cytology, quantitative parameters have to be extracted from a large number of biological samples to obtain diagnostically relevant and reproducible information. Computer-assisted microscopy can provide methods that increase the quality and comparability of clinical studies by reducing the subjective influence of human operators on their results. In order to guarantee the correctness of the extracted parameters, automatic and reliable segmentation of the samples is required. For the detection of cytological objects, a novel deformable membrane model is presented which is strictly based on macroscopic mechanics and statics. This is appropriate for modeling physiological membranes, because their shape is determined exclusively by mechanical forces. The self-driven membrane converges iteratively towards a stable state, where the contrary forces are in balance. However, active contours may not yield sufficient detection quality for the acquisition of quantitative parameters. Therefore, after convergence a stochastic optimization process corrects the contour according to local graylevel information. This yields a contour that is well-adapted to the local graylevel structure. Additionally, for subsequent cytometric quantifications, a local measure of confidence is provided for the contour. This can be used to enhance the robustness of the extracted parameters by incorporating the confidence factors in the quantification process. The method is applied to cytological and histological samples at different magnifications.