This PDF file contains the front matter associated with SPIE Proceedings Volume 9041 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
This paper presents a deep learning approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning (DL) approaches are learn-from-data methods that computationally model the learning process. Loosely analogous to how the human brain works, they interpret data at successive levels, or layers, of increasingly representative and useful features, yielding a hierarchical learned representation. These methods have been shown to outpace traditional approaches on some of the most challenging problems in areas such as speech recognition and object detection. Invasive breast cancer detection is a time-consuming and challenging task, primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of tumor aggressiveness (grading) and prediction of patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples is available for training, which also helps ensure the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural network (CNN) architectures for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained on a large set of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated on a WSI dataset from 162 patients diagnosed with IDC; 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via delineation of the cancerous regions by an expert pathologist on the digitized slides. The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI.
Our method yielded the best quantitative results for automatic detection of IDC regions in WSI in terms of F-measure and balanced accuracy (71.80% and 84.23%, respectively), in comparison with an approach using handcrafted image features (color, texture, and edges; nuclear texture and architecture) and a Random Forest classifier for invasive tumor classification. The best-performing handcrafted features were the fuzzy color histogram (67.53%, 78.74%) and the RGB histogram (66.64%, 77.24%). Our results also suggest that at least some of the tissue classification mistakes (false positives and false negatives) were due less to any fundamental problem with the approach than to the inherent difficulty of obtaining a highly granular annotation of the diseased area of interest from an expert pathologist.
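For reference, the two evaluation metrics reported above are computed from confusion-matrix counts; the sketch below uses made-up counts, not the paper's data:

```python
def f_measure(tp, fp, fn):
    """F-measure (F1): harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def balanced_accuracy(tp, fp, tn, fn):
    """Mean of sensitivity and specificity, robust to class imbalance
    (benign tissue vastly outnumbers IDC in a WSI)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Hypothetical patch-level counts:
print(round(f_measure(80, 20, 40), 4))             # 0.7273
print(round(balanced_accuracy(80, 20, 100, 40), 4))  # 0.75
```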
Much insight into metabolic interactions, tissue growth, and tissue organization can be gained by analyzing differently stained histological serial sections. One opportunity unavailable to classic histology is three-dimensional (3D) examination and computer-aided analysis of tissue samples. In this case, registration is needed to reestablish the spatial correspondence between adjacent slides that is lost during the sectioning process. Furthermore, sectioning introduces various distortions such as cuts, folding, tearing, and local deformations to the tissue, which need to be corrected in order to exploit the additional information arising from the analysis of neighboring slide images. In this paper we present a novel image-registration-based method for reconstructing a 3D tissue block, implementing a zooming strategy around a user-defined point of interest. We efficiently align consecutive slides at increasingly fine resolution, up to cell level. We use a two-step approach: after a macroscopic, coarse alignment of the slides as preprocessing, a nonlinear, elastic registration is performed to correct local, non-uniform deformations. Being driven by optimization of the normalized gradient field (NGF) distance measure, our method is suitable for differently stained and thus multi-modal slides. We applied our method to ultra-thin (2 μm) serial sections of a human lung tumor. In total, 170 slides, stained alternately with four different stains, have been registered. Thorough visual inspection of virtual cuts through the reconstructed block, perpendicular to the cutting plane, shows accurate alignment of vessels and other tissue structures. This observation is confirmed by a quantitative analysis. Using nonlinear image registration, our method is able to correct locally varying deformations in tissue structures and exceeds the limitations of globally linear transformations.
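The NGF distance that drives the registration rewards aligned gradient directions regardless of stain contrast, which is what makes it usable across differently stained (multi-modal) sections. A minimal one-dimensional sketch of a common formulation (forward differences, edge parameter `eps`), not the authors' implementation:

```python
import math

def ngf_distance(r, t, eps=0.1):
    """Normalized gradient field distance between two 1-D signals.
    Per sample, measures how far the regularized gradient directions
    of r and t are from being parallel; 0 means perfectly aligned."""
    assert len(r) == len(t)
    d = 0.0
    for i in range(len(r) - 1):
        gr = r[i + 1] - r[i]          # forward-difference gradient of r
        gt = t[i + 1] - t[i]          # forward-difference gradient of t
        num = gr * gt + eps ** 2      # eps regularizes flat (noise-only) regions
        den = math.sqrt(gr ** 2 + eps ** 2) * math.sqrt(gt ** 2 + eps ** 2)
        d += 1.0 - (num / den) ** 2
    return d

step = [0] * 5 + [1] * 5      # a tissue "edge" in one stain
inverted = [1] * 5 + [0] * 5  # same edge, opposite contrast (another stain)
print(round(ngf_distance(step, step), 6))     # ~0 for identical signals
print(ngf_distance(step, inverted) > 0)       # small but nonzero: polarity flipped
```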
Sensitivity and specificity of conventional cytological methods for cancer diagnosis can be raised significantly by applying further adjuvant cytological methods. To this end, the pathologist marks regions of interest (ROIs) with a felt-tip pen on the microscope slide for further analysis. This paper presents algorithms for the automated detection of these ROIs, which enables further automated processing of these regions by digital pathology solutions and image analysis. For this purpose, an overview scan is obtained at low magnification. Slides from different manufacturers must be handled, as they may contain regions that need to be excluded from the analysis; therefore, the slide type is identified first. Subsequently, the felt-tip marks are detected automatically, and gaps appearing in incompletely drawn ROIs are closed. Based on the marker detection, the ROIs are obtained. The algorithms have been optimized on a training set of 82 manually annotated images. On the test set, the slide types of all but one of 81 slides were identified correctly. A sensitivity of 98.31% and a positive predictive value of 97.48% were reached for the detection of ROIs. In combination with a slide loader or a whole slide imaging scanner, as well as automated image analysis, this enables fully automated batch processing of slides.
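Closing gaps in an incompletely drawn pen mark is typically done with morphological closing (dilation followed by erosion). A one-dimensional sketch with a hypothetical radius, not the paper's actual algorithm:

```python
def dilate(mask, r):
    """Binary dilation: a sample is on if any sample within radius r is on."""
    n = len(mask)
    return [1 if any(mask[max(0, i - r):i + r + 1]) else 0 for i in range(n)]

def erode(mask, r):
    """Binary erosion: a sample is on only if all samples within radius r are on."""
    n = len(mask)
    return [1 if all(mask[max(0, i - r):i + r + 1]) else 0 for i in range(n)]

def close_gaps(mask, r):
    """Morphological closing bridges gaps up to roughly 2*r samples wide."""
    return erode(dilate(mask, r), r)

# A pen stroke with a small gap where the marker skipped:
stroke = [1, 1, 1, 0, 0, 1, 1, 1]
print(close_gaps(stroke, 2))  # [1, 1, 1, 1, 1, 1, 1, 1] -- the gap is bridged
```

A gap wider than twice the radius is deliberately left open, so the radius bounds how aggressively incomplete ROI outlines are completed.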
We present a new method to detect hot spots in breast cancer slides stained for Ki-67 expression. It is common practice to use the centroid of a nucleus as a surrogate representation of a cell, which often requires the detection of individual nuclei; once all the nuclei are detected, the hot spots are found by clustering the centroids. For large images, nuclei detection is computationally demanding. Instead of detecting individual nuclei and treating hot spot detection as a clustering problem, we treat hot spot detection as an image filtering problem in which positively stained pixels are used to detect hot spots in breast cancer images. The method first segments the Ki-67-positive pixels using the visually meaningful segmentation (VMS) method that we developed earlier. Then, it automatically generates an image-dependent filter to produce a density map from the segmented image. The smoothness of the density image simplifies the detection of local maxima, whose number directly corresponds to the number of hot spots in the breast cancer image. The method was tested on 23 region-of-interest images extracted from 10 different breast cancer slides stained with Ki-67. To determine the intra-reader variability, each image was annotated twice for hot spots by a board-certified pathologist, with a two-week interval between her two readings. A computer-generated hot spot region was considered a true positive if it agreed with either one of the two annotation sets provided by the pathologist. While the intra-reader variability was 57%, our proposed method correctly detects hot spots with 81% precision.
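The filtering view of hot spot detection can be sketched in one dimension: smooth the map of positively stained pixels into a density, then count local maxima. This is an illustrative toy with a fixed moving-average window, not the image-dependent filter or the VMS segmentation of the paper:

```python
def density_map(positives, w):
    """Smooth a 0/1 map of positively stained pixels with a moving average."""
    n = len(positives)
    return [sum(positives[max(0, i - w):i + w + 1]) / (2 * w + 1) for i in range(n)]

def local_maxima(density):
    """Indices greater than the left neighbour and at least the right one,
    so a flat-topped peak is counted once, at its left edge."""
    return [i for i in range(1, len(density) - 1)
            if density[i] > density[i - 1] and density[i] >= density[i + 1]]

# Two clusters of positively stained pixels -> two hot spots:
stained = [0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
d = density_map(stained, 2)
print(len(local_maxima(d)))  # 2
```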
Current methods for cancer detection rely on clinical stains, often using immunohistochemistry techniques. Pathologists then evaluate the stained tissue in order to determine cancer stage and treatment options. These methods are commonly used; however, they are non-quantitative, and it is difficult to control for staining quality. In this paper, we propose the use of mid-infrared spectroscopic imaging to classify tissue types in tumor biopsy samples. Our goal is to augment the data available to pathologists by providing them with quantitative chemical information to aid diagnosis in clinical and research settings related to breast cancer.
This paper presents data on the sources of variation of the widely used hematoxylin and eosin (H&E) histological staining, as well as a new algorithm to reduce these variations in digitally scanned tissue sections. Experimental results demonstrate that staining protocols in different laboratories and staining on different days of the week are the major factors causing color variations in histopathological images. The proposed algorithm for standardizing histology slides is based on an initial clustering of the image into two tissue components having different absorption characteristics for the different dyes. The color distribution of each tissue component is standardized by aligning the 2D histogram of color distribution in the hue-saturation-density (HSD) model. Qualitative evaluation of the proposed standardization algorithm shows that the color constancy of the standardized images is improved. Quantitative evaluation demonstrates that the algorithm outperforms competing methods. In conclusion, the paper demonstrates that staining variations, which may potentially hamper the usefulness of computer-assisted analysis of histopathological images, can be reduced considerably by applying the proposed algorithm.
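The HSD model referenced above separates stain amount (density) from chromatic information. A sketch of the standard transform for a single RGB pixel (per-channel optical densities, then chromatic coordinates cx, cy); this illustrates the model, not the paper's clustering or histogram-alignment steps:

```python
import math

def optical_density(intensity, i0=255.0):
    """Lambert-Beer: optical density of one channel relative to white level i0.
    max(..., 1) avoids log(0) for fully absorbed pixels."""
    return -math.log(max(intensity, 1) / i0)

def hsd_transform(r, g, b):
    """HSD model: overall density D plus chromatic coordinates (cx, cy),
    which depend on the stain's colour but not on how much stain is present."""
    dr, dg, db = (optical_density(c) for c in (r, g, b))
    d = (dr + dg + db) / 3
    cx = dr / d - 1
    cy = (dg - db) / (d * math.sqrt(3))
    return d, cx, cy

# A grey pixel has zero chroma regardless of how dark it is:
d, cx, cy = hsd_transform(100, 100, 100)
print(round(cx, 6), round(cy, 6))  # 0.0 0.0
```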
Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with a display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presented on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images from different subspecialties on three different display systems, it was concluded that pathology images do look different when visualized on different display systems. The importance of these visual differences is elucidated when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that the perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, it seems that the specific calibration target chosen for the display system has an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibrating to the DICOM GSDF performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising, as it suggests that further research could lead to a better-defined, optimized calibration target for digital pathology images, with a positive effect on clinical performance.
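The dE2000 (CIEDE2000) formula used in the study is elaborate; its simpler predecessor, CIE76, is just Euclidean distance in CIELAB and conveys what a background-to-feature colour-difference computation looks like. The Lab values below are hypothetical, not measurements from the paper:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB.
    (CIEDE2000 adds corrections for the perceptual non-uniformity
    of CIELAB; the basic idea is the same.)"""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

background = (78.0, 2.0, 5.0)    # hypothetical Lab of pale stroma
feature = (55.0, 30.0, 18.0)     # hypothetical Lab of a stained feature
print(round(delta_e_76(background, feature), 2))  # ~38.5
```

A larger delta-E between feature and background on one display than on another means the feature is perceptually more distinct there, which is exactly what the display comparison above quantifies.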
Acquisition, Processing and Storage of Microscopic Images
A key factor in the prognosis of colorectal cancer, and its response to chemoradiotherapy, is the ratio of cancer cells to surrounding tissue (the so-called tumour:stroma ratio). Currently, the tumour:stroma ratio is calculated manually, by examining H&E stained slides and counting the proportion of the area of each. Virtual slides facilitate this analysis by allowing pathologists to annotate areas of tumour on a given digital slide image, and in-house developed stereometry tools mark random, systematic points on the slide, known as spots. These spots are examined and classified by the pathologist. Typical analyses require a pathologist to score at least 300 spots per tumour. This is a time-consuming (10-60 minutes per case) and laborious task for the pathologist, and automating this process is highly desirable.
Using an existing dataset of expert-classified spots from one colorectal cancer clinical trial, an automated tumour:stroma detection algorithm has been trained and validated. Each spot is extracted as an image patch and then processed for feature extraction, identifying colour, texture, stain intensity, and object characteristics. These features are used as training data for a random forest classification algorithm and validated against unseen image patches. This process was repeated for multiple patch sizes. Over 82,000 such patches have been used, and results show an accuracy of 79%, depending on image patch size. A second study examining contextual requirements for pathologist scoring was conducted and indicates that further analysis of structures within each image patch is required in order to improve algorithm accuracy.
Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is the mitotic count, which involves quantifying the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time-consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses, or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, a CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain-pertinent attributes while CNN approaches are largely unsupervised feature generation methods, there is appeal in combining these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either strategy individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color, and texture features).
By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by leveraging the disconnected feature sets. Evaluation on the public ICPR12 mitosis dataset, which has 226 mitoses annotated on 35 training High Power Fields (HPF, ×400 magnification) by several pathologists and 15 testing HPFs, yielded an F-measure of 0.7345. Apart from this being the second-best performance ever recorded for this MITOS dataset, our approach is faster and requires fewer computing resources compared to extant methods, making it feasible for clinical use.
Vascularity represents an important element of the tissue/tumor microenvironment and is implicated in tumor growth, metastatic potential, and resistance to therapy. Small blood vessels can be visualized using immunohistochemical stains specific to vascular cells. However, currently used manual methods to assess vascular density are poorly reproducible and at best semi-quantitative. Computer-based quantitative and objective methods to measure microvessel density are urgently needed to better understand and clinically utilize microvascular density information. We propose a new method to quantify vascularity from images of bone marrow biopsies stained for CD34, a vascular lining cell protein, as a model. The method starts by automatically segmenting the blood vessels using maxlink thresholding and minimum graph cuts. The segmentation is followed by morphological post-processing to remove blasts and small spurious objects from the bone marrow images. To classify the images into one of four grades, we extracted 20 features from the segmented blood vessel images. These features comprise the first four moments of the distributions of 1) the area of blood vessels, 2) the edge weights in the minimum spanning tree of the blood vessels, 3) the shortest distance between blood vessels, 4) the homogeneity of the shortest distance (absolute difference in distance between consecutive blood vessels along the shortest path) between blood vessels, and 5) blood vessel orientation. The method was tested on 26 bone marrow biopsy images stained with the CD34 IHC stain, which were evaluated by three pathologists. The pathologists took part in this study by quantifying blood vessel density using gestalt assessment in hematopoietic portions of bone marrow core biopsy images. To determine the intra-reader variability, each image was graded twice by each pathologist, with a two-week interval between the readings. For each image, the ground truth (grade) was acquired through consensus among the three pathologists at the end of the study. A ranking of the features reveals that the fourth moment of the distribution of the area of blood vessels, along with the first moment of the distribution of the shortest distance between blood vessels, can correctly grade 68.2% of the bone marrow biopsies, while the intra- and inter-reader variability among the pathologists are 66.9% and 40.0%, respectively.
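The "first four moments" used as features are the standard mean, variance, skewness, and kurtosis of a distribution. A sketch on hypothetical vessel areas (the numbers are illustrative, not from the paper's dataset):

```python
import math

def first_four_moments(xs):
    """Mean, variance, skewness, kurtosis of a feature distribution,
    e.g. blood-vessel areas in a segmented biopsy image."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n   # 3rd standardized moment
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n   # 4th standardized moment
    return mean, var, skew, kurt

areas = [12, 15, 14, 13, 46]  # one large vessel among small ones
m, v, s, k = first_four_moments(areas)
print(round(m, 1), round(s, 2))  # 20.0 1.48 -- strongly right-skewed
```

The positive skewness and heavy tail (high fourth moment) are exactly the kind of shape information the grading features capture: a few large vessels among many small ones.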
Fourier transform infrared (FT-IR) chemical imaging has been demonstrated as a promising technique to complement histopathological assessment of biomedical tissue samples. Current histopathology practice involves preparing thin tissue sections and staining them using hematoxylin and eosin (H&E), after which a histopathologist manually assesses the tissue architecture under a visible-light microscope. Studies have shown that there is disagreement between operators viewing the same tissue, suggesting that a complementary technique for verification could improve the robustness of the evaluation and improve patient care. FT-IR chemical imaging allows the spatial distribution of chemistry to be rapidly imaged at a high (diffraction-limited) spatial resolution, where each pixel represents an area of 5.5 × 5.5 μm² and contains a full infrared spectrum providing a chemical fingerprint which studies have shown contains the diagnostic potential to discriminate between different cell types, and even the benign or malignant state of prostatic epithelial cells. We report a label-free (i.e., no chemical de-waxing or staining) method of imaging large pieces of prostate tissue (typically 1 cm × 2 cm) in tens of minutes (at a rate of 0.704 × 0.704 mm² every 14.5 s), yielding images containing millions of spectra. Due to refractive index matching between the sample and the surrounding paraffin, minimal signal processing is required to recover spectra with their natural profile, as opposed to harsh baseline correction methods, paving the way for future quantitative analysis of biochemical signatures. The quality of the spectral information is demonstrated by building and testing an automated cell-type classifier based upon spectral features.
We present a method for fast, approximate registration of whole-slide images (WSIs) of histopathology serial sections. Popular histopathology slide registration methods in the existing literature tend towards intensity-based approaches [1, 2]. Further input, in the form of an approximate initial transformation to be applied to one of the two WSIs, is then usually required, and this transformation needs to be optimised. Such a transformation is not readily available in this context, and thus there is a need for fast approximation of these parameters. Fast registration is achieved by comparison of the external boundaries of adjacent tissue sections, using local curvature on multiple scales to assess similarity. Our representation of curvature is a modified version of the Curvature Scale Space (CSS) [3] image. We substitute zero crossings with signed local absolute maxima of curvature to improve the registration's robustness to the subtle morphological differences of adjacent sections. A pairwise matching is made between curvature maxima at scales increasing exponentially; the matching minimizes the distance between maxima pairs at each scale. The boundary points corresponding to the matched maxima pairs are used to estimate the desired transformation. Our method is highly robust to translation, rotation, and linear scaling, and shows good performance in cases of moderate non-linear scaling. On our set of test images the algorithm shows improved reliability and processing speed in comparison to existing CSS-based registration methods.
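Signed local curvature along a closed boundary can be approximated with central differences, using the standard formula k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2). A sketch (not the authors' CSS implementation), sanity-checked on a circle, whose curvature is 1/r everywhere:

```python
import math

def curvature(points):
    """Signed discrete curvature at each point of a closed 2-D contour."""
    n = len(points)
    ks = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[(i + 1) % n]
        dx, dy = (x2 - x0) / 2, (y2 - y0) / 2            # central first differences
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0    # second differences
        ks.append((dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5)
    return ks

# A circle of radius 50 sampled at 1-degree steps: curvature ~ 1/50 everywhere.
circle = [(50 * math.cos(2 * math.pi * i / 360), 50 * math.sin(2 * math.pi * i / 360))
          for i in range(360)]
print(round(max(curvature(circle)), 4))  # 0.02
```

In the CSS setting the contour would first be smoothed at a series of exponentially increasing scales, with curvature recomputed at each scale before extracting the signed local maxima.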
Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging, as the decision must be made during surgery in a short time frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei regions of cytologic preparations. For glioblastoma, these regions are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures; for metastasis, these regions have a more homogeneous appearance, so suitable texture features can be extracted to distinguish between the two tissue types. In our work, we use Discrete Wavelet Frames to characterize the underlying texture because of their multi-resolution modeling capability. The textural characterization is carried out primarily in the non-nuclei regions, after the nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. A k-nearest neighbor classifier was then used to assign the features to the glioblastoma or metastasis class. Experiments on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies as high as 89.7% for glioblastoma, 87.5% for metastasis, and 88.7% overall. Further studies are underway to incorporate nuclei-region features into the classification on an expanded dataset, as well as to extend the classification to more types of cancer.
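The k-nearest neighbor step can be sketched with a tiny pure-Python classifier; the feature vectors and labels below are hypothetical stand-ins for the paper's wavelet-derived texture features:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical 2-D texture features per image region:
train = [((0.9, 0.1), "glioblastoma"), ((0.8, 0.2), "glioblastoma"),
         ((0.7, 0.3), "glioblastoma"), ((0.2, 0.6), "metastasis"),
         ((0.1, 0.7), "metastasis"), ((0.3, 0.8), "metastasis")]
print(knn_classify(train, (0.75, 0.25)))  # glioblastoma
```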
In contrast to imaging modalities such as magnetic resonance imaging and micro computed tomography, digital histology reveals multiple stained tissue features at high resolution (0.25 μm/pixel). However, the two-dimensional (2D) nature of histology challenges three-dimensional (3D) quantification and visualization of the different tissue components, cellular structures, and subcellular elements. This limitation is particularly relevant to the vasculature, which has a complex and variable structure within tissues. The objective of this study was to perform a fully automated 3D reconstruction of histology tissue in the mouse hind limb, preserving the accurate systemic orientation of the tissues, stained with hematoxylin and immunostained for smooth muscle α-actin. We performed a 3D reconstruction using pairwise rigid registrations of 5 μm thick, paraffin-embedded serial sections, digitized at 0.25 μm/pixel. Each registration was performed using the iterative closest points algorithm on blood vessel landmarks. Landmarks were vessel centroids, determined according to a signed distance map of each pixel to a decision boundary in hue-saturation-value color space; this decision boundary was determined based on manual annotation of a separate training set. Cell nuclei were then automatically extracted and placed into correspondence to refine the vessel landmark registration. Homologous nucleus landmark pairs appearing on not more than two adjacent slides were chosen to avoid registrations which force curved or non-section-orthogonal structures to be straight and section-orthogonal. The median accumulated target registration errors ± interquartile ranges for the vessel landmark registration and the nucleus landmark refinement were 43.4 ± 42.8 μm and 2.9 ± 1.7 μm, respectively (p < 0.0001). Fully automatic and accurate 3D rigid reconstruction of mouse hind limb histology imaging is feasible based on extracted vasculature and nuclei.
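Once landmark correspondences are fixed, the rigid (rotation plus translation) alignment of two 2-D point sets has a closed-form least-squares solution; iterative closest points alternates this solve with re-matching. A sketch that recovers a known pose from matched, hypothetical landmark pairs:

```python
import math

def rigid_register(src, dst):
    """Least-squares rigid alignment of matched 2-D landmark pairs,
    e.g. vessel centroids on adjacent sections. Returns (theta, tx, ty)
    such that dst ~= R(theta) * src + (tx, ty)."""
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    # Optimal rotation from centered cross-covariance (dot and cross terms):
    a = sum((x - csx) * (u - cdx) + (y - csy) * (v - cdy)
            for (x, y), (u, v) in zip(src, dst))
    b = sum((x - csx) * (v - cdy) - (y - csy) * (u - cdx)
            for (x, y), (u, v) in zip(src, dst))
    theta = math.atan2(b, a)
    # Translation maps the rotated source centroid onto the target centroid:
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Recover a known pose: rotate landmarks by 30 degrees, shift by (5, -3).
src = [(0, 0), (10, 0), (0, 20), (7, 9)]
ang = math.radians(30)
dst = [(x * math.cos(ang) - y * math.sin(ang) + 5,
        x * math.sin(ang) + y * math.cos(ang) - 3) for x, y in src]
theta, tx, ty = rigid_register(src, dst)
print(round(math.degrees(theta), 6), round(tx, 6), round(ty, 6))  # 30.0 5.0 -3.0
```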
Stimulated Raman scattering (SRS) spectral microscopy is a promising imaging method, based on vibrational
spectroscopy, which can visualize biological tissues with chemical specificity. SRS spectral microscopy has been used to
obtain two-dimensional spectral images of rat liver tissue, three-dimensional images of a vessel in rat liver, and in vivo
spectral images of mouse ear skin. Various multivariate analysis techniques, such as principal component analysis and
independent component analysis, have been used to obtain spectral images. In this study, we propose a digital staining
method. This method uses SRS spectra and statistical machine learning that makes use of prior knowledge of spectral
peaks and their two-dimensional distributional patterns corresponding to the composition of tissue samples. The method
selects spectral peaks on the basis of Mahalanobis distance, which is defined as the ratio of inter-group variation to intragroup
variation. We also make use of higher-order local autocorrelations as feature values for two-dimensional
distributional patterns. This combination of techniques allows groups corresponding to different intracellular structures
to be clearly discriminated in the multidimensional feature space. We investigate the performance of our method on
mouse liver tissue samples and show that the proposed method can digitally stain each intracellular structure such as cell
nuclei, cytoplasm, and erythrocytes separately and clearly without time-consuming chemical staining processes. We
anticipate that our method could be applied to computer-aided pathological diagnosis.
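The peak-selection criterion described above, the ratio of inter-group to intra-group variation, can be sketched as a per-channel separability score. This is a minimal illustration on synthetic two-channel spectra, not the authors' implementation; the data and the top-k selection rule are hypothetical.

```python
import numpy as np

def peak_separability(spectra, labels):
    """Score each spectral channel by the ratio of inter-group variation
    to intra-group variation (a Fisher-style separability criterion)."""
    spectra = np.asarray(spectra, float)
    labels = np.asarray(labels)
    overall_mean = spectra.mean(axis=0)
    between = np.zeros(spectra.shape[1])
    within = np.zeros(spectra.shape[1])
    for g in np.unique(labels):
        grp = spectra[labels == g]
        between += len(grp) * (grp.mean(axis=0) - overall_mean) ** 2
        within += ((grp - grp.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_peaks(spectra, labels, k=2):
    """Indices of the k channels with the highest separability score."""
    scores = peak_separability(spectra, labels)
    return np.argsort(scores)[::-1][:k]

# Toy spectra: channel 1 separates the two groups, channel 0 is noise.
rng = np.random.default_rng(0)
a = np.column_stack([rng.normal(0, 1, 50), rng.normal(0, 0.1, 50)])
b = np.column_stack([rng.normal(0, 1, 50), rng.normal(3, 0.1, 50)])
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)
peaks = select_peaks(X, y, k=1)
```

On this toy example the discriminative channel is ranked first, mirroring how informative spectral peaks would be retained.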
Measurement of prostate tumour volume can inform prognosis and treatment selection, including an assessment of the
suitability and feasibility of focal therapy, which can potentially spare patients the deleterious side effects of radical
treatment. Prostate biopsy is the clinical standard for diagnosis but provides limited information regarding tumour
volume due to sparse tissue sampling. A non-invasive means for accurate determination of tumour burden could be of
clinical value and an important step toward reduction of overtreatment. Multi-parametric magnetic resonance imaging
(MPMRI) is showing promise for prostate cancer diagnosis. However, the accuracy and inter-observer variability of
prostate tumour volume estimation based on separate expert contouring of T2-weighted (T2W), dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) MRI sequences acquired using an endorectal coil at 3T is currently
unknown. We investigated this question using a histologic reference standard based on a highly accurate MPMRI-histology image registration and a smooth interpolation of planimetric tumour measurements on histology. Our results
showed that prostate tumour volumes estimated based on MPMRI consistently overestimated histological reference
tumour volumes. The variability of tumour volume estimates across the different pulse sequences exceeded inter-observer variability within any sequence. Tumour volume estimates on DCE MRI provided the lowest inter-observer
variability and the highest correlation with histology tumour volumes, whereas the apparent diffusion coefficient (ADC)
maps provided the lowest volume estimation error. If validated on a larger data set, the observed correlations could
support the development of automated prostate tumour volume segmentation algorithms as well as correction schemes
for tumour burden estimation on MPMRI.
Conventional Optical Projection Tomography (OPT) can image tissue samples in both absorption and fluorescence modes. The absorption image shows the anatomical structure of the sample, while the fluorescence mode reveals specific molecular distributions. In conventional OPT, the depth of focus (DOF) of the lens needs to traverse the whole sample; as a result, resolution is poor because of the low numerical aperture (NA) needed to generate a large DOF. In conventional pathology, specimens are embedded in wax and cut into thin slices so that a high-NA objective lens can be used to image the sections. High resolution is thus obtained with a high-NA objective, but 3D images can only be obtained by stitching different sections together. Here, we propose a new method that can image an entire specimen without sectioning at the same high resolution as conventional pathology. To produce isotropic high resolution, the original OPTM system scans the focal plane of the high-NA objective through the entire specimen to produce one projection image. The specimen is then rotated so that the subsequent projection is taken from a different perspective. After all the projections are taken, 3D images are generated by the filtered back-projection method. However, the scanning rate is limited when the objective lens itself is scanned, owing to its large mass. Here we show a new OPTM system that instead scans a mirror in the conjugate image space of the object to produce projections.
A proof-of-principle study was carried out to assess the descriptive potential of two simple geometric measures (shape descriptors) applied to sets of segmented glands within images of 125 prostate cancer tissue sections. The respective measures addressing glandular shape were (i) inverse solidity and (ii) inverse compactness. Using a classifier based on logistic regression, Gleason grades 3 and 4/5 could be differentiated with an accuracy of approximately 95%. The results suggest not only good discriminatory properties but also robustness against gland segmentation variations. Some false classifications were caused by incorrect Gleason grade assignments, as a posteriori re-inspection revealed.
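The two shape descriptors can be sketched from gland contours. The definitions below (convex-hull area over gland area, and perimeter squared over 4π·area) are the standard ones and may differ in detail from the paper's; in the paper these features feed a logistic-regression classifier, while here we only compute the descriptors on toy polygons.

```python
import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(pts):
    # Shoelace formula for a simple polygon given as an (n, 2) vertex array.
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def polygon_perimeter(pts):
    # Sum of edge lengths, including the closing edge.
    closed = np.vstack([pts, pts[:1]])
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()

def inverse_solidity(pts):
    # Convex-hull area divided by gland area: 1 for convex shapes,
    # larger for indented (e.g. cribriform) gland outlines.
    hull = pts[ConvexHull(pts).vertices]
    return polygon_area(hull) / polygon_area(pts)

def inverse_compactness(pts):
    # Perimeter^2 / (4*pi*area): 1 for a circle, grows with irregularity.
    return polygon_perimeter(pts) ** 2 / (4 * np.pi * polygon_area(pts))

# Toy contours: a convex square and a notched (concave) polygon.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
notched = np.array([[0, 0], [2, 0], [2, 2], [1, 0.5], [0, 2]], float)
```

A convex gland yields inverse solidity exactly 1, while the notched contour scores higher, which is the property the Gleason-grade classifier exploits.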
Eli Gibson, Mena Gaed, Thomas Hrinivich, José A. Gómez, Madeleine Moussa, Cesare Romagnoli, Jonathan Mandel, Matthew Bastian-Jordan, Derek W. Cool, et al.
Purpose: Multiparametric magnetic resonance imaging (MPMRI) supports detection and staging of prostate cancer, but the image characteristics needed for tumor boundary delineation to support focal therapy have not been widely investigated. We quantified the detectability (image contrast between tumor and non-cancerous contralateral tissue) and the localizability (image contrast between tumor and non-cancerous neighboring tissue) of Gleason score 7 (GS7) peripheral zone (PZ) tumors on MPMRI using tumor contours mapped from histology using accurate 2D–3D registration.
Methods: MPMRI [comprising T2-weighted (T2W), dynamic-contrast-enhanced (DCE), apparent diffusion coefficient (ADC) and contrast transfer coefficient images] and post-prostatectomy digitized histology images were acquired for 6 subjects. Histology contouring and grading (approved by a genitourinary pathologist) identified 7 GS7 PZ tumors. Contours were mapped to MPMRI images using semi-automated registration algorithms (combined target registration error: 2 mm). For each focus, three measurements of mean ± standard deviation of image intensity were taken on each image: tumor tissue (mT±sT), non-cancerous PZ tissue < 5 mm from the tumor (mN±sN), and non-cancerous contralateral PZ tissue (mC±sC). Detectability [D, defined as mT-mC normalized by sT and sC added in quadrature] and localizability [L, defined as mT-mN normalized by sT and sN added in quadrature] were quantified for each focus on each image.
Results: T2W images showed the strongest detectability, although detectability |D|≥1 was observed on either ADC or DCE images, or both, for all foci. Localizability on all modalities was variable; however, ADC images showed localizability |L|≥1 for 3 foci.
Conclusions: Delineation of GS7 PZ tumors on individual MPMRI images faces challenges; however, images may contain complementary information, suggesting a role for fusion of information across MPMRI images for delineation.
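The detectability and localizability measures defined in the Methods reduce to one normalized-contrast formula (mean difference over standard deviations added in quadrature). A minimal sketch, with hypothetical intensity statistics:

```python
import numpy as np

def contrast(m_a, s_a, m_b, s_b):
    """Normalized contrast between two tissue regions: mean difference
    divided by the standard deviations added in quadrature. Used for both
    detectability (vs contralateral tissue) and localizability (vs
    neighboring tissue)."""
    return (m_a - m_b) / np.hypot(s_a, s_b)

# Hypothetical intensity statistics (arbitrary units) for one tumor focus:
# tumor (T), contralateral non-cancerous PZ (C), neighboring PZ (N).
D = contrast(m_a=120.0, s_a=10.0, m_b=80.0, s_b=8.0)   # detectability
L = contrast(m_a=120.0, s_a=10.0, m_b=110.0, s_b=12.0)  # localizability
```

With these made-up numbers the focus would be detectable (|D| ≥ 1) but hard to localize (|L| < 1), the pattern the abstract reports for several foci.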
Automatic differential counting of leukocytes provides invaluable information to pathologists for the diagnosis and treatment of many diseases. The main objective of this paper is to detect leukocytes in a blood smear microscopic image and classify them into their types (Neutrophil, Eosinophil, Basophil, Lymphocyte and Monocyte) using the features that pathologists consider when differentiating leukocytes. The features comprise color, geometric and texture features. The colors of the nucleus and cytoplasm vary among the leukocytes. Lymphocytes have a single, large, round or oval nucleus, while Monocytes have a single convoluted nucleus. The nucleus of Eosinophils is divided into two segments, and that of Neutrophils into two to five segments. In the cytoplasm, Lymphocytes often have no granules, Monocytes have tiny granules, Neutrophils have fine granules and Eosinophils have large granules. Six color features are extracted from both nucleus and cytoplasm, 6 geometric features from the nucleus only, and 6 statistical features and 7 moment-invariant features from the cytoplasm only. These features are fed to support vector machine (SVM) classifiers with a one-versus-one architecture. The results obtained by applying the proposed method to blood smear microscopic images of 10 patients, comprising 149 white blood cells (WBCs), indicate that the correct classification rate for all classifiers is above 93%, which is higher than previously reported results.
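The one-versus-one SVM setup can be sketched with a generic multiclass SVM. The 6-dimensional feature vectors below are synthetic stand-ins for the color/geometric/texture features, not real leukocyte measurements.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = ["Neutrophil", "Eosinophil", "Basophil", "Lymphocyte", "Monocyte"]

# Hypothetical feature vectors: 30 cells per class, drawn around distinct
# class means purely for illustration.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(30, 6)) for i in range(5)])
y = np.repeat(classes, 30)

# SVC trains one binary SVM per class pair (one-versus-one) internally.
clf = make_pipeline(StandardScaler(), SVC(decision_function_shape="ovo"))
clf.fit(X, y)
acc = clf.score(X, y)
```

On such well-separated synthetic data the one-versus-one ensemble easily clears the 93% level reported in the abstract.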
When examining a histological sample, an expert must not only identify structures at different scale and conceptual levels, i.e. cellular, tissue and organ levels, but also recognize and integrate the visual cues of specific pathologies and histological concepts such as “gland”, “carcinoma” or “collagen”. It is then necessary to code texture and color so that the relevant information present at different scales is emphasized and preserved. In this article we propose a novel multi-scale image descriptor using dictionaries that learn and code discriminant visual elements associated with specific histological concepts. The dictionaries are built separately for each concept using sparse coding algorithms. The descriptor's discrimination capacity is evaluated using a naive strategy that assigns a particular image to the class best represented by a particular dictionary. Results show how, even using this very simple approach, average recall and precision of 0.81 and 0.86 were obtained for the challenging problem of classifying epidermis, eccrine glands, hair follicles and nodular carcinoma in basal skin carcinoma images.
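The classify-by-best-represented-dictionary strategy can be sketched with an off-the-shelf sparse-coding learner: fit one dictionary per concept and assign a sample to the concept whose dictionary reconstructs it with the lowest error. This is a generic illustration on synthetic vectors; the class names and data are placeholders for the paper's patch descriptors.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def fit_class_dictionaries(patches_by_class, n_atoms=4, seed=0):
    """Learn one sparse-coding dictionary per histological concept."""
    return {c: DictionaryLearning(n_components=n_atoms, random_state=seed,
                                  transform_algorithm="lasso_lars").fit(P)
            for c, P in patches_by_class.items()}

def classify(x, dicts):
    """Assign x to the concept whose dictionary reconstructs it best."""
    errs = {}
    for c, d in dicts.items():
        code = d.transform(x.reshape(1, -1))      # sparse code for x
        errs[c] = np.linalg.norm(x - code @ d.components_)
    return min(errs, key=errs.get)

# Synthetic "patch descriptors" for two concepts, well separated on purpose.
rng = np.random.default_rng(1)
data = {"epidermis": rng.normal(0, 1, (40, 8)),
        "carcinoma": rng.normal(5, 1, (40, 8))}
dicts = fit_class_dictionaries(data)
pred = classify(rng.normal(5, 1, 8), dicts)
```

The carcinoma-like test vector is reconstructed far better by the carcinoma dictionary, which is the whole basis of the naive assignment rule.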
Digitized histopathology images have a great potential for improving or facilitating current assessment tools in cancer
pathology. In order to develop accurate and robust automated methods, the precise segmentation of histologic objects such as epithelium, stroma, and nuclei is necessary, in the hope of extracting information not otherwise obvious to the subjective eye. Here, we propose a multiview boosting approach to segment histologic objects in prostate tissue. Tissue specimen images are first represented at different scales using a Gaussian kernel and converted into several color spaces such as HSV and La*b*. Intensity- and texture-based features are extracted from the converted images. Adopting a multiview boosting approach, we effectively learn a classifier to predict the histologic class of a pixel in a prostate tissue specimen. The method attempts to integrate the information from multiple scales (or views). Eighteen prostate tissue specimens from 4
patients were employed to evaluate the new method. The method was trained on 11 tissue specimens including 75,832
epithelial and 103,453 stroma pixels and tested on 55,319 epithelial and 74,945 stroma pixels from 7 tissue specimens.
The technique showed 96.7% accuracy, and as summarized into a receiver operating characteristic (ROC) plot, the area
under the ROC curve (AUC) of 0.983 (95% CI: 0.983-0.984) was achieved.
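The multiview idea, per-pixel features extracted at several Gaussian scales and combined by a boosting classifier, can be sketched as follows. The synthetic two-region image and the AdaBoost learner are illustrative stand-ins for the paper's intensity/texture features and its multiview boosting formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import AdaBoostClassifier

def multiscale_features(image, scales=(1, 2, 4)):
    """Per-pixel feature vector: the image smoothed at several Gaussian
    scales, each scale acting as one 'view'."""
    return np.stack([gaussian_filter(image, s).ravel() for s in scales],
                    axis=1)

# Hypothetical two-class tissue image: left half dark, right half bright.
rng = np.random.default_rng(0)
img = np.hstack([rng.normal(0.2, 0.05, (32, 16)),
                 rng.normal(0.8, 0.05, (32, 16))])
labels = np.hstack([np.zeros((32, 16), int), np.ones((32, 16), int)]).ravel()

X = multiscale_features(img)
clf = AdaBoostClassifier(n_estimators=25, random_state=0).fit(X, labels)
acc = clf.score(X, labels)
```

Each pixel is described by all views at once, so the boosted classifier can pick whichever scale is most informative at each round.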
We have developed an automatic technique to measure cell density in high resolution histopathology images of the
prostate, allowing for quantification of differences between tumour and benign regions of tissue. Haematoxylin and Eosin (H&E) stained histopathology slides from five patients were scanned at 20x magnification and annotated by an
expert pathologist. Colour deconvolution and a radial symmetry transform were used to detect cell nuclei in the images,
which were processed as a set of small tiles and combined to produce global maps of cell density. Kolmogorov-Smirnov
tests showed a significant difference in cell density distribution between tumour and benign regions of tissue for all
images analyzed (p < 0.05), suggesting that cell density may be a useful feature for segmenting tumour in un-annotated
histopathology images. ROC curves quantified the potential utility of cell density measurements in terms of specificity
and sensitivity and threshold values were investigated for their classification accuracy. Motivation for this work derives
from a larger study in which we aim to correlate ground truth histopathology with in-vivo multiparametric MRI
(mpMRI) to validate tumour location and tumour characteristics. Specifically, cell density maps will be registered with
T2-weighted MRI and ADC maps from diffusion-weighted MRI. The validated mpMRI data will then be used to
parameterise a radiobiological model for designing focal radiotherapy treatment plans for prostate cancer patients.
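The per-region density comparison can be sketched with a two-sample Kolmogorov-Smirnov test on per-tile cell counts; the density values below are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-tile cell-density measurements (cells per tile) for an
# annotated tumour region and a benign region of the same slide.
rng = np.random.default_rng(0)
tumour_density = rng.normal(120, 15, 200)
benign_density = rng.normal(80, 15, 200)

# Two-sample KS test: is the density distribution different between regions?
stat, p = ks_2samp(tumour_density, benign_density)
significant = p < 0.05
```

A significant KS statistic, as reported for all images in the study, supports using a density threshold to flag likely tumour tiles in un-annotated slides.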
Alzheimer's disease (AD) is the most common form of dementia in the elderly, characterized by extracellular deposition of amyloid plaques (AP). Using animal models, AP loads have been manually measured from histological specimens to understand disease etiology as well as response to treatment. Due to the manual nature of these approaches, obtaining the AP load is laborious, subjective and error prone. Automated algorithms can be designed to alleviate these challenges by objectively segmenting AP. In this paper, we focus on the development of a novel algorithm for AP segmentation based on robust preprocessing and a Type II fuzzy system. Type II fuzzy systems are more advantageous than traditional Type I fuzzy systems, since ambiguity in the membership function may be modeled and exploited to generate excellent segmentation results. Here, the ambiguity in the membership function is defined as an adaptively changing parameter that is tuned based on the local contrast characteristics of the image. Using transgenic mouse brains with AP ground truth, validation studies were carried out showing a high degree of overlap and a low degree of oversegmentation (0.8233 and 0.0917, respectively). The results highlight that such a framework is able to handle plaques of various types (diffuse, punctate), plaques with varying Aβ concentrations, as well as intensity variation caused by treatment effects or staining variability.
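One generic form of such a contrast-adaptive Type II membership is an interval Gaussian membership whose footprint of uncertainty widens with a per-pixel uncertainty parameter. This sketch is illustrative, not the paper's exact formulation.

```python
import numpy as np

def interval_type2_membership(intensity, center, sigma, uncertainty):
    """Interval Type-II Gaussian membership: each pixel gets a membership
    interval [lower, upper] whose width (the footprint of uncertainty) is
    controlled by an 'uncertainty' parameter, which in the paper's spirit
    would be tuned from local contrast."""
    # Upper membership: wider Gaussian; lower membership: narrower Gaussian.
    mu_hi = np.exp(-0.5 * ((intensity - center)
                           / (sigma * (1 + uncertainty))) ** 2)
    mu_lo = np.exp(-0.5 * ((intensity - center)
                           / (sigma * (1 - uncertainty) + 1e-9)) ** 2)
    return mu_lo, mu_hi

# Hypothetical normalized pixel intensities around a plaque-like center.
x = np.array([0.2, 0.5, 0.8])
lo, hi = interval_type2_membership(x, center=0.5, sigma=0.1, uncertainty=0.3)
```

Pixels at the membership center are fully included regardless of the uncertainty, while off-center pixels carry a wide interval that a Type II system can exploit when deciding plaque boundaries.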
Use of color images in medical imaging has increased significantly in the last few years. Color information is essential for applications such as ophthalmology, dermatology and clinical photography, and it also brings benefits to applications such as endoscopy, laparoscopy and digital pathology. Remarkably, as of today there is no agreed standard on how color information should be visualized for medical applications. This lack of standardization results in large variability in how color images are visualized, and it makes quality assurance a challenge. For this reason the FDA and ICC recently organized a joint summit on color in medical imaging (CMI). At this summit, one of the suggestions was that modalities such as digital pathology could benefit from using a perceptually uniform color space (T. Kimpe, “Color Behavior of Medical Displays,” CMI presentation, May 2013). Perceptually uniform spaces have been used for many years in the radiology community, where the DICOM GSDF standard provides linearity in luminance but not in color behavior. In this paper we quantify the perceptual uniformity, using CIE's ΔE2000 color distance metric, of several color spaces that are typically used for medical applications. We applied our method to the theoretical color spaces Gamma 1.8, 2.0, and 2.2, standard sRGB, and DICOM (correction LUT for gray applied to all primaries). In addition, we also measured the native color spaces of a high-end medical display (Barco Coronis Fusion 6MP DL, MDCC-6130) and a consumer display (Dell 1907FP). Our results indicate that sRGB and the native color space of the Barco Coronis Fusion exhibit the least non-uniformity within their respective groups. However, the remaining degree of perceptual non-uniformity is still significant, and there is room for improvement.
Image processing algorithms in pathology commonly include automated decision points such as classifications. While
this enables efficient automation, there is also a risk that errors are induced. A different paradigm is to use image
processing for enhancements without introducing explicit classifications. Such enhancements can help pathologists to
increase efficiency without sacrificing accuracy. In our work, this paradigm has been applied to Ki-67 hot spot detection.
Ki-67 scoring is a routine analysis to quantify the proliferation rate of tumor cells. Cell counting in the hot spot, the
region of highest concentration of positive tumor cells, is a method increasingly used in clinical routine. An obstacle for
this method is that while hot spot selection is a task suitable for low magnification, high magnification is needed to
discern positive nuclei, thus the pathologist must perform many zooming operations. We propose to address this issue by
an image processing method that increases the visibility of the positive nuclei at low magnification levels. This tool
displays the modified version at low magnification, while gradually blending into the original image at high
magnification. The tool was evaluated in a feasibility study with four pathologists targeting routine clinical use. In a task
to compare hot spot concentrations, the average accuracy was 75±4.1% using the tool and 69±4.6% without it (n=4). Feedback on the system, gathered from an observer study, indicates that the pathologists found the tool useful and a good fit for their existing diagnostic process. The pathologists judged the tool feasible for implementation in clinical routine.
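The magnification-dependent blending can be sketched as a linear alpha ramp between the enhanced and original images; the zoom thresholds below are illustrative, not taken from the paper.

```python
import numpy as np

def blend_for_zoom(enhanced, original, zoom, low=2.0, high=10.0):
    """Show the enhanced image at low magnification and gradually blend
    back to the original as the user zooms in. The ramp runs linearly
    between the `low` and `high` zoom levels (hypothetical thresholds)."""
    alpha = np.clip((zoom - low) / (high - low), 0.0, 1.0)  # 0=enhanced, 1=original
    return (1 - alpha) * enhanced + alpha * original

# Toy images: the enhancement boosts positive-nucleus visibility.
enhanced = np.full((4, 4), 0.9)
original = np.full((4, 4), 0.1)
at_low = blend_for_zoom(enhanced, original, zoom=2.0)
at_high = blend_for_zoom(enhanced, original, zoom=10.0)
```

At low magnification the viewer sees only the enhanced rendering, and at high magnification only the original slide, so nucleus counting is unaffected by the enhancement.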
Use of color images in medical imaging has increased significantly in the last few years. One of the applications in which color plays an essential role is digital pathology. Remarkably, as of today there is no agreed standard on how color information should be processed and visualized for medical imaging applications such as digital pathology. This lack of standardization results in large variability in how color images are visualized, and it makes consistency and quality assurance a challenge. For this reason the FDA and ICC recently organized a joint summit on color in medical imaging. This paper focuses on the visualization and display side of the digital pathology imaging pipeline. Requirements and desired characteristics for visualization of digital pathology images are discussed in depth, and several alternative technological solutions are considered. Finally, a proposal is made for a possible architecture for a display and visualization framework for digital pathology images. The main goal of this architectural proposal is to facilitate discussion that could lead to standardization.
The diagnostic standard is a pleural biopsy with subsequent histologic examination of the tissue demonstrating invasion by the tumor. The diagnostic tissue is obtained through thoracoscopy or open thoracotomy, both highly invasive procedures. Thoracocentesis, or removal of effusion fluid from the pleural space, is a far less invasive procedure that can provide material for cytological examination. However, it is insufficient to definitively confirm or exclude the diagnosis of malignant mesothelioma, since tissue invasion cannot be determined. In this study, we present a computerized method to detect and classify malignant mesothelioma based on the nuclear chromatin distribution in digital images of mesothelial cells from effusion cytology specimens. Our method aims at determining whether a set of nuclei belonging to a patient, obtained from effusion fluid images using image segmentation, is benign or malignant, and has the potential to eliminate the need for tissue biopsy. The method is performed by quantifying the chromatin morphology
of cells using the optimal transportation (Kantorovich–Wasserstein) metric in combination with the modified Fisher
discriminant analysis, a k-nearest neighborhood classification, and a simple voting strategy. Our results show that we can
classify the data of 10 different human cases with 100% accuracy after blind cross validation. We conclude that nuclear
structure alone contains enough information to classify malignant mesothelioma. We also conclude that the
distribution of chromatin seems to be a discriminating feature between nuclei of benign and malignant mesothelioma
cells.
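The classification pipeline (optimal-transport distance, k-nearest-neighbour vote per nucleus, then a per-patient majority vote) can be sketched in 1-D. The chromatin intensity samples below are synthetic, and the modified Fisher discriminant analysis step from the paper is omitted here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def knn_label(sample, train_samples, train_labels, k=3):
    """Classify one nucleus by a k-nearest-neighbour vote, using the 1-D
    Wasserstein (optimal transport) distance between chromatin intensity
    distributions as the metric."""
    d = [wasserstein_distance(sample, t) for t in train_samples]
    nearest = np.argsort(d)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

def patient_label(nuclei, train_samples, train_labels, k=3):
    """Simple voting strategy: the patient takes the majority nucleus label."""
    votes = [knn_label(n, train_samples, train_labels, k) for n in nuclei]
    return max(set(votes), key=votes.count)

# Synthetic chromatin intensity samples: benign and malignant training nuclei.
rng = np.random.default_rng(0)
train = ([rng.normal(0.3, 0.05, 100) for _ in range(5)]
         + [rng.normal(0.7, 0.05, 100) for _ in range(5)])
labels = ["benign"] * 5 + ["malignant"] * 5
pred = patient_label([rng.normal(0.7, 0.05, 100) for _ in range(4)],
                     train, labels)
```

Each test nucleus lands near the malignant training distributions under the transport metric, so the per-patient vote comes out malignant.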
The recent development of multivariate imaging techniques, such as the Toponome Imaging System (TIS), has facilitated the analysis of the co-localisation of multiple proteins. This could hold the key to understanding complex phenomena such as protein-protein interaction in cancer. In this paper, we propose a Bayesian framework for cell-level network analysis allowing the identification of several protein pairs having significantly higher co-expression levels in cancerous tissue samples when compared to normal colon tissue. It involves segmenting the DAPI-labeled image into cells and determining the cell phenotypes according to their protein-protein dependence profile. The cells are phenotyped using Gaussian Bayesian hierarchical clustering (GBHC) after feature selection is performed. The phenotypes are then analysed using Difference in Sums of Weighted cO-dependence Profiles (DiSWOP), which detects differences in the co-expression patterns of protein pairs. We demonstrate that the pairs highlighted by the proposed framework have high concordance with recent results using a different phenotyping method. This demonstrates that the results are independent of the clustering method used. In addition, the highlighted protein pairs are further analysed via protein interaction pathway databases and by considering the localization of high protein-protein dependence within individual samples. This suggests that the proposed approach could identify potentially functional protein complexes active in cancer progression and cell differentiation.
Pathologists have expert knowledge of the classification of fibrosis. However, the differentiation of intermediate grades (e.g., F2-F3) may cause significant inter-expert variability. A quantitative morphological marker is presented in this paper, introducing a local-based image analysis of human liver tissue slides. Having defined hotspots in the slides, the liver collagen is segmented with a color deconvolution technique. After removing the regions of interstitial fibrosis, the fractal dimension of the fibrosis regions is computed using the box-counting algorithm. As a result, a quantitative index provides information about the grade of the fibrosis regions and thus about the tissue damage. The index does not take account of the pathological status of the patient, but it allows accurate and objective discrimination of the intermediate grades for which the expert evaluation is partially based on the fibrosis development. This method was applied to twelve human liver biopsies (from six different patients) using constant conditions of preparation, acquisition (same image resolution, magnification x20) and box-counting parameters. The liver tissue slides were labeled by a pathologist using METAVIR scores. A reasonably good correlation is observed between the METAVIR scores and the proposed morphological index (p < 0.001). Furthermore, the method is reproducible and scale-independent, which is appropriate for high-resolution biological images. Nevertheless, further work is needed to define reference values for this index so that METAVIR subdomains are well delimited.
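The box-counting estimate of fractal dimension can be sketched as follows. This is a generic implementation, with box sizes chosen for a 64×64 toy mask rather than the paper's parameters.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary mask: count the boxes
    containing foreground at several box sizes, then fit the slope of
    log(count) against log(1/size)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Partition the mask into s-by-s blocks and count occupied blocks.
        grid = mask[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(grid.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled region has dimension ~2, a straight line ~1;
# segmented fibrosis regions fall in between.
filled = np.ones((64, 64), bool)
line = np.zeros((64, 64), bool)
line[32, :] = True
```

A fibrosis mask with a ramified, space-filling pattern scores closer to 2, which is how the index tracks tissue damage.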
In this paper, we propose a work-flow for color reproduction in whole slide imaging (WSI) scanners such that the colors in the scanned images match the actual slide colors and the inter-scanner variation is minimal. We describe a novel method for preparation and verification of a color phantom slide, consisting of a standard IT8-target transmissive film, which is used for color calibrating and profiling the WSI scanner. We explore several ICC-compliant techniques for color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color space. Based on the quality of color reproduction in histopathology tissue slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed work-flow is that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We measure objective color performance using the CIE ΔE2000 metric, where ΔE values below 1 are considered imperceptible. Our evaluation of 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 ΔE. The proposed work-flow was implemented and evaluated on 35 Philips Ultra Fast Scanners (UFS). The results show that the average color difference between a scanner and the reference is 3.5 ΔE, and among the scanners is 3.1 ΔE. The improvement in color performance when using the proposed method is apparent in the visual color quality of the tissue scans.
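The matrix-based calibration step can be sketched as a least-squares fit of a 3×3 matrix from target-patch measurements. The patch values and reference matrix below are synthetic, and a full ICC profile additionally handles nonlinearity and rendering intents.

```python
import numpy as np

def fit_calibration_matrix(device_rgb, reference_xyz):
    """Matrix-based profiling sketch: least-squares 3x3 matrix mapping
    scanner RGB to reference XYZ, fitted on calibration-slide patches."""
    M, *_ = np.linalg.lstsq(device_rgb, reference_xyz, rcond=None)
    return M.T  # so that xyz ≈ M @ rgb for a single patch

# Synthetic ground truth: a made-up RGB-to-XYZ matrix and 24 target patches
# (standing in for IT8-target patch measurements).
rng = np.random.default_rng(0)
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb_patches = rng.uniform(0, 1, (24, 3))
xyz_patches = rgb_patches @ true_M.T  # noiseless "measurements"
M = fit_calibration_matrix(rgb_patches, xyz_patches)
```

Once fitted, the matrix converts every scanned pixel into the reference color space, after which a rendering intent maps the result to the sRGB display space.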