This PDF file contains the front matter associated with SPIE Proceedings Volume XXXXX, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Rendering cancer diagnoses from biopsy slides involves challenging tasks for pathologists, such as detecting micrometastases in tissue biopsies or distinguishing tumors from benign tissue that can look deceptively similar. These tasks are typically very difficult for humans, and, consequently, over- and under-diagnoses are not uncommon, resulting in non-optimal treatment. Algorithmic approaches to pathology, on the other hand, face their own set of challenges in the form of gigapixel images, proprietary data formats, and low availability of digitized images, let alone high-quality labels. However, advances in deep learning, access to cloud-based storage, and the recent FDA approval of the first whole slide image scanner for primary diagnosis now set the stage for a new era of digital pathology. This talk will discuss the potential of deep learning to improve the accuracy and availability of cancer diagnostics, and highlight some recent advances toward that goal.
Immunohistochemical staining (IHC) of tissue sections is routinely used in pathology to diagnose and characterize malignant tumors. Unfortunately, in the majority of cases, IHC stain interpretation is performed by a trained pathologist using a manual method that consists of counting each positively and negatively stained cell under a microscope. Even in the hands of expert pathologists, manual enumeration suffers from poor reproducibility. In this study, we propose a novel method to create artificial datasets in silico with known ground truth, allowing us to analyze accuracy, precision, and intra- and inter-observer variability in a systematic manner and to compare different computer analysis approaches. Our approach employs conditional Generative Adversarial Networks. We created our dataset using Ki67-stained tissue from 32 different breast cancer patients. Our experiments indicate that the synthetic images are indistinguishable from real ones: the accuracy of five experts (3 pathologists and 2 image analysts) in distinguishing between 15 real and 15 synthetic images was only 47.3% (±8.5%).
Performance of image analysis algorithms on digital pathology whole slide images (WSIs) is usually hampered by stain variations across images. To overcome such difficulties, many stain normalization methods have been proposed in which normalization is applied to all the stains in the image. However, for immunohistochemistry (IHC) images, there are situations where normalizing all the stains is undesired or infeasible, especially when the stain variations relate to certain biological indications. In contrast, the counterstain, usually hematoxylin (HTX), is always desired to be consistent across images for robust nuclei detection. In this work, we present a framework to normalize the HTX stain in an IHC WSI through alignment to a template IHC WSI. For this purpose, we use the Hue-Saturation-Density (HSD) model and align the distribution of the image's chromatic components to the template; we then shift and scale the density component to match the template. In order to retain the non-HTX stains, we differentiate pixels containing pure HTX stain from those containing a mixture of HTX and non-HTX stains, and apply a different normalization strategy to each. In the results, we show the qualitative performance of the method over a wide range of HTX stain variations. We also show that the dependence of algorithm performance on stain concentration can be much reduced by the proposed method.
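As a rough sketch of the two steps named above, the snippet below implements an assumed version of the HSD transform (after van der Laak et al.) and a shift-and-scale of the density component toward template statistics; the function names and template parameters are illustrative, not taken from the paper.

```python
import numpy as np

def rgb_to_hsd(rgb, i0=255.0):
    """Assumed HSD transform: optical density per channel, overall density D,
    and chromatic coordinates (cx, cy) that are independent of stain amount."""
    od = -np.log(np.clip(rgb.astype(np.float64), 1.0, i0) / i0)
    d = od.mean(axis=-1)                        # overall density component
    d_safe = np.maximum(d, 1e-6)
    cx = od[..., 0] / d_safe - 1.0
    cy = (od[..., 1] - od[..., 2]) / (np.sqrt(3.0) * d_safe)
    return cx, cy, d

def normalize_density(d, template_mean, template_std):
    """Shift and scale the density component to match template statistics."""
    return (d - d.mean()) / (d.std() + 1e-6) * template_std + template_mean
```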
Multiplexed imaging such as multicolor immunofluorescence staining, multiplexed immunohistochemistry (mIHC), or cyclic immunofluorescence (cycIF) enables deep assessment of cellular complexity in situ and, in conjunction with standard histology stains like hematoxylin and eosin (H&E), can help to unravel the complex molecular relationships and spatial interdependencies that undergird disease states. However, these multiplexed imaging methods are costly and can degrade both tissue quality and antigenicity with each successive cycle of staining. In addition, computationally intensive image processing such as image registration across multiple channels is required. We have developed a novel method, speedy histopathological-to-immunofluorescent translation (SHIFT) of whole slide images (WSIs) using conditional generative adversarial networks (cGANs). This approach is rooted in the assumption that specific patterns captured in IF images by stains like DAPI, pan-cytokeratin (panCK), or α-smooth muscle actin (α-SMA) are encoded in H&E images, such that a SHIFT model can learn useful feature representations or architectural patterns in the H&E stain that help generate relevant IF stain patterns. We demonstrate that the proposed method is capable of generating realistic tumor marker IF WSIs conditioned on corresponding H&E-stained WSIs with up to 94.5% accuracy in a matter of seconds. Thus, this method has the potential not only to improve our understanding of the mapping of histological and morphological profiles into protein expression profiles, but also to greatly increase the efficiency of diagnostic and prognostic decision-making.
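For intuition, here is a minimal sketch of a pix2pix-style conditional-GAN generator objective of the kind SHIFT builds on, combining an adversarial term with an L1 term against the real IF patch; the exact losses and weighting used by the authors are assumptions here.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake_logits, fake_if, real_if, l1_weight=100.0):
    """Adversarial term (fool the discriminator) plus an L1 term that keeps
    the generated IF patch close to the real one. The weighting is assumed."""
    adv = bce(tf.ones_like(disc_fake_logits), disc_fake_logits)
    l1 = tf.reduce_mean(tf.abs(real_if - fake_if))
    return adv + l1_weight * l1
```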
Follicular lymphoma (FL) is the second most common subtype of lymphoma in the Western world. In 2009, about 15,000 new cases of FL were diagnosed in the U.S. and approximately 120,000 patients were affected. Both the clinical course and prognosis of FL are variable, and at present, oncologists do not have evidence-based systems to assess risk and make individualized treatment choices. Our goal is to develop a clinically relevant, pathology-based prognostic model in FL utilizing a computer-assisted image analysis (CaIA) system to incorporate grade, tumor microenvironment, and immunohistochemical markers, thereby improving upon the existing prognostic models. Therefore, we developed an approach to estimate the outcome of follicular lymphoma patients by analyzing the tumor microenvironment as represented by quantification of CD4, CD8, FoxP3, and Ki67 stains in intra- and inter-follicular regions. In our experiments, we analyzed 15 patients, and we correctly estimated the outcome for 87.5% of the patients with no evidence of disease after therapy/operation.
Rapid digitization of whole-slide images (WSIs) with slide scanners, along with advancements in deep learning strategies, has empowered the development of computerized image analysis algorithms for automated diagnosis, prognosis, and prediction of various types of cancers in digital pathology. These analyses can be enhanced and expedited by confining them to the relevant tumor region on the large, multi-resolution WSIs. Detection of the tumor-region-of-interest (TRoI) on WSIs can facilitate automatic measurement of tumor size as well as computation of the distance to the resection margin. It can also ease the process of identifying high-power fields (HPFs), which are essential for grading tumor proliferation scores. In practice, pathologists select these regions by visual inspection of WSIs, which is a cumbersome, time-consuming process affected by inter- and intra-pathologist variability. State-of-the-art deep learning-based methods perform well on the TRoI detection task using supervised algorithms; however, they require accurate TRoI and non-TRoI annotations for training. Acquiring such annotations is a tedious task and incurs observational variability. In this work, we propose a positive and unlabeled learning approach that uses a few examples of HPF regions (positive annotations) to localize the invasive TRoIs on breast cancer WSIs. We use unsupervised deep autoencoders with Gaussian Mixture Model-based clustering to identify the TRoI in a patch-wise manner. The algorithm is developed using 90 HPF-annotated WSIs and is validated on 30 fully annotated WSIs. It yielded a Dice coefficient of 75.21%, a true positive rate of 78.62%, and a true negative rate of 97.48% in pixel-by-pixel evaluation against the pathologists' annotations. Significant correspondence between the results of the proposed algorithm and a state-of-the-art supervised ConvNet indicates the efficacy of the proposed algorithm.
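A minimal sketch of the positive-and-unlabeled step described above, assuming patch-level latent codes from the autoencoder are already available; the array names and component count are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def locate_troi(latent, hpf_latent, n_components=4):
    """Cluster patch latent codes with a GMM, then assign the tumour label to
    the component that the positive (HPF) patches fall into most often.
    `latent` (patches x dims) and `hpf_latent` are assumed encoder outputs."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    labels = gmm.fit_predict(latent)
    tumor_component = np.bincount(gmm.predict(hpf_latent)).argmax()
    return labels == tumor_component            # boolean patch-wise TRoI mask
```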
A neutrophil is a type of white blood cell that is responsible for killing pathogenic bacteria but may simultaneously damage host tissue. We established a method to automatically detect neutrophils in slides stained with hematoxylin and eosin (H&E), because there is growing evidence that neutrophils, which respond to Mycobacterium tuberculosis, are cellular biomarkers of lung damage in tuberculosis. The proposed method relies on transfer learning to reuse features extracted from the activations of a deep convolutional network trained on a large dataset. We present a methodology to identify the correct tile size, magnification, and number of tiles using multidimensional scaling to efficiently train the final layer of this pre-trained network. The method was trained on tiles acquired from 12 whole slide images, resulting in an average accuracy of 93.0%. The trained system successfully identified all neutrophil clusters on an independent dataset of 53 images. The method can be used to automatically, accurately, and efficiently count the number of neutrophil sites in regions-of-interest extracted from whole slide images.
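The transfer-learning setup can be sketched as follows: freeze a network pre-trained on a large dataset and retrain only the final layer on H&E tiles. The paper does not name its backbone or tile size here, so ResNet50 at 224 × 224 is an assumption.

```python
import tensorflow as tf

# Freeze an ImageNet-pretrained backbone and train only the final layer.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                          # reuse pre-trained features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),   # neutrophil vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(tiles, labels, epochs=...)  # tiles: (N, 224, 224, 3) H&E tiles
```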
Large, high-quality training datasets are necessary for machine learning classifiers to achieve high performance. Due to the high cost of collecting quality annotated data, dataset sizes for medical imaging applications are typically small and collected at a single institution. The use of small, single-site datasets results in classifiers that do not generalize well to data collected at different institutions or under different imaging protocols. Previous attempts to address this problem led to the development of transfer learning and domain adaptation algorithms. Our work investigates improving generalization performance by increasing training data variability. We use data from multiple sites (one from a local clinic and two from publicly available sets) to train support vector machines (SVMs) and convolutional neural networks (CNNs) to distinguish tissue patches of hematoxylin and eosin (H&E) stained colorectal cancer (CRC) tissue. To measure the effect of increasing training set variability on classifier robustness, we create different training combinations of two datasets for training and validation, and reserve the third set for testing. SVM accuracy on the testing dataset ranged from 50% to 59% when training with data from a single site, and increased to 61% when data from both sites were combined in training. Using CNNs, the testing accuracy was 56% and 67% when training on single-site data, and increased to 70% with data from both sites. Thus, the increase in generalization performance holds for both traditional and deep learning algorithms, and motivates building larger multi-site datasets for medical image classification.
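The multi-site protocol lends itself to a simple experiment loop; the sketch below, with a hypothetical `sites` mapping of site name to (features, labels), trains an SVM on one or two sites and always tests on a held-out third site.

```python
import numpy as np
from itertools import combinations
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def site_generalization(sites):
    """Train on every 1- or 2-site combination and test on the held-out site.
    `sites` maps a site name to (features, labels) arrays (assumed inputs)."""
    results = {}
    for test_site in sites:
        train_sites = [s for s in sites if s != test_site]
        for k in (1, 2):                        # single-site vs. combined
            for combo in combinations(train_sites, k):
                X = np.vstack([sites[s][0] for s in combo])
                y = np.concatenate([sites[s][1] for s in combo])
                clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
                clf.fit(X, y)
                results[(combo, test_site)] = clf.score(*sites[test_site])
    return results
```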
The glomerulus is the primary compartment of blood filtration in the kidney. It is a sphere of bundled, fenestrated capillaries that selectively allows solute loss. Structural damage to glomerular micro-compartments leads to physiological failures which influence filtration efficacy. The sole way to confirm glomerular structural damage in renal pathology is by examining histopathological or immunofluorescence-stained needle biopsies under a light microscope. However, this method is extremely tedious and time consuming, and requires manual scoring of the number and volume of structures. Computational image analysis is the perfect tool to ease this burden. The major obstacle to the development of digital histopathological quantification protocols for renal pathology is the extreme heterogeneity present within kidney tissue. Here we present an automated computational pipeline to 1) segment glomerular compartment boundaries and 2) quantify features of compartments, in healthy and diseased renal tissue. The segmentation involves a two-stage process: one step for rough segmentation generation and another for refinement. Using a Naïve Bayesian classifier on the resulting feature set, this method was able to distinguish pathological stage IIa from III with 0.89/0.93 sensitivity/specificity and stage IIb from III with 0.7/0.8 sensitivity/specificity, on n = 514 glomeruli taken from n = 13 human biopsies with diagnosed diabetic nephropathy and n = 5 human renal tissues with no histological abnormalities. Our method will simplify computational partitioning of glomerular micro-compartments and subsequent quantification. We aim for our methods to ease the manual labor associated with clinical diagnosis of renal disease.
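A minimal sketch of the staging step, assuming per-glomerulus feature vectors and binary stage labels (e.g. IIa vs. III) are already extracted; scikit-learn's Gaussian Naive Bayes stands in for the paper's Naïve Bayesian classifier.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

def stage_pair_performance(X_train, y_train, X_test, y_test):
    """Fit a Naive Bayes classifier on glomerular compartment features and
    report sensitivity/specificity for a binary stage pair (e.g. IIa vs. III)."""
    clf = GaussianNB().fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    return tp / (tp + fn), tn / (tn + fp)       # sensitivity, specificity
```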
In diabetic nephropathy (DN), hyperglycemia drives a progressive thickening of and damage to the glomerular filtration surfaces, as well as mesangial expansion and a constriction of capillary lumens. This leads at first to high blood pressure, increased glomerular filtration, and micro-proteinuria, and later (if untreated) to severe proteinuria and end-stage renal disease (ESRD). Though it is well known that DN is accompanied by marked histopathological changes, the assessment of these structural changes is to a degree subjective and hence varies between pathologists. In this work, we make a first study of glomerular changes in DN from a graph-theoretical and distance-based standpoint, using minimal spanning trees (MSTs) and distance matrices to generate statistical distributions that can potentially provide a “fingerprint” of DN. We apply these tools to detect notable differences between normal and DN glomeruli in both human disease and in a streptozotocin-induced (STZ) mouse model. We also introduce an automated pipeline for rapidly generating MSTs and evaluating their properties with respect to DN, and make a first pass at three-dimensional MST structures. We envision these approaches may provide a better understanding not only of the processes underway in DN progression, but of key differences between actual human disease and current experimental models.
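The MST "fingerprint" can be sketched directly with SciPy: build a pairwise distance matrix over structure centroids, extract the minimal spanning tree, and study its edge-length distribution. The input coordinate array is an assumption.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edge_lengths(points):
    """Edge lengths of the minimal spanning tree over structure centroids.
    `points` is an (N, 2) or (N, 3) coordinate array (assumed input); the
    resulting distribution is the kind of DN 'fingerprint' described above."""
    dist = squareform(pdist(points))            # dense pairwise distances
    mst = minimum_spanning_tree(dist)           # sparse MST (N - 1 edges)
    return mst.data

# e.g. compare histograms of mst_edge_lengths(...) for normal vs. DN glomeruli
```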
The adoption of deep learning techniques in medical applications has thus far been limited by the availability of the large labeled datasets required to robustly train neural networks, as well as by the difficulty of interpreting these networks. However, recent techniques for unsupervised training of neural networks promise to address these issues, leveraging only structure to model input data. We propose the use of a variational autoencoder (VAE) which utilizes data from an animal model to augment the training set, together with non-linear dimensionality reduction to map this data to human sets. This architecture uses variational inference, performed on latent parameters, to statistically model the probability distribution of training data in a latent feature space. We show the feasibility of VAEs, using images of mouse and human renal glomeruli from various pathological stages of diabetic nephropathy (DN), to model the progression of structural changes which occur in DN. When plotted in a two-dimensional latent space, human and mouse glomeruli show separation with some overlap, suggesting that the data are continuous and can be statistically correlated. When DN stage is plotted in this latent space, trends in disease pathology are visualized.
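For reference, the variational core of a VAE of the kind described reduces to a reparameterization trick plus an ELBO loss; this sketch assumes a Gaussian latent and mean-squared-error reconstruction, which may differ from the authors' exact formulation.

```python
import tensorflow as tf

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) through the reparameterization trick."""
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps

def vae_loss(x, x_recon, mu, log_var):
    """ELBO-style loss: reconstruction error plus KL divergence to N(0, I)."""
    recon = tf.reduce_sum(tf.square(x - x_recon), axis=-1)
    kl = -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var),
                              axis=-1)
    return tf.reduce_mean(recon + kl)
```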
Identifying patients who are at high risk for biochemical recurrence (BCR) following radical prostatectomy could enable direction of adjuvant therapy to those patients while sparing low-risk patients the side effects of treatment. Current BCR prediction tools require human judgment, limiting repeatability and accuracy. Quantitative histomorphometry (QH) is the extraction of quantitative descriptors of morphology and texture from digitized tissue slides. These features are used in conjunction with machine learning classifiers for disease diagnosis and prediction. Features quantifying gland orientation disorder have been found to be predictive of BCR. Separately, staining intensity of the NF-κB protein family member RelA/p65, which regulates cell growth, apoptosis, and angiogenesis, has been connected to BCR. In this study we combine nuclear NF-κB/p65 and H&E gland morphology features to structurally and functionally characterize prostate cancer. This enables description of cancer phenotypes according to cellular molecular profile and social behavior. We collected radical prostatectomy specimens from 21 patients, 7 of whom experienced BCR (prostate-specific antigen > 0.2 ng/ml) within two years of surgery. Our goal was to demonstrate the value of combining morphological and functional information for BCR prediction. First, we selected the top two features from each stain channel via the Wilcoxon rank-sum test and used them, under leave-one-out cross-validation, with a linear discriminant analysis classifier. Second, we used the product of the posterior class probabilities from each classifier to produce an aggregate classifier. Accuracy was 0.76 with H&E features alone, 0.71 with NF-κB/p65 features alone, and 0.81 with the aggregate model.
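The aggregation step translates directly into code: one LDA classifier per stain channel, with the element-wise product of the posterior class probabilities giving the aggregate prediction. The array names are illustrative stand-ins for the selected per-channel features.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def aggregate_predict(X_he_tr, X_p65_tr, y_tr, X_he_te, X_p65_te):
    """One LDA per stain channel; the product of their posterior class
    probabilities gives the aggregate BCR prediction."""
    lda_he = LinearDiscriminantAnalysis().fit(X_he_tr, y_tr)
    lda_p65 = LinearDiscriminantAnalysis().fit(X_p65_tr, y_tr)
    posterior = lda_he.predict_proba(X_he_te) * lda_p65.predict_proba(X_p65_te)
    return posterior.argmax(axis=1)
```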
Analysis of tumour cells is essential for morphological characterisation, which is useful for disease prognosis and survival prediction. Visual assessment of tumour cell morphology by expert human observers for prognostic purposes is subjective and potentially tedious. In this paper, we propose an automated and objective method for tumour cell analysis in whole slide images (WSIs) of lung adenocarcinoma. Tumour cells are first extracted at higher magnification, and then morphological, texture, and spatial distribution features are computed for each cell. We investigated the biological impact of the nuclear features in the context of tumour grading. Results show that some of these features are correlated with tumour grade. We also examine some of these features on the WSI, where their distributions differ depending on tumour grade.
We propose a novel automated strategy for classifying HEp-2 specimens as Mitotic Spindle (MS) or non-Mitotic Spindle (non-MS), which is important for CAD-based Anti-Nuclear Antibody (ANA) detection in the diagnosis of autoimmune disorders. Our strategy is based on the observation that only a few MS-type cells are present, alongside cells of other patterns, in an MS-labeled HEp-2 specimen. Hence, the majority rule commonly followed in classification of non-MS cells cannot be applied in this case. We propose instead that the decision to classify a specimen as MS or non-MS be based on a pre-defined threshold on the number of detected MS cells in the specimen. In the literature, such evaluation criteria are not clearly analyzed. We note that MS cells have a distinct visual characteristic, which enables us to use a simple feature representation based on the fusion of Gabor and LM filter banks, followed by the Bag-of-words framework and Support Vector Machine (SVM) classification. Experimental results are shown on the I3A contest HEp-2 specimen dataset. We achieve a 100% true-positive rate, a 5.55% false-positive rate, and a 0.97 F-score at the best MS-count threshold. The novel and clearly defined decision strategy makes our approach a good alternative for detection of MS specimens.
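The specimen-level decision rule is simple enough to state as code; the per-cell labels are assumed to come from the SVM stage described above.

```python
import numpy as np

def classify_specimen(cell_labels, threshold):
    """Specimen-level rule: call the specimen MS when the number of cells the
    per-cell classifier labelled MS (here, label 1) reaches the pre-defined
    threshold, instead of applying a majority vote."""
    n_ms = int(np.sum(np.asarray(cell_labels) == 1))
    return "MS" if n_ms >= threshold else "non-MS"
```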
Background: Observing the spatial pattern of tumour-infiltrating lymphocytes in follicular lymphoma can lead to the development of promising novel biomarkers for survival prognosis. We have developed the “Hypothesised Interactions Distribution” (HID) analysis to quantify the spatial heterogeneity of cell type interactions between lymphocytes in the tumour microenvironment. HID features were extracted to train a machine learning model for survival prediction, and their performance was compared to other architectural biomarkers. Scalability of the method was examined by observing interactions between cell types identified using 6-plexed immunofluorescent staining. Methods: Two follicular lymphoma datasets were used in this study: a tissue microarray with cores from patients, stained with CD69, CD3, and FOXP3 using multiplexed brightfield immunohistochemistry, and a second tissue microarray, stained with PD1, PDL1, CD4, FOXP3, CD68, and CD8 using immunofluorescence. Spectral deconvolution, nuclei segmentation, and cell type classification were carried out, followed by extraction of features based on cell type interaction probabilities. Random Forest classifiers were built to assign patients into groups of different overall survival, and the performance of the HID features was assessed. Results: HID features constructed over a range of interaction distances were found to significantly predict overall survival in both datasets (p = 0.0363, p = 0.0077). Interactions of specific phenotype pairs correlated with unfavourable prognosis could be identified, such as the interactions between CD3+FOXP3+ cells and CD3+CD69+ cells. Conclusion: Further validation of HID demonstrates its potential for the development of clinical biomarkers in follicular lymphoma.
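One way to read the interaction features is sketched below: for each ordered pair of cell types, estimate the probability that a cell of the first type has at least one cell of the second type within a given interaction distance. This is an illustrative reading of HID, not the authors' exact definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def interaction_probabilities(coords, types, radius):
    """For each ordered pair of cell types (a, b), the fraction of type-a cells
    with at least one type-b cell within `radius`. `coords` (N, 2) and
    `types` (N,) are assumed inputs from the cell classification stage."""
    tree = cKDTree(coords)
    neighbours = tree.query_ball_point(coords, r=radius)
    probs = {}
    for a in np.unique(types):
        idx_a = np.where(types == a)[0]
        for b in np.unique(types):
            hits = [any(j != i and types[j] == b for j in neighbours[i])
                    for i in idx_a]
            probs[(a, b)] = float(np.mean(hits)) if len(idx_a) else 0.0
    return probs
```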
Identification of bladder layers in tissue biopsies is the first step towards an accurate diagnosis and prognosis of bladder cancer. We present an automated Bladder Image Analysis System (BLIAS) that can recognize urothelium, lamina propria, and muscularis propria in images of H&E-stained slides of bladder biopsies. Furthermore, we present its clinical application to automate risk stratification of T1 bladder cancer patients based on the depth of lamina propria invasion. The method uses multidimensional scaling and transfer learning in conjunction with convolutional neural networks to identify different bladder layers in H&E images of bladder biopsies. The method was trained and tested on eighty whole slide images of bladder cancer biopsies. Our preliminary findings suggest that the proposed method has good agreement with the pathologist in identifying different bladder layers. Additionally, given a set of tumor nuclei within the lamina propria, it has the potential to risk-stratify T1 bladder cancer by computing the distance from this set to the urothelium and muscularis propria. Our results suggest that a network pretrained via transfer learning is better at identifying bladder layers than a conventional deep learning paradigm.
Advanced image analysis can lead to automated examination of histopathology images, which is essential for objective and fast cancer diagnosis. Recently, deep learning methods, in particular Convolutional Neural Networks (CNNs), have shown exceptionally successful performance on medical image analysis as well as computational histopathology. Because Whole-Slide Images (WSIs) have a very large size, CNN models are commonly applied to classify WSIs per patch. Although a CNN is trained on a large part of the input space, the spatial dependencies between patches are ignored and inference is performed only on the appearance of individual patches. Therefore, predictions on neighboring regions can be inconsistent. In this paper, we apply Conditional Random Fields (CRFs) over latent spaces of a trained deep CNN in order to jointly assign labels to the patches. In our approach, compact features extracted from intermediate layers of a CNN are treated as observations in a fully-connected CRF model. This leads to inference over a wider context rather than the appearance of individual patches. Experiments show an improvement of approximately 3.9% in average FROC score for tumorous region detection in histopathology WSIs. Our proposed model, trained on the Camelyon17 ISBI challenge dataset, won 2nd place with a kappa score of 0.8759 in patient-level pathologic lymph node classification for breast cancer detection.
Multiplex brightfield immunohistochemistry (IHC) offers the potential advantage of simultaneously analyzing multiple biomarkers in order to, for example, determine T-cell numbers and phenotypes in a patient’s immune response to cancer. This paper presents a fully automatic image-analysis framework that utilizes multiplex assays to identify and count stained cells of interest; it was validated by comparison with multiple “gold standard” 3,3'-diaminobenzidine (DAB) singleplex assays. Both multiplex and singleplex assays were digitized using an RGB slide scanner. The proposed image-analysis algorithms consist of 1) a novel color-deconvolution method, 2) cell candidate detection, 3) feature extraction, and 4) cell classification based on supervised machine learning. Fully automated cell counts on the singleplex images were first rigorously verified against experts’ ground truth counts: a total of 72,076 CD3-, 34,133 CD8-, and 2,615 FoxP3-positive T-cells were used in this singleplex algorithm validation. Concordance correlation coefficients (CCC) of the singleplex algorithm-to-observer agreement were 0.945, 0.965, and 0.997, respectively. The singleplex slides were then registered to the adjacent multiplex slides and the automated cell counts for each were compared. For this validation of the multiplex assay cell counts, the CCC values were 0.914, 0.943, and 0.877 for 12,828, 2,545, and 1,647 cells, respectively; we observed good slide-to-slide agreement between multiplex and singleplex. We conclude that the proposed fully automated image analysis can be a useful and reliable tool to assess multiplex IHC assays.
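The agreement statistic used above is Lin's concordance correlation coefficient, which is straightforward to compute:

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between two sets of counts,
    e.g. automated counts vs. observer ground truth."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```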
For histochemical staining, multiple stains with different light spectral absorption characteristics are deployed to highlight multiple biomarkers within a sample (i.e., multiplexing). To reconstruct the single-stain contrast from a multiplexed sample, the conventional color deconvolution method assumes that light extinction follows the Lambert-Beer law during the imaging process and that the optical density (OD) measured from the image is linearly related to the stain amount. However, this assumption does not hold well for the commonly used diaminobenzidine (DAB) stain due to its precipitate-forming reaction during sample processing. Besides absorption, scattering also contributes to the light extinction process, which causes a non-linear relation between the OD value and the stain amount. Therefore, the conventional method may not be sufficiently accurate for quantitative stain analysis, especially when DAB is present at high concentration levels. In this paper, our study shows that DAB presents different chromatic properties at different concentration levels. We therefore propose a new color deconvolution method that addresses the issue by employing a set of reference color vectors, each of which characterizes a DAB concentration level. The reference color vector that best represents the true DAB concentration level in the mixture is then automatically selected for color deconvolution. Both visual and quantitative assessments are provided to show that the method enables detection over a broader dynamic range of DAB concentration and should therefore be preferred for brightfield image analysis.
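The selection step can be sketched as picking, per pixel, the reference DAB vector with the smallest angular deviation from the observed optical-density direction; the reference set itself is a calibration input, and the cosine criterion here is an assumption.

```python
import numpy as np

def best_reference_vector(od_pixel, reference_vectors):
    """Pick the DAB reference colour vector (one per concentration level)
    that best explains a pixel's optical density direction.
    `reference_vectors` is a (K, 3) array of OD vectors (assumed input)."""
    od = od_pixel / (np.linalg.norm(od_pixel) + 1e-9)
    refs = reference_vectors / np.linalg.norm(reference_vectors,
                                              axis=1, keepdims=True)
    return int(np.argmax(refs @ od))            # index of best-matching level
```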
Assessment of immunohistochemically stained slides is often a crucial diagnostic step in clinical practice. However, as this assessment is generally performed visually by pathologists, it can suffer from significant inter-observer variability. The introduction of whole slide scanners facilitates automated analysis of immunohistochemical slides. Color deconvolution (CD) is one of the most popular first steps in quantifying stain density in histopathological images. However, color deconvolution requires stain color vectors for accurate unmixing, and it is often assumed that these stain vectors are static. In practice, however, they are influenced by many factors. This can cause inferior CD unmixing and thus typically results in poor quantification. Some automated methods exist for stain color vector estimation, but most depend on a significant amount of each stain being present in the whole slide image. In this paper we propose a method for automatically finding stain color vectors and unmixing IHC-stained whole slide images, even when some stains are sparsely expressed. We collected 16 tonsil slides and stained them for different periods of time with hematoxylin and the DAB-colored proliferation marker Ki67. RGB pixels of the WSIs were converted to the hue-saturation-density (HSD) color domain, and K-means clustering was subsequently used to separate the stains and calculate the stain color vectors for each slide. Our results show that staining time affects the stain vectors and that calculating a unique stain vector for each slide gives better unmixing results than using a standard stain vector.
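A minimal sketch of per-slide stain vector estimation follows; for brevity it clusters unit optical-density directions with K-means rather than working in the HSD domain the paper uses, so treat it as an approximation of the approach.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_stain_vectors(rgb, i0=255.0, n_stains=2, density_min=0.2):
    """Convert pixels to optical density, keep sufficiently stained pixels,
    and cluster their unit directions; the cluster centres act as the
    slide-specific stain vectors. Thresholds are assumptions."""
    od = -np.log(np.clip(rgb.reshape(-1, 3).astype(float), 1.0, i0) / i0)
    dirs = od[od.sum(axis=1) > density_min]     # drop near-background pixels
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_stains, n_init=10, random_state=0).fit(dirs)
    centres = km.cluster_centers_
    return centres / np.linalg.norm(centres, axis=1, keepdims=True)
```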
Non-small cell lung cancer (NSCLC) is the leading cause of cancer-related deaths worldwide. The treatment of choice for early-stage NSCLC is surgical resection followed by adjuvant chemotherapy for high-risk patients. Currently, the decision to offer chemotherapy depends primarily on several clinical and visual radiographic factors, as there is a lack of a biomarker that can accurately stratify and predict disease risk in these patients. Computer-extracted image features from CT scans (radiomic) and from H&E tissue slides (pathomic) have already shown promising results in predicting recurrence-free survival (RFS) in lung cancer patients. This paper presents a new radiology-pathology fusion approach (RaPtomics) that combines radiomic and pathomic features for predicting recurrence in early-stage NSCLC. Radiomic textural features (Gabor, Haralick, Laws, Laplace, and CoLlAGe) from within and outside lung nodules on CT scans, and intranuclear pathology features (shape, cell cluster graph, and global graph features), were extracted from digitized whole slide H&E tissue images on an initial discovery set of 50 patients. The most predictive radiomic and pathomic features were then combined and used, in conjunction with machine learning algorithms, to train a recurrence classifier. The performance of the RaPtomic classifier was evaluated on a training set from the Cleveland Clinic (n=50) and independently validated on images from the publicly available Cancer Genome Atlas (TCGA) dataset (n=43). The RaPtomic prognostic model using a linear discriminant analysis (LDA) classifier, in conjunction with two radiomic and two pathomic shape features, significantly predicted 5-year recurrence-free survival (AUC 0.78; p<0.005) as compared to radiomic (AUC 0.74; p<0.01) and pathomic (AUC 0.67; p<0.05) features alone.
Martin Halicek, James V. Little, Xu Wang, Zhuo Georgia Chen, Mihir Patel, Christopher C. Griffith, Mark W. El-Deiry, Nabil F. Saba, Amy Y. Chen, et al.
Hyperspectral imaging (HSI), a non-contact optical imaging technique, has recently been used along with machine learning techniques to provide diagnostic information about ex-vivo surgical specimens for optical biopsy. This computer-aided diagnostic approach requires accurate ground truths for both training and validation. This study details a processing pipeline for registering the cancer-normal margin from a digitized histological image to the gross-level HSI of a tissue specimen. Our work incorporates an initial affine and control-point registration, followed by a deformable Demons-based registration of the moving mask obtained from the histological image to the fixed mask made from the HS image. To assess registration quality, the Dice similarity coefficient (DSC) measures the image overlap, visual inspection is used to evaluate the margin, and the average target registration error (TRE) of needle-bored holes measures the registration error between the histologic and HSI images. Excised tissue samples from seventeen patients, 11 with head and neck squamous cell carcinoma (HNSCCa) and 6 with thyroid carcinoma, were registered according to the proposed method. Three registered specimens are illustrated in this paper, demonstrating the efficacy of the registration workflow. Further work is required to apply the technique to more patient data and to investigate the ability of this procedure to produce suitable gold standards for machine learning validation.
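For reference, the Dice similarity coefficient used to score mask overlap is:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, as used to
    assess overlap after registering the histological mask to the HSI mask."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)
```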
Neoadjuvant therapy (NAT) is an option for locally advanced breast cancer patients to downsize the tumour, allowing for less extensive surgery, better cosmetic outcomes, and fewer post-operative complications. The quality of NAT is assessed by pathologists, who examine tissue sections to reveal the efficacy of treatment and to associate the outcome with the patient's prognosis. Many factors are involved in assessing treatment efficacy, including the amount of residual cancer within the tumour bed. Currently, the process of assessing residual tumour burden is qualitative, which can be time-consuming and impaired by inter-observer variability. In this study, an automated method was developed to localize and subsequently classify nuclear figures into three categories, lymphocyte (L), benign epithelial (BE), and malignant epithelial (ME), from post-NAT tissue slides of breast cancer. A fully convolutional network (FCN) was developed to perform both tasks efficiently. To find the cell nuclei in image patches (localization), the FCN was applied over the entire patch, generating four heatmaps corresponding to the probability of a pixel being the centre of an L, BE, ME, or non-cell nucleus. A non-maximum suppression algorithm was subsequently applied to the generated heatmaps to estimate the nuclei locations. Finally, the highest probability corresponding to each predicted cell nucleus in the heatmaps was used to classify the nucleus into one of the three classes (L, BE, or ME). The final classification accuracy on detected nuclei was 94.6%, surpassing previous machine learning methods based on handcrafted features on this dataset.
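The localization-plus-classification readout can be sketched as non-maximum suppression over the FCN heatmaps; the channel layout, peak window, and probability threshold below are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_nuclei(heatmaps, min_prob=0.5, window=11):
    """Non-maximum suppression over FCN probability heatmaps, then per-nucleus
    classification by the highest class probability at each detected centre.
    `heatmaps` is (H, W, 4) for L, BE, ME, and non-cell (assumed layout)."""
    cell_prob = heatmaps[..., :3].max(axis=-1)          # best cell-class prob
    peaks = (cell_prob == maximum_filter(cell_prob, size=window)) \
            & (cell_prob > min_prob)
    ys, xs = np.nonzero(peaks)
    classes = heatmaps[ys, xs, :3].argmax(axis=-1)      # 0=L, 1=BE, 2=ME
    return list(zip(ys, xs, classes))
```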
The automatic analysis of digital pathology images is of increasing interest for the development of novel therapeutic drugs and of the associated companion diagnostic tests in oncology. A precise quantification of the tumor microenvironment, and therefore an accurate segmentation of the tumor extent, is critical in this context. In this paper, we present a new approach based on a visual-context random forest that generates precise segmentation maps from coarse deep learning segmentation maps. Applied to the detection of cytokeratin-positive (CK) epithelium regions in immunofluorescence (IF) images, we show that this method enables accurate and fast detection of detailed structures, in both qualitative and quantitative evaluation against three baseline approaches. To make the method resilient to the high variability of staining intensity, we moreover introduce a novel normalization algorithm for IF images.
Automatic localization of cancer on whole-slide histology images from radical prostatectomy specimens would support quantitative, graphical pathology reporting and research studies validating in vivo imaging against gold-standard histopathology. There is an unmet need for such a system that is robust to staining variability, is sufficiently fast and parallelizable to be integrated into the clinical pathology workflow, and is validated using whole-slide images. We developed and validated such a system, with tuning performed on an 8-patient data set and cross-validation on a separate 41-patient data set comprising 703,745 sub-images of 480 μm × 480 μm from 166 whole-slide images. Our system computes tissue component maps from pixel data using a technique that is robust to staining variability, showing the loci of nuclei, luminal areas, and areas containing other tissue including stroma. The system then computes first- and second-order texture features from the tissue component maps and uses machine learning techniques to classify each sub-image on the slide as cancer or non-cancer. The system was validated against expert-drawn contours verified by a genitourinary pathologist. We used leave-one-patient-out, 5-fold, and 2-fold cross-validation to measure performance with three different classifiers. The best-performing support vector machine classifier yielded an area under the receiver operating characteristic curve of 0.95 from leave-one-out cross-validation. The system demonstrated potential for practically useful computation speeds, with further optimization and parallelization of the implementation. Upon successful multi-centre validation, this system has the potential to enable quantitative surgical pathology reporting and accelerate imaging validation studies using histopathologic reference standards.
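Leave-one-patient-out cross-validation, in which all sub-images from a patient are held out together, maps directly onto scikit-learn's LeaveOneGroupOut; the stand-in arrays below replace the real texture features.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

# Stand-in data: texture feature vectors per sub-image and the patient each
# sub-image came from. All of a patient's sub-images are held out together.
X = np.random.rand(200, 30)
y = np.random.randint(0, 2, 200)
patient_ids = np.repeat(np.arange(40), 5)

scores = cross_val_score(SVC(), X, y, groups=patient_ids,
                         cv=LeaveOneGroupOut(), scoring="accuracy")
print(scores.mean())
```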
Automatic detection of lymphocytes could contribute to developing objective measures of the infiltration grade of tumors, which pathologists could use to improve decision making and treatment planning. In this article, a simple framework to automatically detect lymphocytes in lung cancer images is presented. The approach starts by automatically segmenting nuclei using a watershed-based method. Nuclei shape, texture, and color features are then used to classify each candidate nucleus as either lymphocyte or non-lymphocyte with a trained SVM classifier. Validation was carried out using a dataset containing 3420 annotated structures (lymphocytes and non-lymphocytes) from 13 fields of view of 1000 × 1000 pixels extracted from lung cancer whole slide images. A deep learning model was trained as a baseline. Results show an F-score 30% higher with the presented framework than with the deep learning approach. The presented strategy is, in addition, more flexible, requires less computational power, and requires much shorter training times.
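A standard watershed-based nuclei segmentation of the kind described can be sketched with scikit-image; the thresholding rule and marker parameters here are assumptions, and the SVM classification stage would follow on features of the labelled regions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2gray
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def segment_nuclei(rgb):
    """Candidate nuclei via Otsu thresholding plus marker-controlled
    watershed; markers come from distance-transform peaks so that touching
    nuclei are split. Returns a labelled image of candidate nuclei."""
    gray = rgb2gray(rgb)
    mask = gray < threshold_otsu(gray)                   # nuclei are dark
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
```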
Prostate cancer is generally graded by pathologists based on hematoxylin and eosin (H&E) stained slides. Because of the large size of the tumor areas in radical prostatectomies (RP), this task can be tedious and error-prone, with known high interobserver variability. Recent advancements in deep learning have enabled the development of automated systems that may assist pathologists in prostate diagnostics. As prostate cancer originates from glandular tissue, an important prerequisite for the development of such algorithms is the ability to automatically differentiate glandular tissue from other tissues. In this paper, we propose a deep learning-based method for automatically segmenting epithelial tissue in digitally scanned prostatectomy slides. We collected 30 single-center whole-mount tissue sections, with reported Gleason growth patterns ranging from 3 to 5, from 27 patients who underwent RP. Two different network architectures, U-Net and regular fully convolutional networks of varying depths, were trained using a set of sparsely annotated slides. We evaluated the trained networks on exhaustively annotated regions from a separate test set, which contained both healthy and cancerous epithelium with different Gleason growth patterns. The results show the effectiveness of our approach, with a pixel-based AUC score of 0.97. Our method contains no prior assumptions on glandular morphology, does not directly rely on the presence of lumina, and all features are learned by the network itself. The generated segmentation can be used to highlight regions of interest for pathologists and to improve cancer annotations to further enhance an automatic cancer grading system.
Patients diagnosed with early-stage (Stage I/II) oral cavity cancer (OCC) are typically treated with surgery alone. Unfortunately, 25-37% of early-stage OCC patients experience loco-regional tumor recurrence after surgery. Currently, pathologists use the Histologic Risk Model (HRM), a clinically validated risk assessment tool, to determine patient prognosis. In this study, we perform image registration on two cases of serially sectioned blocks of hematoxylin and eosin (H&E) stained OCC tissue sections. The goal of this work is to create an optimized registration procedure to reconstruct 3D tissue models, which can provide a pathologist with a realistic representation of the tissue architecture before surgical resection. Our project aims to extend the HRM with computational pathology tools to enhance prediction performance for patients at high risk of disease progression. Previous literature has explored image registration of histological slides and reconstruction of 3D models using similar processes. Our work is unique in that we investigate in depth the parameter space of an image registration algorithm to establish a registration procedure for any serial histological section. Each parameter was sequentially perturbed to determine the best parameter set for registration, as evaluated through mutual information.
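The evaluation metric for the parameter sweep, mutual information between the fixed and registered moving sections, can be computed from a joint histogram:

```python
import numpy as np

def mutual_information(fixed, moving, bins=64):
    """Mutual information between two registered grayscale sections, used to
    score each perturbed parameter set during registration tuning."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                         # avoid log(0)
    return float(np.sum(pxy[nz] *
                        np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```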
The residual cancer burden index is a powerful prognostic factor used to measure neoadjuvant therapy response in invasive breast cancers. Tumor cellularity is one component of the residual cancer burden index and is currently estimated manually by visual assessment. As such, it is subject to inter- and intra-observer variability and is currently restricted to discrete values. We propose a method for automatically determining tumor cellularity in digital slides using deep learning techniques. We train a series of ResNet architectures to output both discrete and continuous values and compare our outcomes with scores acquired manually by an expert pathologist. Our configurations were validated on a dataset of image patches extracted from digital slides, each containing various degrees of tumor cellularity. Results showed that, in the case of discrete values, our models were able to distinguish between regions-of-interest containing tumor and healthy cells with over 97% test accuracy. Overall, we achieved 76% accuracy over four predefined tumor cellularity classes (no tumor; low, medium, and high tumor cellularity). When computing tumor cellularity scores on a continuous scale, ResNet showed good correlation with manually identified scores, showing potential for computing reproducible scores consistent with expert opinion using deep learning techniques.
Sean A. K. Pentinga, Keith Kwan, Sarah A. Mattonen, Carol Johnson, Alexander Louie, Mark Landis, Richard Inculet, Richard Malthaner, Dalilah Fortin, et al.
Stereotactic ablative radiotherapy (SABR) delivers high-dose-per-fraction radiotherapy to tumours while sparing surrounding tissue, and is effective for early-stage non-small cell lung cancer. However, SABR causes radiation-induced lung injuries that mimic recurring cancer, confounding detection of recurrences and early salvage therapy. We have previously developed radiomics-based recurrence detection; however, our radiomics system needs to be validated against histologic markers of viable tumour post-SABR. In this paper, our goals were to develop semiautomatic (1) 2D reconstruction of pseudo whole-mount (PWM) tissue sections from scanned slides, (2) 3D reconstruction and registration of PWM sections to pre-surgery computed tomography (CT), and (3) quantitative registration error measurement. Lobectomy tissue sections on standard 1" × 3" slides were obtained from patients who underwent SABR. Our graphical user interface allows interactive stitching of the sections into PWMs. Using our 3D Slicer-based thin-plate spline warping tool, we performed 3D PWM reconstruction and registered the reconstructions to CT via correspondence of homologous intrinsic landmarks. The target registration error for 229 fiducial pairs defining vessels and airways was calculated for 56 PWMs reconstructed from 9 patients. We measured a mean of 7.33 mm, a standard deviation of 4.59 mm, and a root mean square of 8.65 mm. This proof-of-principle study demonstrates for the first time that it is feasible to register in vivo human lung CT images with histology, with no modifications to the clinical pathology workflow other than videography to document gross dissection. Ongoing work to automate this process will yield a tool for histologic validation of lung imaging and radiomics.
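The registration error measurement reduces to per-pair distances between homologous fiducials, summarized by the mean, standard deviation, and root mean square reported above:

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts):
    """Per-pair target registration error (TRE) between homologous fiducials.
    `fixed_pts` and `moving_pts` are (N, 3) arrays of landmark coordinates
    (assumed inputs). Returns mean, standard deviation, and RMS in the same
    units as the coordinates."""
    err = np.linalg.norm(fixed_pts - moving_pts, axis=1)
    return err.mean(), err.std(), np.sqrt(np.mean(err ** 2))
```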
Cervical cancer is the second most common cause of cancer death among women worldwide, but it can be treated if detected early. However, due to inter- and intra-observer variability in manual screening, there is a pressing need to automate the process. For classifying cervical cells as normal vs. abnormal, segmentation of both nuclei and cytoplasm is a prerequisite; however, segmentation of nuclei is more reliable than that of cytoplasm and equally effective for classification. Hence, this paper proposes a new approach for segmentation of nuclei based on selective pre-processing, followed by passing the image patches to the respective deep CNN (trained with or without pre-processed images) for pixel-wise three-class labelling as nucleus, edge, or background. We argue and demonstrate that a single pre-processing approach may not suit all images, as there are significant variations in nucleus sizes and chromatin patterns; the selective pre-processing effectively addresses this issue. It also enables the deep CNNs to be better trained despite relatively little data, and thus to better exploit the capability of CNNs for good-quality segmentation. The results show that the approach is effective for segmentation of nuclei in Pap smears, with an F-score of 0.90 on the Herlev dataset, as opposed to F-scores of 0.78 (no pre-processing) and 0.82 (pre-processing for all images) without the selective strategy. The results also show the importance of considering three classes in the CNN instead of two (nucleus and background), where the latter achieves an F-score as low as 0.63.
Grading whole slide images (WSIs) from patient tissue samples is an important task in digital pathology, particularly for diagnosis and treatment planning. However, this visual inspection task, performed by pathologists, is inherently subjective and has limited reproducibility; moreover, grading of WSIs is time consuming and expensive. A robust, automatic solution for quantitative decision support can improve the objectivity and reproducibility of this task. This paper presents a fully automatic pipeline for tumor proliferation assessment based on mitosis counting. The approach consists of three steps: i) region of interest (ROI) selection based on tumor color characteristics, ii) mitosis counting using a deep-network-based detector, and iii) grade prediction from ROI mitosis counts. The full strategy was submitted and evaluated during the Tumor Proliferation Assessment Challenge (TUPAC) 2016, the first digital pathology challenge to grade whole slide images, thus mimicking a real-case scenario more closely. The pipeline is extremely fast and obtained 2nd place in the tumor proliferation assessment task and 3rd place in the mitosis counting task among 17 participants. The performance of this fully automatic method is similar to that of pathologists, demonstrating the high quality of automatic solutions for decision support.
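Step i) can be illustrated with a toy sketch: score thumbnail tiles by the density of dark, bluish (hematoxylin-rich) pixels and keep the top-k as ROIs. The colour heuristic and thresholds below are illustrative assumptions, not the paper's actual criteria:

```python
import numpy as np

def roi_tiles(thumb: np.ndarray, tile: int = 256, k: int = 10):
    """thumb: (H, W, 3) uint8 RGB slide thumbnail; returns top-k tile corners."""
    scores = []
    for y in range(0, thumb.shape[0] - tile + 1, tile):
        for x in range(0, thumb.shape[1] - tile + 1, tile):
            t = thumb[y:y + tile, x:x + tile].astype(float)
            # "bluish and dark" heuristic for nuclei-rich tumour tissue
            mask = (t[..., 2] > t[..., 0]) & (t.mean(axis=-1) < 160)
            scores.append((mask.mean(), (y, x)))
    scores.sort(reverse=True)
    return [pos for _, pos in scores[:k]]
```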
The number of mitotic figures per tumor area observed in hematoxylin and eosin (H&E) histological tissue sections under light microscopy is an important biomarker for breast cancer prognosis. Whole-slide imaging and computational pathology have enabled the development of automatic mitosis detection algorithms based on convolutional neural networks (CNNs). These models can suffer from high generalization error, i.e. trained networks often underperform on datasets originating from pathology laboratories other than the one that provided the training data, mainly due to inter-laboratory stain variations. We propose a novel data augmentation strategy that exploits the properties of the H&E color space to simulate a broad range of realistic H&E stain variations. To the best of our knowledge, this is the first time that data augmentation is performed directly in the H&E color space instead of RGB. The proposed technique uses color deconvolution to transform RGB images into the H&E color space, modifies the H&E color channels stochastically, and projects them back to RGB space. We trained a CNN-based mitosis detector on homogeneous data from a single institution and tested its performance on an external, multicenter cohort that contained a wide range of unseen H&E stain variations. We compared CNNs trained with and without the proposed augmentation strategy and observed a significant improvement in performance and robustness to unseen stain variations when the new color augmentation technique was included. In essence, we have shown that CNNs can be made robust to inter-lab stain variation by incorporating extensive stain augmentation techniques.
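The described pipeline maps naturally onto scikit-image's colour deconvolution routines; the following is a minimal sketch, with perturbation ranges chosen for illustration rather than taken from the paper:

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def he_color_augment(rgb: np.ndarray, alpha=0.05, beta=0.02, rng=None):
    """rgb: float image in [0, 1], shape (H, W, 3). Returns an augmented copy."""
    rng = rng or np.random.default_rng()
    hed = rgb2hed(rgb)                     # deconvolve RGB -> H, E, D channels
    for ch in range(2):                    # perturb H and E, leave the residual
        hed[..., ch] = hed[..., ch] * rng.uniform(1 - alpha, 1 + alpha) \
                       + rng.uniform(-beta, beta)
    return np.clip(hed2rgb(hed), 0, 1)     # project back to RGB
```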
The feasibility of localizing, segmenting, and classifying individual cells in multi-channel confocal microscopy images is highly dependent on image quality. In certain applications with good image quality, the segmentation can be trivial and accomplished via thresholding, watershed, or a collection of other well-established and studied heuristics; however, at the limit of poor image quality and complex image features, these techniques fail. It is at this limit that deep convolutional neural network (DCNN) approaches excel. Our research studies the interaction of individual immune cells and their shape changes relative to inflammatory immune reactions using multi-channel immunofluorescence imaging of renal biopsies from patients with inflammatory kidney disease. We present here a deep learning methodology applied to nuclear and cell membrane immunofluorescent stains to automatically segment and classify multiple T-cell and dendritic cell types. With both T-cells and dendritic cells segmented, we are able to study how T-cells with different surface antigens change shape with proximity to dendritic cells; shape changes are seen when T-cells move close to dendritic cells and interact. We use a sliding-window, max-filtering DCNN to segment and classify 3 cell types from 6 image stain channels within a single DCNN. This DCNN maintains images at the original resolution throughout the network by using dilated convolutions and max filtering in place of max pooling layers. In addition, we use 3D convolution kernels with two spatial dimensions and one channel dimension, which allows us to output a multi-class binary classification based on multi-channel data at the original image resolution. We trained and validated the network across 24 patients with 8,572 segmented cells. Our results demonstrate a mean Dice-Sorensen score of 0.78 +/- 0.18, a mean classification sensitivity of 0.76, and a mean classification specificity of 0.75 across all 3 segmented cell types.
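A hedged PyTorch sketch of the resolution-preserving idea: dilated 3×3 convolutions paired with stride-1 3×3 max filtering keep feature maps at the input resolution, so per-pixel class maps over the six stain channels come out directly (layer widths and dilation rates are illustrative):

```python
import torch
import torch.nn as nn

def block(cin, cout, d):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=d, dilation=d),  # dilated conv
        nn.ReLU(),
        nn.MaxPool2d(3, stride=1, padding=1),            # stride-1 "max filter"
    )

# 6 stain channels in, 3 cell-class maps out, all at input resolution
net = nn.Sequential(block(6, 32, 1), block(32, 64, 2), block(64, 64, 4),
                    nn.Conv2d(64, 3, 1))
out = net(torch.randn(1, 6, 128, 128))  # out: (1, 3, 128, 128)
```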
This paper presents a novel method for unsupervised segmentation of pathology images. Staging of lung cancer is a major factor in prognosis, and measuring the maximum dimensions of the invasive component in pathology images is an essential task. Image segmentation methods that visualize the extent of invasive and noninvasive components on pathology images could therefore support pathological examination. However, most recent segmentation methods rely on supervised learning and struggle to cope with unlabeled pathology images. In this paper, we propose a unified approach to unsupervised representation learning and clustering for pathology image segmentation. Our method consists of two phases. In the first phase, we learn feature representations of training patches from a target image using spherical k-means; the purpose of this phase is to obtain cluster centroids that can be used as filters for feature extraction. In the second phase, we apply conventional k-means to the representations extracted by the centroids and then project the cluster labels onto the target images. We evaluated our method on pathology images of lung cancer specimens. Our experiments showed that the proposed method outperforms traditional k-means segmentation and the multi-threshold Otsu method both quantitatively and qualitatively, with an improved normalized mutual information (NMI) score of 0.626 compared to 0.168 and 0.167, respectively. Furthermore, we found that the centroids can be applied to the segmentation of other slices from the same sample.
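An illustrative sketch of the two-phase scheme, approximating spherical k-means by standard k-means on L2-normalised patches (patch size and cluster counts are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

def unsupervised_segment(img, patch=8, n_filters=32, n_segments=4):
    """img: (H, W) grayscale float image; returns one label per patch cell."""
    # Phase 1: centroids of unit-norm patches act as filters
    p = extract_patches_2d(img, (patch, patch), max_patches=5000,
                           random_state=0).reshape(-1, patch * patch)
    p /= np.linalg.norm(p, axis=1, keepdims=True) + 1e-8
    filters = KMeans(n_filters, n_init=10, random_state=0).fit(p).cluster_centers_
    # Phase 2: filter responses of non-overlapping patches, then k-means
    H, W = img.shape
    hs, ws = H // patch, W // patch
    grid = img[:hs * patch, :ws * patch].reshape(hs, patch, ws, patch)
    feats = grid.transpose(0, 2, 1, 3).reshape(-1, patch * patch) @ filters.T
    labels = KMeans(n_segments, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(hs, ws)   # upsample by `patch` to project onto image
```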
Diagnoses in kidney disease often depend on the quantification and presence of specific structures in the tissue. Progress in the fields of whole-slide imaging and deep learning has opened up new possibilities for automatic analysis of histopathological slides. An initial step for renal tissue assessment is the differentiation and segmentation of relevant tissue structures in kidney specimens. We propose a method for segmentation of renal tissue using convolutional neural networks. Nine structures found in (pathological) renal tissue are included in the segmentation task: glomeruli, proximal tubuli, distal tubuli, arterioles, capillaries, sclerotic glomeruli, atrophic tubuli, inflammatory infiltrate, and fibrotic tissue. Fifteen whole slide images of normal cortex originating from tumor nephrectomies were collected at the Radboud University Medical Center, Nijmegen, The Netherlands. The nine classes were sparsely annotated by a PhD student experienced in the field of renal histopathology (MH). Experiments were performed with three different network architectures: a fully convolutional network, a multi-scale fully convolutional network, and a U-net. We assessed the added benefit of combining the networks into an ensemble. We performed four-fold cross validation and report the average pixel accuracy per annotation for each class. Results show that convolutional neural networks are able to accurately perform segmentation tasks in renal tissue, with accuracies of 90% for most classes.
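The ensembling step can be sketched in a few lines: average the per-pixel softmax maps of the individual networks and take the argmax (a generic sketch, not the authors' exact fusion rule):

```python
import torch

def ensemble_segment(models, x):
    """models: iterable of nets mapping (B,3,H,W) -> (B,9,H,W) class logits."""
    probs = torch.stack([m(x).softmax(dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)  # (B, H, W) labels over the nine tissue classes
```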
Whole slide images (WSIs) can greatly improve the workflow of pathologists through the development of software for automatic detection and analysis of cellular and morphological features. However, the gigabyte size of a WSI poses a serious challenge for scalable storage and fast retrieval, which is essential for next-generation image analytics. In this paper, we propose a system for scalable storage of WSIs and fast retrieval of image tiles using Apache Spark, a space-filling curve, and popular data storage formats. We investigate two schemes for storing the tiles of WSIs. In the first scheme, all the WSIs are stored in a single table (partitioned by certain table attributes for fast retrieval); in the second scheme, each WSI is stored in a separate table. The records in each table are sorted using the index values assigned by the space-filling curve. We also study two data storage formats for storing WSIs: Parquet and ORC (Optimized Row Columnar). Through performance evaluation on a 16-node cluster in CloudLab, we observed that ORC enables faster retrieval of tiles than Parquet and requires six times less storage space. We also observed that the two schemes for storing WSIs achieved comparable performance. On average, our system took 2 seconds to retrieve a single tile and less than 6 seconds for 8 tiles on up to 80 WSIs. We also report the tile retrieval performance of our system on Microsoft Azure to gain insight into how the underlying computing platform affects performance.
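The abstract does not name its curve; a Z-order (Morton) code is a common space-filling-curve choice and serves for illustration, interleaving the bits of a tile's column and row indices into a single sort key so that spatially adjacent tiles tend to be stored near each other:

```python
def morton_index(col: int, row: int, bits: int = 16) -> int:
    """Interleave the bits of (col, row) into a single Z-order key."""
    z = 0
    for i in range(bits):
        z |= ((col >> i) & 1) << (2 * i) | ((row >> i) & 1) << (2 * i + 1)
    return z

tiles = [(c, r) for r in range(4) for c in range(4)]
tiles.sort(key=lambda t: morton_index(*t))  # storage order for fast range reads
```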
We present a rapid, scalable, and high-throughput computational pipeline to detect and segment glomeruli from renal histopathology images with high precision and accuracy. Our proposed method integrates information from fluorescence and bright-field microscopy imaging of renal tissues. For computation, we exploit the simplicity and extreme robustness of the Butterworth bandpass filter to extract the glomeruli, utilizing the information inherent in renal tissue stained with an immunofluorescence marker sensitive at blue emission wavelengths as well as tissue autofluorescence. The resulting output is in turn used to detect and segment multiple glomeruli within the field-of-view in the same tissue section post-stained with histopathological stains. Our approach, optimized over 40 images, produced a sensitivity/specificity of 0.95/0.84 on n = 66 test images, each containing one or more glomeruli. The current practice for detecting glomerular structural damage is manual examination of biopsied renal tissues, which is laborious, time-intensive, and tedious, while existing automated pipelines employ complex neural networks that are computationally expensive, demand high-performance hardware, and require large expert-annotated datasets for training. The work not only has implications for renal histopathology involving diseases with glomerular structural damage, where tracking disease progression is vital, but also aids in the development of a tool to rapidly generate a database of glomeruli from whole slide images, essential for training neural networks. Our automated method for detecting glomerular boundaries will aid the rapid extraction of glomerular compartmental features from large renal histopathological images.
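A minimal frequency-domain sketch of a Butterworth band-pass applied to a fluorescence channel; the cut-off radii and order below are illustrative values, not the authors':

```python
import numpy as np

def butterworth_bandpass(img, low=0.02, high=0.15, order=2):
    """img: (H, W) float image; low/high: cut-offs in cycles per pixel."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2) + 1e-12          # radial frequency
    lowpass = 1 / (1 + (r / high) ** (2 * order))   # pass below `high`
    highpass = 1 / (1 + (low / r) ** (2 * order))   # pass above `low`
    return np.real(np.fft.ifft2(np.fft.fft2(img) * lowpass * highpass))
```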
Transfer learning based on deep convolutional neural networks (CNNs) is an effective tool for reducing the dependence on hand-crafted features in medical classification problems, and may mitigate the insufficient training caused by limited sample sizes. In this study, we investigated the discriminative power of features from different CNN levels for the task of classifying epithelial and stromal regions on digitized pathologic slides prepared from breast cancer tissue. We extracted low-level and high-level features from four deep CNN architectures: AlexNet, Places365-AlexNet, VGG, and GoogLeNet. These features were used as input to train and optimize different classifiers, including support vector machine (SVM), random forest (RF), and k-nearest neighbor (KNN). A total of 15,000 regions of interest (ROIs) acquired from a public database were employed in this study. We observed that the low-level features of AlexNet, Places365-AlexNet, and VGG outperformed the high-level ones, whereas the opposite held for GoogLeNet. Moreover, the best accuracy, 89.7%, was achieved by the relatively deep max pool 4 layer of GoogLeNet. In summary, our extensive empirical evaluation suggests that it is viable to extend the use of transfer learning to the development of high-performance detection and diagnosis systems for medical imaging tasks.
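The setup can be sketched with torchvision and scikit-learn: a pretrained CNN serves as a fixed feature extractor and a classical classifier is trained on the pooled features (the layer choice, pooling, and classifier below are illustrative, and the `weights` argument assumes a recent torchvision):

```python
import torch
from torchvision import models
from sklearn.svm import SVC

# Pretrained AlexNet convolutional stack as a frozen feature extractor
backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).features.eval()

def extract(x: torch.Tensor) -> torch.Tensor:
    """x: (B, 3, 224, 224) normalised ROIs -> (B, 256) pooled deep features."""
    with torch.no_grad():
        f = backbone(x)           # (B, 256, 6, 6) conv feature maps
        return f.mean(dim=(2, 3)) # global average pool

# clf = SVC().fit(extract(train_x).numpy(), train_y)  # then score on test ROIs
```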
Innovative approaches to tissue imaging in an in vivo setting have included the use of optical coherence tomography (OCT) as a means of providing high-resolution images at depths approaching 1.5 mm. This technology offers the possibility of analyzing many tissues that are presently evaluated only by histologic methods after excision or biopsy. Despite the relatively high penetration depths of OCT, it is unclear whether images acquired approximately 0.5 mm beyond the tissue surface maintain sufficient resolution and signal-to-noise ratio to provide useful information. Furthermore, relatively few studies evaluate whether advanced image processing can be harnessed to improve the effective depth capabilities of OCT in tissue. We tested a tissue phantom designed to mimic the prostate as a model system, independently modulating its refractive index and transmittance. Using dynamic focusing, and with the aid of an image analysis paradigm designed to improve signal detection in a model of tissue, we tested potential improvements in the ability to resolve structures at increasing penetration depths. We found that co-registered signal averaging and wavelet denoising improved overall image quality, and B-spline interpolation made it possible to integrate dynamic-focus images in a way that improved the effective penetration depth without significant loss of overall image quality. These results support the notion that image processing can refine OCT images for improved diagnostic capabilities in support of in vivo microscopy.
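The first two processing steps lend themselves to a short sketch with PyWavelets: average a stack of co-registered B-scans, then soft-threshold the wavelet detail coefficients (wavelet, level, and threshold are illustrative choices):

```python
import numpy as np
import pywt

def average_and_denoise(frames, wavelet="db4", level=3, thresh=0.05):
    """frames: (N, H, W) co-registered B-scans with values in [0, 1]."""
    avg = np.mean(frames, axis=0)                 # co-registered signal averaging
    coeffs = pywt.wavedec2(avg, wavelet, level=level)
    coeffs = [coeffs[0]] + [                      # soft-threshold detail bands
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(coeffs, wavelet)
```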
The purpose of this study is to evaluate the registration accuracy of the evaluation environment of Digital and Analog Pathology (eeDAP). eeDAP was developed to help conduct studies in which pathologists view and evaluate the same fields of view (FOVs), cells, or features in a glass slide on a microscope and in a whole slide image (WSI) on a digital display by registering the two domains. Registration happens at the beginning of a study (global registration) and during a study (local registration). The global registration is interactive and defines the correspondence between the WSI and stage coordinates; the local registration ensures the pathologist evaluates the correct FOVs, cells, and features. All registrations are based on image-based normalized cross correlation. This study evaluates the registration accuracy achieved throughout a study. To measure the registration accuracy, we used an eyepiece ruler reticle to measure the shift distance between the center of the eyepiece and a target feature expected at the center. Two readers independently registered 60 FOVs from 6 glass slides, covering different tissue types, stains, and magnifications. The results show that when the camera image is in focus, the registration was within 5 micrometers in more than 95% of the FOVs. Tissue type, stain, magnification, and reader did not appear to impact local registration accuracy. The registration error depended mainly on the microscope being in focus, the scan quality, and the FOV content (unique high-contrast structures are better than content that is homogeneous or low contrast).
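The core image-based normalized cross correlation can be sketched with scikit-image, whose match_template computes an NCC surface; locating its peak gives the offset of the expected FOV in the live camera image (a generic sketch, not eeDAP's implementation):

```python
import numpy as np
from skimage.feature import match_template

def locate(camera_img: np.ndarray, wsi_fov: np.ndarray):
    """Both arrays grayscale float; returns ((row, col), peak NCC value)."""
    ncc = match_template(camera_img, wsi_fov)   # normalized cross correlation
    return np.unravel_index(np.argmax(ncc), ncc.shape), ncc.max()
```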
There are two main types of lung cancer: small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), which are grouped according to similarity in behaviour and response to treatment. The main types of NSCLC are lung adenocarcinoma (LUAD), which accounts for about 40% of all lung cancers, and lung squamous cell carcinoma (LUSC), which accounts for about 25-30%. Due to their differences, automated classification of these two main subtypes of NSCLC is a critical step in developing a computer-aided diagnostic system. We present an automated method for NSCLC classification that consists of a two-part approach. First, we implement a deep learning framework to classify input patches as LUAD, LUSC, or non-diagnostic (ND). Next, we extract a collection of statistical and morphological measurements from the labeled whole-slide image (WSI) and use a random forest regression model to classify each WSI as lung adenocarcinoma or lung squamous cell carcinoma. This task was part of the Computational Precision Medicine challenge at the MICCAI 2017 conference, where we achieved the highest classification accuracy, with a score of 0.81.
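The second stage can be illustrated with a toy sketch: summarise the patch-level CNN labels into slide-level statistics and let a random forest make the final call (the feature set, and the use of a classifier rather than the paper's regression model, are illustrative simplifications):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def slide_features(patch_labels: np.ndarray) -> np.ndarray:
    """patch_labels: int array over one slide; 0=LUAD, 1=LUSC, 2=ND."""
    frac = np.bincount(patch_labels, minlength=3).astype(float)
    frac /= frac.sum()
    return np.array([frac[0], frac[1], frac[0] / (frac[1] + 1e-8)])

# X = np.stack([slide_features(lbls) for lbls in all_slides])
# model = RandomForestClassifier(n_estimators=100).fit(X, y)  # y: 0/1 subtype
```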
Black ink (India ink) and melanin are substances often found in histopathological images of skin specimens. If present in abundant quantity, they negatively affect the outcome of automatic stain deconvolution methods. We propose an automatic black ink and melanin segmentation method based on global color thresholding in CIELAB color space combined with a novel region growing approach. Our technique achieved a sensitivity of 87%, specificity of 99%, and precision of 94% for black ink detection, and segmented melanin with a sensitivity of 93%, specificity of 99%, and precision of 84%. When these regions were excluded from images before performing color deconvolution, we observed a better approximation of the stain unmixing matrix.
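The global thresholding step admits a short sketch: black ink is dark and nearly achromatic, so a low L* combined with small chroma makes a reasonable seed mask (the threshold values are illustrative, and the region-growing refinement is omitted):

```python
import numpy as np
from skimage.color import rgb2lab

def black_ink_seeds(rgb: np.ndarray, l_max=30.0, chroma_max=15.0):
    """rgb: (H, W, 3) float image in [0, 1]; returns a boolean seed mask."""
    lab = rgb2lab(rgb)
    chroma = np.hypot(lab[..., 1], lab[..., 2])   # distance from the gray axis
    return (lab[..., 0] < l_max) & (chroma < chroma_max)
```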
Certain pathology workflows, such as classification and grading of prostate adenocarcinoma according to the Gleason grading scheme, stand to gain speed and objectivity by incorporating contemporary digital image analysis methods. We compiled a dataset of 513 high-resolution image tiles from primary prostate adenocarcinoma in which individual glands and stroma were demarcated and graded by hand. With this unique dataset, we tested four convolutional neural network architectures: FCN-8s, two SegNet variants, and a multi-scale U-Net, for performance in semantic segmentation of high- and low-grade tumors. In a 5-fold cross-validation experiment, the FCN-8s architecture achieved a mIOU of 0.759 and an accuracy of 0.87, while the less complex U-Net architecture achieved a mIOU of 0.738 and an accuracy of 0.885. Applied to whole slide images not used for training, the FCN-8s architecture achieved a mIOU of 0.857 in annotated tumor foci, with a multiresolution processing time averaging 11 minutes per slide. The three architectures tested on whole slides all achieved areas under the Receiver Operating Characteristic curve near 1, strongly demonstrating the suitability of semantic segmentation convolutional neural networks for detecting and grading prostate cancer foci in radical prostatectomies.
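The reported mIOU can be computed from a confusion matrix over the label classes; a generic sketch:

```python
import numpy as np

def miou(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> float:
    """pred, truth: integer label maps of the same shape."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (truth.ravel(), pred.ravel()), 1)   # confusion matrix
    inter = np.diag(cm).astype(float)
    union = cm.sum(0) + cm.sum(1) - inter
    return float(np.mean(inter / np.maximum(union, 1)))
```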
Machine learning methods are widely used in medicine to aid cancer diagnosis and detection. In the area of digital pathology, prediction heat maps produced by convolutional neural networks (CNNs) have already exceeded the performance of a trained pathologist working without time constraints. Training deep learning networks requires large datasets of accurately labeled ground truth data; however, whole slide images are often on the scale of 10+ gigapixels when digitized at 40X magnification, contain multiple magnification levels, and come in unstandardized formats. Due to these characteristics, traditional techniques for producing training and validation data cannot be used, resulting in the limited availability of annotated datasets. This research presents a Python module and method to rapidly produce accurately annotated image patches from whole slide images. The module is built on OpenCV, an open source computer vision library; OpenSlide, an open source library for reading virtual slide images; and NumPy, a library for scientific computing with Python. These Python scripts successfully produce 'ground truth' image patches and will help transfer advances from research laboratories into clinical application by addressing many of the challenges associated with developing annotated datasets for machine learning in histopathology.
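A minimal sketch of the patch-extraction idea with OpenSlide, NumPy, and OpenCV: read fixed-size tiles from level 0 of a virtual slide and write them to disk (tile size, level, and file naming are illustrative, not the module's API):

```python
import numpy as np
import cv2
import openslide

def extract_patches(path: str, size: int = 512, level: int = 0):
    """Write non-overlapping RGB tiles of a whole slide image to PNG files."""
    slide = openslide.OpenSlide(path)
    w, h = slide.level_dimensions[level]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            tile = slide.read_region((x, y), level, (size, size))  # RGBA PIL image
            rgb = np.array(tile.convert("RGB"))
            cv2.imwrite(f"patch_{x}_{y}.png", cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))
```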
At present, deep learning is widely used and has achieved excellent results in many fields, with image registration being a notable exception. The reasons are two-fold: first, all the steps of a deep learning pipeline must be differentiable, yet the nonlinear deformations commonly used in registration algorithms are hard to express as explicit functions; second, the success of deep learning rests on large amounts of labeled data, which is problematic for applications in real scenes. To address these concerns, we propose an unsupervised network for image registration. To integrate the registration process into deep learning, image deformation is achieved by resampling, which makes the deformation step differentiable. The network optimizes its parameters directly by minimizing the loss between the registered image and the reference image, without ground truth. To further improve the algorithm's accuracy and speed, we incorporate a coarse-to-fine multi-scale iterative scheme. We apply our method to register microscopic section images of neuron tissue. Compared with the highly tuned SIFT Flow method, our method achieves similar accuracy in much less time.
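The key differentiable ingredient, deformation by resampling, can be sketched in PyTorch: warp the moving image with a dense displacement field via grid sampling, so an image-similarity loss can be minimised by gradient descent without ground-truth deformations (a generic spatial-resampling sketch, not the authors' network):

```python
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
    """moving: (B,1,H,W); disp: (B,2,H,W) (x, y) displacements in pixels."""
    B, _, H, W = moving.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + disp
    # normalise sampling coordinates to [-1, 1] as grid_sample requires
    grid = torch.stack((2 * grid[:, 0] / (W - 1) - 1,
                        2 * grid[:, 1] / (H - 1) - 1), dim=-1)
    return F.grid_sample(moving, grid, align_corners=True)

# loss = ((warp(moving, disp) - reference) ** 2).mean()  # backprop into disp
```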
Given microscope images, one can observe 2D cross-sections of 3D micro-anatomical structures at high spatial resolution. A single 2D microscope image alone, though, is not suitable for studying 3D anatomical structures, and hence much work has been done on reconstructing a 3D image from a series of microscope images of histological sections obtained from a single target tissue. The 3D image reconstruction requires an image registration technique because the histological sections undergo independent translation, rotation, and non-rigid deformation. In this paper, a landmark-based method of fully non-rigid image registration for 3D image reconstruction is proposed. The proposed method first detects landmarks corresponded between given images by using template matching and then non-rigidly deforms the images so that the corresponding landmarks detected in different images lie along a single smooth curve in the reconstructed 3D image. Most conventional methods for 3D microscope image reconstruction register two consecutive images at a time, and many micro-anatomical structures often take an unnaturally straight shape along the vertical (z) direction in the resulting 3D image because, roughly speaking, these methods register two given images so that pixels with the same coordinates in the two images have the same pixel value. The proposed method, on the other hand, determines the deformations of all given images by referring to all of the images and deforms them simultaneously. In the experiments, a 3D microscope image of the pancreas of a KPC mouse was reconstructed from a series of microscope images of histological sections.
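The smooth-trajectory constraint can be illustrated per landmark: fit smoothing splines to the landmark's in-plane coordinates as functions of the section index z, and read off the shifts each section should undergo to put the landmark on a smooth curve (the SciPy spline and smoothing factor are illustrative choices):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def landmark_shifts(z, xs, ys, smooth=10.0):
    """z, xs, ys: 1D arrays of one landmark's position per section (z increasing)."""
    sx = UnivariateSpline(z, xs, s=smooth)   # smooth x-trajectory along z
    sy = UnivariateSpline(z, ys, s=smooth)   # smooth y-trajectory along z
    return sx(z) - xs, sy(z) - ys            # per-section in-plane corrections
```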
Automated image analysis of slides of thin blood smears can assist with early diagnosis of many diseases. Automated detection and segmentation of red blood cells (RBCs) are prerequisites for any subsequent quantitative high-throughput screening analysis, since the manual characterization of cells is a time-consuming and error-prone task. Overlapping cell regions introduce considerable challenges to detection and segmentation techniques. We propose a novel algorithm that can successfully detect and segment overlapping cells in microscopic images of stained thin blood smears. The algorithm consists of three steps. In the first step, the input image is binarized to obtain the binary mask of the image. The second step accomplishes reliable cell center localization using adaptive mean-shift clustering; we employ a novel technique to choose an appropriate bandwidth for the mean-shift algorithm. In the third step, cells are segmented by estimating the boundary of each cell with a Gradient Vector Flow (GVF) driven snake algorithm. We compare the experimental results of our methodology with the state of the art and evaluate the performance of the cell segmentation results against those produced manually. The method was systematically tested on a dataset acquired at the Chittagong Medical College Hospital in Bangladesh. The overall evaluation of the proposed cell segmentation method, based on one-to-one cell matching on this dataset, resulted in 98% precision, 93% recall, and a 95% F1-score.
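The centre-localisation step can be sketched with scikit-learn's mean shift, using its generic bandwidth estimator in place of the paper's bespoke bandwidth selection:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cell_centers(binary_mask: np.ndarray) -> np.ndarray:
    """binary_mask: (H, W) boolean foreground; returns (K, 2) centres (row, col)."""
    pts = np.column_stack(np.nonzero(binary_mask)).astype(float)
    bw = estimate_bandwidth(pts, quantile=0.1, n_samples=2000)
    ms = MeanShift(bandwidth=bw, bin_seeding=True).fit(pts)
    return ms.cluster_centers_   # candidate cell centres for snake initialisation
```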