This PDF file contains the front matter associated with SPIE Proceedings Volume 10140, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Mitotic count is helpful in determining the aggressiveness of breast cancer. In previous
studies, it was shown that the agreement among pathologists for grading mitotic index is fairly
modest, as mitoses have a large variety of appearances and they could be mistaken for other
similar objects. In this study, we determined local and contextual features that differ significantly
between easily identifiable mitoses and challenging ones. The images were obtained from the
Mitosis-Atypia 2014 challenge. In total, the dataset contained 453 mitotic figures. Two pathologists
annotated each mitotic figure. In case of disagreement, an opinion from a third pathologist was
requested. The mitoses were grouped into three categories: those recognized as “a true mitosis” by
both pathologists; those labelled as “a true mitosis” by only one of the first two readers and also by the
third pathologist; and those annotated as “probably a mitosis” by all readers or a majority of them.
After color unmixing, the mitoses were segmented from the H channel. Shape-based features, along
with intensity-based and textural features, were extracted from the H channel, the blue-ratio channel, and
five different color spaces. Holistic features describing each image were also considered. The
Kruskal-Wallis H test was used to identify significantly different features. Multiple comparisons
were done using the rank-based version of the Tukey-Kramer test. The results indicated that there are
local and global features which differ significantly among the groups. In addition, variations
between mitoses in different groups were captured more strongly by features from the HSL and LCH
color spaces than by features from the other color spaces.
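The group-difference testing described above can be sketched with SciPy's Kruskal-Wallis test; the feature values below are synthetic placeholders, not measurements from the Mitosis-Atypia 2014 data:

```python
from scipy import stats

# Synthetic feature values for the three mitosis groups (placeholder data,
# not the Mitosis-Atypia 2014 measurements).
group_certain  = [0.91, 0.88, 0.95, 0.90, 0.93]  # "a true mitosis" by both readers
group_majority = [0.78, 0.81, 0.75, 0.80, 0.77]  # confirmed only via the third reader
group_probable = [0.60, 0.65, 0.58, 0.62, 0.59]  # "probably a mitosis"

# Kruskal-Wallis H test: a nonparametric one-way ANOVA on ranks.
h_stat, p_value = stats.kruskal(group_certain, group_majority, group_probable)

# A small p-value means at least one group's feature distribution differs;
# rank-based pairwise comparisons (e.g. Tukey-Kramer on ranks) then locate
# which pairs of groups differ.
significant = p_value < 0.05
```

With clearly separated groups, as here, the test flags the feature as discriminative; the post hoc pairwise step is then needed to say which groups drive the difference.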
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
Algorithms for subspace clustering (SC) are effective in terms of accuracy but exhibit high
computational complexity. We propose an algorithm for SC of (highly) similar data points drawn from
a union of linear one-dimensional subspaces that are possibly dependent in the input data space. The
algorithm finds a dictionary that represents the data in a reproducing kernel Hilbert space (RKHS).
Afterwards, the data are projected into the RKHS using the empirical kernel map (EKM). Due to
the dimensionality-expansion effect of the EKM, the one-dimensional subspaces become independent in
the RKHS. Segmentation into subspaces is realized by applying the max operator to the projected data,
which makes the computational complexity of the algorithm linear in the number of data points.
We prove that for noise-free data the proposed approach yields exact clustering into subspaces. We also
prove that the EKM-based projection yields less correlated data points. Due to the nonlinear projection, the
proposed method can adapt to linearly nonseparable data points. We demonstrate the accuracy and
computational efficiency of the proposed algorithm on a synthetic dataset as well as on segmentation
of an image of an unstained histopathology specimen.
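A minimal sketch of the project-then-max idea, assuming a Gaussian empirical kernel map evaluated against a known dictionary of one unit direction per subspace (the paper learns the dictionary; here it is given, and labels are kept only to check the clustering):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two one-dimensional subspaces (lines through the origin) in R^2.
directions = np.array([[1.0, 0.2], [0.3, 1.0]])
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Sample points along each line (labels kept only to verify the result).
labels = rng.integers(0, 2, size=200)
scales = rng.uniform(-3, 3, size=200)
X = scales[:, None] * directions[labels]

# Normalize points so each 1D subspace collapses to +/- one point on the circle.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)

sigma = 0.5

def ekm(points, atoms):
    """Gaussian empirical kernel map against dictionary atoms, folded over
    the sign ambiguity of a line through the origin."""
    d_plus = np.linalg.norm(points[:, None, :] - atoms[None, :, :], axis=2)
    d_minus = np.linalg.norm(points[:, None, :] + atoms[None, :, :], axis=2)
    return np.exp(-np.minimum(d_plus, d_minus) ** 2 / (2 * sigma ** 2))

K = ekm(Xn, directions)            # shape (n_points, n_atoms)
assignment = np.argmax(K, axis=1)  # max operator: linear in the number of points

accuracy = max(np.mean(assignment == labels), np.mean(assignment != labels))
```

The per-point cost is one kernel evaluation per atom followed by an argmax, which is the source of the linear complexity claimed above.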
Quantum Cascade Laser (QCL) spectroscopic imaging is a novel technique with many potential applications to
histopathology. Like traditional Fourier Transform Infrared (FT-IR) imaging, QCL spectroscopic imaging derives
biochemical data coupled to the spatial information of a tissue sample, and can be used to improve the diagnostic and
prognostic value of assessment of a tissue biopsy. This technique also offers advantages over traditional FT-IR imaging,
specifically the capacity for discrete-frequency and real-time imaging. In this work we present applications of QCL
spectroscopic imaging to tissue samples, including discrete-frequency imaging, comparing the technique with FT-IR
and assessing its potential value to pathology.
In this work, we investigate the effect of slice sampling on 3D models of tissue architecture using serial histopathology.
We present a method for using a single fully-sectioned tissue block as pilot data, whereby we build a fully-realized 3D
model and then determine the optimal set of slices needed to reconstruct the salient features of the model objects under
biological investigation. In our work, we are interested in the 3D reconstruction of microvessel architecture in the trigone
region between the vagina and the bladder. This region serves as a potential avenue for drug delivery to treat bladder
infection. We collect and co-register 23 serial sections of CD31-stained tissue images (6 μm thick sections), from which
four microvessels are selected for analysis. To build each model, we perform semi-automatic segmentation of the
microvessels. Subsampled meshes are then created by removing slices from the stack, interpolating the missing data, and
re-constructing the mesh. We calculate the Hausdorff distance between the full and subsampled meshes to determine the
optimal sampling rate for the modeled structures. In our application, we found that a sampling rate of 50% (corresponding
to just 12 slices) was sufficient to recreate the structure of the microvessels without significant deviation from the
fully rendered mesh. This pipeline effectively minimizes the number of histopathology slides required for 3D model
reconstruction, and can be utilized to either (1) reduce the overall costs of a project, or (2) enable additional analysis on
the intermediate slides.
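The mesh-comparison step can be illustrated with SciPy's directed Hausdorff distance; the point sets below are toy stand-ins for the full and subsampled vessel meshes:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Placeholder point clouds standing in for the full and subsampled vessel
# meshes (the study used segmented 3D surfaces, not these toy points).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
full_mesh = np.column_stack([np.cos(t), np.sin(t), t / 10.0])

# "Subsampled" mesh: keep every other point, mimicking 50% slice sampling.
sub_mesh = full_mesh[::2]

# Symmetric Hausdorff distance = max of the two directed distances.
d_fs = directed_hausdorff(full_mesh, sub_mesh)[0]
d_sf = directed_hausdorff(sub_mesh, full_mesh)[0]
hausdorff = max(d_fs, d_sf)
```

A small Hausdorff value relative to vessel diameter indicates that the subsampled reconstruction preserves the structure, which is the criterion used to pick the sampling rate.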
Successfully detecting melanocyte cells in the skin epidermis has great significance in skin histopathology. Because of
the existence of cells with similar appearance to melanocytes in hematoxylin and eosin (HE) images of the epidermis,
detecting melanocytes becomes a challenging task. This paper proposes a novel technique for the detection of
melanocytes in HE images of the epidermis, based on melanocyte color features in the HSI color domain. Initially,
an effective soft morphological filter is applied to the HE images in the HSI color domain to remove noise. Then a
novel threshold-based technique is applied to distinguish the candidate melanocytes’ nuclei. Similarly, the method is
applied to find the candidate surrounding halos of the melanocytes. The candidate nuclei are associated with their
surrounding halos using the suggested logical and statistical inferences. Finally, a fuzzy inference system is proposed,
based on the HSI color information of a typical melanocyte in the epidermis, to calculate the similarity ratio of each
candidate cell to a melanocyte. As our review of the literature shows, this is the first method to evaluate epidermis cells
for a melanocyte similarity ratio. Experimental results on various images with different zoom factors show that the
proposed method improves on the results of previous works.
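A hedged sketch of the RGB-to-HSI conversion that underlies such color-domain processing, with a hypothetical intensity/saturation gate standing in for the paper's threshold-based technique:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB in [0, 1] to the HSI color model (H in degrees)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i

h, s, i = rgb_to_hsi(0.55, 0.35, 0.60)  # a purplish, hematoxylin-like pixel

# Hypothetical gate (illustrative values, not the paper's thresholds):
# candidate nuclei are fairly dark and reasonably saturated.
is_candidate = (i < 0.6) and (s > 0.1)
```

The HSI model decouples chromatic information (H, S) from brightness (I), which is why staining-related color cues survive the uneven illumination typical of microscopy images.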
Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can
support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated
digital-pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with
an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage,
the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection
and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using
fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained
from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm
reported the distance to the nearest neighbor between the populations of tumor cells and activated stromal fibroblasts as a
whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian,
and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were
verified by a pathologist with good agreement (R2=0.98) to ground-truth count. For the area occupied by FAP-positive
cells, the inter-observer agreement between two sets of ground-truth measurements was R2=0.93 whereas the algorithm
reproduced the pathologists’ areas with R2=0.96. The proposed methodology enables automated image analysis to
measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show an automated
algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a
cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.
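The nearest-neighbor distance readout can be sketched with a k-d tree; the centroids below are random placeholders for detected CK-positive and FAP-positive cells:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical detected cell centroids (arbitrary units); placeholders for
# CK-positive tumor cells and FAP-positive activated fibroblasts.
tumor_xy = rng.uniform(0, 1000, size=(300, 2))
stroma_xy = rng.uniform(0, 1000, size=(120, 2))

# For every tumor cell, the distance to its nearest activated fibroblast.
tree = cKDTree(stroma_xy)
nn_dist, nn_idx = tree.query(tumor_xy, k=1)

# A whole-slide summary statistic of the spatial relationship.
median_nn = float(np.median(nn_dist))
```

Summaries of this distribution (median, percentiles) are the kind of whole-slide spatial readout that could feed a cutoff analysis like the one described above.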
We present a deep learning approach for detecting prostate cancers. The approach consists of two steps. In the first step,
we perform tissue segmentation that identifies lumens within digitized prostate tissue specimen images. Intensity- and
texture-based image features are computed at five different scales, and a multiview boosting method is adopted to
cooperatively combine the image features from differing scales and to identify lumens. In the second step, we utilize
convolutional neural networks (CNN) to automatically extract high-level image features of lumens and to predict
cancers. The segmented lumens are rescaled to reduce computational complexity, and data augmentation by scaling,
rotating, and flipping the rescaled images is applied to avoid overfitting. We evaluate the proposed method using two
tissue microarrays (TMA) – TMA1 includes 162 tissue specimens (73 Benign and 89 Cancer) and TMA2 comprises 185
tissue specimens (70 Benign and 115 Cancer). In cross-validation on TMA1, the proposed method achieved an AUC of
0.95 (CI: 0.93-0.98). Trained on TMA1 and tested on TMA2, CNN obtained an AUC of 0.95 (CI: 0.92-0.98). This
demonstrates that the proposed method can potentially improve prostate cancer pathology.
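The rotation/flip part of the augmentation step can be sketched as the eight dihedral variants of a patch (a toy array stands in for a rescaled lumen image):

```python
import numpy as np

def dihedral_augmentations(img):
    """Return the eight rotation/flip variants of a 2D image patch."""
    out = []
    for k in range(4):
        rot = np.rot90(img, k)   # rotate by k * 90 degrees
        out.append(rot)
        out.append(np.fliplr(rot))
    return out

# Stand-in for a rescaled lumen image (any asymmetric patch will do).
patch = np.arange(16, dtype=np.float32).reshape(4, 4)
augmented = dihedral_augmentations(patch)

# For an asymmetric patch, all eight variants are distinct training samples.
unique = {a.tobytes() for a in augmented}
```

Because histology has no preferred orientation, all eight variants are equally valid samples, multiplying the effective training-set size at no annotation cost.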
Over the past decades, digital pathology has emerged as an alternative way of looking at tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron level. Information about cell types can be extracted by staining sections of a tissue block using different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration, by establishing spatial correspondences between tissue sections. Spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of interesting cell types pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses these challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
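The core of patch-based registration — matching a patch by maximizing a similarity score over candidate displacements — can be sketched as follows, assuming normalized cross-correlation and an exhaustive integer-shift search (one toy level of a hierarchical scheme, not the paper's full pipeline):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_shift(fixed_patch, moving_window, search=3):
    """Exhaustively search integer shifts, keeping the one maximizing NCC
    (a toy version of one level of hierarchical patch-based registration)."""
    h, w = fixed_patch.shape
    best, best_score = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving_window[search + dy:search + dy + h,
                                 search + dx:search + dx + w]
            score = ncc(fixed_patch, cand)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score

rng = np.random.default_rng(2)
img = rng.normal(size=(24, 24))  # stand-in for one stained section
fixed = img[8:18, 8:18]          # 10x10 reference patch
moving = img[6:22, 7:23]         # search window; true shift is (-1, -2)
best, score = best_shift(fixed, moving, search=3)
```

A hierarchy repeats this coarse-to-fine: large patches fix the global alignment, smaller ones refine it locally; intensity normalization keeps the score meaningful across differently stained sections.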
Recently, digital pathology (DP) has improved considerably due to developments in computer vision and
machine learning. Automated detection of high-grade prostate carcinoma (HG-PCa) is an impactful medical
use-case showing the paradigm of collaboration between DP and computer science: given a field of view (FOV)
from a whole slide image (WSI), the computer-aided system is able to determine the grade by classifying the
FOV. Various approaches based on this formulation have been reported. However, two reasons motivated
this work: first, there is still room for improvement in the detection accuracy of HG-PCa;
second, clinical practice is more complex than simple image classification, and FOV ranking is
also an essential step. For example, in clinical practice, a pathologist usually evaluates a case based on a few FOVs from
the given WSI, and then makes a decision based on the most severe FOV. This important ranking scenario has not
yet been well discussed. In this work, we introduce an automated detection and ranking system for PCa based
on Gleason pattern discrimination. Our experiments suggest that the proposed system is able to perform
high-accuracy detection (~95.57% ± 2.1%) and excellent ranking. Hence, the proposed system
has great potential to support daily tasks in the clinical pathology routine.
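The ranking scenario reduces to ordering FOVs by a model's severity score and reporting the worst one; the identifiers and scores below are hypothetical:

```python
# Hypothetical per-FOV severity scores produced by a grading classifier
# (names and values are illustrative, not the paper's outputs).
fovs = {"fov_01": 0.12, "fov_02": 0.87, "fov_03": 0.55, "fov_04": 0.91}

# Rank FOVs from most to least severe, as a pathologist would triage them.
ranked = sorted(fovs.items(), key=lambda kv: kv[1], reverse=True)
most_severe = ranked[0][0]
```

The case-level decision then follows from the top of the ranking, mirroring the most-severe-FOV rule described above.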
Malaria is one of the world’s most common and serious tropical diseases, caused by parasites of the genus Plasmodium that
are transmitted by Anopheles mosquitoes. Various parts of Asia and Latin America are affected, but the highest malaria
incidence is found in Sub-Saharan Africa. Standard diagnosis of malaria comprises microscopic detection of parasites in
stained thick and thin blood films. As slide reading under the microscope is an error-prone and tedious
task, we are developing computer-assisted microscopy systems to support the detection and diagnosis of malaria.
In this paper we focus on a deep learning (DL) approach for the detection of plasmodia and the evaluation of the
proposed approach in comparison with two reference approaches. The proposed classification schemes have been
evaluated with more than 180,000 automatically detected and manually classified plasmodia candidate objects from so-called
thick smears. Automated solutions for the morphological analysis of malaria blood films could apply such a
classifier to detect plasmodia in the highly complex image data of thick smears and thereby shorten the examination
time. With such a system, the diagnosis of malaria infections should become a less tedious, more reliable, more reproducible,
and thus more objective process. Better quality assurance, improved documentation, and global data availability are
additional benefits.
Breast carcinomas are cancers that arise from the epithelial cells of the breast, which are the cells that line the lobules and the lactiferous ducts. Breast carcinoma is the most common type of breast cancer and can be divided into different subtypes based on architectural features and growth patterns, recognized during a histopathological examination. The tumor microenvironment (TME) is the cellular environment in which tumor cells develop. Being composed of various cell types having different biological roles, the TME is recognized as playing an important role in the progression of the disease. The architectural heterogeneity in breast carcinomas and the spatial interactions with the TME are, to date, not well understood. Developing a spatial model of tumor architecture and of spatial interactions with the TME can advance our understanding of tumor heterogeneity. Furthermore, generating synthetic histological datasets can contribute to validating and comparing analytical methods that are used in digital pathology. In this work, we propose a modeling method, based on mathematical morphology, that applies to different breast carcinoma subtypes and TME spatial distributions. The model relies on a few morphological parameters that give access to a large spectrum of breast tumor architectures and are able to differentiate in-situ ductal carcinomas (DCIS) and histological subtypes of invasive carcinomas such as ductal (IDC) and lobular (ILC) carcinoma. In addition, part of the model's parameters controls the spatial distribution of the TME relative to the tumor. The model was validated by comparing morphological features between real and simulated images.
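A toy, Boolean-model-flavored sketch of the morphological-simulation idea: seed random germ points and grow tumor "nests" by dilation with a disk. The grid size, seed count, and radius are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(4)

# Random germ points standing in for tumor-nest seeds.
grid = np.zeros((100, 100), dtype=bool)
seeds = rng.integers(0, 100, size=(12, 2))
grid[seeds[:, 0], seeds[:, 1]] = True

# A disk structuring element of radius r controls nest size.
r = 4
yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
disk = yy ** 2 + xx ** 2 <= r ** 2

# Morphological dilation grows each seed into a nest.
nests = ndi.binary_dilation(grid, structure=disk)

# Tumor area fraction: one morphological feature such a model exposes
# for comparison against real images.
area_fraction = nests.mean()
```

Varying the germ density, structuring-element shape, and radius is the kind of parameter sweep that lets a morphological model span different growth patterns.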
Neutrophil extracellular trap (NET) formation is an alternate immunologic weapon used mainly by neutrophils. Chromatin
backbones fused with proteins derived from granules are shot like projectiles onto foreign invaders. It is thought that this
mechanism is highly anti-microbial, aids in preventing bacterial dissemination, is used to break down structures several
sizes larger than neutrophils themselves, and may have several more uses yet unknown. NETs have been implicated
in a wide array of systemic host immune responses and diseases, including sepsis, autoimmune diseases, and cancer. Existing
methods used to visually quantify NETotic versus non-NETotic shapes are extremely time-consuming and subject to user
bias. These limitations are obstacles to developing NETs as prognostic biomarkers and therapeutic targets. We propose an
automated pipeline for quantitatively detecting neutrophil and NET shapes captured using a flow cytometry-imaging
system. Our method uses contrast limited adaptive histogram equalization to improve signal intensity in dimly illuminated
NETs. From the contrast improved image, fixed value thresholding is applied to convert the image to binary. Feature
extraction is performed on the resulting binary image, by calculating region properties of the resulting foreground
structures. Classification of the resulting features is performed using a support vector machine (SVM). Our method distinguishes NETs
from neutrophils without traps at 0.97/0.96 sensitivity/specificity on n = 387 images, and is 1500X faster than manual
classification, per sample. Our method can be extended to rapidly analyze whole-slide immunofluorescence tissue images
for NET classification, and has potential to streamline the quantification of NETs for patients with diseases associated with
cancer and autoimmunity.
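A reduced sketch of the pipeline's middle stages — fixed-value thresholding, connected components, and simple shape features. CLAHE and the SVM are omitted, and the image is synthetic:

```python
import numpy as np
from scipy import ndimage as ndi

# Toy fluorescence frame: a compact round "neutrophil" and a long, dimmer
# "NET" strand (placeholders for real imaging-flow-cytometry frames).
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 16) ** 2 + (xx - 16) ** 2 < 36] = 1.0  # round cell
img[30:34, 8:56] = 0.4                           # elongated dim strand

# Fixed-value threshold converts the (contrast-enhanced) image to binary.
binary = img > 0.2

# Region properties per connected component: area plus an elongation proxy
# from the bounding box (real pipelines use richer shape descriptors).
labels, n = ndi.label(binary)
features = []
for idx in range(1, n + 1):
    ys, xs = np.nonzero(labels == idx)
    height = np.ptp(ys) + 1
    width = np.ptp(xs) + 1
    elongation = max(height, width) / min(height, width)
    features.append((len(ys), elongation))
```

A classifier such as the SVM mentioned above separates classes on features like these: here the strand's elongation dwarfs the round cell's, mirroring the NET-versus-neutrophil distinction.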
Immunohistochemical detection of the FOXP3 antigen is a useful marker for the detection of regulatory T lymphocytes (TR) in
formalin-fixed and paraffin-embedded sections of different types of tumor tissue. TR play a major role in the homeostasis
of normal immune systems, where they prevent autoreactivity of the immune system towards the host. This beneficial
effect of TR is frequently “hijacked” by malignant cells, which recruit tumor-infiltrating regulatory T cells
to inhibit the beneficial immune response of the host against the tumor cells. In the majority of human
solid tumors, an increased number of tumor-infiltrating FOXP3 positive TR is associated with worse outcome. However,
in follicular lymphoma (FL) the impact of the number and distribution of TR on the outcome still remains controversial.
In this study, we present a novel method to detect and enumerate nuclei from FOXP3 stained images of FL biopsies. The
proposed method defines a new adaptive thresholding procedure, namely the optimal adaptive thresholding (OAT)
method, which aims to minimize under-segmented and over-segmented nuclei for coarse segmentation. Next, we
integrate a parameter free elliptical arc and line segment detector (ELSD) as additional information to refine
segmentation results and to split most of the merged nuclei. Finally, we utilize a state-of-the-art super-pixel method,
Simple Linear Iterative Clustering (SLIC), to split the rest of the merged nuclei. Our dataset consists of 13
region-of-interest images containing 769 negative and 88 positive nuclei. Three expert pathologists evaluated the method and
reported sensitivity values in detecting negative and positive nuclei ranging from 83-100% and 90-95%, and precision
values of 98-100% and 99-100%, respectively. The proposed solution can be used to investigate the impact of FOXP3
positive nuclei on the outcome and prognosis in FL.
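A minimal sketch of local adaptive thresholding for coarse nucleus segmentation — a simple local-mean rule with an assumed offset, not the paper's OAT procedure:

```python
import numpy as np
from scipy import ndimage as ndi

# Toy image: two dark "nuclei" on a bright background with an illumination
# gradient (values and geometry are illustrative, not FL biopsy data).
img = np.linspace(0.6, 1.0, 64)[None, :] * np.ones((64, 64))
img[20:28, 10:18] = 0.20
img[40:48, 44:52] = 0.25

# Adaptive thresholding: compare each pixel to its local mean minus an
# offset, so coarse segmentation tolerates uneven illumination.
local_mean = ndi.uniform_filter(img, size=15)
nuclei = img < local_mean - 0.05

labels, n = ndi.label(nuclei)
```

A single global threshold would have to trade off the two background levels, whereas the local rule adapts; OAT goes further by optimizing the threshold to balance under- and over-segmentation before the ELSD and SLIC refinement stages.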
Assessment of histopathological data is difficult not only due to its varying appearance, e.g. caused by staining artifacts, but also due to its sheer size: common whole slide images feature a resolution of 6000x4000 pixels. Therefore, finding rare events in such data sets is a challenging and tedious task, and developing sophisticated computerized tools is not easy, especially when no or little training data is available. In this work, we propose a learning-free yet effective approach based on context-sensitive patch histograms for finding extramedullary hematopoiesis events in Hematoxylin-Eosin-stained images. When combined with a simple nucleus detector, one can achieve performance levels of 0.7146 sensitivity, 0.8476 specificity, and 0.8353 accuracy, which are well comparable to a recently published approach based on random forests.
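The patch-histogram representation can be sketched directly with NumPy; the patch size and bin count below are illustrative:

```python
import numpy as np

def patch_histograms(img, patch=64, bins=8):
    """Intensity histogram for each non-overlapping patch of an image,
    normalized so every patch yields a probability distribution."""
    h, w = img.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = img[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            feats.append(hist / hist.sum())
    return np.array(feats)

rng = np.random.default_rng(3)
img = rng.uniform(size=(256, 256))  # stand-in for a slide region in [0, 1)
H = patch_histograms(img)
```

Comparing a patch's histogram against those of its neighborhood is what makes the descriptor context-sensitive: rare events stand out as patches whose distribution deviates from the local norm, with no training required.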
The glomerulus, a specialized bundle of capillaries, is the blood filtering unit of the kidney. Each human kidney contains
about 1 million glomeruli. Structural damage in the glomerular micro-compartments gives rise to several renal conditions,
the most severe of which is proteinuria, in which excessive blood proteins flow freely into the urine. The sole way to confirm
glomerular structural damage in renal pathology is by examining histopathological or immunofluorescence stained needle
biopsies under a light microscope. However, this method is extremely tedious and time consuming, and requires manual
scoring on the number and volume of structures. Computational quantification of equivalent features promises to greatly
ease this manual burden. The largest obstacle to computational quantification of renal tissue is the ability to recognize
complex glomerular textural boundaries automatically. Here we present a computational pipeline to accurately identify
glomerular boundaries with high precision and accuracy. The computational pipeline employs an integrated approach
composed of Gabor filtering, Gaussian blurring, statistical F-testing, and a distance transform, and performs significantly
better than a standard Gabor-filter-bank-based textural segmentation method. Our integrated approach provides mean accuracy/precision
of 0.89/0.97 on n = 200 Hematoxylin and Eosin (HE) glomerulus images, and mean accuracy/precision of 0.88/0.94 on
n = 200 Periodic Acid-Schiff (PAS) glomerulus images. The respective accuracy/precision of the Gabor-filter-bank-based
method is 0.83/0.84 for HE and 0.78/0.80 for PAS. Our method will simplify computational partitioning of glomerular
micro-compartments hidden within dense textural boundaries. Automatic quantification of glomeruli will streamline
structural analysis in the clinic, and can help realize real-time diagnoses and interventions.
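The Gabor filtering at the front of such a pipeline rests on kernels like the following (real part of a Gabor filter; the size, wavelength, and sigma are illustrative):

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0):
    """Real part of a Gabor filter: a Gaussian-windowed sinusoid, the basic
    texture probe of a Gabor filter bank."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return gauss * carrier

k = gabor_kernel()
```

A bank varies theta (orientation) and wavelength (scale); convolving the tissue image with each kernel yields the texture responses that the F-test then compares across candidate boundary regions.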
Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow-up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs)
– one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input HE image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from the image(s) of a patient yields the probability of recurrence for that patient. The proposed approach gave 0.81 AUC on a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.
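The distance-transform target that the first CNN regresses can be illustrated on a toy binary nuclear map:

```python
import numpy as np
from scipy import ndimage as ndi

# Toy binary "multi-nuclear map": two nuclei on a background, standing in
# for the (unknown) map whose distance transform the first CNN regresses
# from the H&E input.
mask = np.zeros((16, 16), dtype=bool)
mask[2:6, 2:6] = True      # a small square nucleus
mask[9:14, 8:13] = True    # a larger one

# Euclidean distance transform: each foreground pixel stores its distance
# to the nearest background pixel, so ridges and peaks sit at nucleus centers.
dist = ndi.distance_transform_edt(mask)

# Local maxima of the distance map mark nucleus centers, which remain
# separable even when nuclei crowd together.
peaks = (dist == ndi.maximum_filter(dist, size=3)) & mask
```

Regressing this map instead of a raw binary mask is what makes crowded nuclei tractable: touching nuclei merge in the mask but keep distinct distance-map peaks, one per nucleus, for a peak-finder to recover.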
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time
consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image
analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable
to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this
direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically
relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy
optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model
Hamiltonian, adapted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges)
are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of
segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently
reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to
apply the segmentation over the full image. Accurate segmentations of images with as many as 106 pixels can be completed
only in 5 sec, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method in
renal biopsies to quantitatively visualize for the first time scale variant compartments of heterogeneous intra- and extraglomerular
structures simultaneously. Implications of the utility of our method extend to fields such as oncology,
genomics, and non-biological problems.
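The Cantor pairing function mentioned above maps a pair of non-negative integers (e.g. pixel coordinates) to a single unique integer, one compact way to index graph nodes. This minimal sketch, with the standard inverse, is illustrative rather than the authors' implementation:

```python
import math

def cantor_pair(x, y):
    # Bijection from pairs of non-negative ints to a single int
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    # Invert the pairing: recover (x, y) from z
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

print(cantor_pair(3, 4))                  # unique index for node (3, 4)
print(cantor_unpair(cantor_pair(3, 4)))   # round-trips to the pair
```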
Accurate detection and quantification of normal lung tissue in the context of Mycobacterium tuberculosis infection is of interest from a biological perspective. The automatic detection and quantification of normal lung will allow biologists to focus more intensely on regions of interest within normal and infected tissues. We present a computational framework to extract individual tissue sections from whole slide images containing multiple tissue sections. It automatically detects the background, red blood cells, and handwritten digits, improving both the efficiency and the accuracy of tissue-section quantification. For efficiency, we build our framework on logical and morphological operations, as they can be performed in linear time. We further divide these individual tissue sections into normal and infected areas using a deep neural network. The computational framework was trained on 60 whole slide images. It achieved an overall accuracy of 99.2% when extracting individual tissue sections from the 120 whole slide images in the test dataset, and a higher accuracy (99.7%) when classifying individual lung sections into normal and infected areas. Our preliminary findings suggest that the proposed framework agrees well with biologists on how to define normal and infected lung areas.
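Morphological operations of the kind used here run in time linear in the number of pixels. As a hedged illustration (not the authors' code), a 4-neighbour binary dilation over a tissue mask can be written as:

```python
def binary_dilate(mask):
    """One pass of 4-neighbour binary dilation over a 2-D 0/1 mask.
    Each pixel becomes 1 if it or any edge-neighbour is 1; the loop
    visits every pixel once, hence linear time in the pixel count."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] or any(
                    0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))):
                out[i][j] = 1
    return out

print(binary_dilate([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))
```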
Digital histopathology images with more than 1 gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading, and as a result it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report a human-verified segmentation with thousands of nuclei, whereas a single whole slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate a more quantitatively validated study for the current and future histopathology image analysis field.
Background: The epidermis is an important observation area for the diagnosis of inflammatory skin diseases and skin cancers. Therefore, in order to develop a computer-aided diagnosis system, segmentation of the epidermis area is usually an essential, initial step. This study presents an automated and robust method for epidermis segmentation in whole slide histopathological images of human skin, stained with hematoxylin and eosin.
Methods: The proposed method performs epidermis segmentation based on information about the shape and distribution of transparent regions in a slide image and about the distribution and concentration of the hematoxylin and eosin stains. It utilizes domain-specific knowledge of the morphometric and biochemical properties of skin tissue elements to segment the relevant histopathological structures in human skin.
Results: Experimental results on 88 skin histopathological images from three different sources show that the proposed method segments the epidermis with a mean sensitivity of 87%, a mean specificity of 95%, and a mean precision of 57%. It is robust to inter- and intra-image variations in both staining and illumination, and makes no assumptions about the type of skin disorder. The proposed method provides superior performance compared to existing techniques.
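The reported sensitivity, specificity, and precision follow the standard confusion-matrix definitions; a minimal helper, with made-up pixel counts for illustration, is:

```python
def seg_metrics(tp, fp, tn, fn):
    """Pixel-level segmentation metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: fraction of true epidermis found
    specificity = tn / (tn + fp)   # fraction of background correctly rejected
    precision = tp / (tp + fp)     # fraction of detections that are correct
    return sensitivity, specificity, precision

# Hypothetical counts for one image
print(seg_metrics(tp=870, fp=650, tn=12000, fn=130))
```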
The analysis and interpretation of histopathological samples and images is an important discipline in the diagnosis of various diseases, especially cancer. An important factor in prognosis and treatment, with the aim of precision medicine, is the identification of so-called cancer stem cells (CSC), which are known for their resistance to chemotherapeutic treatment and their involvement in tumor recurrence. Using immunohistochemistry with CSC markers like CD13, CD133 and others is one way to identify CSC. In our work, we aim to identify CSC presence on ubiquitous Hematoxylin and Eosin (HE) staining, as an inexpensive tool for routine histopathology, based on their distinct morphological features.
We present initial results of a new method based on color deconvolution (CD) and convolutional neural networks (CNN). This method performs favorably (accuracy 0.936) in comparison with a state-of-the-art method based on 1DSIFT and eigen-analysis feature sets evaluated on the same image database. We also show that the accuracy of the CNN is improved by the CD pre-processing.
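Color deconvolution starts from Beer-Lambert optical densities and unmixes them along known stain directions. The sketch below shows only the hematoxylin projection, using the widely quoted Ruifrok-Johnston stain vector as an illustrative, calibration-dependent assumption (not necessarily the matrix the authors used):

```python
import math

# Normalized hematoxylin stain vector from Ruifrok & Johnston;
# exact values vary with scanner calibration -- illustrative only
HEMATOXYLIN = (0.650, 0.704, 0.286)

def optical_density(rgb, background=255.0):
    # Beer-Lambert: OD = -log10(I / I0), guarding against zero intensity
    return tuple(-math.log10(max(c, 1.0) / background) for c in rgb)

def hematoxylin_concentration(rgb):
    # Project the OD vector onto the hematoxylin stain direction
    od = optical_density(rgb)
    return sum(o * h for o, h in zip(od, HEMATOXYLIN))

print(hematoxylin_concentration((30, 30, 100)))   # dark bluish nucleus pixel
```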
The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated with the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer-aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the large amounts of pathology data generated daily. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions of interest rather than from manually segmented ones. Relevant parameters for training the deep learning model, such as the size and number of patches and the inclusion or not of data augmentation, are compared across the tested deep learning architectures. A total of 235 prostate tissue WSIs with their pathology reports from the publicly available TCGA dataset were used. An accuracy of 78% was obtained on a balanced set of 46 unseen test images with different Gleason grades in a two-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger datasets with straightforward re-training of the model to include data from multiple sources, scanners, and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks for big datasets and for guiding the visual inspection of these images.
The focus of this paper is to illustrate how computational image processing and machine learning can help address two of the challenges of histological image analysis, namely cellular heterogeneity and imprecise labeling. We propose an unsupervised method of generating representative image signatures based on an autoencoder architecture, which reduces the dependency on labels that tend to be imprecise and tedious to obtain. We have modified and enhanced the architecture to simultaneously produce representative image features and perform dictionary learning on these features to enable robust characterization of the cellular phenotypes. We integrate the extracted features into a disease grading framework, test it on prostate tissues immunostained to visualize different proteins, and show significant improvement in grading accuracy compared to alternative supervised feature-extraction methods.
The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, whose analysis is expensive both in terms of time and money. This work focuses on grade classification in colon cancer. The analysis is performed over three protein markers, namely E-cadherin, beta-actin, and collagen IV. In addition, we also use a virtual Hematoxylin and Eosin (HE) stain. This study compares various ways of combining the information from the four different images of a tissue sample into a coherent and unified response based on the data at our disposal. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction; the AlexNet architecture trained on the ImageNet database is used for this purpose. We extract a 4096-dimensional feature vector corresponding to the 6th layer in the network, and a linear SVM is used to classify the data. The information from the four different images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation, and multi-channel feature extraction. We use 5-fold cross-validation to perform the experiments. The best results are obtained when the various features are linearly combined, resulting in a mean accuracy of 91.27%.
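Two of the fusion rules listed above, soft and hard voting over the per-stain score vectors, can be sketched as follows; the class-probability inputs are hypothetical:

```python
def soft_vote(per_image_scores):
    """Average the class-probability vectors across images, then
    return the index of the highest-scoring class."""
    n = len(per_image_scores)
    n_classes = len(per_image_scores[0])
    avg = [sum(s[c] for s in per_image_scores) / n for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

def hard_vote(per_image_scores):
    """Each image votes for its top class; the class with the most
    votes wins (ties broken by the lower class index)."""
    n_classes = len(per_image_scores[0])
    votes = [max(range(n_classes), key=s.__getitem__) for s in per_image_scores]
    return min(range(n_classes), key=lambda c: (-votes.count(c), c))

# Scores for one tissue sample from 4 stain images, 2 grade classes
scores = [[0.6, 0.4], [0.2, 0.8], [0.3, 0.7], [0.45, 0.55]]
print(soft_vote(scores), hard_vote(scores))
```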
The color reproducibility of two whole-slide imaging (WSI) devices was evaluated with biological tissue slides. Three tissue slides (human colon, skin, and kidney) were used to test a modern and a legacy WSI device. The color truth of the tissue slides was obtained using a multispectral imaging system. The output WSI images were compared with the color truth to calculate the color difference for each pixel. A psychophysical experiment was also conducted with four subjects to measure the perceptual color reproducibility (PCR) of the same slides. The experimental results show that the mean color differences of the modern, legacy, and monochrome WSI devices are 10.94±4.19, 22.35±8.99, and 42.74±2.96 ΔE00, while their mean PCRs are 70.35±7.64%, 23.06±14.68%, and 0.91±1.01%, respectively.
Prostate cancer is the most commonly diagnosed cancer in men. The diagnosis is confirmed by pathologists based on ocular inspection of prostate biopsies in order to classify them according to the Gleason score. The main goal of this paper is to automate this classification using convolutional neural networks (CNNs). The introduction of CNNs has broadened the field of pattern recognition: it replaces the classical way of designing and extracting hand-made features used for classification with the substantially different strategy of letting the computer itself decide which features are of importance.
For automated prostate cancer classification into the classes benign and Gleason grades 3, 4, and 5, we propose a CNN with small convolutional filters that is trained from scratch using stochastic gradient descent with momentum. The input consists of microscopic images of haematoxylin and eosin stained tissue; the output is a coarse segmentation into regions of the four different classes. The dataset consists of 213 images, each considered to be of one class only. Using four-fold cross-validation we obtained an error rate of 7.3%, which is significantly better than the previous state of the art on the same dataset. Although the dataset was rather small, good results were obtained, from which we conclude that CNNs are a promising method for this problem. Future work includes obtaining a larger dataset, which could potentially reduce the error rate further.
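The optimizer referenced here, stochastic gradient descent with momentum, keeps a velocity term per parameter. One common formulation (the hyperparameter values below are illustrative, not the paper's) is:

```python
def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update over a flat parameter list:
       v <- momentum * v - lr * grad
       w <- w + v
    The velocity v accumulates past gradients, smoothing the descent."""
    v_new = [momentum * vi - lr * g for vi, g in zip(v, grad)]
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    return w_new, v_new

# A single weight with gradient 2.0
w, v = sgd_momentum_step([1.0], [0.0], [2.0], lr=0.1, momentum=0.9)
print(w, v)
```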
This study brings together image processing, clustering, and spatial pattern analysis to quantitatively analyze hematoxylin and eosin-stained (HE) tissue sections. A mixture of tumor and normal cells (intratumoral heterogeneity) as well as the complex tissue architecture of most samples complicate the interpretation of their cytological profiles. To address these challenges, we develop a simple but effective methodology for the quantitative analysis of HE sections. We adopt comparative analyses of spatial point patterns to characterize the spatial distribution of different nuclei types and to complement cellular characteristics analysis. We demonstrate that tumor and normal cell regions exhibit significant differences in lymphocyte spatial distribution, or lymphocyte infiltration pattern.
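A basic spatial point-pattern summary of the kind used in such comparisons is the mean nearest-neighbour distance; this sketch is illustrative and not the paper's exact statistic:

```python
import math

def mean_nn_distance(points):
    """Mean nearest-neighbour distance of a 2-D point pattern, e.g.
    lymphocyte centroids in a tumor or normal region. Clustered
    patterns give smaller values than dispersed ones."""
    dists = []
    for i, p in enumerate(points):
        nn = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        dists.append(nn)
    return sum(dists) / len(dists)

print(mean_nn_distance([(0, 0), (1, 0), (5, 0)]))
```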
The pathological diagnosis of a transplanted kidney is based on the Banff Classification in order to gain an accurate understanding of the condition of the kidney. This type of diagnosis is extremely difficult, and thus a variety of diagnostic methods, including electron microscopy, are currently being considered. Quantification of the diagnostic information derived by image processing is required for such purposes. This study proposes an automatic extraction method for normal glomeruli for the purpose of quantifying Elastica van Gieson (EVG)-stained pathology specimens. In addition, we report on the package of methods that we have created for the extraction of glomeruli in the cortex.
Heterogeneity of ductal carcinoma in situ (DCIS) continues to be an important topic. Combining biomarker and hematoxylin and eosin (HE) morphology information may provide more insights than either alone. We are working towards a computer-based identification and description system for DCIS. As part of the system, we are developing a region-of-interest finder for further processing, such as identifying DCIS and computing other HE-based measures.
The segmentation algorithm is designed to be tolerant of variability in staining and to require no user interaction. To achieve stain variation tolerance we use unsupervised learning and iteratively interrogate the image for information. Using simple rules (e.g., "hematoxylin stains nuclei") and iteratively assessing the resultant objects (e.g., small hematoxylin-stained objects are lymphocytes), the system builds up a knowledge base so that it is not dependent upon manual annotations. The system starts with image resolution-based assumptions, but these are replaced by knowledge gained. The algorithm pipeline is designed to find the simplest items first (segmenting stains), then interesting subclasses and objects (stroma, lymphocytes), and builds up information until it is possible to segment blobs that are normal, DCIS, and the range of benign glands. Once the blobs are found, features can be computed and DCIS detected. In this work we present early segmentation results on stains where hematoxylin ranges from blue-dominant to red-dominant in RGB space.
This paper presents a fully automatic approach to grading intermediate prostate malignancy with hematoxylin and eosin-stained whole slide images. Deep learning architectures such as convolutional neural networks have been utilized in the domain of histopathology for automated carcinoma detection and classification. However, few works have shown their power in discriminating intermediate Gleason patterns, due to the sporadic distribution of prostate glands on stained surgical section samples. We propose optimized hematoxylin decomposition on localized images, followed by a convolutional neural network, to classify Gleason patterns 3+4 and 4+3 without handcrafted features or gland segmentation. Crucial gland morphology and the structural relationships of nuclei are extracted twice in different color spaces by a multi-scale strategy to mimic pathologists' visual examination. Our novel classification scheme, evaluated on 169 whole slide images, yielded a 70.41% accuracy and a corresponding area under the receiver operating characteristic curve of 0.7247.
In making a pathologic diagnosis, a pathologist uses cognitive processes: perception, attention, memory, and search (Pena and Andrade-Filho, 2009). Typically, this involves focus while panning from one region of a slide to another, using either a microscope in a traditional workflow or a software program and display in a digital pathology workflow (DICOM Standard Committee, 2010). We theorize that during the panning operation, the pathologist receives information important to diagnostic efficiency and/or correctness. As compared to an optical microscope, panning in a digital pathology image involves some visual artifacts for two reasons: (i) the frame rate is finite; (ii) time-varying visual signals are reconstructed using an imperfect zero-order hold. Specifically, after a pixel's digital drive is changed, it takes time for the pixel to emit the expected amount of light. Previous work suggests that 49% of navigation is conducted in low-power/overview mode with digital pathology (Molin et al., 2015), but the influence of display factors has not been measured. We conducted a reader study to establish a relationship between display frame rate, panel response time, and the threshold panning speed above which the artifacts become noticeable. Our results suggest that visual tasks involving tissue structure are more impacted by the simulated panning artifacts than those that only involve color (e.g., staining intensity estimation), and that artifact visibility versus normalized panning speed shows a surprising peak behavior that may change for a diagnostic task. This is work in progress, and our final findings should be considered in designing future digital pathology systems.
In digital pathology diagnosis, accurate recognition and quantification of tissue structure is an important factor for computer-aided diagnosis. However, the classification accuracy of cytoplasm is low in hematoxylin and eosin (HE) stained liver pathology specimens because the RGB color values of cytoplasm are very similar to those of fibers. In this paper, we propose a new tissue classification method for HE stained liver pathology specimens using hyperspectral images. We first select valid spectra from the image to make a clear distinction between fibers and cytoplasm, and then classify five types of tissue based on a bag of features (BoF). The average classification accuracy over all tissues was improved by 11% when using a BoF of RGB and the selected spectral bands in comparison with using only RGB. In particular, the improvement reached 24% for fibers and 5% for cytoplasm.
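The bag-of-features representation assigns each local descriptor (here, RGB values plus selected spectral bands) to its nearest codeword and histograms the assignments. A minimal sketch with a hypothetical two-word codebook:

```python
def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword (squared
    Euclidean distance) and return the normalized codeword histogram,
    i.e. the BoF vector for one image region."""
    hist = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda k: sum((a - b) ** 2
                                        for a, b in zip(d, codebook[k])))
        hist[nearest] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Hypothetical 2-D descriptors and a 2-word codebook
print(bof_histogram([[0.1, 0], [0.9, 1], [1, 0.8], [0, 0.2]],
                    [[0, 0], [1, 1]]))
```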
Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures, and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Since stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most existing unsupervised color normalization methods, such as PCA, ICA, NMF, and SNMF, fail to consider important information about the sparse manifolds that the pixels occupy, which could result in a loss of texture information during color normalization. Manifold learning methods like the graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph-regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from the high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in preserving connected texture information. To utilize the texture information, we construct a nearest-neighbor graph between pixels within a spatial area of an image, based on their distances under a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distances of the pixel to the pixels in the neighborhood graph. Using a color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
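At the core of (GS)NMF are the Lee-Seung multiplicative updates for V ≈ WH. The sketch below shows only the plain NMF updates; the paper's GSNMF additionally folds the sparsity prior and the graph-Laplacian term into these rules:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf_update(V, W, H, eps=1e-12):
    """One pair of Lee-Seung multiplicative updates for V ~ W @ H.
    Updates stay nonnegative because they only multiply by
    nonnegative ratios; eps guards against division by zero."""
    WT = transpose(W)
    num_H = matmul(WT, V)
    den_H = matmul(matmul(WT, W), H)
    H2 = [[H[i][j] * num_H[i][j] / (den_H[i][j] + eps)
           for j in range(len(H[0]))] for i in range(len(H))]
    HT = transpose(H2)
    num_W = matmul(V, HT)
    den_W = matmul(W, matmul(H2, HT))
    W2 = [[W[i][j] * num_W[i][j] / (den_W[i][j] + eps)
           for j in range(len(W[0]))] for i in range(len(W))]
    return W2, H2
```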
It is important to investigate the eye-tracking gaze points of experts in order to assist trainees in understanding the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can be brought up to the level achieved by experts in viewing CTC.
We used an eye gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which can detect the pupil point and corneal reflection point by dark pupil eye tracking. This system provides gaze-point images and Excel file data. The subjects were radiological technologists, both experienced and inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation process using the gaze-point data. Furthermore, we performed an eye-tracking frequency analysis using the Fast Fourier Transform (FFT).
The frequency analysis allowed us to understand the difference in gaze points between experts and trainees. The results for the trainee showed large amounts of both high-frequency and low-frequency components, whereas both components for the expert were relatively low. Regarding the amount of eye movement every 0.02 seconds, we found that the expert tended to interpret images slowly and calmly, while the trainee moved their eyes quickly and scanned wide areas.
We can assess the difference in gaze points on CTC between experts and trainees by using the eye gaze point sensing system together with frequency analysis. The potential improvements in CTC interpretation for trainees can then be evaluated using the gaze-point data.
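The frequency analysis of gaze displacement can be illustrated with a discrete Fourier transform. The sketch below uses a naive O(N²) DFT over a hypothetical displacement series (an FFT library would be used in practice, as in the study):

```python
import cmath

def dft_magnitudes(samples):
    """Magnitude spectrum of a real-valued sequence via the naive DFT.
    Large high-frequency bins indicate rapid eye movements; the study
    samples eye position every 0.02 s (50 Hz)."""
    N = len(samples)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(samples)))
            for k in range(N)]

# A steady gaze (constant displacement) has energy only in bin 0
print(dft_magnitudes([1.0, 1.0, 1.0, 1.0]))
```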
Content-based image retrieval (CBIR) has been widely researched for medical images. When applied to histopathological images, two issues need to be carefully considered. The first is that a digital slide is stored as a spatially continuous image with a size of more than 10K x 10K pixels. The second is that the size of the query image varies over a large range according to different diagnostic conditions. Retrieving the regions eligible for a query image from a database of whole slide images (WSIs) is therefore challenging. In this paper, we propose a CBIR framework for a WSI database and size-scalable query images. Each WSI in the database is encoded and stored as a matrix of binary codes. At retrieval time, the query image is first encoded into a set of binary codes and analyzed to pre-choose a set of candidate regions from the database using a hashing method. Then a multi-binary-code similarity measurement based on Hamming distance is used to rank the proposal regions. Finally, the top relevant regions and their locations in the WSIs, along with the diagnostic information, are returned to assist pathologists in diagnosis. The effectiveness of the proposed framework is evaluated on a finely annotated WSI database of epithelial breast tumors. The experimental results show that the proposed framework is both effective and efficient for content-based whole slide image retrieval.
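Ranking candidate regions by Hamming distance over binary codes is cheap with integer XOR and a popcount; a minimal sketch with hypothetical 8-bit codes:

```python
def hamming(a, b):
    """Hamming distance between two binary codes stored as ints:
    XOR leaves 1s where the codes differ, then count them."""
    return bin(a ^ b).count("1")

def rank_regions(query_code, region_codes):
    """Indices of candidate regions, most similar (smallest Hamming
    distance to the query) first."""
    return sorted(range(len(region_codes)),
                  key=lambda i: hamming(query_code, region_codes[i]))

codes = [0b01001011, 0b10110100, 0b10110101]
print(rank_regions(0b10110100, codes))
```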