In digital pathology, deep learning approaches have been increasingly applied and shown to be effective in analyzing digitized tissue specimen images. Such approaches have, in general, chosen an arbitrary scale or resolution at which the images are analyzed, for reasons including computational cost and complexity. However, the tissue characteristics indicative of cancer tend to present at differing scales. Herein, we propose a framework that enables deep convolutional neural networks to perform multiscale histological analysis of tissue specimen images in an efficient and effective manner. A deep residual neural network is shared across multiple scales, extracting high-level features. The high-level features from multiple scales are aggregated and transformed in such a way that the scale information is embedded in the network. The transformed features are utilized to classify tissue images into cancer and benign classes. The proposed method is compared to other methodologies for combining the features from different scales. These competing methods combine the multiscale features via 1) concatenation, 2) addition, and 3) convolution. Tissue microarrays (TMAs) were employed to evaluate the proposed method and the competing methods. Three TMAs, including 225 benign and 377 cancer tissue samples, were used as the training dataset. Two TMAs with 151 benign and 252 cancer tissue samples were utilized as the testing dataset. The proposed method obtained an accuracy of 0.953 and an area under the receiver operating characteristic curve (AUC) of 0.971 (95% CI: 0.955-0.987), outperforming the competing methods. This suggests that the proposed multiscale approach, via a shared neural network and scale-embedding scheme, could aid in improving digital pathology analysis and cancer pathology.
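As a toy illustration of the shared-network and scale-embedding idea, the sketch below applies one set of weights to the input from every scale and adds a per-scale embedding before aggregating. The extractor, all dimensions, and the data are hypothetical stand-ins, not the architecture described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_extractor(x, weights):
    # Stand-in for the shared deep residual network: the SAME weights
    # are applied to the input from every scale.
    return np.tanh(x @ weights)

# Hypothetical sizes: 3 scales, 64-dim patch descriptors, 16-dim features.
n_scales, d_in, d_feat = 3, 64, 16
weights = rng.normal(size=(d_in, d_feat))
scale_embedding = rng.normal(size=(n_scales, d_feat))  # learned in practice

# One tissue patch observed at three scales (random stand-in data).
patches = rng.normal(size=(n_scales, d_in))

feats = np.stack([shared_extractor(p, weights) for p in patches])
embedded = feats + scale_embedding      # inject scale identity
aggregated = embedded.mean(axis=0)      # aggregate across scales
print(aggregated.shape)  # (16,)
```

Sharing one extractor keeps the parameter count independent of the number of scales, which is the efficiency argument made above.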
Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%, outperforming a standard machine learning algorithm, which obtained 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
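The region-segmentation step can be pictured with a minimal Lloyd's k-means on synthetic pixel features, where the three clusters stand in for radish, bare ground, and mulching film. The data, initialization, and feature dimensionality are illustrative; the actual system also uses a softmax classifier and a CNN, omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    # Minimal Lloyd's k-means, standing in for the clustering step that
    # separates a field image into its three region types.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic "pixel features": three well-separated colour clusters
# (illustrative stand-ins for radish, bare ground, mulching film).
X = np.vstack([rng.normal(m, 0.05, size=(50, 3)) for m in (0.1, 0.5, 0.9)])
labels, centers = kmeans(X, k=3)
print(np.bincount(labels))  # 50 pixels assigned to each region
```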
A prostate computer-aided diagnosis (CAD) system based on random forest is proposed to detect prostate cancer using a combination of spatial, intensity, and texture features extracted from three sequences: T2W, ADC, and B2000 images. The random forest training considers instance-level weighting for equal treatment of small and large cancerous lesions as well as small and large prostate backgrounds. Two other approaches, based on an AutoContext pipeline intended to make better use of sequence-specific patterns, were also considered. One pipeline uses random forest on individual sequences, while the other uses an image filter designed to produce probability-map-like images. These were compared to a previously published CAD approach based on a support vector machine (SVM) evaluated on the same data. The random forest, features, sampling strategy, and instance-level weighting improve prostate cancer detection performance [area under the curve (AUC) 0.93] in comparison to the SVM (AUC 0.86) on the same test data. Using a simple image filtering technique as a first-stage detector to highlight likely regions of prostate cancer improves learning stability over a learning-based first stage, owing to the limited visibility and ambiguity of annotations in each sequence.
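The instance-level weighting idea can be sketched directly: each training sample receives a weight inversely proportional to the size of the lesion (or background region) it belongs to, so every instance contributes equally regardless of size. The region sizes below are made up; in practice the weights would be passed to the forest as per-sample training weights:

```python
import numpy as np

# Instance-level weighting: every sample in a region gets weight
# 1/|region|, so small and large lesions (and backgrounds) contribute
# equally to training.
# lesion_id: which lesion/region each training sample belongs to
# (illustrative sizes: 4, 2, and 1 samples).
lesion_id = np.array([0, 0, 0, 0, 1, 1, 2])

sizes = np.bincount(lesion_id)
weights = 1.0 / sizes[lesion_id]

# Each region's total weight is now identical:
totals = np.bincount(lesion_id, weights=weights)
print(totals)  # [1. 1. 1.]
```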
Prostate cancer (PCa) is the second most common cause of cancer-related deaths in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists, leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that can automatically detect lesions on mpMRI that readers can use as a reference. We investigated a deep convolutional neural network (DCNN) architecture to find an improved solution for PCa detection on mpMRI. We adopted a network architecture from a state-of-the-art edge detector that takes an image as an input and produces an image probability map. Two-fold cross validation, along with receiver operating characteristic (ROC) and free-response ROC (FROC) analyses, was used to determine our deep-learning-based prostate CAD's (CADDL) performance. The efficacy was compared to an existing prostate CAD system based on hand-crafted features, evaluated on the same test set. CADDL had an 86% detection rate at a 20% false-positive rate, while the top-down learning CAD had an 80% detection rate at the same false-positive rate; this translated to 94% and 85% detection rates, respectively, at 10 false positives per patient on the FROC. A CNN-based CAD is able to detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate CAD, showing potential for further development.
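An FROC operating point such as "detection rate at 10 false positives per patient" can be computed as in the sketch below. The detections are made up, and the assumption that each true-positive detection hits a distinct lesion is a simplification for illustration:

```python
import numpy as np

def froc_sensitivity(scores, is_tp, n_patients, fppi):
    # Sensitivity at a fixed number of false positives per patient
    # (one FROC operating point). Assumes each true-positive detection
    # corresponds to a distinct lesion (illustrative simplification).
    order = np.argsort(-scores)            # sweep thresholds high -> low
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    ok = fp / n_patients <= fppi           # thresholds within FP budget
    return tp[ok].max() / is_tp.sum() if ok.any() else 0.0

# Illustrative detections: 3 lesions hit, 2 false positives, 1 patient.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
is_tp = np.array([True, False, True, True, False])
print(froc_sensitivity(scores, is_tp, n_patients=1, fppi=1))  # 1.0
```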
We present a deep learning approach for detecting prostate cancers. The approach consists of two steps. In the first step,
we perform tissue segmentation that identifies lumens within digitized prostate tissue specimen images. Intensity- and
texture-based image features are computed at five different scales, and a multiview boosting method is adopted to
cooperatively combine the image features from differing scales and to identify lumens. In the second step, we utilize
convolutional neural networks (CNN) to automatically extract high-level image features of lumens and to predict
cancers. The segmented lumens are rescaled to reduce computational complexity and data augmentation by scaling,
rotating, and flipping the rescaled image is applied to avoid overfitting. We evaluate the proposed method using two
tissue microarrays (TMA) – TMA1 includes 162 tissue specimens (73 Benign and 89 Cancer) and TMA2 comprises 185
tissue specimens (70 Benign and 115 Cancer). In cross-validation on TMA1, the proposed method achieved an AUC of
0.95 (CI: 0.93-0.98). Trained on TMA1 and tested on TMA2, the CNN obtained an AUC of 0.95 (CI: 0.92-0.98). This
demonstrates that the proposed method can potentially improve prostate cancer pathology.
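The rotation-and-flip part of the augmentation step can be sketched with NumPy's `rot90`/`fliplr`, yielding the eight dihedral variants of each rescaled patch. Scaling is omitted for brevity, and the patch is a made-up array:

```python
import numpy as np

def augment(patch):
    # Augmentation to avoid overfitting: 4 rotations x 2 flips give
    # 8 variants of each rescaled lumen patch.
    variants = []
    for k in range(4):
        rot = np.rot90(patch, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants

patch = np.arange(9).reshape(3, 3)  # made-up stand-in for a lumen patch
print(len(augment(patch)))  # 8
```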
Histopathology forms the gold standard for cancer diagnosis and therapy, and generally relies on manual examination of microscopic structural morphology within tissue. Fourier-transform infrared (FT-IR) imaging, especially in a high-definition (HD) format, is an emerging vibrational spectroscopic imaging technique that provides the spatial specificity of microscopy at magnifications used in diagnostic surgical pathology. While it has been shown for standard imaging that IR absorption by tissue creates a strong signal in which the spectrum at each pixel is a quantitative “fingerprint” of the molecular composition of the sample, here we show that this fingerprint also enables direct digital pathology without the need for stains or dyes for HD imaging. An assessment of the potential of HD imaging to improve diagnostic pathology accuracy is presented.
Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells, and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes the gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with the goodness-of-fit statistic in a linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.
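The junction-detection step can be illustrated on a toy contour: trace the tangent angle along a counter-clockwise boundary (a simple proxy for the gradient orientation), take its first difference, and flag strongly negative turns as high-concavity candidates. The polygon and threshold below are illustrative, not the method's actual inputs or parameters:

```python
import numpy as np

def concave_junctions(boundary, thresh=-0.3):
    # boundary: (N, 2) vertices of a counter-clockwise closed contour.
    # The tangent angle is traced along the boundary; its first
    # difference (discrete derivative) is strongly negative at
    # concavities, marking candidate junctions between overlapping nuclei.
    edges = np.diff(np.vstack([boundary, boundary[:1]]), axis=0)
    angle = np.arctan2(edges[:, 1], edges[:, 0])
    turn = np.diff(np.hstack([angle, angle[:1]]))
    turn = (turn + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return (np.where(turn < thresh)[0] + 1) % len(boundary)

# A square with a notch on top: the notch tip (2, 3) is the only concavity.
poly = np.array([[0, 0], [4, 0], [4, 4], [3, 4], [2, 3], [1, 4], [0, 4]])
print(concave_junctions(poly))  # [4]
```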
Digitized histopathology images have a great potential for improving or facilitating current assessment tools in cancer
pathology. In order to develop accurate and robust automated methods, the precise segmentation of histologic objects
such as epithelium, stroma, and nuclei is necessary, in the hopes of extracting information not otherwise obvious to the
subjective eye. Here, we propose a multiview boosting approach to segment histologic objects in prostate tissue. Tissue
specimen images are first represented at different scales using a Gaussian kernel and converted into several forms such
as HSV and La*b*. Intensity- and texture-based features are extracted from the converted images. Adopting a multiview
boosting approach, we effectively learn a classifier to predict the histologic class of a pixel in a prostate tissue specimen.
The method attempts to integrate the information from multiple scales (or views). Eighteen prostate tissue specimens from four
patients were employed to evaluate the new method. The method was trained on 11 tissue specimens including 75,832
epithelial and 103,453 stroma pixels and tested on 55,319 epithelial and 74,945 stroma pixels from 7 tissue specimens.
The technique achieved 96.7% accuracy and, as summarized in a receiver operating characteristic (ROC) plot, an area
under the ROC curve (AUC) of 0.983 (95% CI: 0.983-0.984).
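The scale-space construction can be sketched with a separable Gaussian filter, each sigma yielding one "view" of the image. The impulse image and the sigma values below are illustrative, not the study's settings:

```python
import numpy as np

def gaussian_kernel(sigma):
    # Normalized 1-D Gaussian, truncated at ~3 sigma.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(img, sigma):
    # Separable Gaussian filtering: one "view" of the tissue image per sigma.
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

# An impulse image makes the effect of each scale easy to inspect.
img = np.zeros((32, 32))
img[16, 16] = 1.0
views = [smooth(img, s) for s in (1.0, 2.0, 4.0)]
print([round(v.max(), 4) for v in views])  # peak height drops with sigma
```

Features computed from each view are then treated as one "view" for the multiview boosting classifier described above.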