The color rendering of whole-slide images (WSIs) depends on factors involving the sample, such as tissue type, preparation methods, staining type, and staining protocol, as well as on the equipment, such as the WSI scanner, WSI viewer, and WSI display. Variations in any of these steps may change the color rendering and thereby affect both the performance of pathologists interpreting WSIs and the robustness of artificial intelligence algorithms. Color normalization techniques have been proposed in the literature to reduce these color variations. The purpose of this work is to develop an objective approach to characterizing color normalization methods used in digital pathology. We employed color normalization methods to transform images toward the color rendering of a target WSI scanner and then compared the normalized color with the actual scan by that scanner. Normalization errors were evaluated at the pixel level using the CIE color difference metric ΔE, which has been shown to correlate with visually perceived differences in human vision. A selected set of 310 patch images of breast tissue from the ICPR 2014 MITOS & ATYPIA contest, scanned by two scanners, was used. Images from one scanner were color normalized to match the color rendering of the other scanner. Four color normalization methods were compared: Macenko, Reinhard, Vahadane, and StainGAN. Experimental results show that the average color difference between the two scanners in terms of ΔE changed from 16.2 before normalization to values in the range [13.7, 16.9] after normalization for the Macenko, Reinhard, and Vahadane methods, and dropped to 8.3 for the StainGAN method. By this metric, StainGAN clearly outperforms the other three methods. We thus demonstrated a quantitative method for objectively evaluating color normalization techniques. Future work is needed to explore the relationship between this color fidelity measure and the impact of color normalization on pathologist and AI performance in clinical tasks.
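The pixel-level evaluation above can be sketched in a few lines. The following is a minimal illustration, assuming scikit-image is available and that the normalized patch and the reference scan are pixel-aligned; the choice of the CIEDE2000 formula and the file names are assumptions for illustration, since the exact ΔE variant is not restated here.

```python
# Minimal sketch of the pixel-level Delta-E evaluation, assuming
# scikit-image; CIEDE2000 is shown as one common Delta-E formula
# (an assumption, not necessarily the exact variant used in the study).
from skimage import io
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_delta_e(reference_path: str, normalized_path: str) -> float:
    """Average per-pixel CIE Delta-E between a reference scan and a
    color-normalized image of the same tissue patch."""
    ref = io.imread(reference_path)[..., :3]    # drop any alpha channel
    norm = io.imread(normalized_path)[..., :3]
    assert ref.shape == norm.shape, "patches must be pixel-aligned"
    # Convert sRGB to CIELAB, where Delta-E distances correlate with
    # perceived color differences.
    de = deltaE_ciede2000(rgb2lab(ref), rgb2lab(norm))  # per-pixel map
    return float(de.mean())

# Hypothetical usage: a patch normalized toward scanner A's rendering,
# compared against the actual scan of the same patch by scanner A.
# print(mean_delta_e("scannerA_patch.png", "normalized_patch.png"))
```

Averaging this per-pixel map over a patch, and then over all 310 patches, yields a single scanner-to-scanner color difference figure of the kind reported above.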
We evaluate the use of TernausNet V2, a pre-trained VGG-16 U-Net, for segmentation of Green Fluorescent Protein (GFP) stained stem cells in gigapixel fluorescence microscopy images. Fluorescence microscopy is a difficult modality for automated stem cell segmentation because of its high noise and low contrast. As a result, segmentation algorithms for cell counting and tracking typically yield more consistent results in other imaging modalities, such as Phase Contrast (PC) microscopy, where foreground and background are easier to distinguish. Recent work has shown that U-Net based models can achieve state-of-the-art segmentation performance on GFP microscopy, although available methods still over-segment the protein features and have difficulty capturing the entirety of the cell. We investigate TernausNet, a VGG-16 based U-Net architecture pre-trained on ImageNet, and show that it improves the accuracy of GFP stem cell segmentation on gigascale NIST fluorescence microscopy images relative to a baseline U-Net model. Quantitative results show that the TernausNet V2 model better delineates the entire region of the cell and reduces over-segmentation of proteins compared with the U-Net. TernausNet achieved greater accuracy, with an ROC AUC of 0.956 and an F1-score of 0.810, versus an AUC of 0.936 and an F1-score of 0.775 for the baseline U-Net. We therefore suggest that the TernausNet V2 architecture with transfer learning improves stem cell segmentation and is able to outperform U-Net models on gigapixel GFP-stained fluorescence microscopy images.
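To make the transfer-learning idea concrete, here is a minimal PyTorch sketch of a U-Net whose encoder is the convolutional part of an ImageNet-pre-trained VGG-16, in the spirit of TernausNet. The block boundaries, decoder widths, and upsampling choices are illustrative assumptions and do not reproduce the authors' exact TernausNet V2 implementation.

```python
# A minimal sketch, assuming PyTorch and torchvision, of a U-Net with an
# ImageNet-pre-trained VGG-16 encoder (TernausNet-style transfer learning).
# Layer slices, decoder widths, and upsampling mode are illustrative choices.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class DecoderBlock(nn.Module):
    """Upsample, concatenate the matching encoder feature map, convolve."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))

class VGG16UNet(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        # ImageNet weights initialize the encoder; on older torchvision
        # use vgg16(pretrained=True) instead.
        f = vgg16(weights="IMAGENET1K_V1").features
        self.enc1 = f[:4]     # 64 channels, full resolution
        self.enc2 = f[4:9]    # 128 channels, 1/2
        self.enc3 = f[9:16]   # 256 channels, 1/4
        self.enc4 = f[16:23]  # 512 channels, 1/8
        self.enc5 = f[23:30]  # 512 channels, 1/16
        # Randomly initialized decoder, trained from scratch.
        self.dec4 = DecoderBlock(512, 512, 256)
        self.dec3 = DecoderBlock(256, 256, 128)
        self.dec2 = DecoderBlock(128, 128, 64)
        self.dec1 = DecoderBlock(64, 64, 32)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):                 # input H, W divisible by 16
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        e5 = self.enc5(e4)
        d = self.dec4(e5, e4)
        d = self.dec3(d, e3)
        d = self.dec2(d, e2)
        d = self.dec1(d, e1)
        return self.head(d)               # per-pixel foreground logits

# Hypothetical usage on a 256x256 tile cropped from a gigapixel image:
# logits = VGG16UNet()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```

The appeal of this setup is that only the decoder must be learned from scratch; the encoder starts from general-purpose ImageNet features, which matters when annotated microscopy data are scarce. Gigapixel images would be processed as tiles, with per-pixel probabilities thresholded and scored (e.g., ROC AUC, F1-score) against ground-truth masks.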