With the rapid advancement of multiplex tissue staining, computer hardware, and machine learning, computational tools are becoming indispensable for the evaluation of digital histopathology. Historically, standard histochemical staining methods such as hematoxylin and eosin, periodic acid-Schiff (PAS), and trichrome have been the gold standard for microscopic tissue evaluation by pathologists, and therefore brightfield microscopy images derived from such stains are primarily used for developing computational pathology tools. However, these histochemical stains are nonspecific in terms of the structures and cell types they highlight. In contrast, immunohistochemical stains use antibodies to detect and quantify proteins, which can be used to specifically highlight structures and cell types of interest. Traditionally, immunofluorescence-based methods can simultaneously stain only a limited number of target proteins/antigens, typically up to three channels. Fluorescence-based multiplex immunohistochemistry (mIHC) is a new technology that enables simultaneous localization and quantification of numerous proteins/antigens, allowing for the detection of a wide range of histologic structures and cell types within tissue. However, this method is limited by cost, specialized equipment, technical expertise, and time. In this study, we implemented a deep learning-based pipeline to synthetically generate in silico mIHC images from brightfield images of tissue slides stained with routinely used histochemical stains, in particular PAS. Our tool was trained using fluorescence-based mIHC images as ground truth. The proposed pipeline offers high-contrast detection of structures in brightfield-imaged tissue sections stained with standard histochemical stains.
We demonstrate the performance of our pipeline by computationally detecting multiple compartments in kidney biopsies, including cell nuclei, collagen/fibrosis, distal tubules, proximal tubules, endothelial cells, and leukocytes, from PAS-stained tissue sections. Our work can be extended to other histologic structures and tissue types and can serve as a basis for future automated annotation of histologic structures and cell types without the added cost of actually generating mIHC slides.
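The abstract does not specify the network architecture of the pipeline. As a hedged illustration only, the following minimal PyTorch sketch shows the general idea of such a brightfield-to-mIHC translation: an encoder-decoder maps a 3-channel PAS patch to k fluorescence channels and is supervised with a pixel-wise L1 loss against registered mIHC ground truth. All class and variable names are hypothetical, and real pipelines would typically use a deeper U-Net or pix2pix-style generator.

```python
import torch
import torch.nn as nn

class BrightfieldToMIHC(nn.Module):
    """Minimal encoder-decoder: 3-channel PAS patch -> k mIHC channels.
    A stand-in for the study's (unspecified) generator, for illustration only."""
    def __init__(self, out_channels=6):  # e.g. nuclei, collagen, tubules, ...
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # fluorescence intensities scaled to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = BrightfieldToMIHC(out_channels=6)
pas_patch = torch.rand(1, 3, 64, 64)   # dummy PAS brightfield patch
target = torch.rand(1, 6, 64, 64)      # dummy registered mIHC ground truth
pred = model(pas_patch)
loss = nn.functional.l1_loss(pred, target)  # pixel-wise supervision
```

In training, the L1 term would be minimized over many co-registered PAS/mIHC patch pairs; the abstract's ground-truth mIHC images play the role of `target` here.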
Generative adversarial networks (GANs) have received immense attention in the field of machine learning for their ability to learn high-dimensional, real-world data distributions. These methods make no assumptions about the distribution of the input data and can generate realistic samples from a latent vector space in an unsupervised manner. In the medical field, and in digital pathology in particular, expert annotation and large training datasets are costly to obtain, and the study of disease manifestations relies on visual examination of stained slides. In clinical practice, information from multiple stains improves the diagnostic process, but when the sampled tissue is limited, the pathologist's final diagnosis must rest on a limited set of stain styles. These limitations can be addressed by studying the usability and reliability of generative models in digital pathology. To assess this usability, we synthesize, in an unsupervised manner, high-resolution renal microanatomical structures such as the glomerulus in thin-tissue histology images using state-of-the-art architectures, namely the Deep Convolutional Generative Adversarial Network (DCGAN) and the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN). Successful generation of such structures will yield large sets of labeled data for developing supervised algorithms for disease classification and for understanding disease progression. Our study suggests that while GANs can attain the image quality of formalin-fixed, paraffin-embedded tissue, they require additional prior knowledge as input to model intrinsic microanatomical details such as the capillary wall, the urinary pole, and nuclei placement, motivating semi-supervised architectures that use these details as prior information.
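To make the DCGAN setup concrete, here is a minimal PyTorch sketch of a DCGAN-style generator mapping a latent vector to a 64×64 RGB patch via transposed convolutions with batch normalization. The layer sizes follow the standard DCGAN recipe and are illustrative, not the study's exact configuration.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """DCGAN-style generator: latent vector z -> 64x64 RGB image.
    Illustrative sizes; the study's exact configuration is not specified."""
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),     # 1 -> 4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 4 -> 8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 8 -> 16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 16 -> 32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),             # 32 -> 64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

gen = DCGANGenerator()
z = torch.randn(2, 100, 1, 1)  # batch of latent vectors
fake = gen(z)                  # untrained output; after adversarial training,
                               # these would resemble glomerulus patches
```

An ESRGAN stage would then upsample such patches to higher resolution; the untrained generator above only demonstrates the latent-to-image mapping, not the adversarial training loop against a discriminator.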
Generative models can also be used to create an artificial staining effect without physically altering the histopathological slide. To demonstrate this, we use a CycleGAN network to transform hematoxylin and eosin (H&E) stain to periodic acid-Schiff (PAS) stain and Jones methenamine silver (JMS) stain to PAS stain. In this way, GANs can be employed to translate between renal pathology stain styles when the relevant staining information is not available in the clinical setting.