



We study a problem scenario for super-resolution (SR) algorithms in the context of whole slide imaging (WSI), a popular imaging modality in digital pathology. Instead of just one pair of high- and low-resolution images, which is the setup for which SR algorithms are typically designed, we are also given multiple intermediate resolutions of the same image. The question is how best to utilize such data to make the transformation learning problem inherent to SR more tractable and to address the unique challenges that arise in this biomedical application. We propose a recurrent convolutional neural network model to generate SR images from such multi-resolution WSI datasets. Specifically, we show that having such intermediate resolutions is highly effective in making the learning problem readily trainable and in addressing the large resolution differences between the low- and high-resolution images common in WSI, even without a large training dataset. Experimental results show state-of-the-art performance on three WSI histopathology cancer datasets, across a number of metrics.
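The core idea of exploiting intermediate resolutions can be sketched as a weight-shared refinement applied once per 2x upscaling step, with each step's output supervised by the corresponding intermediate WSI level. The following minimal NumPy sketch is illustrative only (the actual model is a learned recurrent CNN; the nearest-neighbor upsampling, single shared 3x3 kernel, and function names here are assumptions, not the authors' implementation):

```python
import numpy as np

def upsample2x(img):
    # Nearest-neighbor 2x upsampling along both spatial axes.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def conv3x3(img, kernel):
    # 'Same' 3x3 convolution with zero padding (stand-in for a learned layer).
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * kernel)
    return out

def recurrent_sr(lr_image, n_steps, kernel):
    """Apply the same (weight-shared) refinement at each resolution level.
    Each intermediate output can be matched against the corresponding
    intermediate WSI resolution during training."""
    x = lr_image.astype(float)
    outputs = []
    for _ in range(n_steps):
        x = conv3x3(upsample2x(x), kernel)
        outputs.append(x)
    return outputs
```

Because the refinement weights are shared across steps, the number of parameters stays constant no matter how large the total magnification gap is, which is what makes very large low-to-high resolution differences tractable with limited training data.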

The excited-state lifetime of a fluorophore, together with its fluorescence emission spectrum, provides information that can yield valuable insights into the nature of a fluorophore and its microenvironment. However, it is difficult to obtain both channels of information in a conventional scheme, as detectors are typically configured for either spectral or lifetime detection. We present a fiber-based method to obtain spectral information from a multiphoton fluorescence lifetime imaging (FLIM) system. This is made possible by the time delay introduced in the fluorescence emission path by a dispersive optical fiber coupled to a detector operating in time-correlated single-photon counting (TCSPC) mode. This add-on spectral implementation requires only a few simple modifications to any existing FLIM system and is considerably more cost-efficient than currently available spectral detectors.
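The principle behind the dispersive-fiber approach is that chromatic dispersion makes a photon's extra arrival delay in the TCSPC histogram a function of its wavelength, so the delay axis can be remapped to a wavelength axis. A minimal sketch under a linear dispersion model, with entirely hypothetical fiber parameters (the dispersion value, fiber length, and reference wavelength below are assumptions, not values from the work):

```python
# Hypothetical fiber parameters, for illustration only.
D = 100.0        # dispersion parameter, ps/(nm*km)  (assumed)
L = 0.5          # fiber length, km                  (assumed)
LAM_REF = 450.0  # reference wavelength, nm          (assumed)

def delay_to_wavelength(delta_t_ps, dispersion=D, length_km=L, lam_ref=LAM_REF):
    """Map the extra TCSPC arrival delay (ps) of a photon, measured relative
    to photons at the reference wavelength, to an emission wavelength (nm),
    assuming the linear model: delta_t = D * L * (lam - lam_ref)."""
    return lam_ref + delta_t_ps / (dispersion * length_km)
```

With these example numbers the fiber stretches the spectrum at 50 ps per nm, so a 2500 ps extra delay would correspond to emission 50 nm above the reference wavelength; in practice the mapping would be calibrated against known emission lines rather than computed from nominal fiber specifications.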







