The combination of molecular (hyperspectral imaging) and morphological (optical coherence tomography, OCT) optical technologies supports the assessment of biological tissue both in pathological diagnosis and in treatment follow-up. Co-registration of features from both imaging modalities allows quantification of chromophore content and of the subsurface structure of tissue. This work proposes the fusion of two optical imaging technologies for the characterization of different tissue types, where the attenuation coefficient calculated from OCT imaging serves to track anomalies in the distribution of chromophores over the sample and thereby to diagnose pathological conditions. Two customized hyperspectral imaging systems working in complementary spectral ranges (VisNIR, 400 to 1000 nm, and SWIR, 1000 to 1700 nm), together with a commercial OCT system working at 1325 nm, reveal the presence of fibrosis, collagen alterations and lipid content in cardiovascular tissues such as aortic walls (to assess aneurysmal conditions) or tendinous cords (to evaluate the integrity of the valvular system), as well as in muscular diseases prone to fibrotic changes and inflammation.
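The attenuation coefficient mentioned above can be estimated per pixel from an OCT A-scan. As a minimal sketch, the depth-resolved method of Vermeer et al. (2014) is shown below on a synthetic exponentially decaying A-scan; the pixel size and decay constant are illustrative assumptions, not parameters of the systems described here.

```python
import numpy as np

def depth_resolved_attenuation(a_scan, pixel_size_mm):
    """Estimate a per-pixel attenuation coefficient (mm^-1) from a
    linear-intensity OCT A-scan, assuming nearly all light is
    attenuated within the imaging depth (Vermeer et al., 2014):
    mu[i] ~ I[i] / (2 * dz * sum_{j>i} I[j])."""
    a_scan = np.asarray(a_scan, dtype=float)
    # Signal remaining below each pixel (tail sum, excluding the pixel itself).
    tail = np.cumsum(a_scan[::-1])[::-1] - a_scan
    # Guard against division by zero at the deepest pixels.
    tail = np.clip(tail, np.finfo(float).tiny, None)
    return a_scan / (2.0 * pixel_size_mm * tail)

# Synthetic homogeneous A-scan: OCT intensity ~ exp(-2*mu*z), mu = 2 mm^-1.
dz = 0.005                      # 5 um axial pixel size (hypothetical)
z = np.arange(2000) * dz
signal = np.exp(-2.0 * 2.0 * z)
mu = depth_resolved_attenuation(signal, dz)
```

In the shallow part of the scan, where truncation effects are negligible, the recovered coefficient is close to the simulated 2 mm^-1.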
With an adequate tissue dataset, supervised classification of tissue optical properties can be achieved in SFDI images of breast cancer lumpectomies with deep convolutional networks. Nevertheless, the use of a black-box classifier in current ex vivo setups yields output diagnostic images that are bound to show misclassified areas due to inter- and intra-patient variability, which could potentially be misinterpreted in a real clinical setting. This work proposes the use of a novel architecture, the self-introspective classifier, in which part of the model is dedicated to estimating its own expected classification error. The model can be used to generate metrics of self-confidence for a given classification problem, which can then be employed to show how familiar the network is with new incoming data. A heterogeneous ensemble of four deep convolutional models with self-confidence, each sensitive to a different spatial scale of features, is tested on a cohort of 70 specimens, achieving a global leave-one-out cross-validation accuracy of up to 81%, while being able to indicate where in the output classification image the system is most confident.
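The idea of a model that estimates its own expected error can be sketched as a network with two heads: one producing class logits and an auxiliary one predicting the probability that the class head is wrong. The PyTorch snippet below is a hypothetical illustration of this pattern, not the authors' architecture; the layer sizes, loss weighting, and training target for the error head are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfIntrospectiveClassifier(nn.Module):
    """Toy two-head classifier: a shared convolutional trunk feeding a
    class head and an auxiliary head that estimates the probability
    that the class head's prediction is wrong (illustrative only)."""

    def __init__(self, in_ch: int = 3, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.class_head = nn.Linear(16, n_classes)
        self.error_head = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.features(x)
        return self.class_head(f), self.error_head(f)

def introspective_loss(logits, err_pred, targets):
    """Classification loss plus supervision of the error head with the
    observed 0/1 error of the class head on the current batch."""
    ce = F.cross_entropy(logits, targets)
    wrong = (logits.argmax(dim=1) != targets).float().unsqueeze(1)
    err_loss = F.binary_cross_entropy(err_pred, wrong)
    return ce + err_loss
```

At inference time, low predicted error marks regions of the output image where such a system would be most confident; how the confidence signal is aggregated across an ensemble of models is a separate design choice.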