7 July 2016 Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval
We explore the combination of text metadata, such as patients' age and gender, with image-based features for chest X-ray pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network, shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which compares the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with the best results obtained using the classification-based scheme. The X-ray data are visualized by embedding the high-dimensional deep learning features in a 2-D space while preserving pairwise distances, using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar both to the query image and among themselves, a characteristic not available in a traditional 1-D ranking.
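The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature values, age/gender columns, number of pathology classes, and all array names are hypothetical stand-ins, and the simulated probability vectors take the place of outputs from per-pathology SVMs trained with probability estimates.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical stand-ins: deep CNN descriptors for n_images chest X-rays,
# plus per-patient age (years) and gender (0/1). In the paper, descriptors
# come from the last layers of a pre-trained convolutional network.
n_images, n_features = 50, 128
deep_feats = rng.normal(size=(n_images, n_features))
age = rng.uniform(20, 90, size=(n_images, 1))
gender = (np.arange(n_images) % 2).reshape(-1, 1).astype(float)

# Combine the metadata with the image features; z-score each column so
# the age values do not dominate the Euclidean distance.
combined = np.hstack([deep_feats, age, gender])
combined = (combined - combined.mean(axis=0)) / combined.std(axis=0)

# Descriptor-based measure: Euclidean distance between combined descriptors.
query = combined[0]
desc_dist = np.linalg.norm(combined - query, axis=1)
ranking = np.argsort(desc_dist)  # nearest images first; index 0 is the query

# Classification-based measure: distance between per-class probability
# vectors (simulated here; in practice, e.g. SVC(probability=True) outputs).
probs = rng.random(size=(n_images, 5))
probs /= probs.sum(axis=1, keepdims=True)
cls_dist = np.abs(probs - probs[0]).sum(axis=1)  # L1 between probability vectors

# 2-D visualization: embed the high-dimensional descriptors with t-SNE,
# approximately preserving pairwise distances.
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(combined)
print(embedding.shape)  # one 2-D point per X-ray image
```

Plotting `embedding` (e.g. with a scatter plot colored by pathology) would then expose the groups of mutually similar images that a 1-D ranked list hides.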
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Yaron Anavi, Ilya Kogan, Elad Gelbart, Ofer Geva, Hayit Greenspan, "Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval", Proc. SPIE 9785, Medical Imaging 2016: Computer-Aided Diagnosis, 978510 (7 July 2016); https://doi.org/10.1117/12.2217587
