Presentation + Paper
Interpretable deep learning models for better clinician-AI communication in clinical mammography
4 April 2022
Abstract
There is increasing interest in using deep learning and computer vision to help guide clinical decisions, such as whether to order a biopsy based on a mammogram. Existing networks are typically black boxes, unable to explain how they make their predictions. We present an interpretable deep-learning network that explains its predictions in terms of the BI-RADS features mass shape and mass margin. Our model first predicts mass margin and mass shape, then uses the logits from those interpretable models to predict malignancy, also with an interpretable model. The interpretable mass-margin model explains its predictions using a prototypical-parts model. The interpretable mass-shape model predicts a segmentation, fits an ellipse, and then determines shape from the goodness of fit and eccentricity of the fitted ellipse. While including the mass-shape logits in the malignancy prediction model did not improve performance, we present this technique as part of a framework for better clinician-AI communication.
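The abstract's mass-shape stage (predict a segmentation, fit an ellipse, classify shape from the fit's goodness and eccentricity) can be sketched in a few lines. The following is a minimal illustration, not the authors' released code: the function name, the moment-based (PCA-style) ellipse fit, the IoU goodness-of-fit measure, and both thresholds are assumptions chosen for the example.

```python
import numpy as np

def shape_from_mask(mask, ecc_threshold=0.6, fit_threshold=0.8):
    """Classify a binary lesion mask as 'round', 'oval', or 'irregular'.

    Fits an ellipse to the mask via its second moments, then decides:
    poor ellipse fit (low IoU) -> irregular; otherwise round vs. oval
    by eccentricity. Thresholds are illustrative, not the paper's values.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues ascending
    # Semi-axes of the ellipse matching the mask's second moments:
    # a uniform ellipse with semi-axis a has variance a^2/4 along it.
    b, a = 2.0 * np.sqrt(evals)                 # minor, major semi-axis
    ecc = np.sqrt(1.0 - (b / a) ** 2) if a > 0 else 0.0
    # Goodness of fit: IoU between the mask and the fitted ellipse.
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    rel = np.stack([xx - center[0], yy - center[1]], axis=-1)
    u = rel @ evecs[:, 1]                       # coordinate along major axis
    v = rel @ evecs[:, 0]                       # coordinate along minor axis
    ellipse = (u / a) ** 2 + (v / b) ** 2 <= 1.0
    iou = np.logical_and(mask, ellipse).sum() / np.logical_or(mask, ellipse).sum()
    if iou < fit_threshold:
        return "irregular"
    return "round" if ecc < ecc_threshold else "oval"
```

A round mass yields a low-eccentricity, well-fitting ellipse; an elongated mass fits well but with high eccentricity; a spiculated or ring-like mass fits poorly and falls through to "irregular".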
Conference Presentation
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Alina Jade Barnett, Vaibhav Sharma, Neel Gajjar, Jerry Fang, Fides Schwartz M.D., Chaofan Chen, Joseph Y. Lo, and Cynthia Rudin "Interpretable deep learning models for better clinician-AI communication in clinical mammography", Proc. SPIE 12035, Medical Imaging 2022: Image Perception, Observer Performance, and Technology Assessment, 1203507 (4 April 2022); https://doi.org/10.1117/12.2612372
KEYWORDS: Data modeling, Mammography, Image segmentation, Breast, Biopsy, Radiology, Computer science