Eye-tracking data from eight radiologists of varying experience in reading mammograms, collected while they reviewed 120 two-view digital mammography cases (59 containing cancer), were used to train the model, which was pre-trained on the ImageNet dataset for transfer learning. Areas of the mammogram that received direct (foveally fixated), indirect (peripherally fixated), or no (never fixated) visual attention were extracted from the radiologists’ visual search maps, recorded with a head-mounted eye-tracking device. These areas, together with the radiologists’ assessments of suspected malignancy (including their confidence in each assessment), were used to model: 1) the radiologists’ decision; 2) the radiologists’ confidence in that decision; and 3) the level of attention (i.e. foveal, peripheral, or none) received by an area of the mammogram. Our results show high accuracy and low misclassification rates in modelling these behaviours.
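The transfer-learning setup described above can be illustrated with a minimal sketch: a frozen "pretrained" feature extractor (here a stand-in random projection; in the paper, a deep network pre-trained on ImageNet) whose features feed a newly trained classification head for the three attention levels. All data, dimensions, and the two-layer structure below are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an ImageNet-pretrained backbone: a frozen
# random projection with ReLU, mapping a flattened patch to features.
def frozen_backbone(patches, weights):
    return np.maximum(patches @ weights, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic "mammogram patch" data with 3 attention classes:
# 0 = foveal, 1 = peripheral, 2 = never fixated (all fabricated here).
n, d, k, classes = 300, 64, 32, 3
W_backbone = rng.normal(size=(d, k))       # frozen (pretrained) weights
X = rng.normal(size=(n, d))
y = rng.integers(0, classes, size=n)
X += np.eye(classes)[y] @ (rng.normal(size=(classes, d)) * 2)  # class signal

feats = frozen_backbone(X, W_backbone)

# Transfer learning: the backbone stays frozen; only the softmax head
# is trained on the new (eye-tracking-derived) labels.
W_head = np.zeros((k, classes))
Y = np.eye(classes)[y]
losses = []
for _ in range(200):
    p = softmax(feats @ W_head)
    losses.append(-np.mean(np.log(p[np.arange(n), y] + 1e-12)))
    W_head -= 0.1 * (feats.T @ (p - Y) / n)   # gradient step on head only

pred = softmax(feats @ W_head).argmax(axis=1)
acc = (pred == y).mean()
```

The same head-on-frozen-backbone pattern would be repeated for the other two targets (the radiologists' decision and their confidence), swapping only the label vector.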
Suneeta Mall, Patrick C. Brennan, Claudia Mello-Thoms, "A deep (learning) dive into visual search behaviour of breast radiologists," Proc. SPIE 10577, Medical Imaging 2018: Image Perception, Observer Performance, and Technology Assessment, 1057708 (7 March 2018).