Experienced radiologists are in short supply, and are sometimes called upon to read many images in a short amount of
time. This leaves them with limited time per image, and the resulting fatigue and stress can be sources of error, as radiologists overlook subtle abnormalities that they might otherwise catch. Another factor in error rates
is called satisfaction of search, where a radiologist misses a second (typically subtle) abnormality after finding the first.
These types of errors are due primarily to a lack of attention to an important region of the image during the search. In
this paper we discuss the use of eye tracker technology, in combination with image analysis and machine learning
techniques, to learn what types of features catch the eyes of experienced radiologists when reading chest x-rays for
diagnostic purposes, and to then use that information to produce saliency maps that predict what regions of each image
might be most interesting to radiologists. We found that, of 13 popular feature types that are widely extracted to
characterize images, 4 are particularly useful for this task: (1) Localized Edge Orientation Histograms, (2) Haar
Wavelets, (3) Gabor Filters, and (4) Steerable Filters.
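To illustrate one of these feature types, the sketch below computes per-pixel Gabor responses over a small bank of orientations and keeps the maximum, a common way to build an orientation-energy map. This is not the paper's actual pipeline; the kernel size, wavelength, and orientations are illustrative assumptions, and only NumPy is used.

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real part of a Gabor filter: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so the carrier oscillates along orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_energy(image, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Per-pixel maximum absolute filter response across orientations.

    Convolution is done in the frequency domain via FFT for speed.
    """
    responses = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        # zero-pad the kernel to the image size before the FFT product
        padded = np.zeros_like(image, dtype=float)
        padded[: k.shape[0], : k.shape[1]] = k
        resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))
        responses.append(np.abs(resp))
    return np.max(responses, axis=0)

# toy image: vertical stripes whose period matches the filter wavelength,
# so the horizontally oriented carrier (theta = 0) responds strongly
img = np.cos(2.0 * np.pi * np.arange(64) / 6.0)[None, :] * np.ones((64, 1))
energy = gabor_energy(img)
print(energy.shape)  # (64, 64)
```

A full feature vector for saliency prediction would typically stack such maps at several scales and wavelengths alongside the other selected features.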