Chest X-rays are among the most common modalities in medical imaging. Technical flaws in these images, such as over- or under-exposure or incorrect patient positioning, can result in a decision to reject the image and repeat the scan. We propose an automatic method to detect images that are not suitable for diagnostic study. If deployed at the point of image acquisition, such a system can warn the technician so that the repeat image is acquired immediately, without the need to bring the patient back to the scanner. We use a deep neural network, based on the DenseNet121 architecture, trained on a dataset of 3487 images labeled by two experienced radiologists to classify images as diagnostic or non-diagnostic. The trained network achieves an area under the receiver operating characteristic curve (AUC) of 0.93. By detecting X-rays with diagnostic quality issues at the point of acquisition, this technology could potentially provide significant cost savings for hospitals.
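The reported metric can be illustrated with a minimal, self-contained sketch: the AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). The function name and toy scores below are illustrative, not from the paper.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic.

    scores: model scores for the positive (e.g. non-diagnostic) class.
    labels: 1 for positive, 0 for negative ground truth.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Fraction of (positive, negative) pairs ranked correctly;
    # a tie counts as half a correct ranking.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, a classifier that ranks every positive above every negative scores 1.0, while random ordering tends toward 0.5.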
Chest X-rays (CXRs) are among the most commonly used medical imaging modalities. They are mostly ordered as a screening test, and an indication of disease typically results in follow-up examinations. Because the goal is usually to rule out chest abnormalities, the requesting clinicians are often interested simply in whether a CXR is normal or not. A machine learning algorithm that can accurately screen out even a small proportion of the truly normal exams among all requested CXRs would substantially reduce the workload for radiologists. In this work, we report a deep neural network trained to classify CXRs with the goal of identifying a large number of normal (disease-free) images without risking the discharge of sick patients. We use an ImageNet-pretrained Inception-ResNet-v2 model to extract image features, which are then used to train a classifier on CXRs labeled by expert radiologists. The probability threshold for classification is optimized for 100% precision on the normal class, ensuring that no sick patients are released. At this threshold we report an average recall of 50%, meaning that the proposed solution has the potential to cut in half the number of disease-free CXRs examined by radiologists without risking the discharge of sick patients.
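The thresholding step can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: choose the lowest probability threshold at which every exam called "normal" is truly normal (100% precision), since that threshold maximizes the recall of the normal class at the required operating point.

```python
def threshold_for_full_precision(probs, labels):
    """Lowest threshold t such that every exam with P(normal) >= t is
    truly normal (labels: 1 = normal, 0 = abnormal).
    Returns None if no threshold achieves 100% precision."""
    for t in sorted(set(probs)):
        called_normal = [l for p, l in zip(probs, labels) if p >= t]
        if called_normal and all(l == 1 for l in called_normal):
            return t  # lowest qualifying t maximizes recall
    return None

def normal_recall(probs, labels, t):
    """Fraction of truly normal exams that clear the threshold."""
    normals = [p for p, l in zip(probs, labels) if l == 1]
    return sum(p >= t for p in normals) / len(normals)
```

In practice the threshold would be selected on a validation set; the recall at that threshold is the proportion of normal exams the system can safely screen out.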
Age prediction based on the appearance of different anatomies in medical images has been explored clinically for many decades. In this paper, we used deep learning to predict a person's age from chest X-rays (CXRs). Specifically, we trained a CNN as a regressor on a large publicly available dataset. Moreover, for interpretability, we explored activation maps to identify which areas of a CXR image are important to the network when predicting a patient's age. Overall, among correctly predicted CXRs, we found that areas near the clavicles, shoulders, spine, and mediastinum were most activated for age prediction, as one would expect biologically. As the CXR is the most commonly requested imaging exam, a potential use case for age estimation is preventative counselling on a patient's health status relative to the age-expected average, particularly when there is a large discrepancy between the predicted age and the real patient age.
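The counselling use case hinges on the gap between predicted and chronological age. A minimal sketch of that evaluation step, with illustrative names and a hypothetical 5-year flagging threshold (the abstract does not specify one):

```python
def mean_absolute_error(predicted, actual):
    """Average absolute age-prediction error in years."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def flag_large_gaps(predicted, actual, threshold=5.0):
    """Flag patients whose predicted (biological) age differs from their
    chronological age by more than `threshold` years (assumed cutoff)."""
    return [abs(p - a) > threshold for p, a in zip(predicted, actual)]
```

Flagged patients are those for whom the model's age estimate deviates markedly from their real age, the group the abstract suggests might benefit from preventative counselling.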