Special Section Guest Editorial: Artificial Intelligence in Medical Imaging
Abstract
This editorial provides an overview of the articles in the special section.

Artificial intelligence (AI) is big news. Healthcare, and in particular radiology, is perhaps more on the bandwagon than other fields, given the intuitive connections between images and convolutional neural networks.1 An impressive aspect of this phenomenon is the continuing proliferation of research studies on radiology applications and the steadily increasing number of methods already having an impact on patient care. This Special Section on Artificial Intelligence in Medical Imaging was designed to provide readers with examples of the medical imaging modalities, images, and tasks that AI is being applied to, as well as some of the approaches to AI and underlying methodologies. The section contains six articles that exemplify the diversity in AI approaches and hopefully will provide readers with an idea of the possibilities, limitations, and challenges facing this exciting and growing field.

The first article in the series, “Breast lesion classification based on dynamic contrast-enhanced magnetic resonance images sequences with long short-term memory networks,” by Antropova et al., presents a technique for breast lesion classification based on four-dimensional dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The researchers used recurrent neural networks in combination with a pretrained convolutional neural network (CNN) to capture two-dimensional image features as well as temporal enhancement patterns. They found that their method significantly outperformed a standard fine-tuning approach while capturing clinically useful information.
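
For readers unfamiliar with this architecture, the sketch below illustrates the general pattern of combining a per-timepoint CNN feature extractor with a long short-term memory (LSTM) network. It is a minimal illustration only: the small inline encoder stands in for the pretrained network the authors fine-tuned, and all layer sizes are arbitrary assumptions.

```python
# Minimal sketch (PyTorch): per-timepoint CNN features fed to an LSTM.
# Not the authors' code; the encoder below is a small stand-in for the
# pretrained CNN they used, and all sizes are placeholder assumptions.
import torch
import torch.nn as nn

class CNNFeatureLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=32, num_classes=2):
        super().__init__()
        # Stand-in 2D encoder, applied independently to each timepoint.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # The LSTM aggregates the feature sequence over contrast timepoints.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, timepoints, 1, H, W) -- one 2D ROI per DCE timepoint
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1))  # (b*t, feat_dim)
        feats = feats.view(b, t, -1)           # (b, t, feat_dim)
        _, (h_n, _) = self.lstm(feats)         # final hidden state
        return self.classifier(h_n[-1])        # (b, num_classes)

# Example: batch of 4 lesions, 5 post-contrast timepoints, 64x64 ROIs.
logits = CNNFeatureLSTM()(torch.randn(4, 5, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```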

Ghavami et al., in “Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images,” applied their technique in a very different area: tumor-targeted prostate cancer biopsy and treatment. Clinically, this requires image guidance systems that register transrectal ultrasound (TRUS) and MRI scans of the prostate. The TRUS component, however, requires significant manual input that can be quite variable, so the authors sought to automate it with a CNN, which they tested on a cohort of 109 patients. Using a variety of assessment metrics, they found, interestingly, that incorporating neighboring slices did not improve segmentation performance overall, but that their up-sampling shortcuts reduced the overall training time from 253 to 161 min.
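
“Incorporating neighboring slices” commonly means stacking each two-dimensional slice with its through-plane neighbors as input channels, a so-called 2.5D input. The sketch below shows one generic way to build such a stack; it is an illustration of the concept under assumed conventions, not the authors’ pipeline.

```python
# Minimal sketch (NumPy): build a "2.5D" input by stacking each slice
# with its neighbors as channels, a common way to give a 2D network
# through-plane context. Generic illustration, not the authors' code.
import numpy as np

def stack_neighbors(volume, num_neighbors=1):
    """volume: (slices, H, W) -> (slices, 2*num_neighbors+1, H, W)."""
    # Repeat edge slices so every slice has a full neighborhood.
    padded = np.pad(volume,
                    ((num_neighbors, num_neighbors), (0, 0), (0, 0)),
                    mode="edge")
    offsets = range(2 * num_neighbors + 1)
    return np.stack([padded[i:i + volume.shape[0]] for i in offsets],
                    axis=1)

trus = np.random.rand(40, 128, 128)         # toy ultrasound volume
x = stack_neighbors(trus, num_neighbors=1)  # each slice + 1 neighbor per side
print(x.shape)  # (40, 3, 128, 128)
```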

Most of the studies in this series applied their algorithmic techniques to images, but Holbrook et al., in “Overcoming detector limitations of x-ray photon counting for preclinical microcomputed tomography,” trained their algorithm to compensate for detector limitations. Spectral CT using photon counting detectors (PCDs) is an exciting technology that provides tissue composition measurements by exploiting the energy dependence of x-ray attenuation in different materials, and it could eventually be used in theranostic applications. PCDs are especially suited for K-edge imaging, revealing the spatial distribution of select imaging probes through quantitative material decomposition. The study used a prototype spectral micro-CT system with a cadmium zinc telluride (CZT)-based PCD and a sophisticated iterative algorithm to reconstruct phantom and ex vivo mouse data. A CNN was implemented to achieve preclinically relevant spatial resolution, and the authors were able to successfully recover a high-resolution estimate of the spectral contrast suitable for material decomposition.

The investigation by Hänsch et al. looked at a challenging task for radiologists: “Evaluation of deep learning methods for parotid gland segmentation from CT images.” They used two-dimensional, two-dimensional ensemble, and three-dimensional U-Nets for this segmentation task and found a mean Dice similarity coefficient of 0.83 for all three models compared with ground truth. To reduce false positives, a patch-based approach to class balancing was implemented. The authors were able to generalize their results to an independent dataset and also evaluated performance after training with training sets of different sizes, finding no significant increase in the Dice coefficient beyond 250 training cases.
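
The Dice similarity coefficient reported here is the standard overlap metric between a predicted segmentation A and a ground-truth mask B, defined as 2|A ∩ B| / (|A| + |B|). A minimal sketch of its computation:

```python
# Minimal sketch (NumPy): Dice similarity coefficient between a binary
# predicted segmentation and a binary ground-truth mask.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice = 2|A intersect B| / (|A| + |B|); 1.0 is perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64), dtype=bool);  pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[15:45, 15:45] = True
print(round(dice_coefficient(pred, truth), 3))  # ~0.694
```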

Feng et al. summarize their experience with a “Fully connected neural network for virtual monochromatic imaging in spectral computed tomography.” Spectral computed tomography (SCT) enables multi-energy material decomposition for material discrimination and quantitative image reconstruction, but it is difficult to obtain the precise system spectral models on which decomposition performance often depends; such models are challenging to develop and calibrate because they involve many detector physical effects, such as charge sharing, pulse pileup, and characteristic x-ray escape. Their study uses a spectral information extraction method to create virtual monochromatic attenuation maps with a simple fully connected neural network, without requiring an explicit spectral model. They found that it provided good performance for denoising and artifact suppression.
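
The sketch below illustrates the general idea of a per-pixel fully connected mapping from multi-energy-bin measurements to a virtual monochromatic attenuation value. The bin count, layer sizes, and training setup are placeholder assumptions, not the configuration reported by Feng et al.

```python
# Minimal sketch (PyTorch): a small fully connected network mapping
# per-pixel multi-energy-bin measurements to a virtual monochromatic
# attenuation value. All sizes and the training setup are placeholder
# assumptions, not the architecture used by Feng et al.
import torch
import torch.nn as nn

num_bins = 5  # photon-counting energy bins (assumed)

model = nn.Sequential(
    nn.Linear(num_bins, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),  # attenuation at the chosen monochromatic energy
)

# Toy supervised setup: energy-bin values -> known monochromatic target.
x = torch.rand(1024, num_bins)  # per-pixel bin measurements
y = torch.rand(1024, 1)         # reference monochromatic values
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):         # brief illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(float(loss))
```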

The final research paper, by Yap et al., “Breast ultrasound lesions recognition: end-to-end deep learning approaches,” was the second breast application, this time with ultrasound rather than MRI. This study used end-to-end deep learning approaches with fully convolutional networks (FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s) for semantic segmentation of breast lesions, evaluated on two datasets containing benign and malignant lesions. The methods performed better on benign lesions than on malignant ones, which is not surprising given the difficulty radiologists have with this task and the tendency for benign lesions to be more readily classified correctly.
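
The FCN variants named here differ mainly in how coarsely the final score map is upsampled and how many encoder “skip” score maps are fused along the way. The sketch below reduces that skip-fusion idea to a toy network; it is a schematic illustration under assumed sizes, not any of the models evaluated in the paper.

```python
# Minimal sketch (PyTorch): skip-fusion in the spirit of FCN-16s/8s for
# binary semantic segmentation. A schematic reduction of the idea, not
# the FCN-AlexNet/32s/16s/8s models evaluated by Yap et al.
import torch
import torch.nn as nn

class TinySkipFCN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        def block(cin, cout):  # conv + pool: halves spatial size
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(), nn.MaxPool2d(2))
        self.pool1 = block(1, 16)   # 1/2 resolution
        self.pool2 = block(16, 32)  # 1/4 resolution
        self.pool3 = block(32, 64)  # 1/8 resolution
        # 1x1 "score" heads at two scales, as in FCN skip connections.
        self.score3 = nn.Conv2d(64, num_classes, 1)
        self.score2 = nn.Conv2d(32, num_classes, 1)
        self.up2x = nn.ConvTranspose2d(num_classes, num_classes, 2, stride=2)
        self.up4x = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=4)

    def forward(self, x):
        p1 = self.pool1(x)
        p2 = self.pool2(p1)
        p3 = self.pool3(p2)
        # Fuse deep, coarse scores with shallower, finer scores.
        fused = self.score2(p2) + self.up2x(self.score3(p3))
        return self.up4x(fused)  # back to input resolution

logits = TinySkipFCN()(torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 2, 128, 128])
```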

It was interesting to review these six articles and appreciate the variety of approaches being proposed and tested across so many different image types and tasks. It is difficult to predict which ones will actually end up being implemented clinically, as none of them has yet determined how it would be integrated into the clinical workflow or, more importantly, whether and how it would actually affect radiologists’ diagnostic accuracy and efficiency, and thereby patient outcomes. We look forward to future articles on these and related topics, and we hope this short series of exemplary articles will inspire others to develop and validate even more tools for other medical imaging application areas.

Reference

1. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).
© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE)
Elizabeth A. Krupinski, Paul E. Kinahan, and Patrick La Riviere "Special Section Guest Editorial: Artificial Intelligence in Medical Imaging," Journal of Medical Imaging 6(1), 011001 (22 December 2018). https://doi.org/10.1117/1.JMI.6.1.011001
Published: 22 December 2018
Keywords: Artificial intelligence, Medical imaging, Image segmentation, Breast, Radiology, Convolutional neural networks, Image classification
