7 March 2018 Feasibility study of deep convolutional generative adversarial networks to generate mammography images
We conducted a feasibility study to generate mammography images using a deep convolutional generative adversarial network (DCGAN), which directly produces realistic images without a 3-D model or any complex rendering algorithm, such as ray tracing. We trained the DCGAN with 2-D breast mammography images generated from anatomical noise. The generated X-ray mammography images were of reasonable quality and retained visual patterns similar to the training images; in particular, the generated images share the distinctive structure of the training images. For quantitative evaluation, we used the mean and variance of the beta values of the generated images and observed that they closely match those of the training images. Although the overall distribution of the generated images agrees well with that of the training images, the DCGAN has several limitations. First, checkerboard-like artifacts appear in the generated images, a well-known issue of deconvolution algorithms. Moreover, GAN training is often unstable and requires manual fine-tuning. To overcome these limitations, we plan to extend our idea to a conditional GAN approach to improve training stability, and to employ an autoencoder to handle the artifacts. To validate our idea on real data, we will train the network on clinical images. We believe that our framework can be easily extended to generate other medical images.
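The checkerboard artifacts mentioned in the abstract arise from uneven kernel overlap in strided transposed convolutions (deconvolutions): when the kernel size is not divisible by the stride, some output pixels receive contributions from more kernel taps than their neighbors. The following NumPy sketch is illustrative only (it is not the authors' network); it counts per-pixel kernel coverage for a hypothetical transposed convolution with kernel 3 and stride 2 to expose the alternating overlap pattern.

```python
import numpy as np

def transposed_conv2d_coverage(in_size, kernel, stride):
    """Count how many kernel taps contribute to each output pixel
    of a stride-`stride` transposed convolution (no padding)."""
    out_size = (in_size - 1) * stride + kernel
    coverage = np.zeros((out_size, out_size))
    for i in range(in_size):
        for j in range(in_size):
            # Each input pixel "paints" a kernel-sized patch of the output.
            coverage[i * stride:i * stride + kernel,
                     j * stride:j * stride + kernel] += 1.0
    return coverage

cov = transposed_conv2d_coverage(in_size=4, kernel=3, stride=2)
# Because 3 % 2 != 0, interior coverage alternates (4, 2, 1, ...),
# which is exactly the checkerboard pattern seen in generated images.
```

A kernel size divisible by the stride (e.g. kernel 4, stride 2), or upsampling followed by an ordinary convolution, yields uniform interior coverage and is a common way to suppress these artifacts.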
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Gihun Kim, Hyunjung Shim, and Jongduk Baek "Feasibility study of deep convolutional generative adversarial networks to generate mammography images", Proc. SPIE 10577, Medical Imaging 2018: Image Perception, Observer Performance, and Technology Assessment, 105771C (7 March 2018);
