In this work, we proposed a non-linear observer model based on a convolutional neural network (CNN) and compared its performance with that of the Laguerre-Gauss channelized Hotelling observer (LG-CHO) for a four-alternative forced-choice detection task using simulated breast CT images. Each convolutional layer in our network contained 3×3 filters followed by a leaky-ReLU activation function, but, unlike a typical convolutional neural network, no pooling layers were used and no zero padding was applied to the output of each convolutional layer. The network was trained using the Adam optimizer, with two design parameters (i.e., network depth and width) to be tuned. The optimal values of these design parameters were found by brute-force search, spanning up to 30 layers in depth and 128 channels in width. To generate the training and validation datasets, we produced anatomical noise images using a power-law spectrum of breast anatomy. A 50% volume glandular fraction was assumed, and a 1 mm diameter signal was used for the detection task. The generated images were reconstructed using filtered back-projection with a fan-beam CT geometry, and ramp and Hanning filters were used as apodization filters to generate different noise structures. To train the network, 125,000 signal-present images and 375,000 signal-absent images were reconstructed for each apodization filter. To measure detectability, we used percent correct computed on 4,000 test images generated independently of the training and validation datasets. Our results show that the proposed network, composed of 30 layers and 64 channels, provides higher detectability than LG-CHO. We believe this improved detectability is achieved by the presence of the non-linear module (i.e., leaky-ReLU) in the network.
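The anatomical noise generation described above can be sketched by shaping white Gaussian noise in the frequency domain so that its power spectrum follows a power law. This is a minimal illustration, not the authors' exact pipeline: the exponent beta = 3 (a value commonly reported for breast anatomical backgrounds), the 256×256 image size, and the normalization are all assumptions made here for demonstration.

```python
import numpy as np

def power_law_noise(n=256, beta=3.0, seed=0):
    """Generate an anatomical-noise image whose power spectrum follows
    P(f) ~ 1/f^beta, by filtering white Gaussian noise in frequency space.
    beta = 3 and the normalization are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Radial spatial-frequency grid (cycles/pixel), DC at the corner.
    fx = np.fft.fftfreq(n)
    f = np.sqrt(fx[None, :] ** 2 + fx[:, None] ** 2)
    f[0, 0] = f[0, 1]  # avoid division by zero at DC
    # Amplitude filter is the square root of the target power spectrum.
    amplitude = f ** (-beta / 2.0)
    white = rng.standard_normal((n, n))
    shaped = np.fft.ifft2(np.fft.fft2(white) * amplitude).real
    # Normalize to zero mean and unit variance.
    return (shaped - shaped.mean()) / shaped.std()

img = power_law_noise()
```

In the study summarized above, such noise backgrounds (with or without an added 1 mm signal) would then be forward-projected and reconstructed with filtered back-projection; that reconstruction step is omitted here.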
We conducted a feasibility study to generate mammography images using a deep convolutional generative adversarial network (DCGAN), which directly produces realistic images without a 3-D model or any complex rendering algorithm such as ray tracing. We trained the DCGAN with 2-D breast mammography images generated from anatomical noise. The generated X-ray mammography images were successful in that they preserve reasonable image quality and retain visual patterns similar to those of the training images. In particular, the generated images share the distinctive structure of the training images. For quantitative evaluation, we compared the mean and variance of the beta values of the generated images and observed that they are very similar to those of the training images. Although the overall distribution of the generated images matches that of the training images well, the DCGAN has several limitations. First, checkerboard-like artifacts are found in the generated images, a well-known issue of the deconvolution (transposed-convolution) operation. Moreover, GAN training is often unstable and requires manual fine-tuning. To overcome these limitations, we plan to extend our idea to a conditional GAN approach to improve training stability, and to employ an auto-encoder to handle the artifacts. To validate our idea on real data, we will train the network with clinical images. We believe that our framework can easily be extended to generate other medical images.
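The quantitative evaluation above compares the power-law exponent (beta) of generated and training images. One common way to estimate beta is to fit a line to the radially averaged power spectrum on log-log axes; the sketch below assumes that approach (a least-squares fit over integer frequency bins, skipping DC), since the abstract does not specify the exact fitting procedure.

```python
import numpy as np

def estimate_beta(img):
    """Estimate the power-law exponent beta of a square image by fitting
    log P(f) = const - beta * log f to its radially averaged power spectrum.
    The binning and fit range are illustrative choices."""
    n = img.shape[0]
    # Power spectrum with DC shifted to the center of the array.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    # Integer radial-frequency bin for each pixel, measured from DC.
    y, x = np.indices((n, n)) - n // 2
    r = np.sqrt(x ** 2 + y ** 2).astype(int)
    # Radially average the power spectrum over the integer bins.
    radial = (np.bincount(r.ravel(), weights=spectrum.ravel())
              / np.bincount(r.ravel()))
    freqs = np.arange(1, n // 2)  # skip DC, stop near Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return -slope  # the log-log slope is -beta
```

Applying this estimator per image, and then taking the mean and variance of the resulting beta values over each image set, gives the summary statistics compared in the evaluation above.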