The objective optimization of image-derived statistics, including the test statistic of an observer for specific decision tasks, requires a characterization of all sources of variability in the measured data. To accomplish this, it is necessary to establish a stochastic object model (SOM) that describes the variability within the group of objects to be imaged. For the SOM to be realistic, it is desirable to establish it by use of experimental image data rather than in a non-data-driven manner. Deep learning methods that employ generative adversarial networks (GANs) hold promise for learning SOMs that can generate images matching the distribution of the training image data. However, because experimental data recorded by an imaging system represent noisy and indirect measurements of the object, conventional GANs cannot be directly employed for this task. Recently, an augmented GAN architecture named AmbientGAN was proposed that can characterize a distribution of images from noisy and indirect measurements of them together with knowledge of the measurement operator. In this work, for the first time, we investigate AmbientGANs for establishing SOMs by use of noisy imaging measurements. A canonical tomographic imaging system described by a two-dimensional Radon transform model is investigated. The AmbientGAN is evaluated by performing binary signal detection tasks that employ the generated images and the true images.
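The key structural idea in AmbientGAN is that the discriminator never sees generated objects directly: each generated object is first passed through the known measurement operator (here, a tomographic forward model) with simulated noise, and the discriminator compares these simulated measurements against the experimentally measured data. The sketch below illustrates that forward path only, in NumPy. It is a toy illustration, not the authors' implementation: the generator is a placeholder function rather than a trained network, and the 2-D Radon transform is replaced by an assumed stand-in (line integrals along rows and columns, i.e., two projection angles) purely to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(obj, noise_std=0.1):
    """Toy stand-in for the measurement operator H: line integrals of the
    2-D object along rows and columns (two projection angles of a Radon
    transform), plus i.i.d. Gaussian noise modeling the measurement
    process. A real study would use a full Radon transform over many angles."""
    sinogram = np.stack([obj.sum(axis=0), obj.sum(axis=1)])  # shape (2, n)
    return sinogram + noise_std * rng.normal(size=sinogram.shape)

def generator(z, n=8):
    """Placeholder generator: maps a latent vector z to an n-by-n 'object'.
    In AmbientGAN this would be a deep network trained adversarially."""
    return np.outer(np.tanh(z[:n]), np.tanh(z[n:2 * n]))

# AmbientGAN's modification to the GAN training loop: simulate a
# measurement of the generated object before it reaches the discriminator.
z = rng.normal(size=16)
fake_object = generator(z)               # sample from the (learned) SOM
fake_measurement = measure(fake_object)  # g = H(f) + noise, fed to D
```

In a full training loop, the discriminator's loss would be computed on `fake_measurement` versus real measured data, so gradients flow back through the known operator into the generator, letting the generator learn the object distribution despite never observing noiseless objects.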
Weimin Zhou, Sayantan Bhadra, Frank Brooks, and Mark A. Anastasio, "Learning stochastic object model from noisy imaging measurements using AmbientGANs," Proc. SPIE 10952, Medical Imaging 2019: Image Perception, Observer Performance, and Technology Assessment, 109520M (Presented at SPIE Medical Imaging: February 21, 2019; Published: 4 March 2019); https://doi.org/10.1117/12.2512633.