Nghi C. D. Truong,1 Chandan Ganesh Bangalore Yogananda,1 Benjamin C. Wagner,1 James M. Holcomb,1 Divya Reddy,1 Niloufar Saadat,1 Kimmo J. Hatanpaa,1 Toral R. Patel,1 Baowei Fei,1,2 Matthew D. Lee,3 Rajan Jain,3 Richard J. Bruce,4 Marco C. Pinho,1 Ananth J. Madhuranthakam,1 Joseph A. Maldjian1
1The Univ. of Texas Southwestern Medical Ctr. (United States) 2The Univ. of Texas at Dallas (United States) 3New York Univ. Grossman School of Medicine (United States) 4Univ. of Wisconsin-Madison (United States)
Data scarcity and data imbalance are two major challenges in training deep learning models on medical images such as brain tumor MRI. Recent advances in generative artificial intelligence have opened new possibilities for synthetically generating MRI data, including brain tumor scans, offering a way to mitigate data scarcity and expand the available training data. This work adapts 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI data conditioned on a tumor mask. The framework comprises two components: a 3D autoencoder for perceptual compression and a conditional 3D Diffusion Probabilistic Model (DPM) that generates high-quality, diverse multi-contrast brain tumor MRI samples guided by the tumor mask. Unlike existing works that generate either 2D multi-contrast or 3D single-contrast samples, our models generate 3D multi-contrast MRI samples. We also integrated a conditioning module within the UNet backbone of the DPM to capture the semantic, class-dependent data distribution imposed by the provided tumor mask, so that samples are generated for a specific brain tumor mask. We trained our models on two brain tumor datasets: the public Cancer Genome Atlas (TCGA) dataset and an internal dataset from the University of Texas Southwestern Medical Center (UTSW). The models generated high-quality 3D multi-contrast brain tumor MRI samples whose tumor locations align with the input condition mask. The quality of the generated images was evaluated using the Fréchet Inception Distance (FID) score. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models trained on brain tumor MRI data.
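The two-stage pipeline described above — perceptual compression into a latent space, followed by mask-conditioned diffusion — can be sketched at a high level. The snippet below is a minimal illustration, not the authors' implementation: it uses NumPy, a stand-in average-pooling "encoder" in place of the trained 3D autoencoder, and a standard DDPM forward-noising schedule, with the tumor mask downsampled and concatenated as an extra input channel, which is a common way to feed spatial conditioning to a diffusion UNet.

```python
import numpy as np

def encode(volume, factor=4):
    """Toy perceptual compression: average-pool each spatial axis by `factor`.
    volume: (C, D, H, W), e.g. C=4 MRI contrasts for a 3D brain volume."""
    C, D, H, W = volume.shape
    v = volume[:, :D - D % factor, :H - H % factor, :W - W % factor]
    v = v.reshape(C, D // factor, factor, H // factor, factor, W // factor, factor)
    return v.mean(axis=(2, 4, 6))

def forward_diffuse(x0, t, betas, rng):
    """Standard DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas_bar = np.cumprod(1.0 - betas)
    a = alphas_bar[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps, eps

rng = np.random.default_rng(0)
volume = rng.standard_normal((4, 64, 64, 64))                    # 4 contrasts
mask = (rng.random((1, 64, 64, 64)) > 0.99).astype(np.float64)   # toy tumor mask

z = encode(volume)        # latent representation: (4, 16, 16, 16)
z_mask = encode(mask)     # mask brought to the latent resolution

betas = np.linspace(1e-4, 0.02, 1000)    # linear noise schedule
z_t, eps = forward_diffuse(z, t=500, betas=betas, rng=rng)

# Conditioning: concatenate the mask as an extra channel of the UNet input;
# the denoiser is trained to predict eps from (z_t, mask, t).
unet_input = np.concatenate([z_t, z_mask], axis=0)   # (5, 16, 16, 16)
print(unet_input.shape)
```

At sampling time the same conditioning channel is supplied at every reverse-diffusion step, so the denoised latent (and hence the decoded multi-contrast volume) places the tumor where the mask specifies.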
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Nghi C. D. Truong, Chandan Ganesh Bangalore Yogananda, Benjamin C. Wagner, James M. Holcomb, Divya Reddy, Niloufar Saadat, Kimmo J. Hatanpaa, Toral R. Patel, Baowei Fei, Matthew D. Lee, Rajan Jain, Richard J. Bruce, Marco C. Pinho, Ananth J. Madhuranthakam, Joseph A. Maldjian, "Synthesizing 3D multicontrast brain tumor MRIs using tumor mask conditioning," Proc. SPIE 12931, Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications, 129310M (2 April 2024); https://doi.org/10.1117/12.3009331