Robust groupwise registration methods are important for the analysis of large medical image datasets. We build upon the concept of deforming autoencoders, which decouple shape and appearance to represent anatomical variability in a robust and plausible manner. In this work we propose a deep learning model that is trained to generate templates and deformation fields. It employs a joint encoder block that provides latent representations for both shape and appearance, followed by two independent shape and appearance decoder paths. The model achieves image reconstruction by warping the template provided by the appearance decoder with the deformation field estimated by the shape decoder. By restricting the embedding to a low-dimensional latent code, we obtain meaningful deformable templates. Our objective function ensures smooth and realistic deformation fields. It contains an invertibility loss term, which is novel for deforming autoencoders and induces backward consistency: warping the reconstructed image with the deformation field should ideally result in the template, and warping the template with the reversed deformation field should ideally produce the reconstructed image. We demonstrate the potential of our approach for two- and three-dimensional medical image data by training and evaluating it on labeled MRI brain scans. We show that adding the inverse consistency penalty to the objective function leads to improved and more robust registration results. When evaluated on unseen data with expert labels for accuracy estimation, our three-dimensional model improves Dice scores substantially, by 5 percentage points.
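The abstract does not spell out the form of the invertibility term, so the following is only a minimal illustrative sketch of a symmetric inverse consistency penalty. It assumes the reconstruction is the template warped by a dense displacement field, and that negating the displacement approximates its inverse (a common simplification; the paper's actual formulation may differ). The names `warp` and `inverse_consistency_loss` are ours, not from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates


def warp(image, disp):
    """Warp a 2D image by a displacement field.

    disp has shape (2, H, W): the image is sampled at x + disp(x)
    with bilinear interpolation.
    """
    H, W = image.shape
    grid = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"))
    coords = grid + disp
    return map_coordinates(image, coords, order=1, mode="nearest")


def inverse_consistency_loss(template, recon, disp):
    """Symmetric inverse consistency penalty (illustrative only).

    Forward model: recon ~= warp(template, disp).
    Backward consistency: warping the reconstruction with the (approximate)
    inverse displacement -disp should recover the template, and warping the
    template with disp should recover the reconstruction.
    """
    back = warp(recon, -disp)      # should be close to the template
    fwd = warp(template, disp)     # should be close to the reconstruction
    return np.mean((back - template) ** 2) + np.mean((fwd - recon) ** 2)
```

With a zero displacement field the reconstruction equals the template and the penalty vanishes; a nonzero field that is far from invertible inflates the first term, which is what pushes the network toward smooth, invertible deformations.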
Hanna Siebert, Kumar T. Rajamani, Mattias P. Heinrich, "Learning inverse consistent 3D groupwise registration with deforming autoencoders," Proc. SPIE 11596, Medical Imaging 2021: Image Processing, 115960F (15 February 2021); https://doi.org/10.1117/12.2581948