Proc. SPIE. 9784, Medical Imaging 2016: Image Processing
KEYWORDS: Image segmentation, 3D modeling, Prostate, Magnetic resonance imaging, Machine learning, 3D image processing, Pattern recognition, Image analysis, Data modeling, Principal component analysis, Statistical modeling, Cancer
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient variability in prostate shape and texture, and the lack of a clear prostate boundary, especially at the apex and base. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a deep learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and deep learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model leverages the adaptive atlas-based AAM and deep learning to achieve high segmentation accuracy.
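The Dice Similarity Coefficient reported above measures the voxel-wise overlap between an automatic segmentation and a reference mask. A minimal sketch of the standard formula, 2|A∩B| / (|A|+|B|), applied to binary 3D masks (the function name and mask shapes are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / denom if denom > 0 else 1.0
```

A DSC of 1.0 indicates perfect overlap; values above 0.9 are generally considered strong agreement for prostate segmentation.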
Prostate cancer is often overtreated with standard treatment options, which impacts patients' quality of life. Laser ablation has emerged as a new approach to treating prostate cancer while sparing the healthy tissue around the tumor. Since laser ablation has a small, high-temperature treatment zone, accurate image guidance and treatment planning are necessary to enable full ablation of the tumor. Intraoperative temperature monitoring is also desirable to protect critical structures from damage during laser ablation. In response to these problems, we developed a navigation platform and integrated it with a clinical MRI scanner and a side-firing laser ablation device. The system allows imaging, image guidance, treatment planning, and temperature monitoring to be carried out on the same platform. Temperature-sensing phantoms were developed to demonstrate the concept of iterative treatment planning and intraoperative temperature monitoring. Retrospective patient studies were also conducted to show the clinical feasibility of the system.
In an effort to improve the accuracy of transrectal ultrasound (TRUS)-guided needle biopsies of the prostate, it is
important to understand the non-rigid deformation of the prostate. To understand the deformation of the prostate when
an endorectal coil (ERC) is inserted, we develop an elastic registration framework to register prostate MR images with
and without ERC. Our registration framework uses robust point matching (RPM) to obtain correspondences
between surface landmarks in the source and target volumes, followed by elastic body spline (EBS)
registration based on the corresponding landmark pairs. In combination with manual rigid alignment, we
compared our registration framework driven by surface landmarks alone to registration driven by both
surface and internal landmarks in the center of the prostate. In addition, we assessed the impact of
constraining the warping in the central zone of the prostate using a
Gaussian weighting function. Our results show that elastic surface-driven prostate registration is feasible, and that
internal landmarks further improve the registration in the central zone while they have little impact on the registration in
the peripheral zone of the prostate. Results varied case by case depending on the accuracy of the prostate segmentation
and the amount of warping present in each image pair. The most accurate results were obtained when using a Gaussian
weighting in the central zone to limit the EBS warping driven by surface points. This suggests that a
Gaussian constraint on the warping can effectively compensate for the limitations of the isotropic EBS
deformation model, and for erroneous warping inside the prostate created by inaccurate surface
landmarks driving the EBS.
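One way to realize the Gaussian weighting described above is to attenuate the spline-driven displacement near the prostate center, so that surface-driven warping is limited in the central zone and unchanged near the surface. The following is a hypothetical sketch of that idea, not the paper's implementation; the function name, the radially symmetric weight, and the parameter `sigma` are all assumptions for illustration:

```python
import numpy as np

def constrain_central_zone(points, displacements, center, sigma):
    """Attenuate displacement vectors near a central point.

    points:        (N, 3) voxel coordinates.
    displacements: (N, 3) EBS-style displacement vectors at those points.
    center:        (3,) approximate prostate center.
    sigma:         width of the Gaussian constraint (illustrative).

    The weight 1 - exp(-r^2 / (2 sigma^2)) is ~0 at the center and ~1
    far from it, so warping is suppressed only in the central zone.
    """
    r2 = np.sum((points - center) ** 2, axis=1)
    w = 1.0 - np.exp(-r2 / (2.0 * sigma ** 2))
    return displacements * w[:, None]
```

Because the weight depends only on distance from the center, this simple form damps the isotropic EBS deformation inside the gland without altering the surface correspondence that drives it.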
To understand the role of the tongue in speech production, it is desirable to directly image the motion and
strain of the muscles within the tongue. Magnetic resonance
tagging, which was originally developed for cardiac
imaging, has previously been applied to image both two-dimensional and three-dimensional tongue motion during
speech. However, to quantify three-dimensional motion and strain, multiple images yielding two-dimensional
motion must be acquired at different orientations and then interpolated, a time-consuming task in both image
acquisition and processing. Recently, a new MR imaging and image processing method called zHARP was
developed to encode and track 3D motion from a single slice without increasing acquisition time. zHARP was
originally developed and applied to cardiac imaging. The application of zHARP to the tongue is not straightforward
because the tongue in repetitive speech does not move as consistently as the heart in its beating cycle.
Therefore tongue images are more susceptible to motion artifacts. Moreover, these artifacts are greatly exaggerated
as compared to conventional tagging because of the nature of zHARP acquisition. In this work, we
re-implemented the zHARP imaging sequence and optimized it for tongue motion analysis. We also optimized
image acquisition by designing and developing a specialized MRI scanner triggering method and vocal repetition
to better synchronize speech repetitions. Our method was validated using a moving phantom. Results of 3D
motion tracking and strain analysis from the tongue experiments demonstrate the effectiveness of this method.
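Once a 3D displacement field has been tracked from the tagged images, strain can be derived from its spatial gradients. As an illustration of that final analysis step only (the zHARP phase tracking itself is not reproduced here, and the function below is an assumed generic formulation, not the authors' code), the Green-Lagrange strain tensor can be computed from a dense displacement field:

```python
import numpy as np

def green_lagrange_strain(u, spacing=1.0):
    """Green-Lagrange strain tensor field from a 3D displacement field.

    u: (3, nx, ny, nz) displacement components on a regular grid.
    Returns E with shape (3, 3, nx, ny, nz), where
    E = 0.5 * (G + G^T + G^T G) and G_ij = du_i/dx_j.
    """
    # grad[i, j] = du_i / dx_j via central finite differences.
    grad = np.stack([np.stack(np.gradient(u[i], spacing), axis=0)
                     for i in range(3)], axis=0)
    gt = grad.transpose(1, 0, 2, 3, 4)           # gt[i, j] = du_j / dx_i
    # quadratic term: sum_k (du_k/dx_i)(du_k/dx_j)
    quad = np.einsum('ki...,kj...->ij...', grad, grad)
    return 0.5 * (grad + gt + quad)
```

For a uniform 10% stretch along x, the E_xx component is 0.5 * (0.1 + 0.1 + 0.01) = 0.105 everywhere, which provides a simple sanity check analogous to the moving-phantom validation described above.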
Reconstruction methods for MR imaging of dynamic objects have traditionally been analyzed using the projection slice theorem. In this paper, we present a new theoretical framework for analyzing MR imaging of dynamic objects. Our framework reinterprets the object stationarity assumption in MR reconstruction techniques as a combination of filtering and downsampling operations performed on the acquired k-space data. We have analyzed our results in x-f (spatial coordinate - temporal frequency) space using a time-sequential analysis. While the projection slice theorem has only been used to analyze the Cartesian sampling pattern, the new framework can analyze any arbitrary sampling pattern with a given reconstruction algorithm. Further, the new theoretical framework can be used to analyze the effect of relaxing the object stationarity assumption on the reconstructed MR images. We have demonstrated the use of our framework by analyzing two popular image reconstruction techniques, namely view-sharing and UNFOLD. In the analysis of view-sharing, we have confirmed that interleaved and bit-reversed k-space sampling patterns provide better artifact suppression for dynamic MR imaging. We propose using a different filter to further reduce artifacts in the reconstructed images. In the case of UNFOLD, we have analyzed the effect of relaxing the object stationarity assumption and have shown that it leads to an increase in motion artifacts.
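The stationarity assumption that the framework reinterprets can be made concrete with a toy view-sharing reconstruction: each missing k-space line in a frame is filled from the nearest frame in time where that line was acquired, implicitly assuming the object did not change in between. This is a minimal illustrative sketch (nearest-neighbor sharing over a 1D phase-encode dimension), not the specific filters or sampling patterns analyzed in the paper:

```python
import numpy as np

def view_share(kspace, mask):
    """Toy sliding-window view-sharing reconstruction.

    kspace: (nt, nk) complex k-space frames, zero where unsampled.
    mask:   (nt, nk) boolean, True where a line was acquired.
    Each missing line is copied from the nearest time frame in which
    the same line was acquired (nearest-neighbor interpolation in time,
    i.e. the object stationarity assumption).
    """
    nt, nk = kspace.shape
    out = kspace.copy()
    for k in range(nk):
        t_acq = np.flatnonzero(mask[:, k])   # frames where line k exists
        if t_acq.size == 0:
            continue
        for t in range(nt):
            if not mask[t, k]:
                nearest = t_acq[np.argmin(np.abs(t_acq - t))]
                out[t, k] = kspace[nearest, k]
    return out
```

With an interleaved sampling pattern (even lines on even frames, odd lines on odd frames), each missing line is never more than one frame away in time, which is one intuitive reason interleaved orderings suppress artifacts better than sequential ones.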