In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient’s head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.
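The 6-degrees-of-freedom head pose tracked above can be represented as a 4x4 homogeneous matrix. The sketch below (NumPy; function names and the Euler-angle convention are illustrative assumptions, not the authors' implementation) builds such a matrix from three rotations and a translation and applies it to tracked points:

```python
import numpy as np

def rigid_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous matrix from 6 DOF:
    rotations (radians) about x, y, z, and a translation.
    Convention (Z*Y*X order) is an illustrative choice."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def apply_transform(T, points):
    """Apply a 4x4 rigid transform to an (N, 3) array of points."""
    homog = np.c_[points, np.ones(len(points))]
    return (homog @ T.T)[:, :3]
```

Because the transform is rigid, its inverse is also rigid, so per-frame poses can be composed or undone by plain matrix inversion during motion correction.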
In SPECT imaging, motion from respiration and body motion can reduce image quality by introducing motion-related
artifacts. A minimally-invasive way to track patient motion is to attach external markers to the patient’s body and record
their location throughout the imaging study. If a patient exhibits multiple movements simultaneously, such as respiration
and body movement, each marker’s location data will contain a mixture of these motions. Decomposing this complex
compound motion into separate, simpler motions makes it possible to apply a motion correction tailored to each
specific type of motion. Most motion tracking and correction techniques target a single type of motion and either
ignore compound motion or treat it as noise. A few methods that account for compound motion exist, but they fail to
disambiguate superimposed components of the compound motion (e.g., inspiration combined with body movement in the positive
anterior/posterior direction). We propose a new method for decomposing complex compound patient motion using an
unsupervised learning technique called Independent Component Analysis (ICA). Our method can automatically detect
and separate different motions while preserving nuanced features of the motion without the drawbacks of previous
methods. Our main contributions are the development of a method for addressing multiple compound motions, the novel
use of ICA in detecting and separating mixed independent motions, and the generation of motion transforms with 12 DOFs to
account for twisting and shearing. We show that our method works with clinical datasets and can be employed to improve
motion correction in single photon emission computed tomography (SPECT) images.
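As an illustration of the separation step, the sketch below implements a minimal symmetric FastICA in NumPy and unmixes two synthetic signals standing in for marker traces (a respiratory sinusoid and a square-wave body shift). The signal shapes, mixing matrix, and all parameters are illustrative assumptions, not the paper's data or implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two independent motions over 60 s:
t = np.linspace(0, 60, 2000)
sources = np.c_[np.sin(2 * np.pi * 0.3 * t),             # respiration
                np.sign(np.sin(2 * np.pi * 0.05 * t))]   # body shift
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])           # unknown mixing (two marker channels)
X = sources @ A.T                    # observed compound marker signals

# Center and whiten the observations.
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = Xc @ (E @ np.diag(d ** -0.5) @ E.T)

# Symmetric FastICA fixed-point iteration with a tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(Z @ W.T)
    W_new = (G.T @ Z) / len(Z) - np.diag((1 - G ** 2).mean(axis=0)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                       # decorrelate: W <- (W W^T)^(-1/2) W

recovered = Z @ W.T                  # estimated independent motions
```

ICA recovers the sources only up to sign, scale, and ordering, so in practice each recovered component must still be matched to a physical motion type (e.g., by its frequency content).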
In SPECT imaging, motion from patient respiration and body motion can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT Phantoms, from which the transformation and deformation between the MRI study and the patient model are calculated. Both manual and semi-automated methods of XCAT Phantom generation require expertise in human anatomy, with the semi-automated method requiring up to 30 minutes and the manual method requiring up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI studies that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBs to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBs to a voxel-based representation that is amenable to automatic non-rigid registration. Registration is then used to transform and deform the NURBs to match the anatomy in the MR image. We show that our automated method generates XCAT Phantoms more robustly and significantly faster than the previous semi-automated method.
Proc. SPIE. 8317, Medical Imaging 2012: Biomedical Applications in Molecular, Structural, and Functional Imaging
KEYWORDS: Data modeling, Magnetic resonance imaging, Image segmentation, Heart, 3D modeling, Image registration, Algorithm development, Motion models, Single photon emission computed tomography, Affine motion model
In SPECT imaging, patient respiratory and body motion can cause artifacts that degrade image quality. Developing and
evaluating motion correction algorithms are facilitated by simulation studies where a numerical phantom and its motion
are precisely known, from which image data can be produced. Previous techniques to test motion correction methods
generated XCAT phantoms modeled from MRI studies and motion tracking but required manually segmenting the major
structures within the whole upper torso, which can take up to eight hours to perform. Additionally, segmenting two-dimensional MRI slices and interpolating the results into three-dimensional shapes can lead to appreciable interpolation artifacts, and it requires expert knowledge of human anatomy in order to identify the regions to be segmented within each slice.
We propose a new method that mitigates the long manual segmentation times for segmenting the upper torso. Our
interactive method requires that a user provide only an approximate alignment of the base anatomical shapes from the
XCAT model with the MRI data. Organ boundaries from the aligned XCAT models are warped with displacement fields
generated from registering a baseline MR image to MR images acquired during pre-determined motions, which amounts
to automated segmentation of each organ of interest. We show that the quality of our segmentation equals that of expert manual segmentation, that our method does not require a user who is an expert in anatomy, and that it can be completed in minutes rather than hours. In some instances, due to interpolation artifacts, our method can generate higher quality models than
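The organ-boundary warping step described above can be sketched as sampling a registration-derived displacement field at each boundary point via trilinear interpolation. In the sketch below (NumPy), the field shape, grid spacing, and points are illustrative assumptions, not the study's data:

```python
import numpy as np

def warp_points(points, disp, spacing=1.0):
    """Warp (N, 3) boundary points with a dense displacement field.
    disp has shape (n0, n1, n2, 3), one 3-vector per grid node, sampled
    by trilinear interpolation. Assumes all points lie strictly inside
    the grid (illustrative sketch, no bounds handling)."""
    idx = points / spacing                  # point coords in grid units
    i0 = np.floor(idx).astype(int)          # lower grid corner per point
    f = idx - i0                            # fractional position in cell
    out = np.zeros_like(points)
    for corner in np.ndindex(2, 2, 2):      # 8 corners of each cell
        c = np.array(corner)
        w = np.prod(np.where(c, f, 1.0 - f), axis=1)  # trilinear weights
        a0, a1, a2 = (i0 + c).T
        out += w[:, None] * disp[a0, a1, a2]
    return points + out
```

Applying this to every boundary point of an aligned organ surface deforms it to match the patient's anatomy in the target MR frame, which is what makes the segmentation automatic once the registration has produced the field.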