Patient motion during medical imaging can significantly degrade images acquired in a clinical setting. Even breathing-induced patient motion often blurs imagery, compromising its resolution and diagnostic utility. External motion tracking (EMT) technologies are one current method of tracking patient motion, but existing EMTs use radiation that reflects off clothing or fixed markers, thus tracking only the patient’s garments. Researchers at UMass Worcester’s Chan Medical School and UMass Lowell’s Biomedical Terahertz Technology Center are seeking novel EMTs that operate in a part of the electromagnetic spectrum where clothing is transparent: the millimeter-wavelength region. For this purpose, the UMass team has developed a 75 GHz continuous-wave stepped-frequency radar with 8 GHz bandwidth to investigate the system as a source-receiver tracking technology.
Millimeter-wave technologies for the automotive industry are driving inexpensive source/receiver hardware solutions for a wide variety of applications. To accurately assess the signature characteristics of various scenes, we tested the appropriateness of using an artificial torso in controlled environments and compared the results to data from live subjects. High-range-resolution (HRR) backscatter radar cross section (RCS) data from targets and in-scene calibration objects were obtained using a 75 GHz transceiver with 8 GHz bandwidth. Data were collected for both the artificial torso and live subjects at varying aspects in controlled environments, including studying the RCS response at different illumination angles while calibrating the response against in-scene calibration targets. Comparing the HRR profiles has allowed UML/UMMS researchers to accurately assess and demonstrate the use of artificial constructs in scenes for testing the system’s response characteristics.
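The HRR profile formation behind such measurements can be sketched in a few lines: the stepped-frequency samples are inverse-FFT'd into a range profile. The 75 GHz center frequency and 8 GHz bandwidth match the system described above; the step count, scatterer positions, and amplitudes below are illustrative assumptions, not measured data.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F0 = 75e9                  # center frequency, Hz (per the system above)
B = 8e9                    # swept bandwidth, Hz (per the system above)
N = 128                    # number of frequency steps (assumed)
freqs = F0 - B / 2 + B * np.arange(N) / N

dR = C / (2 * B)           # range-bin size c/(2B), about 1.87 cm

# Simulated echo: two point scatterers placed on range-bin centers
# (e.g. front and back of a torso); amplitudes are arbitrary.
ranges_m = np.array([53, 61]) * dR
amps = np.array([1.0, 0.5])
echo = (amps * np.exp(-1j * 4 * np.pi * freqs[:, None] * ranges_m / C)).sum(axis=1)

# The inverse FFT of the frequency-domain samples is the range profile;
# peaks appear at the scatterer range bins.
profile = np.abs(np.fft.ifft(echo))
```

The achievable range resolution depends only on the swept bandwidth, which is why the 8 GHz sweep yields roughly centimeter-scale bins.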
Transmission through 11 garments of different materials and thicknesses was measured under a variety of conditions. The setup consisted of a 100 GHz camera system using an IMPATT diode (66 mW output power) as the source and a 32x32 image sensor array (1.5x1.5 mm pixels, 1 nW/√Hz noise-equivalent power) focused with a PTFE lens (50 mm focal length). The camera system was configured for reflection imaging by placing the source emitter and imaging array at an off-axis angle, focused on a large flat mirrored surface. To simulate reflection of the emitted signal off human skin after transmission through the garments, we placed the garments over the mirrored surface. We then calculated the transmission loss, in terms of signal strength (amplitude), as the ratio of the recorded images with and without the garments. The materials and make-up of the garments, such as colors, accents, and thickness, were recorded. To increase the realism of the data, we recorded each garment under several conditions, including overlapping wrinkles and multiple garment layers. We were able to confirm transmission results reported by other research groups, but found that variations such as wrinkles and multiple layers can change the transmission ratios significantly.
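The amplitude-ratio computation described above can be sketched as follows. The frames here are simulated 32x32 images with an assumed amplitude transmission of 0.62, since the actual camera data are not available; the noise levels are also illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference frame: mirror only (no garment); arbitrary amplitude scale
reference = 100.0 + rng.normal(0, 1.0, size=(32, 32))
# Frame with garment over the mirror: assumed 0.62 amplitude transmission
with_garment = 0.62 * reference + rng.normal(0, 1.0, size=(32, 32))

# Per-pixel amplitude ratio, then one transmission figure for the scene
ratio = with_garment / reference
transmission = ratio.mean()
loss_db = -20.0 * np.log10(transmission)   # amplitude ratio -> dB loss
```

A 0.62 amplitude ratio corresponds to roughly 4 dB of one-way loss; the paper's point is that wrinkles or a second layer can shift this figure substantially.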
In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient’s head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames that need correction. This can speed up reconstruction by eliminating unnecessary subdivision of frames.
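The 6-DOF rigid pose such a tracker reports can be recovered from corresponding 3-D points between two depth frames with the SVD-based (Kabsch) method, a standard building block of rigid registration. The correspondences below are synthetic; a real marker-less tracker would derive them from the depth data (e.g. via ICP), which is assumed here.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~= R @ P + t.
    P, Q: (3, n) arrays of corresponding 3-D points."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Example: a small head rotation (5 deg about z) plus a translation
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[2.0], [-1.0], [0.5]])
P = np.random.default_rng(0).normal(size=(3, 50))   # points in frame 1
Q = R_true @ P + t_true                              # same points in frame 2
R_est, t_est = rigid_transform(P, Q)
```

With noiseless correspondences the estimate is exact to machine precision; with real depth data the same least-squares fit is applied inside an ICP loop.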
In SPECT imaging, motion from respiration and body movement can reduce image quality by introducing motion-related
artifacts. A minimally-invasive way to track patient motion is to attach external markers to the patient’s body and record
their location throughout the imaging study. If a patient exhibits multiple movements simultaneously, such as respiration
and body movement, each marker’s location data will contain a mixture of these motions. Decomposing this complex
compound motion into separate simplified motions can have the benefit of applying a more robust motion correction to
the specific type of motion. Most motion tracking and correction techniques target a single type of motion and either
ignore compound motion or treat it as noise. A few methods account for compound motion, but they fail to
disambiguate superposition within it (e.g. inspiration combined with body movement in the positive
anterior/posterior direction). We propose a new method for decomposing complex compound patient motion using an
unsupervised learning technique called Independent Component Analysis (ICA). Our method can automatically detect
and separate different motions while preserving nuanced features of the motion without the drawbacks of previous
methods. Our main contributions are the development of a method for addressing multiple compound motions, the novel
use of ICA in detecting and separating mixed independent motions, and the generation of motion transforms with 12 degrees of freedom (DOF) to
account for twisting and shearing. We show that our method works with clinical datasets and can be employed to improve
motion correction in single photon emission computed tomography (SPECT) images.
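The marker-trace decomposition above can be sketched with a minimal symmetric FastICA. The breathing sinusoid, body-shift step, and 2x2 mixing matrix below are illustrative assumptions, not clinical data, and this hand-rolled ICA stands in for a production implementation.

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-8):
    """Minimal symmetric FastICA (tanh nonlinearity), for illustration.
    X: (n_mixtures, n_samples) observed marker traces."""
    X = X - X.mean(axis=1, keepdims=True)            # center
    d, E = np.linalg.eigh(np.cov(X))                 # whiten
    Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.normal(size=(Z.shape[0],) * 2))[0]
    for _ in range(n_iter):
        g = np.tanh(W @ Z)
        W_new = (g @ Z.T) / Z.shape[1] - np.diag((1 - g**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)              # symmetric decorrelation
        W_new = U @ Vt
        if np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1)) < tol:
            return W_new @ Z
        W = W_new
    return W @ Z

# Two independent motions mixed into two marker traces
t = np.linspace(0, 20, 2000)
resp = np.sin(2 * np.pi * 0.3 * t)         # ~0.3 Hz respiration
body = np.where(t > 10, 1.0, 0.0)          # abrupt body shift
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # marker mixing matrix (assumed)
markers = A @ np.vstack([resp, body])
components = fastica(markers)              # recovered, up to sign and order
```

ICA recovers the sources only up to sign, scale, and permutation, so a downstream motion-correction step must still identify which component is respiration and which is body movement (e.g. by frequency content).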
In SPECT imaging, motion from patient respiration and body motion can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT Phantoms, from which one calculates the transformation and deformation between the MRI study and the patient model. Both manual and semi-automated methods of XCAT Phantom generation require expertise in human anatomy, with the semi-automated method requiring up to 30 minutes and the manual method requiring up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBS to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBS to a voxel-based representation that is amenable to automatic non-rigid registration. Registration is then used to transform and deform the NURBS to match the anatomy in the MR image. We show that our automated method generates XCAT Phantoms more robustly and significantly faster than the previous semi-automated method.
KEYWORDS: Magnetic resonance imaging, Image segmentation, Motion models, Data modeling, Single photon emission computed tomography, Image registration, Heart, 3D modeling, Affine motion model, Algorithm development
In SPECT imaging, patient respiratory and body motion can cause artifacts that degrade image quality. Developing and
evaluating motion correction algorithms are facilitated by simulation studies where a numerical phantom and its motion
are precisely known, from which image data can be produced. Previous techniques to test motion correction methods
generated XCAT phantoms modeled from MRI studies and motion tracking but required manually segmenting the major
structures within the whole upper torso, which can take 8 hours to perform. Additionally, segmenting two-dimensional MRI slices and interpolating them into three-dimensional shapes can introduce appreciable interpolation artifacts and requires expert knowledge of human anatomy to identify the regions to be segmented within each slice.
We propose a new method that mitigates the long manual segmentation times for the upper torso. Our
interactive method requires that a user provide only an approximate alignment of the base anatomical shapes from the
XCAT model with the MRI data. Organ boundaries from aligned XCAT models are warped with displacement fields
generated from registering a baseline MR image to MR images acquired during pre-determined motions, which amounts
to automated segmentation of each organ of interest. With our method, we show that the quality of segmentation equals
that of expert manual segmentation, does not require a user who is an expert in anatomy, and can be completed in
minutes rather than hours. In some instances, because manual segmentation suffers from interpolation artifacts, our
method can generate higher-quality models than manual segmentation.
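The boundary-warping step above can be sketched as follows: organ boundary points from the aligned XCAT model are displaced by a dense field obtained from image registration. The field here is a synthetic uniform shift standing in for the output of a real deformable registration, and the boundary points are toy values.

```python
import numpy as np

# Dense displacement field on a 64^3 voxel grid, one 3-vector per voxel.
# A real field would come from registering the baseline MR image to an
# MR image acquired during motion; here it is a uniform shift (assumed).
shape = (64, 64, 64)
field = np.zeros(shape + (3,))
field[...] = np.array([2.0, -1.0, 0.5])

def warp_points(points, field):
    """Move each (z, y, x) point by the displacement at its nearest voxel."""
    idx = np.clip(np.rint(points).astype(int), 0,
                  np.array(field.shape[:3]) - 1)
    return points + field[idx[:, 0], idx[:, 1], idx[:, 2]]

# Boundary points of a toy organ surface, in voxel coordinates
boundary = np.array([[10.2, 20.0, 30.5],
                     [11.0, 21.3, 30.0]])
warped = warp_points(boundary, field)
```

Nearest-voxel lookup keeps the sketch short; a production pipeline would use trilinear interpolation of the field so that sub-voxel boundary positions move smoothly.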