Dynamic Contrast Enhanced MRI (DCE-MRI) is increasingly used to study the tumor
vasculature, and also serves as a biomarker for evaluating the response to anti-angiogenic therapies and the
overall efficacy of a therapy. The uptake of contrast in the tissue is analyzed using pharmacokinetic models to understand the perfusion
characteristics and cell structure, which are indicative of tumor proliferation. However, in most of these 4D acquisitions
the time required for the complete scan is quite long, as sufficient time must be allowed for the passage of the contrast
medium from the vasculature into the tumor interstitium and its subsequent extraction. Patient motion during such long scans
is one of the major challenges that hamper automated and robust quantification. A system that could automatically detect
if motion has occurred during the acquisition would be extremely beneficial. The patient motion observed during such 4D
acquisitions typically consists of rapid shifts, often caused by involuntary actions such as coughing, sneezing, peristalsis, or a jerk due
to discomfort. Detecting such abrupt motion would help in deciding on a course of action for motion correction,
such as eliminating the affected time frames from the analysis, employing a registration algorithm, or
declaring the exam unanalyzable. In this paper, a new technique is proposed for the effective detection of motion in 4D
medical scans, based on the variation over time of signal characteristics computed from multiple regions of interest.
This approach offers a robust, powerful, yet simple technique to detect motion.
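The ROI-based detection idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual method: it assumes the mean ROI intensity as the signal characteristic and a robust z-score on frame-to-frame changes as the abruptness test, exploiting the fact that contrast uptake is smooth and local while motion shifts many ROIs at once:

```python
import numpy as np

def detect_motion_frames(volume_4d, roi_masks, z_thresh=4.0, min_rois=2):
    """Flag time frames of a 4D scan that show abrupt motion.

    volume_4d : ndarray of shape (T, Z, Y, X)
    roi_masks : list of boolean ndarrays of shape (Z, Y, X)

    A frame is flagged when its frame-to-frame signal change is a
    robust outlier in at least `min_rois` ROIs simultaneously.
    """
    # mean signal per ROI over time, shape (T, n_rois)
    signals = np.stack([volume_4d[:, m].mean(axis=1) for m in roi_masks],
                       axis=1)
    diffs = np.abs(np.diff(signals, axis=0))        # (T-1, n_rois)
    med = np.median(diffs, axis=0)
    mad = np.median(np.abs(diffs - med), axis=0) + 1e-9
    z = (diffs - med) / (1.4826 * mad)              # robust z-scores
    flagged = (z > z_thresh).sum(axis=1) >= min_rois
    return np.where(flagged)[0] + 1                 # indices of moved frames

# synthetic demo: smooth uptake curves plus an abrupt shift at frame 30
rng = np.random.default_rng(0)
vol = np.tile(np.linspace(0.0, 1.0, 60)[:, None, None, None], (1, 4, 16, 16))
vol += 0.01 * rng.standard_normal(vol.shape)
vol[30:] += 0.5                                     # simulated jerk
masks = []
for zs, ys, xs in [(1, slice(2, 6), slice(2, 6)),
                   (2, slice(8, 12), slice(8, 12)),
                   (3, slice(4, 8), slice(10, 14))]:
    m = np.zeros((4, 16, 16), dtype=bool)
    m[zs, ys, xs] = True
    masks.append(m)
print(detect_motion_frames(vol, masks))
```

Requiring the outlier in several ROIs at once is what separates a global, abrupt shift from a locally steep but physiological uptake curve.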
This paper presents a feasibility and evaluation study on using 2D ultrasound in conjunction with our statistical deformable bone model in the scope of computer-assisted surgery (CAS). The final aim is to provide the surgeon with an enhanced 3D visualization for surgical navigation in orthopaedic surgery without the need for preoperative CT or MRI scans. We unified our earlier work, combining several automatic methods for statistical bone shape prediction from a sparse set of surface points with ultrasound segmentation and calibration, to provide the intended rapid and accurate visualization. We compared the use of a tracked digitizing pointer with that of ultrasound for acquiring the landmarks and bone surface points used to estimate two cast proximal femurs; two users performed the experiments 5-6 times per scenario. The concept of CT-based error introduced in the paper gives an approximate quantitative value for the best hoped-for prediction error, or lower-bound error, for a given anatomy. The conclusions of this work were that the pointer-based approach produced good results and that, although the ultrasound-based approach performed considerably worse on average, there were several cases where its results were comparable to those of the pointer-based approach. The primary factor behind the poor ultrasound performance was determined to be the inaccurate localization of the three initial landmarks, which are used for the statistical shape model.
Constructing an anatomical shape from sparse information is a challenging task. A priori information is often required to handle this otherwise ill-posed problem. In this paper, the problem is formulated as a three-stage optimal estimation process using an a priori dense surface point distribution model (DS-PDM). The DS-PDM itself is constructed from an already-aligned training shape set using Loop subdivision, and provides a dense and smooth description of all a priori training shapes. Its application to anatomical shape reconstruction facilitates all three stages as follows. The first stage, registration, iteratively estimates the scale and the 6-dimensional (6D) rigid registration transformation between the mean shape of the DS-PDM and the input points using the iterative closest point (ICP) algorithm. Thanks to the dense description of the mean shape, a simple point-to-point distance can be used, speeding up the search for closest point pairs. The second stage, morphing, optimally and robustly estimates a dense patient-specific template surface from the DS-PDM using Mahalanobis-distance-based regularization. The estimated dense patient-specific template surface is then fed to the third stage, deformation, which uses a newly formulated kernel-based regularization to further reduce the reconstruction error. The proposed method is especially useful for accurate and stable surface reconstruction from sparse information when only a small number of a priori training shapes are available. It has been successfully tested on anatomical shape reconstruction of femoral heads using only dozens of sparse points, yielding very promising results.
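The registration stage can be illustrated with a compact ICP loop. The sketch below is an assumption-laden stand-in, not the paper's implementation: the mean shape is a plain point cloud, the scale + rigid estimate is obtained with Umeyama's closed-form similarity solution, and a brute-force nearest-neighbour search replaces whatever acceleration structure a real system would use:

```python
import numpy as np

def icp_similarity(model_pts, input_pts, n_iter=30):
    """Iteratively estimate scale s, rotation R, and translation t that
    map the input points onto the (dense) mean shape, using plain
    point-to-point closest-point matching."""
    s, R, t = 1.0, np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = s * input_pts @ R.T + t
        # closest point pairs (brute force; a k-d tree would be faster)
        d2 = ((moved[:, None, :] - model_pts[None, :, :]) ** 2).sum(-1)
        target = model_pts[d2.argmin(axis=1)]
        # closed-form similarity transform (Umeyama / Procrustes)
        mu_x, mu_y = input_pts.mean(0), target.mean(0)
        X, Y = input_pts - mu_x, target - mu_y
        U, sv, Vt = np.linalg.svd(Y.T @ X / len(X))
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
        D = np.diag([1.0, 1.0, d])      # guards against reflections
        R = U @ D @ Vt
        var_x = (X ** 2).sum() / len(X)
        s = (sv * np.diag(D)).sum() / var_x
        t = mu_y - s * R @ mu_x
    return s, R, t

# demo: recover a small, known similarity transform
rng = np.random.default_rng(1)
model = rng.random((1500, 3))           # dense "mean shape" point cloud
ang = np.deg2rad(1.5)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
s_true, t_true = 1.015, np.array([0.01, -0.01, 0.008])
# input points = a subset of model points moved by the inverse transform
inp = ((model[:400] - t_true) / s_true) @ R_true
s_est, R_est, t_est = icp_similarity(model, inp)
```

ICP only converges to the correct alignment from a reasonable initial pose, which is why landmark-based initialization (as in the other abstracts here) typically precedes it.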
The use of three-dimensional models in planning and navigating computer-assisted surgeries is now well established. These models provide intuitive visualization to the surgeon, contributing to significantly better surgical outcomes. Models obtained from specifically acquired CT scans have the disadvantage of exposing the patient to a high radiation dose. In this paper we propose a novel and stable method to construct a patient-specific model that provides an appropriate intra-operative 3D visualization without the need for pre- or intra-operative imaging. The patient-specific data
consist of digitized landmarks and surface points that are obtained
intra-operatively. The 3D model is reconstructed by fitting a statistical deformable model to this minimal, sparse digitized data. The statistical model is constructed using Principal Component Analysis from training objects. Our morphing scheme efficiently and accurately computes a Mahalanobis-distance-weighted least-squares fit of the deformable model to the 3D data by solving a linear system of equations. Relaxing the Mahalanobis distance term as additional points are incorporated enables our method to handle both small and large sets of digitized points efficiently. Our novel incorporation of M-estimator based weighting of the digitized points
enables us to effectively reject outliers and compute stable models. Normalization of the input model data and the digitized points
makes our method size invariant and hence applicable directly to any
anatomical shape. The method also allows incorporation of non-spatial data such as patient height and weight. The predominant applications are hip and knee surgeries.
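The core fitting step described above amounts to a regularized linear solve with iterative reweighting. The following is a toy sketch under stated assumptions, not the paper's implementation: "shapes" are 2D ellipse contours, a Huber M-estimator stands in for the (unspecified here) weighting function, and the parameter names are hypothetical:

```python
import numpy as np

def fit_shape_model(mean, modes, variances, idx, targets,
                    rho=1e-3, n_irls=10):
    """Fit a PCA shape model to sparse digitized coordinates by solving
    min_b ||W (A b - r)||^2 + rho * b^T diag(1/variances) b,
    i.e. a Mahalanobis-distance-regularized least-squares fit, with
    Huber (M-estimator) reweighting of residuals to reject outliers.

    mean      : (d,) mean shape vector
    modes     : (d, m) principal modes as columns
    variances : (m,) eigenvalues (variances) of the modes
    idx       : indices of the digitized coordinates in the shape vector
    targets   : digitized coordinate values
    """
    A = modes[idx]                       # model rows at the digitized data
    r = targets - mean[idx]
    Linv = np.diag(rho / variances)      # Mahalanobis regularizer
    w = np.ones(len(r))
    for _ in range(n_irls):
        Aw = A * w[:, None]
        b = np.linalg.solve(A.T @ Aw + Linv, Aw.T @ r)   # linear system
        res = A @ b - r
        scale = 1.4826 * np.median(np.abs(res)) + 1e-12  # robust scale
        c = 1.345 * scale                # Huber threshold
        w = np.minimum(1.0, c / np.maximum(np.abs(res), 1e-12))
    return mean + modes @ b

# toy model: ellipse contours (40 points), linear in the two radii
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
def ellipse(a, b):
    return np.column_stack([a * np.cos(theta), b * np.sin(theta)]).ravel()

train = np.stack([ellipse(a, b) for a, b in
                  [(1.0, 1.0), (1.2, 0.9), (0.9, 1.2),
                   (1.1, 1.1), (1.3, 1.0)]])
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
keep = S > 1e-8
modes, variances = Vt[keep].T, S[keep] ** 2 / (len(train) - 1)

# digitize 7 points of an unseen ellipse; corrupt one with a gross outlier
true = ellipse(1.15, 0.95)
idx = np.ravel([(2 * p, 2 * p + 1) for p in [0, 6, 12, 18, 24, 30, 36]])
targets = true[idx].copy()
targets[2] += 0.8                        # outlier on one coordinate
recon = fit_shape_model(mean, modes, variances, idx, targets)
```

The Mahalanobis term penalizes shape coefficients in proportion to how improbable they are under the training distribution, while the Huber weights let a grossly misplaced digitized point lose influence instead of dragging the whole surface.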
This paper addresses the problem of extrapolating an extremely sparse three-dimensional set of digitized landmarks
and bone surface points to obtain a complete surface representation. The extrapolation is done using a statistical
principal component analysis (PCA) shape model similar to earlier approaches by Fleute et al. This extrapolation
procedure, called Bone-Morphing, is highly useful for intra-operative visualization of bone structures in image-free
surgeries. We developed a novel morphing scheme operating directly in the PCA shape space incorporating the
full set of possible variations including additional information such as patient height, weight and age. Shape
information coded by digitized points is iteratively removed from the PCA model. The extrapolated surface is
computed as the most probable surface in the shape space given the data. Interactivity is enhanced, as additional
bone surface points can be incorporated in real time. The expected accuracy can be visualized at any stage of
the procedure. In a feasibility study, we applied the proposed scheme to the proximal femur structure. 14
CT scans were segmented and a sequence of correspondence establishing methods was employed to compute the
optimal PCA model. Three anatomical landmarks, the femoral notch and the upper and lower trochanter, are
digitized to register the model to the patient anatomy. Our experiments show that the overall shape information
can be captured fairly accurately by a small number of control points. The added advantage is that it is fast,
highly interactive and needs only a small number of points to be digitized intra-operatively.
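The model-building step (corresponded training shapes to PCA model) can be sketched as follows. This is a minimal version under the assumption that the segmented, corresponded shapes are already aligned and flattened into vectors; the correspondence-establishment sequence itself is not shown:

```python
import numpy as np

def build_pca_shape_model(shapes, var_keep=0.98):
    """Build a PCA shape model from corresponded, aligned training shapes.

    shapes : (n, d) array, one flattened shape (x1, y1, z1, ...) per row.
    Returns the mean shape, the principal modes (columns of a (d, m)
    matrix), and the per-mode variances, keeping enough modes to explain
    `var_keep` of the total variance.
    """
    mean = shapes.mean(axis=0)
    Xc = shapes - mean
    # SVD of the centered data avoids forming the huge d x d covariance
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2 / (len(shapes) - 1)
    m = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    return mean, Vt[:m].T, var[:m]

# demo: 14 synthetic "shapes" generated from two underlying modes
rng = np.random.default_rng(2)
base = rng.random(60)
m1, m2 = rng.standard_normal(60), rng.standard_normal(60)
coefs = [(np.cos(k), np.sin(k)) for k in range(14)]
shapes = np.stack([base + a * m1 + b * m2 for a, b in coefs])
mean, modes, variances = build_pca_shape_model(shapes)
print(modes.shape[1])   # number of modes retained
```

With only n training shapes there are at most n - 1 modes, which is why a small training set (14 scans here) bounds how much anatomical variability the extrapolation can express.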