3D kinematic measurement of joint movement is crucial for orthopedic surgery assessment and diagnosis. It is usually obtained through a frame-by-frame registration of a 3D bone volume to a fluoroscopy video of the joint movement. The high cost of a high-quality fluoroscopy imaging system has kept this application out of reach for many labs, while the more affordable, low-dosage alternative, the mini C-arm, is not commonly used for this purpose because of its low image quality. In this paper, we introduce a novel method for kinematic analysis of joint movement using the mini C-arm. In this method, the bone of interest is recovered and isolated from the rest of the image through a non-rigid registration of an atlas to each frame. The 3D/2D registration is then performed using the weighted histogram of image gradients as an image feature. In our experiments, the registration error was 0.89 mm and 2.36° for the human C2 vertebra. While this precision still lags behind that of a high-quality fluoroscopy machine, it is a good starting point toward motion analysis with mini C-arms, making this application available to lower-budget environments. Moreover, the registration was highly robust to the initial distance from the true pose, converging to the correct registration from anywhere within ±90° of it.
Antong Chen, Ashleigh Bone, Catherine D. Hines, Belma Dogdas, Tamara Montgomery, Maria Michener, Christopher Winkelmann, Soheil Ghafurian, Laura Lubbers, John Renger, Ansuman Bagchi, Jason Uslaner, Colena Johnson, Hatim Zariwala
Intracranial microdialysis is used for sampling neurochemicals and large peptides, along with their metabolites, from the interstitial fluid (ISF) of the brain. The ability to perform this in nonhuman primates (NHP), e.g., rhesus macaques, could improve the prediction of the pharmacokinetic (PK) and pharmacodynamic (PD) action of drugs in humans. However, microdialysis in rhesus brains is not as routinely performed as in rodents. One challenge is that precise intracranial probe placement in NHP brains is difficult due to the richness of the anatomical structure and the variability of brain size and shape across animals. Also, repeatable and reproducible ISF sampling from the same animal is highly desirable when combined with cognitive behaviors or other longitudinal study end points. Toward that end, we have developed a semi-automatic, flexible neurosurgical method employing MR and CT imaging to (a) derive coordinates for permanent guide cannula placement in mid-brain structures and (b) fabricate a customized recording chamber implanted above the skull for enclosing and safeguarding access to the cannula for repeated experiments. To place the intracranial guide cannula in each subject, the entry points in the skull and the depth in the brain were derived from co-registered MR and CT images. The anterior-posterior (A/P) and medial-lateral (M/L) rotation of the animal's pose was corrected in the 3D image to match the pose used in the stereotactic frame. An array of implanted fiducial markers was used to transform stereotactic coordinates into image coordinates. The recording chamber was custom fabricated using computer-aided design (CAD) so that it would fit the contours of the individual skull with minimum error. The chamber also helped guide the cannula through the entry points down a trajectory into the depth of the brain.
We have validated our method in four animals, and our results indicate an average cannula placement error of 1.20 ± 0.68 mm from the targeted positions. The approach employed here for derivation of the coordinates, surgical implantation, and post-implant validation relies on conventional surgical and imaging resources, without the need for intra-operative imaging. The validation of our method lends support to its wider application in most nonhuman primate laboratories with onsite MR and CT imaging capabilities.
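The mapping from stereotactic-frame coordinates to image coordinates via an implanted fiducial array is, in essence, a least-squares rigid point-set alignment. The sketch below shows one standard way to compute it (the Kabsch/SVD method); the abstract does not state which algorithm the authors used, so this is an illustrative assumption, not their implementation. `src` and `dst` are hypothetical matched fiducial positions in the two spaces.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch method) mapping src onto dst.

    src, dst: (N, 3) arrays of matched fiducial positions, e.g. in
    stereotactic-frame and image coordinates respectively (illustrative).
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det R = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With three or more non-collinear fiducials this gives a closed-form, noise-tolerant estimate; additional markers over-determine the fit and reduce the influence of localization error in any single one.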
Digitally reconstructed radiographs (DRRs) are simulated radiographic images produced through a perspective projection of a three-dimensional (3D) image (volume) onto a two-dimensional (2D) image plane. The traditional method for generating DRRs, ray-casting, is computationally intensive and accounts for most of the solution time in 3D/2D medical image registration frameworks, where a large number of DRRs is required. A few alternative methods for faster DRR generation have been proposed, the most successful of which are based on the idea of pre-calculating the attenuation values of possible rays. Despite achieving good quality, these methods support a limited range of motion for the volume and entail long pre-calculation times. In this paper, we propose a new preprocessing procedure and data structure for the calculation of the ray attenuation values. This method supports all possible volume positions with modest memory requirements, in addition to reducing the complexity of the problem from O(n³) to O(n²). In our experiments, we generated DRRs of high quality in 63 milliseconds with a preprocessing time of 99.48 seconds and a memory size of 7.45 megabytes.
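For context on what the baseline ray-casting approach computes, the sketch below shows a minimal perspective DRR: each detector pixel integrates the volume's attenuation along the ray from an X-ray source to that pixel. This is an illustrative simplification only; the source position, detector geometry, and step count are assumptions, and it omits the pre-calculation scheme the paper proposes.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def drr_raycast(volume, n_steps=128):
    """Minimal perspective ray-casting DRR (illustrative sketch).

    volume: 3D array of attenuation values, indexed (z, y, x).
    Returns a 2D image of line integrals of attenuation along each ray.
    """
    d, h, w = volume.shape
    # Assumed geometry: point source on the z-axis beyond the volume,
    # detector plane at z = 0 with one pixel per (y, x) voxel column.
    src = np.array([3.0 * d, h / 2.0, w / 2.0])
    ys, xs = np.mgrid[0:h, 0:w]
    det = np.stack([np.zeros_like(ys, dtype=float),
                    ys.astype(float), xs.astype(float)], axis=-1)
    # Sample each ray at n_steps points between source and detector pixel.
    t = np.linspace(0.0, 1.0, n_steps)
    pts = src + t[:, None, None, None] * (det - src)   # (n_steps, h, w, 3)
    coords = pts.reshape(-1, 3).T                      # (3, n_steps*h*w)
    # Trilinear sampling; points outside the volume contribute 0.
    samples = map_coordinates(volume, coords, order=1,
                              mode='constant', cval=0.0)
    # Sum along each ray approximates the Beer-Lambert line integral.
    return samples.reshape(n_steps, h, w).sum(axis=0)
```

The cost is visible in the shape arithmetic: every one of the h x w rays is resampled at n_steps points for every new volume pose, which is exactly the per-DRR work that pre-calculation schemes like the one proposed here try to avoid.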
Intracranial delivery of recombinant DNA and neurochemical analysis in nonhuman primates (NHP) require precise targeting of various brain structures via imaging-derived coordinates in stereotactic surgeries. To attain targeting precision, surgical planning needs to be done on preoperative three-dimensional (3D) CT and/or MR images in which the animal's head is fixed in a pose identical to the pose during the stereotactic surgery. The matching of the image to the pose in the stereotactic frame can be done manually by detecting key anatomical landmarks on the 3D MR and CT images, such as the ear canals and the ear bar zero position. This is not only time-intensive but also prone to error due to the varying initial poses in the images, which affects both landmark detection and rotation estimation. We have introduced a fast, reproducible, semi-automatic method to detect the stereotactic coordinate system in the image and correct the pose. The method begins with a rigid registration of the subject images to an atlas and proceeds to detect the anatomical landmarks through a sequence of optimization, deformable registration, and multimodal registration algorithms. The results showed precision similar to a manual pose correction (maximum difference of 1.71° in average in-plane rotation).
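To make the pose-correction step concrete: once the two ear-canal landmarks are detected, one component of the pose error is the roll of the head about the anterior-posterior axis, which can be read off as the angle the inter-ear line makes with the horizontal and removed by an in-plane rotation of the volume. The sketch below illustrates that single step under simplifying assumptions (landmarks already detected, axis conventions as commented); the paper's full pipeline of atlas registration and multimodal landmark detection is not reproduced here.

```python
import numpy as np
from scipy.ndimage import rotate

def correct_roll(volume, left_ear, right_ear):
    """Level the two ear-canal landmarks by an in-plane rotation.

    volume: 3D image indexed (z, y, x); left_ear, right_ear: (z, y, x)
    voxel coordinates of the detected ear canals (assumed found upstream).
    Returns the rotated volume and the correction angle in degrees.
    """
    dy = right_ear[1] - left_ear[1]    # vertical offset between canals
    dx = right_ear[2] - left_ear[2]    # lateral separation
    angle = np.degrees(np.arctan2(dy, dx))
    # Rotate in the coronal (y, x) plane; reshape=False keeps the grid size.
    leveled = rotate(volume, -angle, axes=(1, 2), reshape=False, order=1)
    return leveled, angle
```

The same idea, applied with the ear bar zero position as a second reference, would yield the remaining rotation components; doing this automatically after atlas registration is what removes the operator-dependent error of manual landmarking.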
KEYWORDS: 3D modeling, Image registration, Bone, 3D image processing, Distance measurement, 3D image reconstruction, Computed tomography, Medical imaging, Image restoration, Video
Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications, such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to the large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to ±90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
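The core feature named in this abstract, a weighted histogram of gradient directions, can be sketched as follows: each pixel votes for the angular bin of its gradient direction, weighted by gradient magnitude, so strong edges such as bone contours dominate the descriptor. This is an illustrative reading of the feature, not the authors' code; the bin count and normalization are assumptions.

```python
import numpy as np

def gradient_direction_histogram(image, n_bins=36):
    """Weighted histogram of gradient directions (illustrative sketch).

    Each pixel's gradient direction votes into one of n_bins angular
    bins over [-pi, pi), weighted by the gradient magnitude, so strong
    edges dominate. Returns an L1-normalized histogram.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # direction in [-pi, pi)
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

A useful property of such a descriptor is that an in-plane rotation of the image circularly shifts the histogram, which is consistent with the abstract's strategy of searching rotation and translation sequentially rather than jointly.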