Shift of brain tissue during surgical procedures affects the precision of image-guided neurosurgery (IGNS). To improve the accuracy of the alignment between the patient and the images, finite element model-based non-rigid registration methods have been investigated. The best prior estimate (BPE), the forced displacement method (FDM), the weighted basis solutions (WBS), and the adjoint equations method (AEM) are versions of this approach that have appeared in the literature. In this paper, we present a quantitative comparison of these methods on a set of three patient cases. Three-dimensional displacement data from the surface and subsurface were extracted using intraoperative ultrasound (iUS) and intraoperative stereovision (iSV). These data were then used as the "ground truth" in a quantitative study to evaluate the accuracy of the estimates produced by the finite element models. Different types of clinical cases are presented, including distension and a combination of sagging and distension. In each case, the performance of the four methods is compared. The AEM, which recovered 26-62% of surface brain motion and 20-43% of the subsurface deformation, produced the best fit between the measured data and the model estimates.
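The abstract reports recovered motion as a percentage but does not spell out the metric. A minimal sketch of one plausible definition, assuming the percentage is computed from the residual between measured and model-predicted displacement vectors (the names `percent_recovered`, `measured`, and `predicted` are illustrative, not from the paper):

```python
import numpy as np

def percent_recovered(measured, predicted):
    """Fraction of measured displacement captured by the model estimate,
    expressed as a percentage: 100 * (1 - sum||e_i|| / sum||u_i||), where
    e_i is the residual between measured and predicted vectors and u_i is
    the measured displacement at point i."""
    u = np.linalg.norm(measured, axis=1)            # measured magnitudes
    e = np.linalg.norm(measured - predicted, axis=1)  # residual magnitudes
    return 100.0 * (1.0 - e.sum() / u.sum())
```

Under this definition, a model that predicts half of a uniform measured motion recovers 50% of it; other normalizations (e.g., per-point averaging) are equally plausible readings of the reported figures.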
KEYWORDS: Brain, Surgery, Ultrasonography, Neuroimaging, Magnetic resonance imaging, Data modeling, Tissues, Human-machine interfaces, Finite element methods, Head
Image-guided neurosurgery typically relies on preoperative imaging information that is subject to errors resulting from brain shift and deformation in the OR. A graphical user interface (GUI) has been developed to facilitate the flow of data from the OR to the image volume in order to provide the neurosurgeon with updated views concurrent with surgery. Upon acquisition of registration data for patient position in the OR (using fiducial markers), the MATLAB GUI displays ultrasound image overlays on patient-specific preoperative MR images. Registration matrices are also applied to patient-specific anatomical models used for image updating. After displaying the re-oriented brain model in OR coordinates and digitizing the edge of the craniotomy, gravitational sagging of the brain is simulated using the finite element method. Based on this model, interpolation to the resolution of the preoperative images is performed and the result is re-displayed to the surgeon during the procedure. These steps were completed within reasonable time limits, and the interface was relatively easy to use after a brief training period. The techniques described had been developed and used retrospectively prior to this study. Based on the work described here, these steps can now be accomplished in the operating room, providing near real-time feedback to the surgeon.
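The interpolation step above, resampling the model-predicted deformation onto the preoperative image grid, can be sketched as follows. The abstract does not specify the interpolation scheme, so this uses simple inverse-distance weighting from FEM node displacements to voxel centers; all names (`interpolate_to_grid`, `nodes`, `disp`, `voxels`) are illustrative:

```python
import numpy as np

def interpolate_to_grid(nodes, disp, voxels, k=4):
    """Resample sparse FEM nodal displacements onto image voxel centers
    using inverse-distance weighting over the k nearest nodes.

    nodes:  (N, 3) FEM node coordinates
    disp:   (N, 3) displacement vector at each node
    voxels: (M, 3) voxel-center coordinates in the same frame
    """
    out = np.zeros((len(voxels), 3))
    for i, v in enumerate(voxels):
        d = np.linalg.norm(nodes - v, axis=1)
        idx = np.argsort(d)[:k]                # k nearest FEM nodes
        w = 1.0 / np.maximum(d[idx], 1e-9)     # inverse-distance weights
        out[i] = (w[:, None] * disp[idx]).sum(axis=0) / w.sum()
    return out
```

A production system would instead evaluate the FEM shape functions of the element containing each voxel, but the weighted-average structure is the same.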
Patient registration, a key step in establishing image guidance, must be performed in real time after the patient is anesthetized in the operating room (OR), prior to surgery. We propose to use cortical vessels as landmarks for registering the preoperative images to the operating space. To accomplish this, we have attached a video camera to the optics of the operating microscope and acquired a pair of images by moving the scope. The stereo imaging system is calibrated to obtain both intrinsic and extrinsic camera parameters. During neurosurgery, immediately after opening of the dura, a pair of stereo images is acquired. The 3-D locations of blood vessels are estimated via stereo vision techniques. The same series of vessels is then localized in the preoperative image volume. From these 3-D coordinates, the transformation matrix between the preoperative images and the operating space is estimated. Using a phantom, we have demonstrated that patient registration from cortical vessels is not only feasible but also more accurate than using conventional scalp-attached fiducials. The fiducial registration error (FRE) was reduced from 1 mm using implanted fiducials to 0.3 mm using cortical vessels. By replacing implanted fiducials with cortical features, we can automate the registration procedure and reduce invasiveness to the patient.
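The transformation estimation described above, a rigid fit between matched 3-D vessel coordinates in image space and OR space, is commonly solved in closed form with the SVD-based (Kabsch/Umeyama) method. The abstract does not name its solver, so the following is a sketch under that assumption:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping matched 3-D points
    src -> dst, via the SVD-based Kabsch/Umeyama method."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t                               # dst_i ~= R @ src_i + t
```

At least three non-collinear correspondences are required; in practice more vessel points are used and the residual after the fit serves as a registration-error estimate.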
Microscope-based image-guided neurosurgery can be divided into three steps: calibration of the microscope optics; registration of the pre-operative images to the operating space; and tracking of the patient and microscope over time. Critical to this overall system is the temporal retention of accurate camera calibration. Classic calibration algorithms are routinely employed to find both intrinsic and extrinsic camera parameters. The accuracy of this calibration, however, is quickly compromised by the complexity of the operating room, the long duration of a surgical procedure, and inaccuracies in the tracking system. To compensate for the changing conditions, we have developed an adaptive procedure that responds to accruing registration error. The approach utilizes miniature fiducial markers implanted on the bony rim of the craniotomy site, which remain in the field of view of the operating microscope. A simple error function that enforces the registration of the known fiducial markers is used to update the extrinsic camera parameters; the error function is minimized using gradient descent. This correction procedure reduced RMS registration errors for cortical features on the surface of the brain by an average of 72%, or 1.5 mm, and kept these errors below 0.6 mm after each correction throughout the surgical procedure.
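A minimal sketch of the correction loop described above, assuming a zero-skew pinhole camera and restricting the gradient-descent update to the extrinsic translation for brevity (the paper updates the full extrinsic parameters; `project`, `refine_translation`, and all parameter values are illustrative):

```python
import numpy as np

def project(P, K, R, t):
    """Pinhole projection of 3-D points P into pixel coordinates."""
    cam = P @ R.T + t                        # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]            # perspective divide
    return uv * np.array([K[0, 0], K[1, 1]]) + np.array([K[0, 2], K[1, 2]])

def refine_translation(P, uv_obs, K, R, t0, lr=1e-6, iters=3000, eps=1e-6):
    """Gradient descent on the extrinsic translation, minimizing the mean
    squared reprojection error of the known fiducial markers.
    The gradient is taken by central finite differences."""
    t = np.asarray(t0, float).copy()

    def err(t):
        return np.mean(np.sum((project(P, K, R, t) - uv_obs) ** 2, axis=1))

    for _ in range(iters):
        g = np.zeros(3)
        for i in range(3):                   # finite-difference gradient
            dt = np.zeros(3)
            dt[i] = eps
            g[i] = (err(t + dt) - err(t - dt)) / (2 * eps)
        t -= lr * g
    return t
```

In the real system the fiducials on the craniotomy rim supply `P` (from the preoperative images) and `uv_obs` (from the microscope video), and the minimization runs whenever the observed registration error accrues past a threshold.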
Image-guided neurosurgery systems rely on rigid registration of the brain to preoperative images, which does not take into account the displacement of brain tissue during surgery. Co-registered ultrasound appears to be a promising means of detecting tissue shift in the operating room. Although ultrasound images alone may be insufficient for adequately describing intraoperative brain deformation, they could be used in conjunction with a computational model to predict full-volume deformation. We rigorously test the assumption that co-registered ultrasound is an accurate source of sparse displacement data. Our co-registered ultrasound system is studied in both clinical applications and a series of porcine experiments. Qualitative analysis of the patient data indicates that ultrasound correctly depicts displaced tissue. The porcine studies demonstrate that features from co-registered ultrasound and CT or MR images are aligned to within approximately 1.7 mm. Tissue tracking in the pigs suggests that the magnitude of tissue displacement may be predicted more accurately than the actual location of features. We conclude that co-registered ultrasound is capable of detecting brain tissue shift, and that incorporating the displacement data into a computational model appears feasible.