Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time
requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due
to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its
requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear
elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems,
and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing
work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear
finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing
the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element
operations. We employ a virtual coupling method for separating deformable body simulation and collision
detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation.
The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with
haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the
material properties of the tissue and the speed of colliding objects. Hence, additional efforts including dynamic
relaxation are required to improve the stability of the system.
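As a rough sketch (not the authors' implementation), the per-node TLED update with a lumped mass matrix and mass-proportional damping, together with the spring-damper virtual coupling that separates the fast haptic loop from the slower deformable-body update, might look as follows; all function names, and the choice of mass-proportional Rayleigh damping, are illustrative assumptions:

```python
import numpy as np

def tled_step(u, u_prev, f_int, f_ext, m, dt, alpha):
    """One explicit central-difference TLED step per node (arrays share a shape).

    Solves  M*a + alpha*M*v = f_ext - f_int  with lumped nodal masses m,
    where acceleration a and velocity v are central differences of the
    displacement u. All operations are per-node, hence GPU-friendly.
    """
    denom = m / dt**2 + alpha * m / (2.0 * dt)
    return (f_ext - f_int
            + (2.0 * m / dt**2) * u
            + (alpha * m / (2.0 * dt) - m / dt**2) * u_prev) / denom

def virtual_coupling_force(x_device, x_proxy, v_device, v_proxy, k, d):
    """Spring-damper 'virtual coupling' between the haptic device pose and
    the simulated proxy: the haptic loop renders this force at its own high
    rate while the proxy follows the slower finite element simulation."""
    return k * (x_device - x_proxy) + d * (v_device - v_proxy)
```

With zero net force and equal current and previous displacements, the node correctly stays at rest; the coupling stiffness `k` and damping `d` would be tuned against the stability limits the abstract mentions.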
We present on-going work on multi-resolution sulcal-separable meshing for approach-specific neurosurgery simulation, in
conjunction with multi-grid and Total Lagrangian Explicit Dynamics finite elements. Conflicting requirements of interactive
nonlinear finite elements and small structures lead to a multi-grid framework. The implications for meshing are explicit control
over resolution and prior knowledge of the intended neurosurgical approach and path. This information is used to
define a subvolume of clinical interest, within some distance of the path and the target pathology. Restricted to this
subvolume are a tetrahedralization of finer resolution, the representation of critical tissues, and a sulcal-separability
constraint for all mesh levels.
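The restriction of refinement to a subvolume within some distance of the planned path can be sketched as a simple distance test against a sampled approach path; the function name and array shapes here are illustrative assumptions, not the authors' meshing pipeline:

```python
import numpy as np

def mark_refinement_subvolume(centroids, path_pts, radius):
    """Mark elements whose centroid lies within `radius` of the surgical path.

    centroids: (M, 3) tetrahedral element centroids
    path_pts:  (P, 3) points sampled along the intended approach path
    Returns a boolean mask of elements inside the subvolume of clinical
    interest, i.e. candidates for the finer-resolution tetrahedralization.
    """
    # distance from every centroid to its nearest sampled path point
    d = np.linalg.norm(centroids[:, None, :] - path_pts[None, :, :], axis=2)
    return d.min(axis=1) <= radius
```

A production mesher would use point-to-segment distances and a spatial index rather than this brute-force pairwise test, but the selection criterion is the same.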
This paper presents on-going research that addresses uncertainty along the Z-axis in image-guided surgery, for
applications to large surgical workspaces, including those found in veterinary medicine. Veterinary medicine lags human
medicine in the use of image guidance, despite the availability of MR and CT scans of animals. The positional uncertainty of a surgical
tracking device can be modeled as an octahedron with one long axis coinciding with the depth axis of the sensor, where
the short axes are determined by pixel resolution and workspace dimensions. The further a 3D point is from this device,
the more elongated is this long axis, and the greater the uncertainty along Z of this point's position, in relation to its
components along X and Y. Moreover, the position error of a triangulation-based tracker grows with the square of the
distance. Our approach is to use two or more Micron Trackers that communicate with each other, and to combine this feature
with flexible positioning. Prior knowledge of the type of surgical procedure, and if applicable, the species of animal that
determines the scale of the workspace, would allow the surgeon to pre-operatively configure the trackers in the OR for
optimal accuracy. Our research also leverages the open-source Image-guided Surgery Toolkit (IGSTK).
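A minimal sketch of how two such estimates might be combined, assuming each tracker reports a position with an anisotropic Gaussian covariance whose depth-axis standard deviation grows with the square of the distance; the function names and parameterization are our own illustration, not the IGSTK API:

```python
import numpy as np

def depth_scaled_cov(sigma_xy, sigma_z0, depth, ref_depth):
    """Anisotropic covariance in the sensor frame: X/Y variances are set by
    pixel resolution; the Z (depth) standard deviation grows with the square
    of the distance, as for a triangulation-based tracker."""
    sigma_z = sigma_z0 * (depth / ref_depth) ** 2
    return np.diag([sigma_xy**2, sigma_xy**2, sigma_z**2])

def fuse_tracker_estimates(points, covs):
    """Fuse independent position estimates by inverse-covariance weighting,
    the minimum-variance combination for Gaussian errors."""
    infos = [np.linalg.inv(C) for C in covs]
    info_sum = sum(infos)
    weighted = sum(I @ p for I, p in zip(infos, points))
    cov_fused = np.linalg.inv(info_sum)
    return cov_fused @ weighted, cov_fused
```

Because each tracker's long (depth) axis points in a different direction when the trackers are positioned flexibly around the workspace, the fused covariance is much closer to isotropic than either input.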
For the pre-operative definition of a surgical workspace for Navigated Control<sup>®</sup> Functional Endoscopic Sinus
Surgery (FESS), we developed a semi-automatic image processing system. Based on observations of surgeons
using a manual system, we implemented a workflow-based engineering process that led us to the development of
a system reducing time and workload spent during the workspace definition. The system uses a feature based on
local curvature to align vertices of a polygonal outline along the bone structures defining the cavities of the inner
nose. An anisotropic morphologic operator was developed solve problems arising from artifacts from noise and
partial volume effects. We used time measurements and NASA's TLX questionnaire to evaluate our system.
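A local-curvature feature of the kind described can be sketched as the discrete turning angle at each vertex of the closed polygonal outline; this is an illustrative stand-in, not the system's actual feature:

```python
import numpy as np

def turning_angles(poly):
    """Discrete curvature feature: signed exterior turning angle at each
    vertex of a closed 2D polygon given as an (N, 2) array of vertices.
    Large magnitudes flag vertices at sharp bone/cavity boundaries."""
    prev = np.roll(poly, 1, axis=0)   # preceding vertex of each vertex
    nxt = np.roll(poly, -1, axis=0)   # following vertex of each vertex
    v1 = poly - prev                  # incoming edge
    v2 = nxt - poly                   # outgoing edge
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = np.einsum('ij,ij->i', v1, v2)
    return np.arctan2(cross, dot)     # in (-pi, pi], positive = left turn
```

For an axis-aligned square traversed counter-clockwise, every vertex has a turning angle of pi/2.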
We propose a method for estimating intrasurgical brain shift for image-guided surgery. This method consists of five stages: the identification of relevant anatomical surfaces within the MRI/CT volume, range-sensing of the skin and cortex in the OR, rigid registration of the skin range image with its MRI/CT homologue, non-rigid motion tracking over time of cortical range images, and lastly, interpolation of this surface displacement information over the whole brain volume via a realistically valued finite element model of the head. This paper focuses on the anatomical surface identification and cortical range surface tracking problems. The surface identification scheme implements a recent algorithm which embeds 3D surface segmentation as the level-set of a 4D moving front. A by-product of this stage is a Euclidean distance and closest-point map which is later exploited to speed up the rigid and non-rigid surface registration. The range sensor uses both laser-based triangulation and defocusing techniques to produce a 2D range profile, and is linearly swept across the skin or cortical surface to produce a 3D range image. The surface registration technique is of the iterative closest point type, where each iteration benefits from looking up, rather than searching for, explicit closest point pairs. These explicit point pairs in turn are used in conjunction with a closed-form SVD-based rigid transformation computation and with fast recursive splines to make each rigid and non-rigid registration iteration essentially instantaneous. Our method is validated with a novel deformable brain-shaped phantom, made of Polyvinyl Alcohol Cryogel.
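The closed-form SVD-based rigid transformation computation used inside each registration iteration is the standard Kabsch/Umeyama solution; a minimal sketch, given matched closest-point pairs:

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Closed-form least-squares rigid transform mapping points P onto Q.

    P, Q: (N, 3) arrays of matched point pairs (e.g. explicit closest-point
    pairs from the distance map). Returns (R, t) minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) in the orthogonal factor
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In an ICP loop this solve runs once per iteration, with the closest-point pairs supplied by the precomputed Euclidean distance and closest-point map rather than by a nearest-neighbor search.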
Image-guided surgery has evolved over the past 15 years from stereotactic planning, where the surgeon planned approaches to intracranial targets on the basis of 2D images presented on a simple workstation, to the use of sophisticated multi-modality 3D image integration in the operating room, with guidance being provided by mechanically, optically or electro-magnetically tracked probes or microscopes. In addition, sophisticated procedures such as thalamotomies and pallidotomies to relieve the symptoms of Parkinson's disease are performed with the aid of volumetric atlases integrated with the 3D image data. Operations that are performed stereotactically, that is to say via a small burr-hole in the skull, can assume that the information contained in the pre-operative imaging study accurately represents the brain morphology during the surgical procedure. On the other hand, performing a procedure via an open craniotomy presents a problem. Not only does tissue shift when the operation begins, even the act of opening the skull can cause significant shift of the brain tissue due to the relief of intra-cranial pressure, or the effect of drugs. Means of tracking and correcting such shifts form an important part of the work in the field of image-guided surgery today. One approach has been the development of intra-operative MRI systems. We describe an alternative approach which integrates intra-operative ultrasound with pre-operative MRI to track such changes in tissue morphology.