Image-guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes implementation of an endoscopic laser range scanner (eLRS) system for accurate, intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates is determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds. Lower-resolution scans of 27,500 points are acquired in one second. Measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 ± 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images, onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.
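The calibrated-triangulation step can be illustrated as a ray-plane intersection: each laser-illuminated pixel defines a camera ray, and the steerable fan beam defines a known plane. A minimal sketch follows, with all calibration values hypothetical (the intrinsic matrix K and the laser-plane parameters would come from the eLRS calibration, not from this example):

```python
import numpy as np

def triangulate_laser_point(K, pixel, plane_normal, plane_offset):
    """Intersect the camera ray through `pixel` with the laser fan plane.

    K            : 3x3 camera intrinsic matrix (from calibration)
    pixel        : (u, v) image coordinates of a laser-illuminated pixel
    plane_normal : normal n of the laser plane, in camera coordinates
    plane_offset : scalar c such that the plane satisfies n . x = c
    Returns the 3-D point, in camera coordinates, where ray meets plane.
    """
    u, v = pixel
    # Back-project the pixel to a ray direction through the camera center.
    d = np.linalg.solve(K, np.array([u, v, 1.0]))
    # Solve n . (t * d) = c for the ray parameter t.
    t = plane_offset / np.dot(plane_normal, d)
    return t * d

# Hypothetical calibration values for illustration only:
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
n = np.array([0.0, 0.0, 1.0])   # laser plane z = 40 mm (within the 2-6 cm range)
point = triangulate_laser_point(K, (400.0, 300.0), n, 40.0)
```

Repeating this intersection for every illuminated pixel as the fan beam sweeps the surface yields the dense point set described above.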
A fully automated, anatomically-based procedure is developed for the coregistration of prone and supine scans in
computed tomographic colonography (CTC). Haustral folds, teniae coli, and other anatomic landmarks are extracted
from the segmented colonic lumen and serve as the basis for iterative optimization-based matching of the colonic
surfaces. The three-dimensional coregistration is computed efficiently using a two-dimensional filet representation of the
colon. The circumferential positions of longitudinal structures such as teniae coli are used to estimate a rotational
prone-to-supine deformation, haustral folds give a longitudinal (stretching) deformation, while other landmarks and
anatomical considerations are used to constrain the allowable deformations. The proposed method is robust to changes in
the detected anatomical landmarks such as the obscuration or apparent bifurcation of teniae coli. Preliminary validation
in the Walter Reed CTC data set shows excellent coregistration accuracy: 57 manually identified features (such as
polyps and diverticula) are automatically coregistered with a mean three-dimensional error of 16.4 mm. In phantom
studies, 210 fiducial pairs are coregistered to a mean three-dimensional error of 8.6 mm. The coregistration allows points
of interest in one scan to be automatically located in the other, leading to an expected improvement in per-patient read
time and a significant reduction in the cost of CTC.
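The longitudinal (stretching) component of the deformation described above can be sketched as a piecewise-linear map between matched haustral-fold positions along the colon centerline. The following is a minimal illustration with hypothetical fold locations; the actual method's iterative optimization and the rotational (teniae coli) component are omitted:

```python
import numpy as np

def longitudinal_map(prone_folds, supine_folds):
    """Build a piecewise-linear longitudinal stretching map from matched
    haustral-fold positions, given as arc lengths along the centerline of
    each scan. Returns a function mapping prone arc length to supine
    arc length by linear interpolation between matched folds."""
    prone_folds = np.asarray(prone_folds, dtype=float)
    supine_folds = np.asarray(supine_folds, dtype=float)
    return lambda s: np.interp(s, prone_folds, supine_folds)

# Hypothetical matched fold positions (cm along the colon centerline):
prone = [0.0, 10.0, 25.0, 40.0]
supine = [0.0, 12.0, 24.0, 42.0]
f = longitudinal_map(prone, supine)
```

A rotational map could be built analogously from the circumferential positions of the teniae coli on the 2-D filet representation.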
We describe a technique to build a soft-walled colon phantom that provides realistic lumen anatomy in computed
tomography (CT) images. The technique begins with the geometry of a human colon measured during CT colonography
(CTC). The three-dimensional air-filled colonic lumen is segmented and then replicated using stereolithography (SLA).
The rigid SLA model includes large-scale features (e.g., haustral folds and teniae coli bands) down to small-scale features
(e.g., a small pedunculated polyp). Since the rigid model represents the internal air-filled volume, a highly pliable
silicone polymer is painted onto the rigid model. This thin layer of silicone, when removed, becomes the colon wall.
Small 3-mm-diameter glass beads are affixed to the outer wall. These beads appear with high intensity in CT
scans and provide ground truth for evaluating the performance of algorithms designed to register prone and supine CTC
data sets. After curing, the silicone colon wall is peeled off the rigid model. The resulting colon phantom is filled with
air and submerged in a water bath. CT images and intraluminal fly-through reconstructions from CTC scans of the colon
phantom are compared against patient data to demonstrate the ability of the phantom to simulate a human colon.
In image-guided surgery, discrete fiducials are used to determine a spatial registration between the location of surgical
tools in the operating theater and the location of targeted subsurface lesions and critical anatomic features depicted in
preoperative tomographic image data. However, the lack of readily localized anatomic landmarks has greatly hindered
the use of image-guided surgery in minimally invasive abdominal procedures.
To address these needs, we have previously described a laser-based system for localization of internal surface anatomy
using conventional laparoscopes. During a procedure, this system generates a digitized, three-dimensional
representation of visible anatomic surfaces in the abdominal cavity.
This paper presents the results of an experiment using an ex vivo bovine liver to assess the subsurface targeting accuracy
achieved using our system. During the experiment, several radiopaque targets were inserted into the liver parenchyma.
The location of each target was recorded using an optically-tracked insertion probe. The liver surface was digitized
using our system, and registered with the liver surface extracted from post-procedure CT images. This surface-based
registration was then used to transform the position of the inserted targets into the CT image volume. The target
registration error (TRE) achieved using our surface-based registration (given a suitable registration algorithm
initialization) was 2.4 mm ± 1.0 mm. A comparable TRE (2.6 mm ± 1.7 mm) was obtained using a registration based on
traditional fiducial markers placed on the surface of the same liver. These results indicate the potential of fiducial-free,
surface-to-surface registration for image-guided lesion targeting in minimally invasive abdominal surgery.
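The TRE reported above is, in principle, computed by mapping the probe-recorded target positions through the surface-based rigid registration and measuring their distances to the CT-identified positions. A minimal sketch, assuming the rigid transform (R, t) has already been estimated by the registration algorithm:

```python
import numpy as np

def target_registration_error(R, t, targets_physical, targets_ct):
    """Target registration error for a rigid transform x -> R @ x + t.

    targets_physical : Nx3 target positions recorded with the tracked probe
    targets_ct       : Nx3 corresponding positions located in the CT volume
    Returns (mean, std) of the per-target Euclidean distances.
    """
    mapped = np.asarray(targets_physical, dtype=float) @ R.T + t
    errors = np.linalg.norm(mapped - np.asarray(targets_ct, dtype=float), axis=1)
    return float(errors.mean()), float(errors.std())

# Toy example: a pure translation registered perfectly gives zero TRE.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
phys = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
ct = phys + t
mean_err, std_err = target_registration_error(R, t, phys, ct)
```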
Image-guided surgery has led to more accurate lesion targeting and improved outcomes in neurosurgery. However, adaptation of the technology to other forms of surgery has been slow, largely due to difficulties in determining the position of anatomic landmarks within the surgical field. The ability to localize anatomic landmarks and provide real-time tracking of tissue motion without placing additional demands on the surgeon will facilitate image-guided surgery in a variety of clinical disciplines. Even approximate localization of anatomic landmarks would benefit many forms of surgery. For example, liver surgeons could visualize intraoperative locations on preoperative CT or MR scans to assist them in navigating through the complex hepatic vascular network. This paper describes the initial stages of development of an endoscopic localization system for use during minimally invasive, image-guided abdominal surgery. The system projects a scanned laser beam through a conventional endoscope. The projected laser spot is then observed using a second endoscope oriented obliquely to the projecting endoscope. Knowledge of the optical geometry of the endoscopes, along with their relative positions in space, allows determination of the three-dimensional coordinates of the illuminated point. The ultimate accuracy of the system depends on the geometric relationship between the endoscopes, the ability to accurately measure the position of each endoscope, and careful calibration of the optics used to project the laser beam. We report a system design intended to support automated operation, methods and initial results of measurement of target points, and preliminary data characterizing the performance of the system.
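The two-endoscope triangulation described above can be sketched as a midpoint-of-closest-approach computation between two rays, one from each endoscope toward the illuminated spot. The ray origins and directions below are hypothetical stand-ins for the calibrated endoscope geometry:

```python
import numpy as np

def triangulate_two_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays, each given by an
    origin p and a direction d. With perfect calibration the rays intersect
    at the laser spot; with noise, the midpoint is a robust estimate."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom    # parameter of closest point on ray 1
    s = (a * e - b * d) / denom    # parameter of closest point on ray 2
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Hypothetical geometry: both rays pass through the point (0, 0, 2).
spot = triangulate_two_rays(
    [0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
    [2.0, 0.0, 0.0], [-1.0 / np.sqrt(2), 0.0, 1.0 / np.sqrt(2)])
```

In practice, the ray origins and directions would be derived from the tracked positions of the endoscopes and their calibrated optics, which is why the abstract emphasizes pose measurement and optical calibration as accuracy drivers.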