1. Introduction
Although robot-assisted laparoscopic surgery can provide value in many surgical applications (e.g., gynecologic,1 liver,2 and rectal3 surgery), it is still most commonly used for the surgical management of prostate cancer.4 In addition to the removal of the primary prostate cancer (prostatectomy), such procedures are often accompanied by a form of lymph node dissection focused on the removal of lymphatic metastases, e.g., an (extended) lymphadenectomy, a sentinel node (SN) procedure,5 or, in the future, a prostate-specific membrane antigen-targeted nodal dissection.6 It is widely accepted that image guidance toward the (possible) location of prostate cancer metastases may help improve surgical accuracy and thereby reduce the procedure-associated side effects.7 For this reason, the da Vinci robotic platform (Intuitive Surgical Inc., Sunnyvale, California) is now routinely equipped with a near-infrared (NIR) fluorescence laparoscope (Firefly, Intuitive Surgical Inc.). The concept here is that fully integrated visual identification of the target, using fluorescence guidance, helps the urologist interpret the surgical field and, as such, improves surgical guidance. Unfortunately, fluorescence guidance offers only limited in-depth signal penetration due to absorption and scattering of the fluorescence excitation and emission light by tissue (roughly ).8,9 This makes the technology lose value when lesions are hidden at unknown locations deep within the patient's anatomy. We underlined this shortcoming during robot-assisted SN biopsy procedures that made use of a tracer that is both radio- and fluorescence-labeled [indocyanine green ].10,11 These studies illustrated that, depending on the camera system, up to 19.6% of the SNs were missed during the urologist's fluorescence-based inspection of the surgical field.
Here, insight into the tracer location, provided by the preoperative single-photon emission computed tomography/X-ray computed tomography (SPECT/CT) images, enabled the urologist to also resect the SNs not directly visible using fluorescence imaging. Ex vivo fluorescence imaging then confirmed the presence of the fluorescence signal in the resected SNs. Based on the preoperatively acquired imaging information, surgeons can cognitively plan their route toward target structures [e.g., tumorous tissue or (tumor-positive) lymph nodes] and around delicate structures (e.g., blood vessels, nerves, and/or ureters). Unfortunately, applying this information in the operating room (OR) remains challenging. The link between pre- and intraoperative imaging methods may be strengthened using surgical navigation (e.g., via optical patient tracking), allowing a technological integration of pre- and intraoperative findings. Applying this technology to soft tissue environments is challenging due to tissue shift and deformation; here, movement after acquisition of the patient scan(s) poses the biggest problem.12–15 Such deformations can only be corrected using intraoperative imaging findings and/or reference marker points placed at the tissue of interest. For example, a radioactive readout can be used to confirm the navigation accuracy or even to provide an intraoperative freehand SPECT (fhSPECT) scan that allows for navigation in a three-dimensional (3-D) “snapshot” map generated in the OR setting.16–19 As an alternative to (or in conjunction with) using a radioactive signature for intraoperative confirmation of the navigation accuracy, validation in the form of fluorescence imaging can be used.20,21 In this study, we describe the integration of the above features in a robotic setup using (human-like) phantoms. For this, we used both SPECT/CT and fhSPECT findings as the basis for the navigation process.
Additionally, we have integrated these data sets into the laparoscopic video feed using an augmented reality overlay. Such a “hybrid navigator” concept also means that, in the future, different pre- and intraoperative imaging sources can be integrated in an interactive augmented reality view that can be fed into the console of the operating urologist.

2. Methods and Materials

2.1. Navigated Fluorescence Laparoscope Setup
The complete navigation setup consisted of a combination of the Firefly laparoscope, integrated with the da Vinci® Si surgical robot, and the declipseSPECT navigation system (SurgicEye GmbH, Munich, Germany) [Fig. 1(a)] with an integrated NIR optical tracking system (OTS; Polaris Vicra, Northern Digital Inc., Waterloo, Canada), which provides a tracking accuracy [95% confidence interval; 0.25 mm root mean square (RMS)].22 Different reference targets were used to optically track the 3-D position and orientation (3-D pose) of objects, such as the laparoscope, gamma probes, and the different phantom setups. All reference targets contained a unique asymmetric geometry of at least three fiducials visible to the OTS. To allow the navigation device to describe all the different objects (e.g., surgical target and laparoscope) in the surgical workflow, all objects had to be connected by placing them in a common coordinate system provided by the OTS.23 Transforming a 3-D object pose from one coordinate system to another was achieved using transformation matrices. An overview of the different transformations needed in this setup is illustrated in Fig. 1. Estimation of these transformations is described in more detail throughout the text below.

2.2. Single-Photon Emission Computed Tomography/X-Ray Computed Tomography
Before acquisition of the SPECT/CT scan, a three-fiducial phantom reference target (PRT) was fitted to the exterior of the phantom setups, allowing for optical tracking by the OTS.
It remained at a fixed position with respect to the phantom throughout the experiments. Next to its visibility to the OTS, this PRT is also clearly distinguishable in the CT imaging data, allowing the navigation system to segment it from the CT imaging data and calculate the transformation from the CT coordinate system to the PRT coordinate system20 [Fig. 1(b)]. Since the SPECT/CT imaging system (Dual Head Symbia TruePoint SPECT/CT, Siemens Healthcare GmbH, Erlangen, Germany) was a combined apparatus, the registration between the CT and SPECT imaging data itself is provided in the digital imaging and communications in medicine tags of the 3-D images. This allows for the connection between the SPECT imaging data and the CT coordinate system, provided by the corresponding transformation matrix.

2.3. Freehand Single-Photon Emission Computed Tomography
Multiple fhSPECT scans were acquired using either an SOE 311 gamma probe with a Europrobe 3 control unit (Eurorad S.A., Eckbolsheim, France) or a HiSens gamma probe with an SG03 control unit (Crystal Photonics GmbH, Berlin, Germany). A four-fiducial gamma probe reference target (GPRT) was fitted to allow tracking of these modalities by the navigation system. Scanning times varied between 2 and 3 min. Placement of the PRT was the same as for SPECT/CT-based navigation. Using both the PRT and the GPRT during the fhSPECT acquisition, the 3-D fhSPECT scanning data can be placed in the navigation workflow by linking it to the PRT via the transformation matrix provided by the navigation system itself.

2.4. Fluorescence Laparoscope
During the navigation experiments, both the standard laparoscope white-light setting and the fluorescence light setting were used. To connect the laparoscope video feed to the navigation system, an Epiphan frame grabber (DVI2PCIe, Epiphan Systems Inc., Ottawa, Ontario, Canada) was integrated into the navigation cart.
Either the processed laparoscope video feed was connected directly from the digital visual interface (DVI) output on the back of the surgical robot vision cart, or the raw laparoscope video feed was connected from the component output on the back of the camera console. To track the 3-D pose of the laparoscope with the navigation system, a three-fiducial reference target was attached to the camera housing [laparoscope reference target (LRT); see Fig. 1(b)]. The optical wavelengths used for object tracking and fluorescence emission partly overlapped, both being in the 800- to 900-nm range. Nevertheless, interference issues were not expected during intra-abdominal use of the laparoscope. An adapted version of the declipseSPECT 6.0 (SurgicEye GmbH) software was used to incorporate the laparoscope video feed in both the calibration and navigation workflows. Two calibration steps were performed with the fluorescence laparoscope to allow its proper use in the navigation workflow.
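The principle behind the setup of Secs. 2.1 to 2.4 is a chain of 4 × 4 homogeneous transforms: the OTS tracks the PRT and LRT, registration and calibration supply the CT-to-PRT and camera-to-LRT links, and a target expressed in CT coordinates can then be mapped into the camera frame and projected into the video feed. The sketch below illustrates only this chaining; all poses, intrinsics, and variable names (e.g., `T_ots_prt`) are hypothetical stand-ins, not values from the actual system.

```python
import numpy as np

def make_pose(t, R=None):
    """Build a 4x4 homogeneous transform from a translation t (mm) and rotation R (3x3)."""
    T = np.eye(4)
    T[:3, :3] = np.eye(3) if R is None else R
    T[:3, 3] = t
    return T

# Poses as the OTS would report them in real time (object frame -> OTS frame);
# identity rotations are used here purely for readability.
T_ots_prt = make_pose([0.0, 0.0, 500.0])   # phantom reference target (PRT)
T_ots_lrt = make_pose([0.0, 0.0, 300.0])   # laparoscope reference target (LRT)

# Transforms obtained once by registration/calibration (assumed values):
T_prt_ct  = make_pose([-20.0, 0.0, 0.0])   # CT frame -> PRT, via the CT-visible fiducials
T_lrt_cam = make_pose([0.0, 10.0, 0.0])    # camera frame -> LRT, via laparoscope calibration

# A SPECT hotspot (navigation target) expressed in CT coordinates, homogeneous point.
p_ct = np.array([10.0, 5.0, 0.0, 1.0])

# Chain the frames: CT -> PRT -> OTS -> LRT -> camera.
T_cam_ct = np.linalg.inv(T_lrt_cam) @ np.linalg.inv(T_ots_lrt) @ T_ots_prt @ T_prt_ct
p_cam = T_cam_ct @ p_ct            # target expressed in the laparoscope camera frame

# Pinhole projection into the video frame for an augmented overlay; the
# intrinsics fx, fy, cx, cy would come from a Zhang-style camera calibration.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
u = fx * p_cam[0] / p_cam[2] + cx  # pixel column of the overlaid target dot
v = fy * p_cam[1] / p_cam[2] + cy  # pixel row of the overlaid target dot
depth_mm = p_cam[2]                # along-axis distance, shown numerically to the surgeon
print((round(u), round(v)), depth_mm)
```

Because every pose is referred back to the common OTS frame, any tracked object can be swapped in (e.g., the GPRT during fhSPECT acquisition) without changing the rest of the chain.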
2.5. Soft Tissue Phantom
The soft tissue phantom setup [Fig. 2(a)] consisted of a tray with the PRT placed on a stand and a silicone half sphere with four Eppendorf tubes (Eppendorf AG, Hamburg, Germany) placed inside. The silicone half sphere structure, with a diameter of at its base and a height of , was cast from Dragon Skin® FX-Pro silicone rubber (Smooth-On Inc., Macungie, Pennsylvania), with the addition of Thi-Vex® Silicone Thickener (Smooth-On Inc.) for the skin part and Slacker® Tactile Mutator (Smooth-On Inc.) for the softer fat part. SilTone pigments (FormX, Amsterdam, The Netherlands) were used to color the rubber. Two Eppendorf tubes served as the navigation targets, to be scanned with (fh)SPECT imaging, and were placed roughly 5 cm apart. With a volume of , the Eppendorf tubes should resemble a typical lymph node size found in lymph node dissection for prostate cancer; KleinJan et al.25 reported a range of 80 to , with a median of . These tubes were filled with a mixture of ICG and of an ICG solution and of a solution (). The ICG solutions used in this study consisted of ICG (ICG-Pulsion, Pulsion Medical Systems, Munich, Germany) dissolved in human serum albumin (Albuman , Sanquin, Amsterdam, The Netherlands). With this as stock solution, the effective ICG solutions used in the phantom experiments ranged from 9.62 to . This quantity was chosen based on both clinical relevance (0 to typically found in lymph nodes during prostatectomy25) and best visibility during fluorescence guidance (best visibility found between 2.44 and 26). The solutions used in this study were obtained from a pertechnetate solution ( saline, Technekow, Mallinckrodt Medical BV, Petten, The Netherlands). Two additional Eppendorf tubes were hidden at the sides of the tissue-mimicking structure to serve as “anatomical references” in the CT images. These tubes were filled with CT contrast agent [Ultravist (Bayer AG, Leverkusen, Germany), 50% diluted with demineralized water].
To constrain movement of all the objects in the phantom setup, and therefore constrain the navigation error, the silicone structure and the PRT stand were glued to a silicone bottom layer in the tray. The PRT itself was taped to the stand. Note, however, that to allow easy access to the target sources, the precut phantoms were opened with a wound spreader during navigation, a manipulation that could have introduced a (minor) deformation error into the navigation setup.

2.6. Laparoscopic Torso Phantom
The human-like laparoscopic torso phantom was made out of a standard life-sized anatomical model of the human skeleton, generally used for educational purposes. A transparent plastic mannequin was split in two parts over the coronal plane, and the skeleton was cut to fit inside. The skeleton was fixed, after which a silicone model of the prostate and three lymph nodes were incorporated, thereby simulating the SN biopsy procedure during robot-assisted laparoscopic prostatectomy.10,11,27 Both the prostate model and the lymph node models were cast from Dragon Skin® FX-Pro silicone rubber and Slacker® Tactile Mutator, using the color pigments for different colors. Mixtures of ICG and were then inserted into the prostate and SN models. For the prostate model, a 1.5-mL Eppendorf tube was incorporated in the cast and was used to store a tube filled with of the ICG solution and of the solution (). The lymph node phantoms contained a tube with a mixture of of the ICG solution and of the solution (). This resulted in a ratio in radioactivity for the prostate model with respect to each of the individual lymph nodes, which should be a reasonable ratio in SN mapping.28–31 Four 12-mm trocars were placed in the transparent shell of the torso phantom to allow docking of the da Vinci robot, including the fluorescence laparoscope. A PRT was fixed at the sternum location of the phantom.
2.7. Evaluation of the Navigation Accuracy
The total navigational accuracy was evaluated in the soft tissue phantom setups and divided into coronal and sagittal errors. The laparoscope was placed in a stand, vertical with respect to the navigation target, at a distance of roughly 5 cm. The navigation accuracy determination was then performed manually by comparing the target distance as given by the navigation system to the distance found with a ruler (measurement precision of ), in a manner similar to that described by Brouwer et al.16 For the sagittal plane, the distance from laparoscope tip to target was indicated numerically by the navigation system (precision of 1 mm). The coronal distance was indicated with the augmented reality overlay over the laparoscopic video feed, consisting of a green target point and a scalable purple/blue cloud of activity around it. The green target point provided a clear reference for the accuracy determination and was compared to the actual target as seen on the video. Since the navigation targets consisted of small Eppendorf tubes, the midpoint of the fluid was used for the measurements. For each navigation setup (SPECT/CT and fhSPECT), three different soft tissue phantoms were used, each with its own scans and each containing two distinct targets (over three phantoms). Each individual measurement was performed by two different observers, resulting in 24 measurements in total. IBM SPSS Statistics 22 software (International Business Machines Corp., New York) was used to test whether the accuracy of SPECT/CT- and fhSPECT-based navigation differed significantly, using an unpaired t-test (95% confidence interval).

2.8. Overview of Coordinate Frames and Calibration Steps
An overview of the different coordinate transformations applied for the navigation workflow is provided in Fig. 1. All transformations are divided into three different colors (red, blue, and green) to distinguish how they are found.
The transformations determined by the OTS in real time are shown in red, the ones calculated during registration or calibration procedures are shown in blue, and all transformations deduced from known geometries are shown in green.

3. Results
Figure 2 shows an example of the navigation process performed using preoperative SPECT/CT; Figs. 2(b) to 2(f) show snapshots of the Firefly video feed combining the laparoscopic view (in white-light and fluorescence mode) and an augmented overlay of the SPECT/CT data. The navigation targets, displayed as dots, are defined by the signal hotspots found in the SPECT imaging data, and the two white Eppendorf-shaped signals in the CT view [Figs. 2(a), 2(c), and 2(d)] function as anatomical markers. In a similar manner, we could also use “intraoperative” fhSPECT data sets for navigation (see Fig. 3). In the laparoscopic navigation view, the CT and/or (fh)SPECT overlays could be turned on and off independently of each other, and the size of the SPECT hotspots (threshold on the SPECT signal) could be scaled according to preference. Qualitatively, navigation to the different targets (depicted as green colored dots) in the soft tissue phantoms appeared accurate using both scan modalities; during the navigation process, the augmented hotspots in the video feed and the distance to the targets seemed well registered to the fluorescent findings. As mentioned in Sec. 2.4, the optical wavelengths of the OTS and the fluorescence emission overlapped. However, in contrast to previous studies,21 the NIR light of the OTS did not interfere with the fluorescence detection of the Firefly. The navigation accuracy for the SPECT/CT- and fhSPECT-based navigation procedures was quantified (see Table 1). Average errors for the SPECT/CT-based navigation were 2.25 mm (median 2 mm) and 2.08 mm (median 2 mm) in the coronal and sagittal planes, respectively. For the fhSPECT-based navigation, these were quite similar: 1.92 mm (median 2 mm) and 2.83 mm (median 3 mm).
These results underline that the navigation accuracy, in the setup studied, stays well below the 1-cm tissue limit needed for successful fluorescence detection.8,9 Comparison of the errors found for the two scan modalities did not reveal a significant difference (p-values of 0.764 and 0.282 for the coronal and sagittal planes, respectively).

Table 1. Overview of the accuracy measurements in the tissue-mimicking phantom setups for navigation based on both SPECT/CT and fhSPECT.
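The unpaired t-test used for this comparison (Sec. 2.7) reduces to the classic pooled-variance two-sample statistic. As a minimal sketch, the function below computes that statistic directly; the error samples are hypothetical stand-ins, not the measured data of Table 1, and the actual analysis was performed in SPSS.

```python
import numpy as np

def unpaired_t(a, b):
    """Student's two-sample t statistic with pooled variance (equal-variance unpaired t-test)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n1, n2 = len(a), len(b)
    s1, s2 = a.var(ddof=1), b.var(ddof=1)            # sample variances
    sp = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))  # pooled SD
    t = (a.mean() - b.mean()) / (sp * np.sqrt(1.0 / n1 + 1.0 / n2))
    dof = n1 + n2 - 2                                # degrees of freedom
    return t, dof

# Hypothetical coronal navigation errors in mm (12 measurements per modality,
# matching the study design, but NOT the actual recorded values).
err_spect_ct = [2, 3, 2, 1, 3, 2, 2, 3, 2, 2, 3, 2]
err_fhspect  = [2, 2, 1, 2, 3, 2, 1, 2, 2, 3, 2, 1]
t, dof = unpaired_t(err_spect_ct, err_fhspect)
print(round(t, 3), dof)  # the two-sided p-value follows from the t distribution with `dof` d.f.
```

With 12 + 12 measurements, the statistic is referred to a t distribution with 22 degrees of freedom to obtain the two-sided p-value.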
Evaluation of the navigation setup using a torso phantom (Fig. 4) illustrates the clinical setup to which the navigation should be translated. In addition, it demonstrates that the setup remained effective when the robot was fully docked.

4. Discussion
We have integrated surgical navigation, made possible via optical tracking, with (robot-assisted) laparoscopic fluorescence-guided surgery. Hereby, a robot-docked fluorescence laparoscope could be successfully navigated to target structures located in two different phantom setups with an accuracy of . All the different imaging modalities used (fluorescence, SPECT/CT, and fhSPECT) could be available during navigation, thereby allowing compensation for the weaknesses of the individual modalities (e.g., the limited tissue penetration of fluorescent signals or the inaccuracies of surgical navigation due to soft tissue deformations). The presented soft tissue navigation setup suggests a next step in providing surgeons with more precise orientation and localization during robot-assisted procedures. Theoretically, a preoperatively acquired SPECT/CT scan can be considered more precise than an intraoperatively acquired fhSPECT scan; with SPECT/CT, much more information is collected for the 3-D image reconstruction due to larger gamma cameras and longer scan times.18 Therefore, for a rigid phantom setup, one would expect the navigation errors found when navigation was based on SPECT/CT to be smaller than when navigation was based on fhSPECT. In our measurements, performed on the semirigid soft tissue phantoms, navigational accuracy was similar for the two scan methods. This suggests that navigation based on fhSPECT may serve as a valuable and cost-effective alternative to navigation based on SPECT/CT. Moreover, due to the intraoperative nature of fhSPECT imaging (e.g., using a drop-in gamma probe31,32), the technique should suffer much less from tissue deformations that result from patient movement.
In fact, due to the short acquisition time (roughly 2 to 3 min), one could easily acquire a new scan after any expected tissue deformation or displacement (e.g., removal of the primary tumor site). In addition, intraoperative real-time visualization of the fluorescence can be used to compensate for navigational errors below . Translation of the proposed navigation setup to clinical use should be straightforward, as the robotic setup remains identical. We do see two questions that require attention, depending on the specific clinical application chosen: (1) Will NIR optical tracking be sufficient for a dynamic robotic intervention? (2) What will be the range of tissue deformation found in patients, and how does this relate to the fluorescence detection limit? Regarding the first point, in this study, optical tracking was used to define the 3-D pose of the phantom setup, laparoscope, and gamma probe using three different reference targets containing at least three fiducials each. As shown, this was feasible, but a general limiting factor was that the OTS has to maintain a direct line-of-sight to at least three of the reference target fiducials to allow for 3-D pose determination of the tracked object,23 a feature that constrains the OR layout and logistics. This may be overcome by increasing the number of fiducials per reference target or by using multiple OTS cameras on different sides of the OR.33 A possible drawback of using NIR optical tracking in combination with NIR fluorescence imaging is the overlap in the spectra used for object tracking and fluorescence imaging. In previous work, we found that this could be a considerable issue when using an open surgery fluorescence camera.21 To our surprise, we did not detect such extensive interference when the Firefly laparoscope was applied outside of the phantom body. This finding could provide a basis for solving the issues previously reported.
That said, in a laparoscopic setting, the tracking light is not likely to interfere with the fluorescence detection, since fluorescence imaging is conducted inside the patient's body. As an alternative to NIR optical tracking, electromagnetic (EM) tracking or mechanical tracking could also provide a solution. With an intrinsic accuracy of up to (RMS) for EM tracking versus (RMS) for optical tracking,22,34 EM tracking still seems quite promising. However, a major disadvantage of this technique is its susceptibility to distortions of the EM tracking field by nearby metal objects or EM interference, rendering it much less accurate.35 Although inaccurate, mechanical tracking of the robotic arms is possible by using forward kinematics to calculate the 3-D pose of an end effector such as the laparoscope or a surgical instrument.36 Fuerst et al.32 greatly improved this accuracy, to an approximate error of 0.2 mm, by combining the mechanical tracking with vision-based tracking via the stereoscopic laparoscope video feed. In the presented navigation workflow, the navigation targets are displayed as a two-dimensional augmented reality overlay superimposed on the laparoscopic video feed, while the distance from the laparoscope tip to the target is shown numerically. When implemented in clinical use, it might be hard to get a good idea of how deep the target structures really lie. Therefore, the augmented reality visualization may in the future be enhanced with dynamic augmented cues (e.g., small arrows indicating both direction and distance) or additional depth perception cues (e.g., as shown by Bichlmeier et al.37 and Kutter et al.38).

Acknowledgments
This work was partially supported by two grants from the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013, Grant Nos. 2012-306890 and 2012-323105), a Eurostars grant (Hybrid Navigator; Grant No. E! 7555), and an NWO-STW-VIDI grant (Grant No. STW BGT11272).
Material support for this work was provided by SurgicEye (SurgicEye GmbH, Munich, Germany), allowing the authors to run their prototype software on the declipseSPECT navigation system, and by Intuitive Surgical (Intuitive Surgical Inc., Sunnyvale, California), supplying the authors with a stand-alone Firefly fluorescence laparoscope system for initial laboratory experiments. The authors would also like to thank Petra Dibbets-Schneider from the section Nuclear Medicine at the Department of Radiology, Leiden University Medical Center (Leiden, the Netherlands), for assisting in acquisition of the SPECT/CT scans.

References
1. E. E. Medlin, D. M. Kushner, and L. Barroilhet, "Robotic surgery for early stage cervical cancer: evolution and current trends," J. Surg. Oncol. 112(7), 772–781 (2015). http://dx.doi.org/10.1002/jso.24008
2. R. Montalti et al., "Outcomes of robotic vs laparoscopic hepatectomy: a systematic review and meta-analysis," World J. Gastroenterol. 21(27), 8441–8451 (2015). http://dx.doi.org/10.3748/wjg.v21.i27.8441
3. R. Biffi et al., "Dealing with robot-assisted surgery for rectal cancer: current status and perspectives," World J. Gastroenterol. 22(2), 546–556 (2016). http://dx.doi.org/10.3748/wjg.v22.i2.546
4. G. Novara et al., "Systematic review and meta-analysis of studies reporting oncologic outcome after robot-assisted radical prostatectomy," Eur. Urol. 62(3), 382–404 (2012). http://dx.doi.org/10.1016/j.eururo.2012.05.047
5. N. S. van den Berg et al., "Sentinel lymph node biopsy for prostate cancer: a hybrid approach," J. Nucl. Med. 54(4), 493–496 (2013). http://dx.doi.org/10.2967/jnumed.112.113746
6. T. Maurer et al., "Prostate-specific membrane antigen-radioguided surgery for metastatic lymph nodes in prostate cancer," Eur. Urol. 68, 530–534 (2015). http://dx.doi.org/10.1016/j.eururo.2015.04.034
7. F. Greco et al., "Current perspectives in the use of molecular imaging to target surgical treatments for genitourinary cancers," Eur. Urol. 65(5), 947–964 (2014). http://dx.doi.org/10.1016/j.eururo.2013.07.033
8. P. T. Chin et al., "Multispectral visualization of surgical safety-margins using fluorescent marker seeds," Am. J. Nucl. Med. Mol. Imaging 2(2), 151–162 (2012).
9. F. W. van Leeuwen, J. C. Hardwick, and A. R. van Erkel, "Luminescence-based imaging approaches in the field of interventional molecular imaging," Radiology 276(1), 12–29 (2015). http://dx.doi.org/10.1148/radiol.2015132698
10. G. H. KleinJan et al., "Optimisation of fluorescence guidance during robot-assisted laparoscopic sentinel node biopsy for prostate cancer," Eur. Urol. 66(6), 991–998 (2014). http://dx.doi.org/10.1016/j.eururo.2014.07.014
11. G. H. KleinJan et al., "Multimodal hybrid imaging agents for sentinel node mapping as a means to (re)connect nuclear medicine to advances made in robot-assisted surgery," Eur. J. Nucl. Med. Mol. Imaging 43(7), 1278–1287 (2016). http://dx.doi.org/10.1007/s00259-015-3292-2
12. M. Baumhauer et al., "Navigation in endoscopic soft tissue surgery: perspectives and limitations," J. Endourol. 22(4), 751–766 (2008). http://dx.doi.org/10.1089/end.2007.9827
13. J. Rassweiler et al., "Surgical navigation in urology: European perspective," Curr. Opin. Urol. 24(1), 81–97 (2014). http://dx.doi.org/10.1097/MOU.0000000000000014
14. F. Nickel et al., "Navigation system for minimally invasive esophagectomy: experimental study in a porcine model," Surg. Endosc. 27(10), 3663–3670 (2013). http://dx.doi.org/10.1007/s00464-013-2941-4
15. N. C. Buchs et al., "Augmented environments for the targeting of hepatic lesions during image-guided robotic liver surgery," J. Surg. Res. 184(2), 825–831 (2013). http://dx.doi.org/10.1016/j.jss.2013.04.032
16. O. R. Brouwer et al., "Feasibility of intraoperative navigation to the sentinel node in the groin using preoperatively acquired single photon emission computerized tomography data: transferring functional imaging to the operating room," J. Urol. 192(6), 1810–1816 (2014). http://dx.doi.org/10.1016/j.juro.2014.03.127
17. T. Wendler et al., "First demonstration of 3-D lymphatic mapping in breast cancer using freehand SPECT," Eur. J. Nucl. Med. Mol. Imaging 37(8), 1452–1461 (2010). http://dx.doi.org/10.1007/s00259-010-1430-4
18. T. Engelen et al., "The next evolution in radioguided surgery: breast cancer related sentinel node localization using a freehandSPECT-mobile gamma camera combination," Am. J. Nucl. Med. Mol. Imaging 5(3), 233–245 (2015).
19. C. Bluemel et al., "Intraoperative 3-D imaging improves sentinel lymph node biopsy in oral cancer," Eur. J. Nucl. Med. Mol. Imaging 41(12), 2257–2264 (2014). http://dx.doi.org/10.1007/s00259-014-2870-z
20. O. R. Brouwer et al., "Image navigation as a means to expand the boundaries of fluorescence-guided surgery," Phys. Med. Biol. 57(10), 3123–3136 (2012). http://dx.doi.org/10.1088/0031-9155/57/10/3123
21. G. H. KleinJan, "Towards (hybrid) navigation of a fluorescence camera in an open surgery setting," J. Nucl. Med. (2016). http://dx.doi.org/10.2967/jnumed.115.171645
22. Northern Digital, "Polaris optical tracking systems," http://www.ndigital.com/medical/products/polaris-family/ (accessed August 2016).
23. P. Waelkens et al., "Surgical navigation: an overview of the state-of-the-art clinical applications," in Radioguided Surgery, pp. 57–73, Springer (2016).
24. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718
25. G. H. KleinJan et al., "Fluorescence guided surgery and tracer-dose, fact or fiction?," Eur. J. Nucl. Med. Mol. Imaging 43(10), 1857–1867 (2016). http://dx.doi.org/10.1007/s00259-016-3372-y
26. N. S. van den Berg et al., "(Near-infrared) fluorescence-guided surgery under ambient light conditions: a next step to embedment of the technology in clinical routine," Ann. Surg. Oncol. 23(8), 2586–2595 (2016). http://dx.doi.org/10.1245/s10434-016-5186-3
27. H. G. van der Poel et al., "Intraoperative laparoscopic fluorescence guidance to the sentinel lymph node in prostate cancer patients: clinical proof of concept of an integrated functional imaging approach using a multimodal tracer," Eur. Urol. 60(4), 826–833 (2011). http://dx.doi.org/10.1016/j.eururo.2011.03.024
28. L. Bergqvist, S. E. Strand, and B. R. Persson, "Particle sizing and biokinetics of interstitial lymphoscintigraphic agents," Semin. Nucl. Med. 13(1), 9–19 (1983). http://dx.doi.org/10.1016/S0001-2998(83)80031-2
29. B. A. Kapteijn et al., "Validation of gamma probe detection of the sentinel node in melanoma," J. Nucl. Med. 38(3), 362–366 (1997).
30. L. Jansen et al., "Improved staging of breast cancer through lymphatic mapping and sentinel node biopsy," Eur. J. Surg. Oncol. 24(5), 445–446 (1998). http://dx.doi.org/10.1016/S0748-7983(98)92496-9
31. M. N. van Oosterom et al., "Revolutionizing (robot-assisted) laparoscopic gamma tracing using a drop-in gamma probe technology," Am. J. Nucl. Med. Mol. Imaging 6(1), 1 (2016).
32. B. Fuerst et al., "First robotic SPECT for minimally invasive sentinel lymph node mapping," IEEE Trans. Med. Imaging 35, 830–838 (2015). http://dx.doi.org/10.1109/TMI.2015.2498125
33. T. Sielhorst et al., "Online estimation of the target registration error for n-ocular optical tracking systems," in Medical Image Computing and Computer-Assisted Intervention (MICCAI 2007), pp. 652–659 (2007).
34. Northern Digital, "Aurora electromagnetic tracking systems," http://www.ndigital.com/medical/products/aurora/ (accessed August 2016).
35. H. Kenngott et al., "Magnetic tracking in the operation room using the da Vinci® telemanipulator is feasible," J. Rob. Surg. 7(1), 59–64 (2013). http://dx.doi.org/10.1007/s11701-012-0347-2
36. A. Reiter, P. K. Allen, and T. Zhao, "Appearance learning for 3D tracking of robotic surgical tools," Int. J. Rob. Res. 33(2), 342–356 (2014). http://dx.doi.org/10.1177/0278364913507796
37. C. Bichlmeier et al., "Contextual anatomic mimesis hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality," in 6th IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR 2007), pp. 129–138 (2007). http://dx.doi.org/10.1109/ISMAR.2007.4538837
38. O. Kutter et al., "Real-time volume rendering for high quality visualization in augmented reality," in Int. Workshop on Augmented Environments for Medical Imaging Including Augmented Reality in Computer-Aided Surgery (AMI-ARCS) (2008).