Camera calibration is a key requirement for augmented reality in surgery. Calibration of laparoscopes presents two challenges that are not sufficiently addressed in the literature. In the case of stereo laparoscopes, the small distance (less than 5 mm) between the channels means that the calibration pattern is an order of magnitude more distant than the stereo separation. For laparoscopes in general, if an external tracking system is used, hand-eye calibration is difficult due to the long length of the laparoscope. Laparoscope intrinsic, stereo and hand-eye calibration all rely on accurate feature point selection and accurate estimation of the camera pose with respect to a calibration pattern. We compare three calibration patterns: chessboard, rings, and AprilTags. We measure the error in estimating the camera intrinsic parameters and the camera poses. The accuracy of camera pose estimation determines the accuracy with which subsequent stereo or hand-eye calibration can be performed. We compare the results of repeated real calibrations and simulations using idealised noise to determine the expected accuracy of different methods and the sources of error. The results indicate that feature detection based on rings is more accurate than detection based on a chessboard; however, this does not necessarily lead to a better calibration. Using a grid with identifiable tags enables detection of features nearer the image boundary, which may improve calibration.
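As background to how such calibration errors are typically quantified, the sketch below computes the RMS reprojection error of a pinhole camera model in NumPy. It is a minimal illustration, not the paper's implementation: the function names are ours, and lens distortion is deliberately omitted.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D pattern points into the image with a distortion-free pinhole model.

    points_3d : (N, 3) points in the calibration-pattern frame
    K         : (3, 3) intrinsic matrix
    R, t      : rotation (3, 3) and translation (3,) of the camera pose
    """
    cam = points_3d @ R.T + t        # pattern frame -> camera frame
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def rms_reprojection_error(detected_2d, points_3d, K, R, t):
    """RMS distance between detected features and reprojected model points."""
    residuals = detected_2d - project_points(points_3d, K, R, t)
    return np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
```

A lower RMS reprojection error indicates that the estimated intrinsics and pose explain the detected feature locations well, which is the quantity compared across the chessboard, ring and AprilTag patterns.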
Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can
reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours
reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video
with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider
population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models
by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we
present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos.
Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution
loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver
resections and 7 laparoscopic staging procedures, and evaluated using the Dice score.
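As an aside on the evaluation metric, the Dice score over binary segmentation masks can be computed as follows (a minimal NumPy sketch; the function name and epsilon smoothing are our additions, not the paper's code):

```python
import numpy as np

def dice_score(pred_mask, gt_mask, eps=1e-7):
    """Dice overlap between two binary segmentation masks (arrays of 0/1).

    Returns a value in [0, 1]; 1 means perfect overlap. The small eps
    avoids division by zero when both masks are empty.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```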
Results: The CNN yielded segmentations with Dice scores ≥0.95 for the majority of images; however, the inter-patient
variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations:
minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological
liver tissue that mimics non-liver tissue appearance.
Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video,
but additional data or computational advances are necessary to address challenges due to the high inter-patient variability
in liver appearance.
Laparoscopic Ultrasound (LUS) is regularly used during laparoscopic liver resection to locate critical vascular
structures. Many tumours are iso-echoic, and registration to pre-operative CT or MR has been proposed as
a method of image guidance. However, factors such as abdominal insufflation, LUS probe compression and
breathing motion cause deformation of the liver, making this task far from trivial. Fortunately, within a smaller
local region of interest a rigid solution can suffice. Also, the respiratory cycle can be expected to be consistent.
Therefore, in this paper we propose a feature-based local rigid registration method to align tracked LUS data
with CT while compensating for breathing motion. The method employs the Levenberg-Marquardt Iterative
Closest Point (LMICP) algorithm, registers to both the liver surface and vessels, and requires two LUS datasets:
one for registration and another for breathing estimation. Breathing compensation is achieved by fitting a 1D
breathing model to the vessel points. We evaluate the algorithm by measuring the Target Registration Error
(TRE) of three manually selected landmarks of a single porcine subject. Breathing compensation improves
accuracy in 77% of the measurements. In the best case, TRE values below 3 mm are obtained. We conclude that
our method can potentially correct for breathing motion without gated acquisition of LUS and be integrated in
the surgical workflow with an appropriate segmentation.
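One plausible form for a 1D breathing model is a least-squares sinusoid fitted to vessel-point displacement along the dominant breathing axis. The sketch below is a hypothetical illustration only: the sinusoidal form, the assumption of a known respiratory period, and the function names are ours, not details taken from the paper.

```python
import numpy as np

def fit_1d_breathing_model(times, displacements, period):
    """Least-squares fit of d(t) = a*sin(w*t) + b*cos(w*t) + c.

    times, displacements : 1D arrays of sample times and 1D displacements
    period               : assumed (known) respiratory period
    Returns the coefficients (a, b, c).
    """
    w = 2.0 * np.pi / period
    # Linear model in (a, b, c), so ordinary least squares suffices
    A = np.column_stack([np.sin(w * times), np.cos(w * times), np.ones_like(times)])
    coeffs, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    return coeffs

def breathing_displacement(t, coeffs, period):
    """Evaluate the fitted 1D breathing displacement at time(s) t."""
    w = 2.0 * np.pi / period
    a, b, c = coeffs
    return a * np.sin(w * t) + b * np.cos(w * t) + c
```

Once fitted, the model can be evaluated at each vessel point's acquisition time and the predicted displacement subtracted before the rigid LMICP registration.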
We present an analysis of the registration component of a proposed guidance system for image-guided liver surgery, using contrast-enhanced CT. The analysis is performed on a visually realistic liver phantom and on in vivo porcine data. A robust registration process that can be deployed clinically is a key component of any image guided surgery system. It is also essential that the accuracy of the registration can be quantified and communicated to the surgeon. We summarise the proposed guidance system and discuss its clinical feasibility. The registration combines an intuitive manual alignment stage, surface reconstruction from a tracked stereo laparoscope and a rigid iterative closest point registration to register the intra-operative liver surface to the liver surface derived from CT. Testing of the system on a liver phantom shows that subsurface landmarks can be localised to an accuracy of 2.9 mm RMS. Testing during five porcine liver surgeries demonstrated that registration can be performed during surgery, with an error of less than 10 mm RMS for multiple surface landmarks.
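The rigid alignment at the core of iterative closest point registration is typically solved in closed form for corresponded point sets via SVD (the Kabsch/Procrustes solution). The sketch below shows that single alignment step in NumPy; it is a generic illustration under our own naming, not the paper's implementation, and omits the correspondence-search loop of full ICP.

```python
import numpy as np

def rigid_align(source, target):
    """Best-fit rotation R and translation t mapping source onto target.

    source, target : (N, 3) arrays of corresponding points (row i of source
    corresponds to row i of target). Minimises sum ||R @ s_i + t - t_i||^2.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

In full ICP this step alternates with re-estimating the closest-point correspondences until the alignment converges.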
Realistic modelling of mechanical interactions between tissues is an important part of surgical simulation, and
may become a valuable asset in surgical computer guidance. Unfortunately, it is also computationally very
demanding. Explicit matrix-free FEM solvers have been shown to be a good choice for fast tissue simulation,
however little work has been done on contact algorithms for such FEM solvers.
This work introduces one such algorithm, capable of handling both deformable-deformable (soft-tissue interacting
with soft-tissue) and deformable-rigid (e.g. soft-tissue interacting with surgical instruments) contacts.
The proposed algorithm employs responses computed with a fully matrix-free, virtual node-based version of
the model first used by Taylor and Flanagan in PRONTO3D. For contact detection, a bounding-volume hierarchy
(BVH) capable of identifying self collisions is introduced. The proposed BVH generation and update
strategies comprise novel heuristics to minimise the number of bounding volumes visited in hierarchy update
and collision detection.
Aside from speed, stability was a major objective in the development of the algorithm; hence, a novel method for
computing response forces from C0-continuous normals and a gradual application of response forces from
rate constraints have been devised and incorporated into the scheme. The continuity of the surface normals has
advantages particularly in applications such as sliding over irregular surfaces, which occurs, e.g., in simulated
The effectiveness of the scheme is demonstrated on a number of meshes derived from medical image data and
artificial test cases.
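To illustrate the broad-phase test performed at each node of such a bounding-volume hierarchy, the sketch below checks two axis-aligned bounding boxes for overlap. This is a generic textbook test under our own naming, not the paper's BVH implementation.

```python
import numpy as np

def aabb(points):
    """Axis-aligned bounding box of a point set, as a (min, max) corner pair."""
    return points.min(axis=0), points.max(axis=0)

def aabbs_overlap(box_a, box_b):
    """True if two AABBs intersect: on every axis, each box's minimum
    must not exceed the other's maximum. This cheap test prunes BVH
    subtrees before any exact (narrow-phase) collision check runs."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))
```

Descending the hierarchy, a pair of nodes is expanded only when their boxes overlap, which is why good BVH update heuristics directly reduce the number of bounding volumes visited.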
We present a segmentation algorithm using a statistical deformation model constructed from CT data of adult male pelves coupled to MRI appearance data. The algorithm allows the semi-automatic segmentation of bone for a limited population of MRI data sets. Our application is pelvic bone delineation from pre-operative MRI for image guided pelvic surgery. Specifically, we are developing image guidance for prostatectomies using the da Vinci telemanipulator; hence the use of male pelves only. The algorithm takes advantage of the high contrast of bone in CT data, allowing a robust shape model to be constructed relatively easily. This shape model can then be applied to a population of MRI data sets using a single data set that contains both CT and MRI data. The model is constructed automatically using fluid based non-rigid registration between a set of CT training images, followed by principal component analysis. MRI appearance data is imported using CT and MRI data from the same patient. Registration optimisation is performed using differential evolution. Based on our limited validation to date, the algorithm may outperform segmentation using non-rigid registration between MRI images without the use of shape data. The mean surface registration error achieved was 1.74 mm. The algorithm shows promise for use in segmentation of pelvic bone from MRI, though further refinement and validation is required. We envisage that the algorithm presented could be extended to allow the rapid creation of application specific models in various imaging modalities using a shape model based on CT data.
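Once the non-rigid registrations have established point correspondences across the training set, the principal component analysis step reduces to PCA over flattened shape vectors. A minimal sketch of that step, assuming pre-aligned, corresponded shapes (the function names and data layout are our assumptions, not the paper's code):

```python
import numpy as np

def build_shape_model(shapes, n_modes):
    """PCA statistical shape model from corresponded training shapes.

    shapes  : (n_samples, n_points * dims) flattened, pre-aligned shapes
    n_modes : number of principal variation modes to retain
    Returns the mean shape, the top modes (rows), and their variances.
    """
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    # SVD of the centred data matrix yields the principal components directly
    _, s, Vt = np.linalg.svd(centred, full_matrices=False)
    variances = (s[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, Vt[:n_modes], variances

def synthesise_shape(mean, modes, weights):
    """Generate a shape instance as mean plus a weighted sum of modes."""
    return mean + weights @ modes
```

Fitting such a model to a new MRI data set then amounts to searching over the mode weights (plus a rigid or similarity transform), which is where a global optimiser such as differential evolution is applied.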