Paper
Data-driven detection and registration of spine surgery instrumentation in intraoperative images
16 March 2020
Abstract
Purpose. Conventional model-based 3D-2D registration algorithms can be challenged by limited capture range, model validity, and stringent intraoperative runtime requirements. In this work, a deep convolutional neural network was used to provide robust initialization of a registration algorithm (known-component registration, KC-Reg) for 3D localization of spine surgery implants, combining the speed and global support of data-driven approaches with the previously demonstrated accuracy of model-based registration. Methods. The approach uses a Faster R-CNN architecture to detect and localize a broad variety and orientation of spinal pedicle screws in clinical images. Training data were generated using projections from 17 clinical cone-beam CT scans and a library of screw models to simulate implants. Network output was processed to provide screw count and 2D poses. The network was tested on two test datasets of 2,000 images, each depicting real anatomy and realistic spine surgery instrumentation – one dataset involving the same patient data as in the training set (but with different screws, poses, image noise, and affine transformations) and one dataset with five patients unseen in the test data. Assessment of device detection was quantified in terms of accuracy and specificity, and localization accuracy was evaluated in terms of intersection-overunion (IOU) and distance between true and predicted bounding box coordinates. Results. The overall accuracy of pedicle screw detection was ~86.6% (85.3% for the same-patient dataset and 87.8% for the many-patient dataset), suggesting that the screw detection network performed reasonably well irrespective of disparate, complex anatomical backgrounds. The precision of screw detection was ~92.6% (95.0% and 90.2% for the respective same-patient and many-patient datasets). The accuracy of screw localization was within 1.5 mm (median difference of bounding box coordinates), and median IOU exceeded 0.85. 
For purposes of initializing a 3D-2D registration algorithm, the accuracy was observed to be well within the typical capture range of KC-Reg.1 Conclusions. Initial evaluation of network performance indicates sufficient accuracy to integrate with algorithms for implant registration, guidance, and verification in spine surgery. Such capability is of potential use in surgical navigation, robotic assistance, and data-intensive analysis of implant placement in large retrospective datasets. Future work includes correspondence of multiple views, 3D localization, screw classification, and expansion of the training dataset to a broader variety of anatomical sites, number of screws, and types of implants.
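As an illustration of the localization metric reported above, intersection-over-union between two axis-aligned bounding boxes can be computed as in the following sketch (a generic formulation of the standard metric, not the authors' code; box coordinates and the example values are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlapping region (if any).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Example: a predicted box offset by 1 px from a 10x10 ground-truth box.
print(iou((0, 0, 10, 10), (1, 1, 11, 11)))  # 81/119 ≈ 0.68
```

A median IOU above 0.85, as reported, thus corresponds to predicted boxes overlapping the ground truth substantially more tightly than the 1-px-offset example shown here.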
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
S. A. Doerr, A. Uneri, Y. Huang, C. K. Jones, X. Zhang, M. D. Ketcha, P. A. Helm, and J. H. Siewerdsen "Data-driven detection and registration of spine surgery instrumentation in intraoperative images", Proc. SPIE 11315, Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, 113152P (16 March 2020); https://doi.org/10.1117/12.2550052
KEYWORDS
Image registration, 3D modeling, Data modeling, Model-based design, Spine, Process modeling, Radiography
