Localization and spatially anchoring objects of an augmented reality headset using an onboard fisheye lens

17 February 2020
The development of improved Augmented Reality (AR) Head-Mounted Devices (HMDs) has led to a growing number of use cases for AR applications. In surgery, an HMD can serve as an assistive tool to help surgeons operate. With the triplanar surgical navigation system as the industry standard, an HMD can improve the surgeon's comfort and overall experience. An HMD can offer the surgeon a consistent flow of information in front of their eyes, with medically relevant images, such as craniospinal computed tomography (CT) data, displayed as they operate. This paper presents an HMD-based overlay framework for use in the operating room. Using a combination of Android Studio, OpenCV, and OpenGL, an inside-out localization method based on ArUco markers is demonstrated. The framework estimates the head pose of the user and subsequently renders a patient-specific CT scan that is spatially anchored to the real world, allowing the CT reconstruction to be virtually superimposed onto the physical patient. The HMD's (ODG R9) fisheye lens is also used to provide a larger field of view for better marker detection. This paper additionally introduces a "focus mode" that improves localization accuracy. The framework is evaluated for translational and rotational movement error along each of the three axes, for detection accuracy with different numbers of markers and at different distances, and with an ultra-high-definition (UHD) camera.
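The inside-out localization step described above can be sketched with a few lines of linear algebra: once marker detection (e.g., via OpenCV's ArUco module and solvePnP) yields the marker's pose in the camera frame, inverting that rigid transform gives the HMD pose in the marker-defined world frame, and virtual content anchored in that frame can be mapped back into the camera frame for rendering. The following minimal NumPy sketch illustrates this anchoring logic; the pose values are hypothetical stand-ins for real detector output, not the paper's implementation.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_pose(T):
    """Invert a rigid transform: transpose the rotation, back-rotate the translation."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical marker pose in the camera frame (as solvePnP would estimate):
# rotated 90 degrees about the optical axis, 0.5 m in front of the camera.
theta = np.pi / 2
R_cm = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]])
t_cm = np.array([0.0, 0.0, 0.5])
T_cam_from_marker = pose_to_matrix(R_cm, t_cm)

# Inside-out localization: the HMD (camera) pose in the marker/world frame
# is the inverse of the marker-in-camera transform.
T_marker_from_cam = invert_pose(T_cam_from_marker)

# A virtual point anchored 10 cm above the marker origin (world frame),
# mapped into the camera frame for rendering.
p_world = np.array([0.0, 0.1, 0.0, 1.0])
p_cam = T_cam_from_marker @ p_world
```

Because the anchoring is expressed in the marker's frame, the virtual point stays fixed relative to the marker regardless of how the headset moves; only `T_cam_from_marker` changes between frames.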
Philips Lai, Nhu Q. Nguyen, Joel Ramjist, Jamil Jivraj, Ryan Deorajh, Dimitrios Androutsos, Victor X. D. Yang, "Localization and spatially anchoring objects of an augmented reality headset using an onboard fisheye lens," Proc. SPIE 11225, Clinical and Translational Neurophotonics 2020, 112250H (17 February 2020); https://doi.org/10.1117/12.2566594