PURPOSE: Current lumbar puncture simulators lack visual feedback of the needle path. We propose a lumbar puncture simulator that introduces visual virtual reality (VR) feedback to enhance the learning experience. This method incorporates virtual reality and a position tracking system. We aim to assess the advantages of the stereoscopy of VR on needle insertion skill learning. METHODS: We scanned and rendered spine models into three-dimensional (3D) virtual models for use in the lumbar puncture simulator. The motion of the needle was tracked relative to the spine model in real time using electromagnetic tracking, which allows accurate replay of the needle insertion path. Using 3D Slicer and SlicerVR, we created a virtual environment containing the tracked needle and spine. In this study, 23 medical students performed a traditional lumbar puncture procedure using the augmented simulator. The participants' insertions were tracked and recorded, allowing them to review their procedures afterwards. Twelve students were randomized into a VR group; they reviewed their procedures in VR, while the Control group reviewed their procedures on a computer monitor. Students completed a standard System Usability Scale (SUS) questionnaire about the system and a self-reported confidence scale (1-5) in performing lumbar puncture. RESULTS: We integrated VR visual feedback into a traditional lumbar puncture simulator. The VR group gave an average SUS score of 70.4 vs. 66.8 for the Control group. The only negative feedback on VR was that students felt they required technical assistance to set it up (SUS item 4). The results show a general affinity for VR and its ease of use. Furthermore, the mean confidence level rose from 1.6 to 3.2 in the VR group vs. 1.8 to 3.1 in the Control group (an improvement of 1.6 vs. 1.3).
CONCLUSION: The VR-augmented lumbar puncture simulator workflow incorporates visual feedback capabilities and accurate tracking of the needle relative to the spine model. Moreover, VR feedback allows for more comprehensive spatial awareness of the target anatomy for improved learning.
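The core of the tracking step described above is expressing the needle's pose in the spine model's coordinate frame. A minimal sketch of that transform chain, assuming the electromagnetic tracker reports each tool's pose as a 4x4 homogeneous transform in the tracker frame (the function and variable names below are illustrative, not from the paper):

```python
import numpy as np

# An EM tracker reports each tool's pose in the tracker frame as a
# 4x4 homogeneous transform. The needle pose relative to the spine
# model is obtained by chaining transforms:
#   NeedleToSpine = inv(SpineToTracker) @ NeedleToTracker

def needle_to_spine(needle_to_tracker, spine_to_tracker):
    """Express the tracked needle pose in the spine model frame."""
    return np.linalg.inv(spine_to_tracker) @ needle_to_tracker

# Example: spine reference at +100 mm along x in the tracker frame,
# needle at +110 mm -> needle sits +10 mm along x relative to the spine.
spine = np.eye(4)
spine[0, 3] = 100.0
needle = np.eye(4)
needle[0, 3] = 110.0
rel = needle_to_spine(needle, spine)
```

Recording this relative transform at each timestamp is what makes replay possible: the insertion path can be re-rendered against the spine model regardless of where the tracker or phantom sat during the session.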
PURPOSE: Spatially tracked ultrasound-guided needle insertions may require electromagnetic sensors to be clipped on the needle and ultrasound probe if not already embedded in the tools. It is assumed that switching the electromagnetic sensor clip does not impact the accuracy of the computed calibration. We propose an experimental process to determine whether or not devices should be calibrated on a more frequent basis. METHODS: We performed 250 calibrations. Of these, 125 were performed on the needle and 125 on the ultrasound probe. Every five calibrations, the tracking clip was removed and reattached. Every 25 calibrations, the tracking clip was exchanged for an identical 3D-printed model. From the resulting transform matrices, coordinate transformations were computed. Data reproducibility was analyzed by examining the difference between the mean and grand mean, the standard deviation, and the Shapiro-Wilk normality statistic. Data were displayed graphically to visualize differences in calibrations in different directions. RESULTS: For the needle calibrations, transformations parallel to the tracking clip and perpendicular to the needle demonstrated the greatest deviation. For the ultrasound calibrations, transformations perpendicular to the sound propagation demonstrated the greatest deviation. CONCLUSION: Needle and ultrasound calibrations are reproducible when changing the tracking clip. These devices do not need to be calibrated on a more frequent basis. Caution should be taken to minimize confounding variables such as bending the needle or ultrasound beam width at the time of calibration. KEY WORDS: Calibration, tracking, reproducibility, electromagnetic, spatial, ultrasound-guided needle navigation, transformation, standard deviation.
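The reproducibility analysis named above (mean vs. grand mean, standard deviation, Shapiro-Wilk) can be sketched as follows. The data here are synthetic stand-ins for the study's measurements; the grouping mirrors the protocol of 125 calibrations with the clip reattached every 5:

```python
import numpy as np
from scipy import stats

# Synthetic data standing in for the study's measurements: each row holds
# the translation components (x, y, z, in mm) extracted from one
# calibration's transform matrix; every 5 consecutive calibrations share
# one clip attachment, mirroring the protocol described above.
rng = np.random.default_rng(0)
calibrations = rng.normal(loc=[200.0, -5.0, 30.0], scale=0.5, size=(125, 3))

grand_mean = calibrations.mean(axis=0)
attachment_means = calibrations.reshape(25, 5, 3).mean(axis=1)
mean_vs_grand = attachment_means - grand_mean  # difference between mean and grand mean
std_per_axis = calibrations.std(axis=0, ddof=1)

# Shapiro-Wilk normality test per axis (W close to 1 suggests normality)
w_per_axis = [stats.shapiro(calibrations[:, k])[0] for k in range(3)]
```

Examining `mean_vs_grand` and `std_per_axis` axis by axis is what reveals the directional effects reported in the results, e.g. larger deviation perpendicular to the needle.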
PURPOSE: There is a lack of open-source or free virtual reality (VR) software that can be utilized for research by medical professionals and researchers. We propose the design and implementation of such software. We also aim to assess the feasibility of using VR as a modality for navigating 3D visualizations of medical scenes. METHODS: To achieve our goal, we added VR capabilities to the open-source medical image analysis and visualization platform, 3D Slicer. We designed the VR extension by basing the software architecture on VTK's vtkRenderingOpenVR software module. We extended this module by adding features such as full interactivity between 3D Slicer and the VR extension during VR use, variable volume rendering quality based on user headset motion, etc. Furthermore, the VR extension was tested in a feasibility study in which participants were asked to complete specific tasks using both the conventional mouse-and-monitor method and the VR method. For this experiment, we used 3D Slicer to create two virtual settings, each having an associated task. Participants were asked to navigate the virtual settings using two approaches: the conventional method, using mouse and monitor, and VR, using the head-mounted display and controllers. The main outcome measure was total time to complete the task. RESULTS: We developed a VR extension to 3D Slicer—SlicerVirtualReality (SlicerVR). Additionally, comparing mean completion times, participants using VR completed the first task 3 minutes and 28 seconds faster than with the mouse-and-monitor method (4 minutes 24 seconds vs. 7 minutes 52 seconds) and the second task 1 minute and 20 seconds faster (2 minutes 37 seconds vs. 3 minutes 57 seconds). CONCLUSION: We augmented the 3D Slicer platform with virtual reality capabilities.
Experiment results show a considerable improvement in the time required to navigate and complete tasks within complex virtual scenes compared to the traditional mouse-and-monitor method.
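As a quick arithmetic check, the stated time savings follow directly from the reported mean completion times (all values are taken from the abstract above):

```python
# Verify the reported differences between mean completion times.
def to_seconds(minutes, seconds):
    return 60 * minutes + seconds

task1_vr, task1_mouse = to_seconds(4, 24), to_seconds(7, 52)
task2_vr, task2_mouse = to_seconds(2, 37), to_seconds(3, 57)

task1_gain = divmod(task1_mouse - task1_vr, 60)  # 3 min 28 s saved with VR
task2_gain = divmod(task2_mouse - task2_vr, 60)  # 1 min 20 s saved with VR
```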