KEYWORDS: Video, Mixed reality, Ultrasonography, Real time imaging, Video processing, Injuries, Imaging systems, Displays, Diseases and disorders, Augmented reality, Holographic displays, Ultrasound real time imaging
Ergonomics for image-guided procedures can be improved by using mixed reality headsets. Such headsets allow holographic monitors that display information, such as an ultrasound stream, to be positioned within the operator’s field of view during procedures. However, one barrier to clinical adoption of mixed reality headsets is the high latency of the information projected onto the headset. Upwards of 40% of the overall system latency can be attributed to the video cards that capture procedural imaging for wireless streaming. Video card costs vary widely, from as low as $20 to several hundred dollars. We evaluated the latencies of four video cards spanning this cost range. Based on these results, we propose an ideal tradeoff between latency and cost for clinical use of wirelessly mirroring procedural imaging into mixed reality headsets.
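As a sketch of how per-card latency might be summarized, the snippet below computes latency statistics from matched display/capture timestamp pairs. The timestamps themselves would come from an external measurement (e.g., filming an on-screen millisecond clock through the capture chain); the function name and the ~80 ms figures are illustrative, not measured values from the study.

```python
import statistics

def capture_latency_ms(display_ts, capture_ts):
    """Summarize per-frame latency (ms) from matched display/capture
    timestamps given in seconds."""
    samples = [(c - d) * 1000.0 for d, c in zip(display_ts, capture_ts)]
    return {
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples),
        "p95": sorted(samples)[int(0.95 * (len(samples) - 1))],
    }

# Illustrative timestamps: 5 frames, each captured roughly 80 ms after display
display = [0.000, 0.033, 0.066, 0.100, 0.133]
capture = [0.081, 0.112, 0.148, 0.179, 0.214]
stats = capture_latency_ms(display, capture)
```

Reporting a high percentile alongside the mean matters here, since occasional latency spikes are more disruptive during a procedure than a slightly higher average.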
Augmented reality (AR) can enable physicians to “see” inside of patients by projecting cross-sectional imaging directly onto the patient during procedures. In order to maintain workflow, imaging must be quickly and accurately registered to the patient. We describe a method for automatically registering a CT image set projected from an augmented reality headset to a set of points in the real world as a first step towards real-time registration of medical images to patients. Sterile, radiopaque fiducial markers with unique optical identifiers were placed on a patient prior to acquiring a CT scan of the abdomen. For testing purposes, the same fiducial markers were then placed on a tabletop as a representation of the patient. Our algorithm then automatically located the fiducial markers in the CT image set, optically identified the fiducial markers on the tabletop, registered the markers in the CT image set with the optically detected markers, and finally projected the registered CT image set onto the real-world markers using the augmented reality headset. The registration time for aligning the image set using 3 markers was 0.9 ± 0.2 seconds, with an accuracy of 5 ± 2 mm. These findings demonstrate the feasibility of fast and accurate registration using unique radiopaque markers for aligning patient imaging onto patients for procedural planning and guidance.
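The marker-to-marker alignment step can be sketched as a least-squares rigid registration (Kabsch algorithm) between the marker positions found in the CT image set and the optically detected positions. This is a standard technique for the problem, not necessarily the paper's exact implementation; the function name and the example numbers are illustrative, and the markers' unique optical identifiers are assumed to provide the point correspondences.

```python
import numpy as np

def rigid_register(ct_pts, world_pts):
    """Least-squares rigid transform (R, t) mapping CT fiducial positions onto
    their optically detected counterparts (Kabsch algorithm). Points must
    already be matched by marker ID."""
    ct = np.asarray(ct_pts, dtype=float)
    world = np.asarray(world_pts, dtype=float)
    ct_c, world_c = ct.mean(axis=0), world.mean(axis=0)
    H = (ct - ct_c).T @ (world - world_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = world_c - R @ ct_c
    return R, t

# Hypothetical check: recover a known rotation/translation from 3 markers (mm)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
ct = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 80.0, 0.0]])
world = ct @ R_true.T + t_true
R, t = rigid_register(ct, world)
```

Three non-collinear markers, as used in the timing experiment, are the minimum needed to determine the rigid transform uniquely.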
Augmented reality (AR) can be used to visualize virtual 3D models of medical imaging in actual 3D physical space. Accurate registration of these models onto patients will be essential for AR-assisted image-guided interventions. In this study, registration methods were developed, and registration times for aligning a virtual 3D anatomic model of patient imaging onto a CT grid commonly used in CT-guided interventions were compared. The described methodology enabled automated and accurate registration within seconds using computer vision detection of the CT grid as compared to minutes using user-interactive registration methods. Simple, accurate, and near instantaneous registration of virtual 3D models onto CT grids will facilitate the use of AR for real-time procedural guidance and combined virtual/actual 3D navigation during image-guided interventions.
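The computer-vision detection of the CT grid is not described in detail, but a deliberately simple stand-in is sketched below: locating the intersections of a dark grid on a bright background by projecting dark pixels onto rows and columns. A real implementation would need to handle perspective, lighting, and partial occlusion; everything here (function name, thresholds, synthetic image) is an illustrative assumption.

```python
import numpy as np

def detect_grid_intersections(img, thresh=0.5):
    """Find intersections of a dark grid on a bright background by locating
    rows and columns that are mostly dark, then pairing their centers."""
    dark = np.asarray(img) < thresh
    row_hits = np.where(dark.mean(axis=1) > 0.9)[0]   # rows that are mostly line
    col_hits = np.where(dark.mean(axis=0) > 0.9)[0]   # columns that are mostly line

    def centers(hits):
        if hits.size == 0:
            return []
        groups = np.split(hits, np.where(np.diff(hits) > 1)[0] + 1)
        return [int(round(g.mean())) for g in groups]  # center of each line band

    return [(r, c) for r in centers(row_hits) for c in centers(col_hits)]

# Synthetic test image: white background, grid lines at rows 20/60, cols 30/70
img = np.ones((100, 100))
img[20, :] = 0.0
img[60, :] = 0.0
img[:, 30] = 0.0
img[:, 70] = 0.0
pts = detect_grid_intersections(img)
```

Once the grid intersections are localized, registering the virtual 3D model reduces to the same point-correspondence alignment problem solved for fiducial markers.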
Dynamic contrast enhanced (DCE) MRI has emerged as a reliable and diagnostically useful functional imaging technique. A DCE protocol typically lasts 3–15 minutes and results in a time series of N volumes. For automated analysis, it is important that volumes acquired at different times be spatially coregistered. We have recently introduced a novel 4D (volume time series) coregistration tool based on a user-specified target volume of interest (VOI). However, the relationship between coregistration accuracy and target VOI size has not been investigated. In this study, coregistration accuracy was quantitatively measured using target VOIs of various sizes. Coregistration of 10 DCE-MRI mouse head image sets was performed with VOIs of various sizes targeting the mouse brain. Accuracy was quantified by measures based on the union and standard deviation of the coregistered volume time series. Coregistration accuracy improved rapidly as the size of the VOI increased toward the approximate volume of the target (the mouse brain); further inflation of the VOI beyond the volume of the target only marginally improved coregistration accuracy. The CPU time needed to accomplish coregistration was a linear function of N and varied only gradually with VOI size. From the results of this study, we recommend a VOI slightly overinclusive of the target, by approximately 5 voxels, for computationally efficient and accurate coregistration.
The precision, accuracy, and efficiency of a novel semi-automated segmentation technique for VIBE MRI sequences were analyzed using clinical datasets. Two observers performed whole-kidney segmentation using EdgeWave software based on constrained morphological growth, with average inter-observer disagreement of 2.7% for whole-kidney volume, 2.1% for cortex, and 4.1% for medulla. Ground truths were prepared by constructing ROIs on individual slices, revealing errors of 2.8%, 3.1%, and 3.6%, respectively. One segmentation took approximately 7 minutes to perform. These improvements over our existing graph-cuts segmentation technique make kidney volumetry a reality in many clinical applications.
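A percent disagreement like those quoted above can be computed as the absolute volume difference relative to the mean of the two observers' volumes. This is one common convention; the abstract does not state the exact formula used, and the example volumes below are hypothetical.

```python
def volume_disagreement_pct(vol_a, vol_b):
    """Inter-observer disagreement: absolute volume difference as a percent
    of the mean of the two measured volumes."""
    return abs(vol_a - vol_b) / ((vol_a + vol_b) / 2.0) * 100.0

# Hypothetical whole-kidney volumes (mL) from two observers
pct = volume_disagreement_pct(150.0, 146.0)
```

Normalizing by the mean rather than by either single observation avoids treating one observer as the reference when neither is ground truth.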