Real-time registration of video with ultrasound using stereo disparity (17 February 2012)
Medical ultrasound typically deals with the interior of the patient, leaving the exterior to the original medical imaging modality: direct human vision. For the operator scanning the patient, a view of the external anatomy is essential for correctly locating the ultrasound probe on the body and for making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with their eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator: the Sonic Flashlight, which uses a half-silvered mirror and a miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data-fusion approach within the ultrasound machine itself, in effect giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, together with software that locates the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
© 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
Jihang Wang, Samantha Horvath, George Stetten, Mel Siegel, and John Galeotti, "Real-time registration of video with ultrasound using stereo disparity", Proc. SPIE 8316, Medical Imaging 2012: Image-Guided Procedures, Robotic Interventions, and Modeling, 83162D (17 February 2012).
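The abstract's key computational step is locating a surface from the stereo disparity between two camera images: for rectified cameras, a scene point appears shifted horizontally between the views, and depth follows from Z = f·B/d (focal length f in pixels, baseline B, disparity d). The sketch below illustrates that principle with a minimal sum-of-absolute-differences block matcher in plain NumPy. It is a toy illustration under stated assumptions (rectified grayscale inputs, small fixed search range), not the authors' implementation; all function names here are our own.

```python
import numpy as np

def block_matching_disparity(left, right, block=7, max_disp=16):
    """Per-pixel horizontal disparity between rectified grayscale
    left/right images via sum-of-absolute-differences block matching.
    Toy sketch of the stereo-disparity principle; not the paper's code."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        # Skip a left margin so every candidate window stays in bounds.
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                # A left-image point at x appears at x - d in the right image.
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def disparity_to_depth(disp, focal_px, baseline_mm):
    """Triangulate depth (mm) from disparity: Z = f * B / d.
    Pixels with zero disparity (no match / infinity) are set to 0."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_mm / disp, 0.0)
```

For a synthetic right image built by shifting the left image by a known disparity, the matcher recovers that shift over the textured interior; a real system like the one described would replace this exhaustive search with an optimized matcher and then fit the ultrasound phantom's surface to the resulting point cloud.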

