Computational models can update pre-operatively acquired images to account for intraoperative brain shift in image-guided surgical (IGS) systems. An optically tracked textured laser range scanner (tLRS) furnishes the 3D coordinates of cortical surface points (3D point clouds) over the surgical field of view and provides a correspondence between these points and the pre-operative MR image. However, integrating the acquired tLRS data into a clinically acceptable system that remains compatible with the workflow of tumor resection has been challenging: acquiring tLRS data requires moving the scanner into and out of the surgical field, which limits the number of acquisitions. Large differences between acquisitions, caused by tumor resection and tissue manipulation, make it difficult to establish correspondence and estimate brain motion. An alternative to the tLRS is the temporally dense, feature-rich stereo video provided by the operating microscope, which allows rapid digitization of the cortical surface in 3D and can help continuously update the IGS system. To understand the tradeoffs between these approaches as input to an IGS system, we compare in this paper the accuracy of 3D point clouds extracted from the stereo video system of the surgical microscope and from the tLRS on phantom objects. We show that the stereo vision system of the surgical microscope achieves accuracies in the 0.46-1.5 mm range on our phantom objects and is a viable alternative to the tLRS for neurosurgical applications.
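The stereo digitization described above can be illustrated with a minimal sketch. Given two calibrated camera projection matrices and matched image points, linear (DLT) triangulation recovers 3D cortical-surface coordinates, and accuracy can then be quantified as the RMS distance between the reconstructed points and known phantom points. The camera matrices, baseline, and phantom coordinates below are illustrative assumptions, not values from this study:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates."""
    # Each image measurement contributes two linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point with projection matrix P (pinhole model)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical stereo rig: identical intrinsics, 6 mm baseline along x,
# roughly in the spirit of an operating-microscope stereo pair.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-6.0], [0.0], [0.0]])])

# Hypothetical known phantom surface points (mm).
pts = np.array([[0.0, 0.0, 100.0],
                [5.0, -3.0, 120.0],
                [-4.0, 2.0, 90.0]])

# Reconstruct each point from its (noise-free) projections, then score.
recon = np.array([triangulate(P1, P2, project(P1, X), project(P2, X))
                  for X in pts])
rms = np.sqrt(np.mean(np.sum((recon - pts) ** 2, axis=1)))
print(f"RMS reconstruction error: {rms:.3f} mm")
```

With noise-free projections the error is near machine precision; in practice, calibration error, matching error, and image noise drive the residual into the sub-millimeter to millimeter range reported here.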