See-through augmented reality (AR) systems for image-guided surgery merge volume-rendered MRI/CT data directly with the surgeon’s view of the patient during surgery. Research to date has focused on optimizing the alignment and registration of the computer-generated anatomical images with the patient’s anatomy during surgery. We have previously developed a registration and calibration method that aligns the virtual and real anatomy to within ~1 mm. Recently we have been investigating how accurately observers can interpret the combined visual information presented by an optical see-through AR system. We found that the depth of a virtual image presented in stereo below a physical surface was misperceived relative to viewing the same target in the absence of a surface: observers overestimated depth for targets 0–2 cm below the surface and underestimated it at all other presentation depths. The perceptual error could be reduced, but not eliminated, by displaying a virtual rendering of the physical surface simultaneously with the virtual image. These findings suggest that the misperception arises either from an accommodation conflict between the physical surface and the projected AR image, or from the lack of correct occlusion between the virtual and real surfaces.