We have studied four approaches to image segmentation: three automatic approaches using image processing algorithms and a fourth, human manual segmentation. Our motivation was an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover over the Martian terrain. The goal of the automatic segmentations was to identify an obstacle map of the terrain to enable automatic path planning for the rover. We first explored automatic segmentation with two different methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation combined these two types of image segmentation. Human manual segmentation of Martian terrain images served both to evaluate the effectiveness of the combined automatic segmentation and to determine how different humans segment the same images. Agreement between any two segmentations, manual or automatic, was measured with a similarity metric, S_AB. By this metric, the combined automatic segmentation agreed fairly well with the manual segmentation, demonstrating a positive step toward automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
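A similarity metric between two segmentations can be sketched as follows. The abstract does not define S_AB, so this is only a hypothetical stand-in: intersection-over-union agreement between two binary obstacle masks, one plausible way to score how well an automatic obstacle map matches a manual one.

```python
import numpy as np

def similarity(seg_a, seg_b):
    """Symmetric similarity between two binary obstacle masks.

    Hypothetical stand-in for the S_AB metric (its exact definition
    is not given in the abstract): intersection-over-union of the
    obstacle pixels, ranging from 0 (no overlap) to 1 (identical).
    """
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks mark no obstacles: perfect agreement
    return np.logical_and(a, b).sum() / union
```

Such a score lets a combined luminance-plus-altitude segmentation be compared against each human segmentation, and human segmentations against one another, on a common 0-to-1 scale.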
An eye movement sequence, or scanpath, during viewing of a stationary stimulus has been described as a set of fixations on regions of interest (ROIs) and the saccades, or transitions, between them. Such scanpaths show high similarity for the same subject and stimulus, both in the spatial loci of the ROIs and in their sequence; scanpaths also occur during recollection of a previously viewed stimulus, suggesting that they play a similar role in visual memory and recall.
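Sequence similarity between two scanpaths can be quantified by encoding each fixation with its ROI label and comparing the resulting strings. A minimal sketch, using Levenshtein edit distance (a common choice for scanpath comparison, though not necessarily the measure used here):

```python
def scanpath_similarity(path_a, path_b):
    """Similarity of two scanpaths encoded as ROI-label sequences.

    Sketch assuming each character names the ROI of one fixation,
    e.g. "ABCA" = fixations on ROIs A, B, C, then A again.
    Returns 1 - (edit distance / longer length), in [0, 1].
    """
    m, n = len(path_a), len(path_b)
    # dp[i][j] = edit distance between path_a[:i] and path_b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if path_a[i - 1] == path_b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # delete
                           dp[i][j - 1] + 1,      # insert
                           dp[i - 1][j - 1] + cost)  # match/substitute
    return 1.0 - dp[m][n] / max(m, n, 1)
```

A score near 1 indicates the two viewings visited nearly the same ROIs in nearly the same order.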
Initiated by the Department of Energy's International Nuclear Safety Program, an effort is underway to deliver and employ a telerobotic diagnostic system for structural evaluation and monitoring within the Chornobyl Unit-4 shelter. A mobile robot, named Pioneer, will enter the damaged Chornobyl structure and employ devices to measure radiation, temperature, and humidity; acquire core samples of concrete structures for subsequent engineering analysis; and make photo-realistic 3D maps of the building interior. This paper details the latter element, dubbed 'C-Map', the Chornobyl Mapping System. C-Map consists of an automated 3D modeling system using stereo computer vision along with an interactive, virtual reality software program to acquire and analyze the photo-realistic 3D maps of the damaged building interior.
Some applications require a user to consider both geometric and image information. Consider, for example, an interface that presents both a three-dimensional model of an object, built from a CAD model or laser-range data, and an image of the same object, gathered from a surveillance camera or a carefully calibrated photograph. The easiest way to provide these information sets to a user is in separate, side-by-side displays. A more effective alternative combines both types of information in a single, integrated display by projecting the image onto the model. A perspective transformation that assigns image coordinates to model vertices can visually engrave the image onto the corresponding surfaces of the model. Combining the image and geometric information in this manner provides several advantages: it allows an operator to visually confirm the accuracy of the modeling geometry, and it provides realistic textures for the geometric model. We describe several of our procedural methods for implementing the integrated displays and discuss the benefits gained from applying these techniques to projects including robotic hazardous waste remediation, the virtual exploration of Mars, and remote mobile robot control.
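The perspective transformation described above can be sketched in a few lines. This is a minimal illustration, assuming a known 3x4 camera projection matrix P (the calibration procedure itself is not shown here): each 3D model vertex is mapped to pixel coordinates, which can then serve as texture coordinates when rendering the image onto the model surface.

```python
import numpy as np

def project_vertices(P, vertices):
    """Assign image coordinates to 3D model vertices.

    Sketch of the perspective mapping, assuming P is a 3x4 camera
    projection matrix obtained elsewhere (e.g. from calibration).
    Each vertex (x, y, z) maps to pixel coordinates (u, v), usable
    as texture coordinates for the corresponding model surface.
    """
    V = np.asarray(vertices, dtype=float)            # (N, 3) vertices
    Vh = np.hstack([V, np.ones((V.shape[0], 1))])    # homogeneous (N, 4)
    uvw = Vh @ P.T                                   # projected (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                  # perspective divide -> (u, v)
```

Engraving the image is then a matter of rendering each model face with the image as a texture, indexed by the (u, v) coordinates computed for its vertices.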