7 September 2010 Performance of visual tasks from contour information
Abstract
A recently proposed visual aid for patients with a restricted visual field (tunnel vision) combines a see-through head-mounted display (HMD) with a simultaneous minified contour view of the wide-field image of the environment. Such a widening of the effective visual field is helpful for tasks such as visual search, mobility, and orientation. Whether contours (outlines of the objects in the image) suffice for human observers to perform everyday visual tasks is of major importance for this application, as well as for other applications and for a basic understanding of human vision. Because contours are efficient object descriptors, they are widely used in computer vision, and many methods have been developed to extract them automatically from an image. The purpose of this research is to examine and compare the use of different types of automatically created contours, and contour representations, for practical everyday visual operations on commonly observed images. The visual operations include searching for items such as keys or a remote control. Considering different recognition levels, identification of an object is distinguished from detection (when the object is not clearly identified). Some new, non-conventional vision-based contour representations were developed for this purpose. Experiments were performed with normally sighted subjects by superposing contours of the wide field of the scene over a narrow-field (see-through) background. Results show that about 85% success in identifying the searched object is obtained when the best contour versions are employed.
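The abstract does not specify which contour-extraction methods were compared. As a minimal illustration of the kind of automatic extraction it refers to, the sketch below applies a Sobel gradient with a magnitude threshold to produce a binary contour map; the function name, threshold value, and toy image are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: one common baseline for automatic contour
# extraction is a Sobel gradient followed by thresholding. The paper's
# actual contour versions are not specified in the abstract.
def sobel_contours(img, thresh=1.0):
    """Return a binary contour map of a 2D grayscale image (list of lists).

    A pixel is marked 1 where the Sobel gradient magnitude meets `thresh`.
    Border pixels are left 0 for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2 * img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                out[y][x] = 1
    return out

# Toy 8x8 image: dark left half, bright right half; the only contour
# is the vertical boundary between the two halves.
img = [[0] * 4 + [10] * 4 for _ in range(8)]
edges = sobel_contours(img, thresh=20.0)
```

In the proposed aid, such a binary contour map of the wide-field camera image would then be minified and superposed on the see-through view.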
© (2010) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yitzhak Yitzhaky, Liron Itan, "Performance of visual tasks from contour information", Proc. SPIE 7798, Applications of Digital Image Processing XXXIII, 779825 (7 September 2010); https://doi.org/10.1117/12.860027
PROCEEDINGS
11 PAGES

