Depth perception by controlling focus
30 April 1992
Abstract
Vision systems are one option for obtaining sensory data about the world in robotic systems. Three-dimensional information can be recovered with different computer vision techniques, such as stereo, motion, or focus. In particular, this work explores focus to obtain depth, or structure, perception of the world. In practice, focusing can be achieved by displacing the sensor plane with respect to the image plane, by moving the lens, or by moving the object with respect to the optical system. Moving the lens or the sensor plane with respect to each other changes the magnification and, correspondingly, the object coordinates in the image. To overcome these problems, we propose varying the degree of focus by moving the camera with respect to the object. In our case, the camera is attached to the tool of a manipulator in a hand-eye configuration, so the position of the camera is always known. This approach ensures that the focused areas of the image are always subject to the same magnification. To measure focus quality, we use operators that evaluate the amount of high-frequency content in the image. Several types of these operators were tested and the results compared.
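The focus-measure operators mentioned above can be illustrated with a minimal numpy sketch. The Laplacian-energy and Tenengrad (Sobel gradient) measures shown here are two common high-frequency operators of the kind the abstract describes; the paper does not specify which variants it tested, so these particular choices are illustrative assumptions.

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution using numpy only (no SciPy dependency)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * img[i:i + h - kh + 1, j:j + w - kw + 1]
    return out

def laplacian_focus(img):
    """Sum of squared Laplacian responses: energy of second-order detail."""
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    return float(np.sum(convolve2d(img, lap) ** 2))

def tenengrad_focus(img):
    """Sum of squared Sobel gradient magnitudes (Tenengrad measure)."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    return float(np.sum(convolve2d(img, gx) ** 2 + convolve2d(img, gy) ** 2))

# Synthetic example: a sharp step edge versus a smoothly blurred edge.
# A well-focused image has more high-frequency content, so both
# measures should score the sharp edge higher.
x = np.linspace(-1, 1, 64)
sharp = np.tile((x > 0).astype(float), (64, 1))
blurred = np.tile(1.0 / (1.0 + np.exp(-x / 0.2)), (64, 1))

print(laplacian_focus(sharp) > laplacian_focus(blurred))
print(tenengrad_focus(sharp) > tenengrad_focus(blurred))
```

In an autofocus loop of the kind the abstract proposes, such a measure would be evaluated at each camera position along the manipulator's motion, and the position maximizing the measure taken as the in-focus distance.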
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE).
Jorge Miranda Dias, Helder Araujo, Joao E. Batista, Anibal Traca de Almeida, "Depth perception by controlling focus", Proc. SPIE 1611, Sensor Fusion IV: Control Paradigms and Data Structures, (30 April 1992); doi: 10.1117/12.57924; https://doi.org/10.1117/12.57924
Proceedings paper, 11 pages.

