Vision systems are a natural choice for obtaining sensory data about the world in robotic systems. Three-dimensional information can be recovered from images using computer vision techniques such as stereo, motion, or focus. In particular, this work exploits focus to obtain depth, or structure, perception of the world. In practice, focusing can be achieved by displacing the sensor plane with respect to the image plane, by moving the lens, or by moving the object with respect to the optical system. Moving the lens and the sensor plane with respect to each other changes the magnification and, correspondingly, the image coordinates of object points. To overcome this problem, we propose varying the degree of focus by moving the camera with respect to the object. In our case, the camera is attached to the tool of a manipulator in a hand-eye configuration, so the position of the camera is always known. This approach ensures that the focused areas of the image are always subject to the same magnification. To measure focus quality, we use operators that evaluate the amount of high-frequency content in the image. Several types of these operators were tested and their results compared.
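As an illustration of the focus operators mentioned above, the following is a minimal sketch (not the authors' implementation) of one common high-frequency focus measure: the sum of squared responses of a discrete Laplacian over a grayscale image. A well-focused image has strong edges and therefore more high-frequency energy, so this measure increases with focus quality. The image representation (a plain 2D list of intensities) and the function name are illustrative assumptions.

```python
def laplacian_focus_measure(img):
    """Focus measure: sum of squared discrete-Laplacian responses.

    img is a 2D list of grayscale intensities; higher return values
    indicate more high-frequency content, i.e. a better-focused image.
    (Illustrative sketch, not the paper's exact operator.)
    """
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at pixel (x, y)
            lap = (4 * img[y][x]
                   - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            total += lap * lap
    return total


def box_blur(img):
    """3x3 box blur on the interior pixels, used here to simulate defocus."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9.0
    return out


# A sharp checkerboard pattern scores higher than its blurred version.
sharp = [[255 if (x + y) % 2 == 0 else 0 for x in range(8)] for y in range(8)]
blurred = box_blur(sharp)
print(laplacian_focus_measure(sharp) > laplacian_focus_measure(blurred))
```

In a depth-from-focus setting, such a measure would be evaluated over image windows at each known camera position, and the position maximizing the measure for a window gives the depth of the corresponding scene patch.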