Three-dimensional (3D) measurement of surfaces is an essential capability for robot systems. A few approaches in the literature tackle this problem using different methodologies. One approach that has found its way into a number of applications over the past ten years is structured light. A major advantage of this technique is the ease of detecting features that are artificially superimposed on the original scene. Another advantage is that the 3D information can be extracted from a single image, as opposed to stereo techniques. The light patterns normally used take the form of stripes with equal or varying spacing or, more generally, a grid of vertical and horizontal stripes. One problem that limits the use of this technique is the need for precise reference calibration to compute the transformation matrix that maps image coordinates to true world coordinates via triangulation. This requires knowing the true world coordinates of a few control points, and the calibration must be carried out with a high degree of accuracy in a controlled environment, or errors will propagate into all subsequent computations. This normally calls for an expensive fixed setup that may be suitable for some industrial applications but is not flexible enough for dynamic robot vision applications. In this paper we propose a method in which depth-from-focus algorithms are employed to compute the transformation matrix, which is then applied to the stripe points to form a complete 3D surface solution. Depth-from-focus calibration can be performed beforehand by varying the distance of a test object, so no reference control points are needed in the scene. This adds flexibility to the vision system and makes it feasible for robot applications. A full depth-from-focus solution for the whole scene is computationally expensive compared with triangulation of stripe points alone; in this way we gain the advantages of both methods.
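To illustrate the depth-from-focus principle used for calibration, the following is a minimal sketch (not the paper's actual implementation): a test object is imaged at several known distances, a focus measure based on a discrete Laplacian scores each image, and the distance whose image is sharpest is selected. The function names, the choice of focus measure, and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def focus_measure(img):
    # Sum-of-squared responses of a 5-point discrete Laplacian over the
    # image interior: in-focus (high-frequency) content yields larger values.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(np.mean(lap ** 2))

def depth_from_focus(image_stack, distances):
    # Given images of a test object taken at known distances, return the
    # distance whose image maximizes the focus measure (i.e., is sharpest).
    scores = [focus_measure(img) for img in image_stack]
    return distances[int(np.argmax(scores))]
```

In practice the focus scores would be interpolated around the peak for sub-step accuracy, and the measure would be evaluated per image window rather than globally; this sketch only shows the selection step that lets calibration proceed without reference control points in the scene.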