In this paper, we propose a new method for reconstructing a scene from different views taken through a high-distortion lens camera. Unlike other approaches, neither a priori calibration nor specific test patterns are required. Several pairs of point correspondences between the input images are used to estimate intrinsic parameters such as the focal length and the distortion coefficients. From these correspondences, the relative motion of the camera between the input images is computed as rotation matrices. We assume radial lens distortion, modeled by a third-order polynomial with two distortion coefficients, which covers highly distorted zoom lenses. Since both distortion coefficients and the focal length are unknown, it is difficult to obtain these three parameters explicitly from the correspondences alone. To avoid excessive computation and the problem of local minima, we take the following steps: a uniform search in a reduced-dimensional space; fitting a function to obtain a better estimate of the focal length; and polishing the solution by repeating the uniform search to obtain the final distortion coefficients. This multistage optimization remarkably reduces the total number of evaluations. Experimental results are presented, showing that more than 5% of the lens distortion is removed and the camera rotation is recovered, and we show a registration of four outdoor pictures.
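The abstract's exact parameterization and cost function are not specified, but its two main ingredients, a third-order radial polynomial with two coefficients and a repeated ("polishing") uniform search over shrinking bounds, can be sketched as follows. The function names, grid sizes, shrink factor, and the toy least-squares cost are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def distort_radius(r, k1, k2):
    # Third-order polynomial in the radius with two distortion
    # coefficients, one plausible form of the model in the abstract.
    return r + k1 * r ** 2 + k2 * r ** 3

def uniform_search(cost, lo, hi, n=11, stages=3, shrink=0.2):
    # Coarse-to-fine uniform search: evaluate the cost on a uniform
    # grid, keep the best cell, shrink the bounds around it, and
    # repeat. Repeating the search with tighter bounds stands in for
    # the "polishing" step described in the abstract.
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best = lo
    for _ in range(stages):
        axes = [np.linspace(a, b, n) for a, b in zip(lo, hi)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1)
        grid = grid.reshape(-1, lo.size)
        vals = [cost(p) for p in grid]
        best = grid[int(np.argmin(vals))]
        half = (hi - lo) * shrink / 2.0
        lo, hi = best - half, best + half
    return best

# Toy usage: recover (k1, k2) from synthetically distorted radii by
# minimizing a least-squares fit over a shrinking uniform grid.
true_k = (0.05, -0.01)
r = np.linspace(0.1, 1.0, 20)
observed = distort_radius(r, *true_k)
cost = lambda p: np.sum((distort_radius(r, p[0], p[1]) - observed) ** 2)
k_est = uniform_search(cost, lo=[-0.2, -0.2], hi=[0.2, 0.2])
```

In the paper's setting the cost would instead be driven by the image correspondences, with the focal length handled in a separate function-fitting stage; the grid search above only illustrates how a multistage uniform search keeps the number of cost evaluations small (here, 3 stages of an 11x11 grid, 363 evaluations in total).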