Chapter 8: Camera Calibration
Author(s): P. K. Sinha
Published: 2012
DOI: 10.1117/3.858360.ch8
Abstract

In mathematical terms, a camera maps all points on a 3D target object surface to a collection of 2D points on the image plane; a camera model thus relates the image coordinates to the physical locations of the object points in the field of view (FOV). Camera calibration refers to the process of deriving the internal (intrinsic) and external (extrinsic) parameters of the camera model and the image-capture hardware. Intrinsic parameters embody the characteristics of the optical system and its geometric relationship with the image sensor, while extrinsic parameters relate the location and orientation of the camera to the 3D object (Euclidean) space. The 3D object space provides the physical units in which the coordinates {(x_o, y_o, z_o)} of the object points that make up the target scene in the FOV are measured. Extrinsic parameters are derived as a set of rigid-body transformation matrices: three rotations about the x, y, and z axes, and three translations along these axes (block 1 in Fig. 8.1). The two sets of output from the extrinsic calibration process are fed into the intrinsic calibration process to form the complete camera calibration model. In cinematography, rotations about the x, y, and z axes are called pan, tilt, and roll, respectively, and movement along the z axis is called zooming.
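
To make the mapping described above concrete, the following minimal sketch (not from the chapter) composes an intrinsic matrix K with an extrinsic rotation R and translation t to project a 3D object-space point onto the image plane; all numerical values (fx, fy, cx, cy, the rotation angle, and t) are illustrative assumptions, not calibration results from the text.

# Minimal pinhole-projection sketch: extrinsic transform followed by intrinsic mapping.
import numpy as np

# Assumed intrinsic parameters: focal lengths (in pixels) and principal point.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Assumed extrinsic parameters: a rotation about the z axis (roll) and a translation.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.05, 2.0])

def project(point_3d):
    """Map a 3D object-space point (x_o, y_o, z_o) to 2D pixel coordinates."""
    p_cam = R @ point_3d + t      # extrinsic: object space -> camera space
    p_img = K @ p_cam             # intrinsic: camera space -> image plane
    return p_img[:2] / p_img[2]   # perspective division

print(project(np.array([0.2, 0.3, 1.0])))  # pixel coordinates (u, v)

In a full calibration, K, R, and t would be estimated from correspondences between known object points and their observed image points rather than assumed as above.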
