Laser range cameras (LRCs) are powerful tools for many robotic and computer perception tasks. An LRC's output is an array of distances obtained by scanning a laser over the scene, and accurate interpretation of this data requires knowing the angular definition of each pixel. Typically, the range data is converted to Cartesian coordinates by non-linear transformation equations parameterized by calibration values. This paper presents an automated method that uses genetic algorithms to search for calibration parameter values, and for candidate transformation equations, that together maximize the planarity of user-specified sub-regions of the image(s). This method permits calibration against an arbitrary plane, without precise knowledge of the LRC's mechanical precision, intrinsic design, or position relative to the target; it also permits rapid, remote, on-line recalibration. Empirical validation on two different LRC systems has shown significant improvement in image accuracy while reducing calibration time by orders of magnitude.
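The core idea of searching calibration parameters by maximizing planarity can be sketched in miniature. The following is an illustrative assumption-laden toy, not the paper's actual system: it assumes a simple spherical-to-Cartesian transformation with known angular steps, simulates a scan of a flat wall corrupted by a hypothetical additive range offset, and uses an elitist genetic-style search to recover that offset by minimizing the plane-fit residual of the reconstructed points.

```python
import numpy as np

# Toy model (assumed, not from the paper): a 16x16 scan with known angular
# steps and one unknown calibration parameter, an additive range offset.
N = 16                                         # pixels per axis (assumed)
STEP = 0.04                                    # angular step, rad/pixel (assumed)
I, J = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2, indexing="ij")
PAN, TILT = I * STEP, J * STEP                 # per-pixel scan angles

def to_cartesian(ranges, offset):
    """Convert range image to 3-D points using a candidate range offset."""
    r = ranges - offset
    x = r * np.cos(TILT) * np.sin(PAN)
    y = r * np.sin(TILT)
    z = r * np.cos(TILT) * np.cos(PAN)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def planarity_error(pts):
    """RMS distance from the best-fit plane (smallest singular value of
    the centered point cloud, normalized by the point count)."""
    c = pts - pts.mean(axis=0)
    return np.linalg.svd(c, full_matrices=False)[1][-1] / np.sqrt(len(pts))

def simulate_ranges(true_offset, depth=2.0):
    """Measured ranges to the plane z = depth, biased by the true offset."""
    return depth / (np.cos(TILT) * np.cos(PAN)) + true_offset

def ga_calibrate(ranges, pop=40, gens=60, seed=0):
    """Elitist genetic search for the offset that maximizes planarity."""
    rng = np.random.default_rng(seed)
    cand = rng.uniform(-1.0, 1.0, pop)         # candidate offsets (metres)
    for _ in range(gens):
        fit = np.array([planarity_error(to_cartesian(ranges, c)) for c in cand])
        elite = cand[np.argsort(fit)[: pop // 4]]        # keep best quarter
        kids = rng.choice(elite, pop - len(elite))       # clone elite parents
        kids = kids + rng.normal(0.0, 0.02, len(kids))   # Gaussian mutation
        cand = np.concatenate([elite, kids])
    fit = np.array([planarity_error(to_cartesian(ranges, c)) for c in cand])
    return cand[np.argmin(fit)]
```

In this one-parameter toy the planarity residual has a single minimum at the true offset; the full method searches many parameters (and alternative equation forms) simultaneously, where a population-based search is better suited than local optimization. Note that planarity alone cannot observe rigid-body effects such as a pure angular boresight offset, which only rotates the point cloud.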