Range sensors that employ structured-light triangulation techniques often require calibration procedures, based on the system optics and geometry, to relate the captured image data to object coordinates. A Bernstein basis function (BBF) neural network that directly maps measured image coordinates to object coordinates is described in this paper. The proposed technique eliminates the need to explicitly determine the sensor's optical and geometric parameters by creating a functional map from image coordinates to object coordinates. The training and test data used to determine the map are obtained by capturing successive images of the points of intersection between a projected light line and horizontal markings on a calibration bar, which is stepped through the object space. The surface coordinates corresponding to the illuminated pixels in the image are determined from the neural network. An experimental study that involves the calibration of a range sensor using a BBF network is presented to demonstrate the effectiveness and accuracy of this approach. The root mean squared errors for the x and y coordinates in the calibrated plane, 0.25 and 0.15 mm, respectively, are quite low and are suitable for many reverse engineering and part inspection applications. Once the network is trained, a hand-carved wooden mask of unknown shape is placed in the work envelope and translated perpendicular to the projected light plane. The surface shape of the mask is determined using the trained network.
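
The image-to-object map described above can be sketched as a tensor-product Bernstein basis expansion whose output weights are fit by linear least squares. This is a minimal illustration only: the basis degree, the synthetic calibration map, and all function names below are assumptions for the sketch, not the paper's actual network, training data, or training procedure.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Evaluate all degree-n Bernstein basis polynomials at points t in [0, 1].

    B_{i,n}(t) = C(n, i) * t**i * (1 - t)**(n - i); returns shape (len(t), n + 1).
    """
    t = np.asarray(t, dtype=float)
    return np.stack(
        [comb(n, i) * t**i * (1.0 - t) ** (n - i) for i in range(n + 1)], axis=-1
    )

def design_matrix(u, v, n):
    """Tensor-product Bernstein features for normalized image coordinates (u, v)."""
    Bu = bernstein_basis(n, u)                      # (m, n + 1)
    Bv = bernstein_basis(n, v)                      # (m, n + 1)
    return np.einsum("mi,mj->mij", Bu, Bv).reshape(len(u), -1)

# Hypothetical calibration data standing in for the light-line / calibration-bar
# intersection points described in the abstract: a smooth synthetic map from
# normalized image coordinates (u, v) to object coordinates (x, y) in mm.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 200)
v = rng.uniform(0.0, 1.0, 200)
x = 50.0 * u + 5.0 * u * v        # assumed ground-truth object-space x (mm)
y = 40.0 * v - 3.0 * u**2         # assumed ground-truth object-space y (mm)

# With a linear output layer, fitting the BBF weights reduces to one
# least-squares solve per output coordinate.
A = design_matrix(u, v, n=4)
wx, *_ = np.linalg.lstsq(A, x, rcond=None)
wy, *_ = np.linalg.lstsq(A, y, rcond=None)

rmse_x = np.sqrt(np.mean((A @ wx - x) ** 2))
rmse_y = np.sqrt(np.mean((A @ wy - y) ** 2))
```

Because this synthetic map is itself a low-degree polynomial, the degree-4 tensor-product basis reproduces it essentially exactly; on real sensor data the residual would instead reflect measurement noise and the chosen basis degree.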