A fusion technique for 3-D representation is presented. The technique fuses laser range data with segmented image data, guided by human expertise about the environment. In particular, range data may contain noise caused by reflections from sloped surfaces or long open corridors; the proposed approach removes this noise by applying human expertise about 3-D environments. The method could be used by an autonomous robot in an unknown environment, by an inspection machine in a complex manufacturing environment, or by a visual navigation device for blind people. Additional applications include correcting measurement deficiencies in a laser scan and providing a true-color, 3-D perceived-shape representation of a given object in a modeling environment.