Perception for ground robot mobility requires automatic generation of descriptions of the robot’s surroundings from sensor input (cameras, LADARs, etc.). Effective scene-understanding techniques exist, but they are generally purely bottom-up: they rely entirely on classifying features extracted from the input data using learned models. In practice, perception systems for ground robots have substantial additional information at their disposal from knowledge of the domain and the task. For example, a robot operating in an urban environment might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state-of-the-art scene understanding approaches.
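The abstract does not specify how the map prior and the bottom-up classifier are combined; one common and minimal way to do it is per-cell Bayesian fusion, multiplying classifier likelihoods by map-derived class priors and renormalizing. The sketch below is illustrative only (the function name, array shapes, and class set are assumptions, not the paper's method):

```python
import numpy as np

def fuse_with_prior(likelihood, prior, eps=1e-9):
    """Combine per-cell class likelihoods from a bottom-up classifier
    with class priors derived from an approximate map.

    likelihood: (H, W, C) array of classifier scores, P(obs | class)
    prior:      (H, W, C) array of map-derived priors, P(class)
    Returns a per-cell posterior P(class | obs), normalized over classes.
    """
    posterior = likelihood * prior
    posterior /= posterior.sum(axis=-1, keepdims=True) + eps
    return posterior

# Toy example: one grid cell, three classes (road, building, vegetation).
# The classifier is torn between road and vegetation, but the approximate
# map says this cell is almost certainly road.
lik = np.array([[[0.45, 0.10, 0.45]]])
pri = np.array([[[0.80, 0.10, 0.10]]])
post = fuse_with_prior(lik, pri)
# The map prior breaks the tie in favor of "road" (class index 0).
```

A design note: fusing in probability space like this keeps the bottom-up classifier unchanged, so the prior can be dropped or reweighted wherever the map is known to be stale or inaccurate.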
"Improving semantic scene understanding using prior information", Proc. SPIE 9837, Unmanned Systems Technology XVIII, 98370Q (13 May 2016); doi: 10.1117/12.2231111