Sensor fusion in robotics, particularly for the navigation of autonomous mobile robots, has typically been addressed as a “bottom-up” or data-driven process. This has led to a variety of systems that, although somewhat successful, have been difficult to expand to include additional sensors or to extend to other domains. The approach taken here is to specify and develop a control scheme that considers the sensor fusion process in the context of the intended actions of the robot, knowledge of the environment, and the available sensor suite.
The resulting control scheme exploits environmental knowledge in three ways in order to reduce processing. First, the control structure supports adaptation of the sensor fusion process to the environment and the intended action. An appropriate set of candidate features is selected from the feature extraction library during the investigatory phase. Fusion occurs during the performatory phase in one of three global states: complete sensor fusion; fusion with the possibility of discordance and resultant recalibration of dependent perceptual sources; and fusion with the possibility of discordance and resultant suppression of discordant perceptual sources. Second, the states themselves use environmental knowledge to improve the fusion results as well as the sensing quality. Knowledge of how a sensor behaves under certain environmental conditions can lead to the exclusion of suspect readings from the fusion process. Third, the control scheme allows the system to respond to unexpected or catastrophic changes in the environment or sensors by permitting transitions between states. When an unacceptable discordance is detected between features, the investigatory phase is re-invoked, the system is reconfigured, and fusion is re-instantiated in a new state.
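The performatory phase described above can be viewed as a small state machine. The following sketch illustrates that idea under assumptions not in the original: scalar readings, a simple spread-based discordance measure, a fixed threshold (`DISCORDANCE_LIMIT`), and per-source calibration offsets are all hypothetical stand-ins for the paper's feature-level machinery.

```python
from enum import Enum, auto
from statistics import mean, median

class FusionState(Enum):
    """The three global states of the performatory phase."""
    COMPLETE = auto()      # complete sensor fusion; no discordance handling
    RECALIBRATE = auto()   # recalibrate dependent sources on discordance
    SUPPRESS = auto()      # suppress discordant sources on discordance

# Hypothetical tolerance separating acceptable from unacceptable discordance.
DISCORDANCE_LIMIT = 1.0

def fuse(state, readings, offsets=None):
    """One performatory-phase fusion step over scalar readings.

    Returns (fused_value, reconfigure). A True `reconfigure` flag signals
    that the investigatory phase must be re-invoked so the system can be
    reconfigured and instantiated in a new state.
    """
    spread = max(readings) - min(readings)      # toy discordance measure
    if spread <= DISCORDANCE_LIMIT:
        return mean(readings), False            # concordant: fuse everything
    if state is FusionState.RECALIBRATE and offsets:
        corrected = [r - o for r, o in zip(readings, offsets)]
        if max(corrected) - min(corrected) <= DISCORDANCE_LIMIT:
            return mean(corrected), False       # recalibration resolved it
    if state is FusionState.SUPPRESS:
        m = median(readings)
        kept = [r for r in readings if abs(r - m) <= DISCORDANCE_LIMIT / 2]
        if kept:
            return mean(kept), False            # suspect readings excluded
    return None, True  # unacceptable discordance: trigger reconfiguration
```

For example, in the SUPPRESS state a wildly discordant source is dropped and the remaining sources are fused, while in the COMPLETE state the same discordance forces a transition back to the investigatory phase.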