Limited autonomous behaviors are fast becoming a critical capability in robotics as robotic platforms are deployed in increasingly complex and interactive environments. As additional sensory capabilities are added to these platforms, sensor fusion to enhance and facilitate autonomous behavior becomes correspondingly important. Using biology as a model, an equivalent of the vestibular system is needed to orient the system within its environment and to enable multi-modal sensor fusion.
In mammals, the vestibular system plays a central role in physiological homeostasis and in the integration of sensory information (Fuller et al., Neuroscience 129 (2004) 461-471). At the level of the Superior Colliculus in the brain, there is multimodal sensory integration across visual, auditory, somatosensory, and vestibular inputs (Wallace et al., J Neurophysiol 80 (1998) 1006-1010), with the vestibular component providing a strong reference-frame signal that gates the other inputs. Using a simple model of the deep layers of the Superior Colliculus, an off-the-shelf 3-axis solid-state gyroscope and accelerometer were used as the equivalent of the vestibular system. The acceleration and rotation-rate measurements are used to determine the relationship between the local reference frame of the robotic platform (an iRobot Packbot®) and the inertial reference frame (the outside world), with the simulated vestibular input tightly coupled to the acoustic and optical inputs.
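To make the frame-alignment step concrete, the following Python sketch shows one common way that 3-axis gyroscope and accelerometer readings can be blended with a complementary filter to track a platform's roll and pitch relative to the inertial frame. It is a minimal illustration under generic-IMU assumptions, not the implementation used on the Packbot; the function name, parameters, and blending weight are illustrative choices.

    import numpy as np

    def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
        """One update of a complementary filter estimating the platform's
        roll and pitch (rad) relative to the inertial frame.

        gyro  : angular rates about the body x and y axes (rad/s)
        accel : specific-force vector in the body frame (m/s^2); when the
                platform is not accelerating this is dominated by gravity
        dt    : time step (s)
        alpha : weight on the integrated gyro term (0 < alpha < 1)
        """
        # Short-term estimate: integrate the gyro rates (accurate but drifts).
        roll_gyro = roll + gyro[0] * dt
        pitch_gyro = pitch + gyro[1] * dt

        # Long-term reference: tilt of the gravity vector seen by the
        # accelerometer (noisy but drift-free).
        roll_acc = np.arctan2(accel[1], accel[2])
        pitch_acc = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))

        # Blend the two: gyro for fast dynamics, accelerometer to bound drift.
        roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
        pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
        return roll, pitch

The same drift-versus-noise trade-off motivates tightly coupling the inertial estimate with the acoustic and optical channels.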
Field testing of the robotic platform, using acoustics to cue optical sensors coupled through the biomimetic vestibular model for "slew to cue" gunfire detection, has shown great promise.
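As an illustration of the "slew to cue" coupling (again a sketch under assumed conventions, not the fielded system), the snippet below shows how an acoustic bearing reported in the inertial frame could be combined with the vestibular-style heading estimate to produce a pan correction for the optical sensor; the function name, parameters, and sign conventions are assumptions.

    import numpy as np

    def slew_to_cue(acoustic_bearing_world, platform_yaw, camera_pan):
        """Pan correction (rad) needed to point the optical sensor at an
        acoustic cue.

        acoustic_bearing_world : bearing of the gunfire cue in the inertial frame (rad)
        platform_yaw           : estimated platform heading in the inertial frame (rad)
        camera_pan             : current camera pan relative to the platform (rad)
        """
        # Express the cue direction in the platform's local frame using the
        # vestibular-style orientation estimate.
        bearing_local = acoustic_bearing_world - platform_yaw
        # Residual pan needed, wrapped to [-pi, pi).
        return (bearing_local - camera_pan + np.pi) % (2.0 * np.pi) - np.pi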