Synchronizing real and predicted synthetic video imagery for localization of a robot to a 3D environment (18 January 2010)
A mobile robot moving in an environment that contains other moving objects and active agents, some of which may represent threats and some of which may represent collaborators, needs to be able to reason about the potential future behaviors of those objects and agents. In previous work, we presented an approach to tracking targets with complex behavior that leverages a 3D simulation engine to generate predicted imagery, which is then compared against real imagery. We introduced a method for comparing real and simulated imagery using an affine image transformation that maps the real scene to the synthetic scene in a robust fashion. In this paper, we present an approach that continually synchronizes the real and synthetic video by mapping the affine transformation yielded by the real/synthetic image comparison to a new pose for the synthetic camera. We show a series of results for pairs of real and synthetic scenes, covering cases in which the two scenes contain similar objects and cases in which they differ.
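The core step described above — estimating an affine transform between the real and synthetic frames, then converting it into a synthetic-camera pose update — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the least-squares estimator stands in for the paper's robust comparison method, and the decomposition of translation into pan/tilt (given an assumed focal length) and of scale into a dolly factor is a hypothetical mapping.

```python
import numpy as np

def estimate_affine(real_pts, synth_pts):
    """Least-squares 2x3 affine A mapping real image points to synthetic
    image points: [x', y']^T = A @ [x, y, 1]^T.
    Illustrative only; the paper uses a robust estimator instead."""
    real_pts = np.asarray(real_pts, dtype=float)
    synth_pts = np.asarray(synth_pts, dtype=float)
    n = len(real_pts)
    X = np.hstack([real_pts, np.ones((n, 1))])          # n x 3 design matrix
    B, *_ = np.linalg.lstsq(X, synth_pts, rcond=None)   # solves X @ B = synth
    return B.T                                          # 2 x 3 affine matrix

def pose_correction(A, focal_length):
    """Map affine parameters to a coarse synthetic-camera update:
    image translation -> pan/tilt angles, mean column scale -> dolly factor.
    A hypothetical decomposition, not the authors' exact mapping."""
    tx, ty = A[0, 2], A[1, 2]
    scale = 0.5 * (np.hypot(A[0, 0], A[1, 0]) + np.hypot(A[0, 1], A[1, 1]))
    pan = np.arctan2(tx, focal_length)    # horizontal offset -> rotate camera
    tilt = np.arctan2(ty, focal_length)   # vertical offset -> tilt camera
    return pan, tilt, scale
```

In a tracking loop, the pose update would be applied to the simulation's camera each frame, so that the next rendered prediction is re-registered against the incoming real image before the comparison repeats.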
© 2010 Society of Photo-Optical Instrumentation Engineers (SPIE).
Damian M. Lyons, Sirhan Chaudhry, and D. Paul Benjamin, "Synchronizing real and predicted synthetic video imagery for localization of a robot to a 3D environment", Proc. SPIE 7539, Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques, 75390S (18 January 2010); https://doi.org/10.1117/12.838642
