3D organization of 2D urban imagery
17 April 2008
Peter Cho
Abstract
Working with New York data as a representative and instructive example, we fuse aerial ladar imagery with satellite pictures and Geographic Information System (GIS) layers to form a comprehensive 3D urban map. Digital photographs are then mathematically inserted into this detailed world space. Reconstructing the photos' view frusta yields their cameras' locations and pointing directions, which may be a priori unknown. It also enables knowledge to be projected from the urban map onto georegistered image planes. For instance, absolute geolocations can be assigned to individual pixels, and GIS annotations can be transferred from 3D to 2D. Moreover, such information propagates among all images whose view frusta intercept the same urban map location. We demonstrate how many imagery exploitation challenges (e.g., identifying objects in cluttered scenes or selecting all photos containing a stationary ground target) become mathematically tractable once a 3D framework for analyzing 2D images is adopted. Finally, we close by briefly discussing future applications of this work to photo-based querying of urban knowledge databases.
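To make the 3D-to-2D transfer concrete, the sketch below illustrates the two basic operations implied by the abstract: projecting a georegistered 3D map point into a photo whose camera pose has been recovered, and assigning an absolute geolocation to a pixel by casting its viewing ray back into the world. This is a minimal pinhole-camera illustration, not the paper's implementation: the intrinsic matrix K, pose (R, t), and the flat ground plane at z = ground_z are hypothetical stand-ins, whereas the paper intersects rays against the full ladar-derived urban surface.

```python
import numpy as np

def project_map_point_to_pixel(X_world, K, R, t):
    """Project a 3D urban-map point (world coordinates) into a georegistered
    photo.  K is a 3x3 intrinsic matrix; (R, t) map world -> camera
    coordinates.  All three are placeholders for a recovered camera pose."""
    X_cam = R @ X_world + t           # world -> camera frame
    if X_cam[2] <= 0:                 # point lies behind the image plane
        return None
    x = K @ X_cam                     # pinhole projection
    return x[:2] / x[2]               # pixel coordinates (u, v)

def pixel_to_ground_geolocation(uv, K, R, t, ground_z=0.0):
    """Inverse operation: cast the ray through pixel (u, v) and intersect it
    with a flat plane z = ground_z to assign an absolute geolocation.
    A real urban map would intersect the ray with the ladar surface instead."""
    cam_center = -R.T @ t                                   # camera position in world frame
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    ray_world = R.T @ ray_cam                               # ray direction in world frame
    s = (ground_z - cam_center[2]) / ray_world[2]           # ray parameter at the plane
    return cam_center + s * ray_world                       # 3D intersection point
```

Once every pixel can be mapped to a world coordinate this way, annotations attached to that coordinate in the 3D map can be drawn into any image whose view frustum covers it, which is how information propagates between otherwise unrelated photos.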
© 2008 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Peter Cho, "3D organization of 2D urban imagery," Proc. SPIE 6968, Signal Processing, Sensor Fusion, and Target Recognition XVII, 696817 (17 April 2008); https://doi.org/10.1117/12.777055
Proceedings paper, 9 pages

