Semantically enabled image similarity search (21 May 2015)
Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector, or "embedding," space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally smoothed but information-limited content, while overhead imagery provides an information-rich but temporally limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.
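The abstract describes placing both modalities in a shared embedding space and querying across them by similarity. A minimal sketch of that cross-modal lookup, assuming hypothetical encoders have already produced L2-comparable vectors (the embeddings here are random stand-ins, and `similarity_search` is an illustrative helper, not the authors' API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings: rows are items from two modalities
# (e.g., GIS vector features and overhead image chips) mapped
# into the same d-dimensional space by some learned encoder.
d = 64
gis_embeddings = rng.normal(size=(100, d))
image_embeddings = rng.normal(size=(250, d))

def normalize(x):
    # L2-normalize rows so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def similarity_search(query, corpus, k=5):
    """Return indices of the k corpus rows most similar to query."""
    q = query / np.linalg.norm(query)
    scores = normalize(corpus) @ q
    return np.argsort(scores)[::-1][:k]

# Cross-modal query: one GIS feature's embedding retrieves the
# most similar overhead image chips from the other modality.
hits = similarity_search(gis_embeddings[0], image_embeddings, k=5)
print(hits)
```

Because both modalities share one space, the same brute-force dot-product scan works in either direction; at the cloud scale the paper targets, an approximate nearest-neighbor index would replace the exhaustive `argsort`.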
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
May V. Casterline, Timothy Emerick, Kolia Sadeghi, C. Alec Gosse, Brent Bartlett, Jason Casey, "Semantically enabled image similarity search", Proc. SPIE 9473, Geospatial Informatics, Fusion, and Motion Video Analytics V, 94730I (21 May 2015); https://doi.org/10.1117/12.2177409
