Rendering synthetic imagery from gaming-engine environments allows us to create data featuring any number of object orientations, conditions, and lighting variations. This capability is particularly useful in classification tasks, where there is an overwhelming lack of the labeled data needed to train state-of-the-art machine learning algorithms. However, the use of synthetic data is not without limitations: in the case of imagery, a deep learning model trained purely on synthetic data typically performs poorly when applied to real-world imagery. Previous work shows that "domain adaptation," mixing real-world and synthetic data, improves performance on a target dataset. In this paper, we train a deep neural network on synthetic imagery, including ordnance and overhead ship imagery, and investigate a variety of methods for adapting our model to a dataset of real images.
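The abstract does not detail which adaptation methods the authors studied, but the simplest one it names, mixing real-world and synthetic labeled data into a single training set, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the directory paths, the ResNet-18 backbone, the oversampling weights, and all hyperparameters are assumptions, not details from the paper.

```python
# A minimal sketch of domain adaptation by data mixing: train one
# classifier on the union of a synthetic and a (smaller) real dataset.
# Paths, backbone, and weights are hypothetical, not from the paper.
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layouts; ImageFolder expects one subdirectory per class.
synthetic = datasets.ImageFolder("data/synthetic", transform=preprocess)
real = datasets.ImageFolder("data/real", transform=preprocess)
mixed = ConcatDataset([synthetic, real])

# Oversample the scarce real images so every batch sees both domains.
weights = [1.0] * len(synthetic) + \
          [len(synthetic) / max(len(real), 1)] * len(real)
sampler = WeightedRandomSampler(weights, num_samples=len(mixed),
                                replacement=True)
loader = DataLoader(mixed, batch_size=32, sampler=sampler)

model = models.resnet18(weights=None, num_classes=len(synthetic.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch over the mixed dataset
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The weighted sampler is one way to balance the two domains when real data is scarce; other mixing schemes (fixed ratios per batch, curriculum schedules, or fine-tuning on real data after synthetic pretraining) fit the same description in the abstract.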
© 2019 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Marissa Dotter, Chelsea Mediavilla, Jonathan Sato, Chris M. Ward, Shibin Parameswaran, and Josh Harguess, "Into the wild: a study in rendered synthetic data and domain adaptation methods," Proc. SPIE 10992, Geospatial Informatics IX, 109920D (presented at SPIE Defense + Commercial Sensing, April 16, 2019; published June 4, 2019); https://doi.org/10.1117/12.2518774.