14 May 2018 Enhanced backgrounds in scene rendering with GTSIMS
A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise and clutter-to-noise ratios to determine whether a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework, GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. Finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS to better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.
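The clutter-to-noise criterion mentioned above can be illustrated with a minimal sketch. This is not the GTSIMS implementation; the function name, the sample pixel values, and the detection threshold are all illustrative assumptions, chosen only to show how a background's clutter statistics gate whether a signature is flagged as of interest.

```python
import statistics

def clutter_to_noise_ratio(target_counts, background_counts):
    """Illustrative clutter-to-noise metric (hypothetical, not GTSIMS):
    the difference between the mean target signal and the mean background,
    scaled by the standard deviation of the background clutter."""
    target_mean = statistics.fmean(target_counts)
    bg_mean = statistics.fmean(background_counts)
    bg_std = statistics.stdev(background_counts)
    return (target_mean - bg_mean) / bg_std

# A signature is flagged when its CNR exceeds a chosen threshold;
# a noisier (higher-variance) background suppresses the ratio.
background = [10.0, 11.0, 9.0, 10.5, 9.5]  # synthetic background pixel counts
target = [18.0, 19.0, 17.5]                # synthetic target pixel counts
print(clutter_to_noise_ratio(target, background))  # → about 10.33
```

The same target signal over a background with twice the standard deviation would yield half the ratio, which is why faithfully rendering background non-uniformity (terrain, clouds, sky) matters when predicting detection performance.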
Conference Presentation
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Keith F. Prussing, Oliver Pierson, Chris Cordell, John Stewart, Kevin Nielson, "Enhanced backgrounds in scene rendering with GTSIMS", Proc. SPIE 10625, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXIX, 106250O (14 May 2018); https://doi.org/10.1117/12.2304864