Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene
generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of
parallelization. Because modern PC graphics cards provide many computational cores (e.g., 240 shader cores on
current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing
units (GPUs) offers significant potential for simultaneously enhancing simulation frame rate and fidelity. Exploiting
this potential requires algorithm implementations structured to minimize data transfers between the
central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware
In-The-Loop Simulator (KHILS) will be presented, along with language tradeoffs among conventional shader
programming, Compute Unified Device Architecture (CUDA), and Open Computing Language (OpenCL), including
performance comparisons and possible pathways for future tool development.
Hardware and software in the loop modeling of maritime environments involves a wide variety of complex physical and
optical phenomenology and effects. The scales of the significant effects to be modeled range from the order of
centimeters for capillary-type waves and turbulent wake effects up to many meters for rolling waves. Wakes from boats
and ships operating at a wide variety of speeds and conditions add further scene complexity. Generating
synthetic scenes for such a detailed, multi-scaled and dynamic environment in a physically realistic yet computationally
tractable fashion represents a significant challenge for scene generation tools. In this paper, next-generation scene
generation codes utilizing personal computer (PC) graphics processors with programmable shaders as well as CUDA
(Compute Unified Device Architecture) and OpenCL (Open Computing Language) implementations will be presented.
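To make the multi-scale requirement concrete, the following is a minimal sketch, in Python with NumPy, of how a sea-surface height field might be composed by superposing sinusoidal components at both capillary (centimeter) and rolling-wave (meter) scales. The component amplitudes, wavelengths, and function names are illustrative assumptions, not the actual KHILS scene generation code, which draws on measured wave spectra and far richer physics.

```python
import numpy as np

def sea_surface_height(x, y, t, components):
    """Superpose traveling sinusoidal wave components into a height field.

    components: list of (amplitude_m, wavelength_m, direction_rad, speed_m_s).
    Illustrative only -- a real code would sample a measured wave spectrum.
    """
    h = np.zeros_like(x, dtype=float)
    for amp, wavelength, theta, speed in components:
        k = 2.0 * np.pi / wavelength              # wavenumber
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        h += amp * np.sin(kx * x + ky * y - k * speed * t)
    return h

# Grid spanning 10 m at 1 cm resolution -- fine enough to resolve capillary scales.
x, y = np.meshgrid(np.linspace(0.0, 10.0, 1001), np.linspace(0.0, 10.0, 1001))
components = [
    (0.50, 8.0, 0.0, 3.5),     # rolling gravity wave, ~8 m wavelength
    (0.05, 0.5, 0.4, 0.9),     # short wind chop
    (0.002, 0.03, 1.1, 0.2),   # capillary ripple, ~3 cm wavelength
]
h = sea_surface_height(x, y, 0.0, components)
```

The spread of wavelengths (3 cm to 8 m) in one field is what drives the grid resolution, and hence the computational load, that motivates GPU implementation.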
Infrared detectors operating in two or more wavebands can be used to obtain emissivity-area, temperature, and related parameters. While the cameras themselves may not collect data in the two bands simultaneously in space or time, the algorithms used to calculate such parameters rely on spatial and temporal alignment of the true optical data in the two bands. When such systems are tested in a hardware-in-the-loop (HWIL) environment, this alignment requirement is in turn imposed on the projection systems used for testing. As discussed in previous presentations to this forum, optical distortion and misalignment can lead to significant band-to-band and band-to-truth simulation errors. This paper will address the potential impact of these errors on typical two-color estimation algorithms, as well as the improvements obtained when distortion removal techniques are applied to HWIL data collected at the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) facility.
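As background on the kind of estimation algorithm the alignment requirement serves, here is a minimal sketch of graybody two-color ratio thermometry: for a graybody, emissivity cancels in the ratio of in-band radiances, leaving a monotonic function of temperature that can be inverted numerically. The band wavelengths and the bisection solver below are illustrative assumptions, not the algorithms actually under test.

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m / s)
KB = 1.380649e-23    # Boltzmann constant (J / K)

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (W / m^3 / sr)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / math.expm1(b)

def two_color_temperature(ratio, lam1, lam2, lo=200.0, hi=3000.0):
    """Invert the radiance ratio L(lam1, T) / L(lam2, T) for T.

    For a graybody the emissivity cancels in the ratio, and with
    lam1 < lam2 the ratio increases monotonically with temperature,
    so simple bisection suffices for this illustration.
    """
    f = lambda t: planck_radiance(lam1, t) / planck_radiance(lam2, t) - ratio
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Round trip: a 600 K graybody seen in MWIR bands at 3.9 um and 4.6 um.
lam1, lam2 = 3.9e-6, 4.6e-6
r = planck_radiance(lam1, 600.0) / planck_radiance(lam2, 600.0)
t_est = two_color_temperature(r, lam1, lam2)   # recovers ~600 K
```

Because the estimate hinges on the ratio of two band measurements of the same scene point, any band-to-band misregistration feeds directly into the ratio and hence into the recovered temperature, which is why the projection system inherits the alignment requirement.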
As discussed in a previous paper to this forum, optical components such as collimators that are part of many infrared projection systems can lead to significant distortions in the sensed position of projected objects versus their true position. The previous paper discussed the removal of these distortions in a single waveband through a polynomial correction process. This correction was applied during post-processing of the data from the infrared camera-under-test. This paper extends the correction technique to two-color infrared projection. The extension of the technique allows the distortions in the individual bands to be corrected, as well as providing for alignment of the two color channels at the aperture of the camera-under-test. The co-alignment of the two color channels is obtained through the application of the distortion removal function to the object position data prior to object projection.
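A polynomial correction of the kind described can be sketched as a least-squares fit of a low-order 2-D polynomial mapping sensed positions to true positions, which is then applied to object position data before projection. The second-order basis, function names, and synthetic calibration data below are assumptions for illustration; the actual polynomial order and calibration procedure used at KHILS are not reproduced here.

```python
import numpy as np

def poly_design(x, y):
    """Design matrix for a second-order 2-D polynomial in (x, y)."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_distortion_correction(sensed, true):
    """Fit polynomial coefficients mapping sensed positions to true positions.

    sensed, true: (N, 2) arrays of calibration point positions.
    Returns a (6, 2) coefficient matrix (one column per output coordinate).
    """
    A = poly_design(sensed[:, 0], sensed[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, true, rcond=None)
    return coeffs

def correct(points, coeffs):
    """Apply the fitted correction to object positions prior to projection."""
    return poly_design(points[:, 0], points[:, 1]) @ coeffs

# Synthetic check: build calibration data from a known quadratic mapping,
# then confirm the fit reproduces it.
rng = np.random.default_rng(0)
sensed = rng.uniform(-1.0, 1.0, size=(50, 2))
c_true = np.array([[0.010, -0.020],
                   [1.000,  0.030],
                   [-0.020, 0.990],
                   [0.005,  0.004],
                   [0.008, -0.006],
                   [-0.004, 0.007]])
true = poly_design(sensed[:, 0], sensed[:, 1]) @ c_true
coeffs = fit_distortion_correction(sensed, true)
resid = correct(sensed, coeffs) - true   # essentially zero
```

Applying the fitted mapping to object positions before projection, rather than to camera data afterward, is what lets the same machinery co-align the two color channels at the aperture of the camera-under-test.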
Infrared projection systems commonly use a collimating optical system to make images of a projection device appear far away from the infrared camera observing the projector. These "collimators" produce distortions in the image seen by the camera. For many applications the distortions are negligible, and the major task is simply shifting, rotating, and adjusting the magnification so that the projector image is aligned with the camera. In a recent test performed in the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator facility, it was necessary to correct for distortions as small as 1/10th the size of the camera pixels across the field of view of the camera. This paper describes the measurements and analyses performed to determine the optical distortions, and the methods used to correct them.
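The shift, rotation, and magnification alignment mentioned above can be posed as a linear least-squares problem over point correspondences, since a similarity transform is linear in a = s cos(theta), b = s sin(theta), and the translation. The function name and synthetic data below are hypothetical, for illustration only, and do not represent the facility's actual alignment procedure.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares shift + rotation + magnification aligning src to dst.

    Solves dst ~= s * R(theta) @ src + t. With a = s*cos(theta) and
    b = s*sin(theta), each point gives two linear equations:
        x' = a*x - b*y + tx
        y' = b*x + a*y + ty
    """
    x, y = src[:, 0], src[:, 1]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    rhs = np.concatenate([dst[:, 0], dst[:, 1]])
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    scale = float(np.hypot(a, b))
    theta = float(np.arctan2(b, a))
    return scale, theta, np.array([tx, ty])

# Synthetic check: recover a known small rotation, magnification, and shift.
rng = np.random.default_rng(1)
src = rng.uniform(-1.0, 1.0, size=(20, 2))
s, th = 1.02, 0.01
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
dst = (src @ R.T) * s + np.array([0.3, -0.1])
scale, theta, t = fit_similarity(src, dst)
```

Such a global similarity fit handles only the "negligible distortion" case; residuals at the 1/10-pixel level across the field of view are what motivate the higher-order distortion measurement and correction the paper describes.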