Northrop Grumman Amherst Systems has continued to improve the Real-time IR/EO Scene Simulator (RISS) for hardware-in-the-loop (HWIL) testing of infrared sensor systems. Several new and enhanced capabilities have been added to the system for both customer and internal development programs. A new external control capability provides control of either player trajectories or unit-under-test (UUT) orientation. The RISS Scene Rendering Subsystem (SRS) has been enhanced with support for texture transparency and increased texture memory capacity. The RISS Universal Programmable Interface (UPI) graphical user interface (GUI) has been improved to provide added flexibility and control of the real-time sensor modeling capabilities. This paper will further explore these and other product improvements.
This paper discusses recent advances in the development and applications of the Universal Programmable Interface. Development milestones and current performance benchmarks will be presented.
Comptek Amherst Systems has been involved in the development of a Universal Programmable Interface (UPI) for use in hardware-in-the-loop (HWIL) testing of infrared/electro-optic (IR/EO) sensor systems. The UPI provides an interface between a scene generation system (SGS) and the unit under test (UUT) for either direct injection or optical projection. Unlike custom interfaces, the reconfigurable UPI supports a wide range of sensor systems. It was designed to simulate various sensor effects, emulate bypassed sensor components, and reformat the data for input to the UUT. This paper discusses the advances we have made in the past year on the UPI, including those in both hardware and software.
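To illustrate the data-path concept, the following is a minimal Python sketch of a reconfigurable per-frame processing chain in the spirit of the UPI; every name and stage here is hypothetical, not the actual UPI interface.

```python
# Hypothetical sketch of a reconfigurable processing chain in the spirit of
# the UPI: each stage transforms a scene frame before it reaches the UUT.
# All names are illustrative, not the actual UPI API.
from typing import Callable, List
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

def make_pipeline(stages: List[Stage]) -> Stage:
    """Compose per-frame stages (sensor effects, component emulation,
    output reformatting) into a single frame transform."""
    def run(frame: np.ndarray) -> np.ndarray:
        for stage in stages:
            frame = stage(frame)
        return frame
    return run

# Example configuration: gain/offset (emulating detector response of a
# bypassed component), then quantization to the UUT's input word size.
def gain_offset(gain: float, offset: float) -> Stage:
    return lambda f: gain * f + offset

def quantize(bits: int) -> Stage:
    levels = 2 ** bits - 1
    return lambda f: np.clip(np.round(f * levels), 0, levels).astype(np.uint16)

pipeline = make_pipeline([gain_offset(1.05, 0.02), quantize(14)])
frame_out = pipeline(np.random.rand(256, 256))  # stand-in for an SGS frame
```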
This paper will discuss the sensor modeling capabilities of the Universal Programmable Interface and the supporting software and hardware. Sensor modeling capabilities include image blurring due to the sensor's modulation transfer function and pixel effects. A sensor modeling and analysis software tool, based on FLIR92, will be discussed. A technique for modeling other sensor effects will also be presented. This technique, called pixel displacement processing, can model geometric distortion, physical sensor jitter, and other user specified effects. It can also be used to accurately perform latency compensation.
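As a concrete illustration of pixel displacement processing, the sketch below resamples each output pixel from a displaced input coordinate, which can encode geometric distortion, frame-to-frame jitter, or a latency-compensating shift. The displacement fields and the distortion coefficient are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of pixel displacement processing: output (r, c) samples
# the input at (r + dy, c + dx). Displacement fields are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates

def displace(frame: np.ndarray, dy: np.ndarray, dx: np.ndarray) -> np.ndarray:
    """Resample `frame` through a per-pixel displacement field (bilinear)."""
    rows, cols = np.indices(frame.shape, dtype=np.float64)
    return map_coordinates(frame, [rows + dy, cols + dx], order=1, mode="nearest")

# Example: uniform sub-pixel jitter plus a simple radial (barrel) distortion.
h, w = 256, 256
rows, cols = np.indices((h, w), dtype=np.float64)
r2 = ((rows - h / 2) ** 2 + (cols - w / 2) ** 2) / (h * w)
k = 0.05                                  # hypothetical distortion coefficient
dy = 0.3 + k * r2 * (rows - h / 2)        # jitter + radial term, in pixels
dx = -0.2 + k * r2 * (cols - w / 2)
out = displace(np.random.rand(h, w), dy, dx)
```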
The premise of foveal vision is that surveying a large area with low resolution to detect regions of interest, followed by their verification with localized high resolution, is a more efficient use of computational and communications throughput than resolving the area uniformly at high resolution. This paper presents target/clutter discrimination techniques that support the foveal multistage detection and verification of infrared-sensed ground targets in cluttered environments. The first technique uses a back-propagation neural network to classify narrow field-of-view high acuity image chips using their projection onto a set of principal components as input features. The second technique applies linear discriminant analysis on the same input features. Both techniques include refinements that address generalization and position errors in detected regions of interest. Experimental results using second generation forward looking infrared imagery are presented.
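The following is a minimal sketch of the two techniques as described: image chips are projected onto principal components, and the resulting features feed either a back-propagation network or linear discriminant analysis. The synthetic data, chip size, and component count are placeholders, and scikit-learn stands in for the paper's implementation.

```python
# Sketch: PCA projection of image chips as input features for both a
# back-propagation neural network and LDA. Data is a synthetic placeholder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
chips = rng.random((200, 32 * 32))        # flattened 32x32 image chips
labels = rng.integers(0, 2, 200)          # 1 = target, 0 = clutter

pca = PCA(n_components=20).fit(chips)     # learn principal components
features = pca.transform(chips)           # projections = input features

nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(features, labels)
lda = LinearDiscriminantAnalysis().fit(features, labels)

print(nn.predict(features[:5]), lda.predict(features[:5]))
```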
Foveal active vision features imaging sensors and processing with graded acuity, coupled with context-sensitive gaze control. The wide field of view of peripheral vision reduces target search time, but its low acuity makes it susceptible to preliminary false alarms when operating in environments with structured clutter. In this paper, we present a foveal active vision technique for multiresolution cueing that detects regions of interest (ROIs) at coarse resolution and subsequently interrogates them at progressively higher resolution until the ROIs are disambiguated. A hierarchical foveal machine vision framework with rectilinear retinotopology is used. A two-stage detector uses multiscale shape matching to identify potential targets and a chain of neural networks to filter out false alarms. This context-sensitive, coarse-to-fine approach minimizes the number of computationally expensive high-acuity interrogations required while preserving performance. Results from our experiments using second generation forward looking infrared imagery are presented.
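A minimal sketch of the coarse-to-fine cueing loop follows. The block-mean detector and variance-based verifier are deliberately simple stand-ins for the paper's multiscale shape matcher and neural-network chain, and all thresholds are hypothetical.

```python
# Sketch of coarse-to-fine cueing: detect candidate ROIs at low resolution,
# then re-interrogate only survivors at progressively stricter settings.
import numpy as np

def detect_coarse(image: np.ndarray, block: int, thresh: float):
    """Flag blocks whose mean intensity exceeds a threshold (stand-in
    for multiscale shape matching)."""
    rois = []
    for r in range(0, image.shape[0] - block + 1, block):
        for c in range(0, image.shape[1] - block + 1, block):
            if image[r:r + block, c:c + block].mean() > thresh:
                rois.append((r, c, block))
    return rois

def verify(image: np.ndarray, roi, scale: int) -> bool:
    """Stand-in for one stage of the false-alarm-rejection chain."""
    r, c, block = roi
    chip = image[r:r + block, c:c + block]
    return chip.std() > 0.03 * scale      # hypothetical acceptance rule

image = np.random.rand(256, 256) * 0.5
image[96:128, 96:128] += 0.5              # synthetic bright target
rois = detect_coarse(image, block=32, thresh=0.6)
for scale in (1, 2, 3):                   # progressively stricter interrogation
    rois = [roi for roi in rois if verify(image, roi, scale)]
print(f"{len(rois)} ROIs survive interrogation")
```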
This paper presents target detection and interrogation techniques for a foveal automatic target recognition (ATR) system based on the hierarchical scale-space processing of imagery from a rectilinear tessellated multiacuity retinotopology. Conventional machine vision captures imagery and applies early vision techniques with uniform resolution throughout the field-of-view (FOV). In contrast, foveal active vision features graded acuity imagers and processing coupled with context-sensitive gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision can operate more efficiently in dynamic scenarios with localized relevance than uniform acuity vision because resolution is treated as a dynamically allocable resource. Foveal ATR exploits the difference between detection and recognition resolution requirements and sacrifices peripheral acuity to achieve a wider FOV (e.g., faster search), greater localized resolution where needed (e.g., more confident recognition at the fovea), and faster frame rates (e.g., more reliable tracking and navigation) without increasing processing requirements. The rectilinearity of the retinotopology supports a data structure that is a subset of the image pyramid. This structure lends itself to multiresolution and conventional 2-D algorithms, and features a shift invariance of perceived target shape that tolerates sensor pointing errors and supports multiresolution model-based techniques. The detection technique described in this paper searches for regions of interest (ROIs) using the foveal sensor's wide FOV peripheral vision. ROIs are initially detected using anisotropic diffusion filtering and expansion template matching to a multiscale Zernike polynomial-based target model. Each ROI is then interrogated to filter out false target ROIs by sequentially pointing a higher acuity region of the sensor at each ROI centroid and conducting a fractal dimension test that distinguishes targets from structured clutter.
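The fractal dimension test can be illustrated with a box-counting estimator, one common form of such a test (the paper's exact estimator is not given here); natural clutter tends toward a higher estimated dimension than man-made objects. The edge map and any decision threshold are placeholders.

```python
# Sketch of a box-counting fractal dimension estimate over a binary edge
# map: count occupied boxes at dyadic scales, fit log N vs. log(1/s).
import numpy as np

def box_counting_dimension(edges: np.ndarray) -> float:
    """Estimate fractal dimension of a binary edge map by box counting."""
    sizes, counts = [], []
    s = 2
    while s <= min(edges.shape) // 4:
        count = 0
        for r in range(0, edges.shape[0] - s + 1, s):
            for c in range(0, edges.shape[1] - s + 1, s):
                if edges[r:r + s, c:c + s].any():
                    count += 1
        sizes.append(s)
        counts.append(max(count, 1))
        s *= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

edges = np.random.rand(128, 128) > 0.95   # placeholder edge map of an ROI
print(box_counting_dimension(edges))      # compare against a chosen threshold
```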