While Machine Learning (ML) Automatic Target Recognition (ATR) represents the state of the art in target recognition, model-based ATR plays a valuable role. Model-based ATR complements machine learning ATR approaches by filling a near-term niche. While explainable Artificial Intelligence (AI) is not yet fully realized, model-based ATR serves to validate machine learning recognition decisions, and thus instills confidence in ML target calls. Alternatively, model-based ATR can act as a stand-alone ATR component, particularly in scenarios in which a small number of targets are of interest, e.g., “target-of-the-day” engagements. Model-based ATR approaches need no training data, and thus provide an alternative to machine learning approaches in the absence of sufficient quantities of real, or sufficiently high-fidelity synthetic, training data. In this paper, we present an approach to model-based ATR, called Shape-Based ATR (SB-ATR), which captures salient target shape information for recognizing targets in wide-area satellite imagery. SB-ATR balances coarse 3-D target shape abstraction against target realism to provide robustness against target variations and environmental operating conditions, while simultaneously providing high-performance target recognition. The approach uses newer, robust forms of image correlation for matching a predicted target shape against the image. Shape prediction searches over target pose, and uses satellite metadata and solar geometry to generate realistic target shape and shadow predictions. The correlation matchers provide tolerance to illumination variations, moderate occlusions, image distortions and noise, and geometric differences between models and real targets. We present technical details of the shape-based approach, and provide numerical target recognition results on real-world satellite imagery demonstrating its performance.
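As a concrete illustration of the matching step, the minimal sketch below (our own, not code from the paper; render_shape, poses, sun_az, and sun_el are hypothetical names) scores model-predicted shape-and-shadow templates against an image chip using normalized cross-correlation, keeping the best-scoring pose:

```python
import numpy as np

def ncc(template: np.ndarray, chip: np.ndarray) -> float:
    """Normalized cross-correlation between a rendered template and an image chip."""
    t = template - template.mean()
    c = chip - chip.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(c)
    return float((t * c).sum() / denom) if denom > 0 else 0.0

def best_pose(chip, render_shape, poses, sun_az, sun_el):
    """Search over candidate target poses; render_shape stands in for the
    model-based predictor that uses satellite metadata and solar geometry."""
    scores = [ncc(render_shape(pose, sun_az, sun_el), chip) for pose in poses]
    i = int(np.argmax(scores))
    return poses[i], scores[i]
```

In the paper's approach, more robust correlation forms would replace plain NCC to obtain the stated tolerance to occlusion, distortion, and noise.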
The theory of opponent-sensor image fusion is based on neural circuit models of adaptive contrast enhancement and opponent-color interaction, as developed and previously presented by Waxman, Fay et al. This approach can directly fuse two, three, four, or five imaging sensors, e.g., VNIR, SWIR, MWIR, and LWIR for fused night vision. The opponent-sensor images also provide input to a point-and-click fast learning approach for target fingerprinting (pattern learning and salient feature discovery) and subsequent target search. We have recently developed a real-time implementation of multi-sensor image fusion and target learning & search on a single-board attached processor for a laptop computer. In this paper we will review our approach to image fusion and target learning, and demonstrate fusion and target detection using an array of VNIR, SWIR and LWIR imagers. We will also show results from night data collections in the field. This opens the way to digital fused night vision goggles, weapon sights and turrets that fuse multiple sensors and learn to find targets designated by the operator.
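To make the opponent-sensor idea concrete, here is a toy three-band fusion sketch under our own assumptions; the channel assignment is one plausible single-opponent mapping, not necessarily the one used in the system:

```python
import numpy as np

def fuse_opponent(vnir, swir, lwir):
    """Toy single-opponent mapping of three co-registered bands to RGB."""
    norm = lambda b: (b - b.min()) / (np.ptp(b) + 1e-6)
    v, s, t = map(norm, (vnir, swir, lwir))
    r = np.clip(t - v, 0.0, 1.0)   # thermal-vs-visible opponent: warm objects read red
    g = s                          # SWIR carries scene detail in green
    b = np.clip(v - t, 0.0, 1.0)   # visible-vs-thermal opponent: cool regions read blue
    return np.dstack([r, g, b])
```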
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light Visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused Visible/MWIR/LWIR imagery.
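The multi-scene training idea can be sketched as follows; the function names and the nearest-mean classifier are our own simplifications standing in for the system's Fuzzy ARTMAP learner:

```python
import numpy as np

def build_training_set(scenes, masks):
    """Pool per-pixel fused-band feature vectors from several co-registered
    scenes so the learned target signature spans extended operating conditions.
    scenes: list of (H, W, bands) arrays; masks: matching boolean target masks."""
    pos = np.vstack([s[m] for s, m in zip(scenes, masks)])
    neg = np.vstack([s[~m] for s, m in zip(scenes, masks)])
    return pos, neg

def search(scene, pos, neg):
    """Label each pixel of a new scene by its nearer class mean."""
    d_pos = np.linalg.norm(scene - pos.mean(0), axis=-1)
    d_neg = np.linalg.norm(scene - neg.mean(0), axis=-1)
    return d_pos < d_neg
```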
We have extended our previous capabilities for fusion of multiple passive imaging sensors to include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D ladar, and SWIR + LWIR + 3D ladar is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.
We present recent work on methods for fusion of imagery from multiple sensors for night vision capability. The fusion system architectures are based on biological models of the spatial and opponent-color processes in the human retina and visual cortex. The real-time implementation of the dual-sensor fusion system combines imagery from either a low-light CCD camera (developed at MIT Lincoln Laboratory) or a short-wave infrared camera (from Sensors Unlimited, Inc.) with thermal long-wave infrared imagery (from a Lockheed Martin microbolometer camera). Example results are shown for an extension of the fusion architecture to include imagery from all three of these sensors as well as imagery from a mid-wave infrared imager (from Raytheon Amber Corp.). We also demonstrate how the results from these multi-sensor fusion systems can be used as inputs to an interactive tool for target designation, learning, and search based on a Fuzzy ARTMAP neural network.
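For readers unfamiliar with the learning component, the sketch below gives a minimal fast-learning Fuzzy ART category layer, the core mechanism inside Fuzzy ARTMAP; the parameter defaults are illustrative, not the system's settings:

```python
import numpy as np

class FuzzyART:
    """Minimal fast-learning Fuzzy ART (simplified; alpha and rho are assumed)."""
    def __init__(self, alpha=0.001, rho=0.75):
        self.alpha, self.rho, self.w = alpha, rho, []

    def learn(self, a):
        # Complement coding preserves feature amplitude information.
        a = np.asarray(a, float)
        I = np.concatenate([a, 1.0 - a])
        # Rank existing categories by the Fuzzy ART choice function.
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(I, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:                     # vigilance test passed
                self.w[j] = np.minimum(I, self.w[j])  # fast learning
                return j
        self.w.append(I.copy())                       # no resonance: new category
        return len(self.w) - 1
```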
As part of an advanced night vision program sponsored by DARPA, a method for real-time color night vision based on the fusion of visible and infrared sensors has been developed and demonstrated. The work, based on principles of color vision in humans and primates, achieves an effective strategy for combining the complementary information present in the two sensors. Our sensor platform consists of a 640 × 480 low-light CCD camera developed at MIT Lincoln Laboratory and a 320 × 240 uncooled microbolometer thermal infrared camera from Lockheed Martin Infrared. Image capture, data processing, and display are implemented in real-time (30 fps) on commercial hardware. Recent results from field tests at Lincoln Laboratory and in collaboration with U.S. Army Special Forces at Fort Campbell will be presented. During the tests, we evaluated the performance of the system for ground surveillance and as a driving aid. Here, we report results using both wide field-of-view (42 deg.) and narrow field-of-view (7 deg.) platforms.
We present an approach to color night vision through fusion of information derived from visible and thermal infrared sensors. Building on the work reported at SPIE in 1996 and 1997, we show how opponent-color processing and center-surround shunting neural networks can achieve informative multi-band image fusion. In particular, by emulating spatial and color processing in the retina, we demonstrate an effective strategy for multi-sensor color night vision. We have developed a real-time visible/IR fusion processor built from multiple C80 DSP chips on commercially available Matrox Genesis boards, which we use in conjunction with the Lincoln Lab low-light CCD and a Raytheon TI Systems uncooled IR camera. Limited human factors testing of visible/IR fusion is presented, showing improvements in human performance using our color fused imagery relative to alternative fusion strategies or either single image modality alone. We conclude that fusion architectures that match opponent-sensor contrast to human opponent-color processing will yield fused image products of high image quality and utility.
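The center-surround shunting computation has a simple steady-state form, x = (B·C − D·S)/(A + C + S), with Gaussian-weighted center activity C and surround activity S; the sketch below implements it under assumed parameter values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt(img, sigma_c=1.0, sigma_s=4.0, A=1.0, B=1.0, D=1.0):
    """Steady state of a center-surround shunting network; sigmas and
    shunting constants are illustrative, not values from the paper."""
    C = gaussian_filter(img.astype(float), sigma_c)   # excitatory center
    S = gaussian_filter(img.astype(float), sigma_s)   # inhibitory surround
    return (B * C - D * S) / (A + C + S)
```

The divisive (A + C + S) term is what gives the network its adaptive contrast enhancement and dynamic range compression.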
MIT Lincoln Laboratory is developing new electronic night vision technologies for defense applications which can be adapted for civilian applications such as night driving aids. These technologies include (1) low-light CCD imagers capable of operating under starlight illumination conditions at video rates, (2) realtime processing of wide dynamic range imagery (visible and IR) to enhance contrast and adaptively compress dynamic range, and (3) realtime fusion of low-light visible and thermal IR imagery to provide color display of the night scene to the operator in order to enhance situational awareness. This paper compares imagery collected during night driving including: low-light CCD visible imagery, intensified-CCD visible imagery, uncooled long-wave IR imagery, cryogenically cooled mid-wave IR imagery, and visible/IR dual-band imagery fused for gray and color display.
We report progress on our development of a color night vision capability, using biological models of opponent-color processing to fuse low-light visible and thermal IR imagery, and render it in realtime in natural colors. Preliminary results of human perceptual testing are described for a visual search task, the detection of embedded small low-contrast targets in natural night scenes. The advantages of color fusion over two alternative grayscale fusion products are demonstrated in the form of consistent, rapid detection across a variety of low-contrast (+/- 15% or less) visible and IR conditions. We also describe advances in our development of a low-light CCD camera, capable of imaging in the visible through near-infrared in starlight at 30 frames/sec with wide intrascene dynamic range, and the locally adaptive dynamic range compression of this imagery. Example CCD imagery is shown under controlled illumination conditions, from full moon down to overcast starlight. By combining the low-light CCD visible imager with a microbolometer array LWIR imager, a portable image processor, and a color LCD on a chip, we can realize a compact design for a color fusion night vision scope.
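A generic locally adaptive dynamic range compression can be sketched as follows; this is our illustration of the idea, not the exact algorithm in the Lincoln processor, and sigma and eps are assumed values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_dynamic_range(img, sigma=15.0, eps=1.0):
    """Divide each pixel by a smoothed local mean, then log-compress,
    so both moonlit and shadowed regions remain visible on an 8-bit display."""
    img = img.astype(float)
    local_mean = gaussian_filter(img, sigma)
    out = np.log1p((img + eps) / (local_mean + eps))
    return (out - out.min()) / (np.ptp(out) + 1e-9)
```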
We introduce an apparatus and methodology to support realtime color imaging for night operations. Registered imagery obtained in the visible through near IR band is combined with thermal IR imagery using principles of biological color vision. The visible imagery is obtained using a Gen III image intensifier tube optically coupled to a conventional CCD, while the thermal IR imagery is obtained using an uncooled thermal imaging array, the two fields of view being matched and imaged through a dichroic beam splitter. Remarkably realistic color renderings of night scenes are obtained, and examples are given in the paper. We also describe a compact integrated version of our system in the form of a color night vision device, in which the intensifier tube is replaced by a high resolution low-light sensitive CCD. Example CCD imagery obtained under starlight conditions is also shown. The system described here has the potential to support safe and efficient night flight, ground, sea and search & rescue operations, as well as night surveillance.
Neural network models of early visual computation have been adapted for processing single-polarization (VV channel) SAR imagery, in order to assess their potential for enhanced target detection. In particular, nonlinear center-surround shunting networks and multi-resolution boundary contour/feature contour system processing have been applied to a spotlight sequence of tactical targets imaged by the Lincoln ADT sensor at 1 ft resolution. We show how neural processing can modify the target and clutter statistics, thereby separating the populations more effectively. ROC performance curves indicating detection versus false alarm rate are presented, clearly showing the potential benefits of neural pre-processing of SAR imagery.
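The ROC curves referred to here plot probability of detection against false-alarm rate as a threshold sweeps over the target and clutter score populations; a generic computation looks like this:

```python
import numpy as np

def roc_curve(target_scores, clutter_scores):
    """Sweep a detection threshold over the pooled scores; return the
    false-alarm rate and detection probability at each threshold."""
    t_s = np.asarray(target_scores, float)
    c_s = np.asarray(clutter_scores, float)
    thresholds = np.sort(np.concatenate([t_s, c_s]))[::-1]
    pd = np.array([(t_s >= t).mean() for t in thresholds])
    far = np.array([(c_s >= t).mean() for t in thresholds])
    return far, pd
```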