Several useful but computationally expensive sensor processing tasks can be mapped to the natural behavior of networks of ideal, passive analog components. For example, spatially smoothing an image can be achieved by convolving it with a Gaussian kernel, or by applying it to a 2-D resistor-capacitor network, and then relying on the diffusive behavior of the network to provide a smoothed image. Numerical computation is replaced with physical computation. But implementing analog networks is challenging due to the limitations of real analog components. They have low precision, vary with temperature, and are non-uniform from unit to unit. Moreover, physics limits the size of analog components. For example, to achieve a particular capacitance, using material of a given dielectric constant, a VLSI capacitor must occupy a certain area on the chip. Twice the capacitance will require twice the area. We describe a set of digital circuits that emulate analog components. These circuits provide analog behavior with arbitrary precision, uniformity, noise immunity, and no temperature dependence. Their size is limited by VLSI linewidths and the circuit approach taken. Networks of these digital circuits behave as do their analog equivalents, making physical computation practical for sensor processing closely coupled to the FPA.
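The abstract's example of diffusion replacing numerical convolution can be sketched numerically. Iterating a discrete Laplacian update mimics charge spreading in a 2-D resistor-capacitor grid, and the impulse response approaches a Gaussian whose variance grows with the number of steps. The grid size, step count, and coupling constant below are illustrative choices, not values from the paper:

```python
import numpy as np

def diffuse(img, k=0.2, steps=25):
    """Explicit diffusion steps; each step mimics charge spreading in an RC grid."""
    u = img.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += k * lap   # stable for k <= 0.25
    return u

# The impulse response approaches a Gaussian of variance ~ 2*k*steps per axis,
# i.e. the network physically computes an approximate Gaussian smoothing.
img = np.zeros((65, 65))
img[32, 32] = 1.0
out = diffuse(img)
x = np.arange(65) - 32
var_x = (out.sum(axis=0) * x**2).sum() / out.sum()
print(round(var_x, 2))  # ≈ 2 * 0.2 * 25 = 10
```

Applying the same update to a real image, rather than an impulse, yields the smoothed image directly.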
Proc. SPIE. 5204, Signal and Data Processing of Small Targets 2003
KEYWORDS: Target detection, Signal to noise ratio, Detection and tracking algorithms, Sensors, Image processing, Wavefronts, Wave propagation, Signal processing, Analog electronics, Algorithm development
The Tanner Research Wave Process is a moving point target detection algorithm that uses the spatio-temporal correlation of points from a target trajectory to build a large aggregate response, thereby increasing the probability of detection for dim and low-contrast point targets moving amidst dense background and noise. The Wave Process is naturally represented as a 2-D array of linear passive analog components, with each node directly stimulated by its focal plane detector. The Wave Process can be implemented in compact, low-power hardware: analog VLSI for near-focal-plane integration, and dedicated digital for near-term applications, both with a fine-grain parallel architecture that can accommodate fast-frame-rate sensors. The Wave Process generates a real-time Region of Interest to window focal planes, reducing the data rate and sensor processing throughput requirements, thereby also reducing the overall sensor processor power, weight, and size requirements.
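The Wave Process itself is an analog wave-propagation network, not a digital shift-and-add filter, but the spatio-temporal integration principle it exploits can be illustrated with a velocity-matched sum: a target too dim to see in any single frame builds a large aggregate response when frames are aligned along its trajectory. All sizes, amplitudes, and the 1-D geometry below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames, v, x0, amp = 64, 40, 1, 10, 1.0

# Synthesize frames: a dim point target moving at v px/frame in unit-variance
# noise. Per frame, the target amplitude sits below the noise peaks.
frames = rng.standard_normal((n_frames, n_pix))
for t in range(n_frames):
    frames[t, x0 + v * t] += amp

# Velocity-matched integration: undo the hypothesized motion, then sum.
# Signal grows like n_frames; noise only like sqrt(n_frames).
aggregate = np.zeros(n_pix)
for t in range(n_frames):
    aggregate += np.roll(frames[t], -v * t)

print(int(np.argmax(aggregate)))  # → 10, the target's starting pixel
```

A target moving at the matched velocity keeps adding in phase with the accumulated response, which is the same constructive mechanism the propagating wave provides in continuous form.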
Nova's development of the "Modular Infrared Imaging Applications Development System" (MIRIADS) produced a longwave infrared (LWIR) camera system that operated the "Adaptive Infrared Sensor" (AIRS) focal plane device produced by the Raytheon Infrared Operations (RIO) organization. A novel system architecture permitted the integration of an infrared fisheye lens system produced by Optics 1, Inc., allowing a complete hemispherical field of view to be imaged onto the AIRS FPA. This paper will describe applications for this system as an extremely wide field-of-view IR sensor (early warning detection, fire detection, etc.), and will present test imagery collected with the system.
This technology advancement has been the result of the coordinated effort of a variety of companies and government agencies. This presentation will highlight significant contributions of individuals and will indicate the effectiveness of the Small Business Innovative Research (SBIR) program in helping to advance this nation's technology base.
It is envisioned that large, lightweight, optical mirrors for use in space will require onboard instrumentation for measuring and controlling the gross aspects of the mirror surface figure. An optical heterodyne array is well-suited for the figure sensing portion of this operation. The basic principles of heterodyne wave front sensing are reviewed and the advantages of this approach are highlighted. We describe the development of a dedicated circuitry module for integration with a sensor array for performing heterodyne array phase measurements.
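The core measurement behind heterodyne wavefront sensing can be sketched in a few lines: each detector in the array sees a beat note at the heterodyne offset frequency, and the phase of that beat note encodes the local wavefront. I/Q demodulation over an integer number of beat cycles recovers the phase. The offset frequency, sample rate, and phase value below are illustrative, not taken from the paper:

```python
import numpy as np

f_beat = 10e3                      # heterodyne offset frequency (10 kHz, assumed)
fs = 1e6                           # sample rate (assumed)
t = np.arange(int(fs / f_beat) * 20) / fs   # exactly 20 beat cycles
phi_true = 0.7                     # wavefront phase at this detector (rad)
sig = np.cos(2 * np.pi * f_beat * t + phi_true)

# I/Q demodulation: mix with quadrature references and average, which
# low-pass filters away the 2*f_beat terms exactly over whole cycles.
i = 2 * np.mean(sig * np.cos(2 * np.pi * f_beat * t))
q = -2 * np.mean(sig * np.sin(2 * np.pi * f_beat * t))
phi = np.arctan2(q, i)
print(round(phi, 3))  # → 0.7
```

Repeating this per detector yields the phase map from which the gross surface figure is inferred; the dedicated circuitry module described in the paper performs this demodulation in hardware.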
In this paper we further develop the theoretical foundations of the Wave Process and demonstrate several of its capabilities in three areas. This is a continuation of work that has been reported earlier. The equations describing the ideal Wave Process have been used to find parameter values that optimally match the response to a particular target velocity, for two different cost functions. The response of the ideal Wave Process has been determined for targets moving at other than the optimum velocity. Demonstrations show the Wave Process responding to maneuvering targets. Finally, the Wave Process equations have been extended to address targets moving against background. This ability to select the moving target and suppress the background is demonstrated for images derived from weather satellite photos.
We have applied a feed-forward neural network to the task of resolving closely-spaced objects (CSO). Traditional algorithmic methods are computationally expensive or numerically unstable, and techniques based on ad hoc rules are too subjective. Our approach relies on the principle that a sufficiently complex neural network can approximate an arbitrary function to an arbitrary degree of accuracy. We train a neural network to approximate the multidimensional function that maps from detector signal space to CSO parameter space, using an aggressive Hessian-based training algorithm and training set examples synthesized from the known inverse function. We find two important empirical results: we can simultaneously identify when the training set size is sufficient to adequately represent the mapping function, and when the network has achieved optimum generalization capability, for a given degree of network complexity. Thus we can predict the network and training set sizes necessary to achieve a given mission performance. Finally, we show how such a network can be used to provide sub-pixel resolution capabilities for missions observing both single objects and CSOs, as part of a real-time 2D sensor processor.
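The training strategy described above, synthesizing examples from the known forward model and fitting the inverse map, can be sketched with a toy version of the problem. Here two overlapping Gaussian point-spread functions are sampled by a 5-detector strip, and a small network learns to recover their separation. The network size, detector geometry, and plain gradient descent (the paper used a Hessian-based optimizer) are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward model: two unit Gaussian PSFs with separation `sep`, sampled by
# five detectors. Training pairs are synthesized from this known function.
def forward(sep):
    x = np.linspace(-2, 2, 5)                  # detector positions (assumed)
    s = sep[:, None]
    return np.exp(-(x - s / 2) ** 2) + np.exp(-(x + s / 2) ** 2)

sep = rng.uniform(0.5, 2.0, 200)
X, y = forward(sep), sep[:, None]

# Tiny 5-16-1 MLP trained by plain gradient descent on mean-squared error.
W1 = rng.standard_normal((5, 16)) * 0.3; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.3; b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    losses.append((err ** 2).mean())
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])  # loss on the inverse map drops during training
```

The same recipe scales to the paper's setting: a richer forward model, more CSO parameters per output, and a second-order optimizer in place of gradient descent.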
In this paper we extend the capabilities of the Wave Process in three major areas. This is a continuation of work that was reported earlier. The Wave Process can now adapt its behavior to detect a point target moving with an arbitrary velocity while still rejecting stationary background and clutter. It can also adapt to different target velocities simultaneously occurring at different locations on the focal plane. This is achieved through neuromorphic methods, without recourse to clocks, programmability, or other aspects of the digital processing paradigm. We also present a single planar circuit that performs the functions of the positive, negative, and wave-sum planes, with a more efficient analog VLSI implementation for on-focal-plane integration. Finally, we develop a more thorough understanding of the Wave Process performance through numerical simulations of the ideal modeling equations and SPICE simulations of the ideal hardware representations for stationary sources and moving targets.
Optically interfacing a modern focal plane to its processing system requires technologies and methods with which many may be unfamiliar. This paper discusses some of the alternatives available among light emitters, detectors, and conduits. We also describe AESD activities in using these devices to provide power and clock signals to a cryogenic focal plane.
Attainment of critical low noise performance of integrated focal plane subsystems (FPS) has been a difficult challenge, particularly the electrical interface between cryogenic and room temperature components. This paper presents a focal plane architecture for high performance scanning HgCdTe sensors which simplifies this interface and uses optical interconnects for improved noise immunity. Hardware testing has demonstrated operation at both component and integrated levels. Integrated FPS tests established noise as low as 50 µV, which meets our goal. Our next step will be to evolve the FPS to demonstrate a fully flight compatible configuration.
When large-field IR sensors are quite distant from the scene, targets appear as points, so detection is based on their motion, rather than their structure. The appearance of background and clutter bright points, although eventually rejected as nontargets, unnecessarily burdens tracking algorithms. Typical approaches involve eliminating background prior to tracking, or only looking within dynamic search boxes based on previous-frame target observations. This presentation describes a method for distributed predetection of points due to moving targets, in which background points are automatically rejected, and only those detector returns that are most likely to be from targets are provided to the tracking algorithm. The paper discusses the retinally inspired concepts behind the proposed method, analytical and empirical evaluations of its performance, and a hardware implementation based on Mead's (1989) analog VLSI circuits, resulting in a fine-grained-parallel architecture suitable for on-focal-plane applications.
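The predetection principle above, reject stationary bright points before tracking ever sees them, can be illustrated in its simplest digital form with temporal differencing; the paper's retinally inspired analog circuits achieve the stationary-rejection behavior continuously in hardware, and the frame pair, clutter statistics, and threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
clutter = rng.uniform(0, 5, (n, n))      # bright but stationary background points

frame_a = clutter.copy(); frame_a[10, 10] += 2.0   # dim mover at (10, 10)
frame_b = clutter.copy(); frame_b[10, 11] += 2.0   # one frame later

# Stationary clutter cancels exactly in the difference; only the mover's
# old and new positions exceed the detection threshold.
diff = np.abs(frame_b - frame_a)
peaks = np.argwhere(diff > 1.0)
print(peaks.tolist())  # → [[10, 10], [10, 11]]
```

Only these candidate returns would be handed to the tracking algorithm, rather than every bright point in the raw frame.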