As spacecraft missions increase in scope and duration, the need for flight-qualified focal planes to fulfill these mission requirements increases proportionally. The visible hybrid HAWAII-2RG (HgCdTe Astronomy Wide Area Infrared Imager with 2k x 2k resolution, reference and guide mode) has a venerable history on terrestrial-based telescopes, and these focal planes are considered candidates for space applications. As a candidate focal plane, the response of the HAWAII-2RG under nominal operating conditions in an ionizing debris-gamma environment is discussed. Measurements of dark-current frame captures and voltage bias currents are presented.
The U.S. Army is currently investigating the differences between various bands in the midwave and longwave infrared spectrum. Previous research used a holistic approach to quantifying scene information: both natural backgrounds and vehicles were present in the scenes when correlation analyses were performed. Similar research has also been performed using hyperspectral imagers, which inherently have poor signal-to-noise ratio (SNR). In this research, a midwave infrared broadband sensor was cold-filtered to provide four sub-bands in the midwave region. This multi-waveband sensor was used to collect midwave infrared imagery of military vehicles and natural backgrounds. Three blackbody sources were placed at the same range as the vehicles for radiometric calibration. The goals were to collect radiometrically corrected data of various targets and to process these data for comparative analysis. The images were segmented to remove all unwanted content from the images under observation. Correlations were then performed to assess the differences in information content.
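The correlation analysis described above can be sketched in a few lines. This is a hypothetical illustration (the band data, segmentation mask, and values are all synthetic, not the study's data): given two co-registered, radiometrically corrected sub-band images and a segmentation mask isolating the region of interest, compute the Pearson correlation between the bands over that region.

```python
import numpy as np

# Synthetic stand-ins for two calibrated, co-registered sub-band images.
rng = np.random.default_rng(0)
band_a = rng.normal(300.0, 5.0, size=(64, 64))            # band A radiance
band_b = 0.8 * band_a + rng.normal(0.0, 2.0, (64, 64))    # partially correlated band B

# Segmentation mask keeping only the imagery under observation.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True

# Pearson correlation between the two sub-bands over the segmented region;
# lower correlation implies more combined (non-redundant) information.
corr = np.corrcoef(band_a[mask], band_b[mask])[0, 1]
```

A low `corr` between two sub-bands suggests they carry complementary scene information, which is the quantity the comparative analysis assesses.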
The performance of any scene-adaptive Nonuniformity Correction (NUC) algorithm is fundamentally limited by the quality of the scene-based predicted value of each pixel. TARID-based composite imagery can serve as a scene-based pixel predictor with improved robustness and lower noise than more common scene-based pixel predictors. These improved properties result in dramatically faster algorithm convergence, generating corrected imagery with reduced spatial noise due to intrinsically nonuniform or inoperative pixels in a Focal Plane Array.
A general method is described to improve the operational resolution of an Electro-Optic (EO) imaging sensor using multiple frames of an image sequence. This method only assumes the constituent video has some ambient motion between the sensor and stationary background, and the optical image is electronically captured and digitally recorded by a staring focal plane detector array. Compared to alternative techniques that may require externally controlled or measured dither motion, this approach offers significantly enhanced operational resolution with substantially relaxed constraints on sensor stabilization, optics, and exposure time.
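One common way to exploit ambient motion for resolution enhancement is shift-and-add reconstruction: each low-resolution frame, once its sub-pixel shift is known, deposits its samples onto an upsampled grid. The sketch below is a hypothetical illustration, not the paper's method; it assumes the sub-pixel shifts are already known and fall on a 1/factor lattice in [0, 1), whereas in practice they would be estimated from the scene motion.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Accumulate low-res frames onto a grid upsampled by `factor`.

    frames: list of (H, W) arrays.
    shifts: list of (dy, dx) in low-res pixels, assumed to lie in [0, 1)
            on a 1/factor lattice for this illustration.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        oy, ox = round(dy * factor), round(dx * factor)
        # Place each low-res sample at its shifted high-res location.
        acc[oy::factor, ox::factor] += f
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1          # avoid divide-by-zero where no sample landed
    return acc / cnt

# Two 4x4 frames offset by half a pixel interleave on an 8x8 grid.
frames = [np.full((4, 4), float(i)) for i in range(2)]
hi = shift_and_add(frames, [(0.0, 0.0), (0.5, 0.5)], factor=2)
```

With more frames and denser shift coverage, the accumulated grid fills in and the effective sampling resolution improves, which is the intuition behind the multi-frame approach.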
This paper outlines a generalized image reconstruction approach to improve the resolution of an Electro-Optic (EO) imaging sensor using multiple frames of an image sequence. This method only assumes the constituent video has some ambient motion between the sensor and stationary background, and the optical image is physically captured by a staring focal plane array.
Many imaging systems consist of a combination of distinct electro-optic sensors that are constrained to view the same scene through a common aperture or from a common platform. Oftentimes, a spatial registration of one sensor's image is required to conform to the slightly disparate imaging geometry of a different sensor on the same platform. This is generally achieved through a judicious selection of image tie-points and geometric transformation model. This paper outlines a procedure to improve any such registration technique by leveraging the temporal motion within a pair of video sequences and requiring an additional constraint of minimizing the disparity in optical flow between registered video sequences.
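The tie-point step mentioned above is often paired with a simple geometric transformation model. As a minimal sketch (tie-point values hypothetical), a six-parameter affine model mapping one sensor's coordinates onto the other's can be fit to tie-point pairs by linear least squares:

```python
import numpy as np

def fit_affine(src, dst):
    """Fit a 6-parameter affine transform from tie-point pairs.

    Solves [x, y, 1] @ M = [x', y'] for the (3, 2) matrix M
    in the least-squares sense.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])       # rows: [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

# Four tie-points related by a pure translation of (2, 3).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (3, 3), (2, 4), (3, 4)]
M = fit_affine(src, dst)
```

The paper's contribution goes beyond this static fit by adding the optical-flow disparity constraint across the video pair; the affine fit above is only the baseline registration it refines.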
Imagery gathered by a Focal Plane Array (FPA) based sensor often suffers from the intrinsic non-uniform response of the individual detectors of the FPA. A digital Non-Uniformity Correction (NUC) can compensate for this distortion by implementing a functional transformation to the numerical output of each digitized FPA pixel. Such a NUC is often measured by exposing the sensor to one or more sources of uniform flux, and computed so that the post-NUC image of such uniform scenery has minimal spatial variation.
Alternative NUC implementations adopt a scene-adaptive approach, using only the data in the video sequence one wants to correct. Several implementations, such as temporal high-pass filtering, neural-network, steepest-descent, or adaptive LMS methods, fundamentally depend on a scene-predicted image to compute the appropriate functional correction. Such predicted images are invariably a spatial transformation of a single frame of video; the limited accuracy of such a single-image prediction mandates algorithm compromises between slow convergence and pathological collapses, such as scene burn-in or image washout.
Previously reported research in image resolution enhancement mandates the construction of a Temporal Accumulation of Registered Image Data (TARID) composite image as a pre-processing step. Such TARID composite images have significantly improved accuracy and robustness over any single-frame predicted image when applied to scene-adaptive NUC algorithms, resulting in markedly improved performance in both convergence and stability.
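To make the role of the scene-based predictor concrete, here is a minimal sketch of an adaptive-LMS offset update of the kind referenced above. It is hypothetical and simplified: the predictor used here is a crude 3x3 local mean, standing in for the single-frame predictors the text criticizes (a TARID composite would occupy the same slot with better accuracy).

```python
import numpy as np

def lms_offset_update(frame, offset, mu=0.05):
    """One steepest-descent step on a per-pixel offset correction.

    The error signal is the difference between the corrected pixel and
    a scene-based prediction of it (here, a 3x3 local mean).
    """
    corrected = frame - offset
    padded = np.pad(corrected, 1, mode='edge')
    h, w = corrected.shape
    pred = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    error = corrected - pred
    return offset + mu * error

# Iterating on frames of a (synthetic) uniform scene with fixed-pattern
# noise drives the offset toward the fixed pattern.
rng = np.random.default_rng(2)
fpn = rng.normal(0.0, 3.0, (16, 16))
frame = 100.0 + fpn
off = np.zeros((16, 16))
for _ in range(200):
    off = lms_offset_update(frame, off)
```

The quality of `pred` governs both how fast `off` converges and whether scene structure leaks into it (burn-in), which is exactly the limitation the TARID composite predictor addresses.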
Fusion of reflected-light and emitted-radiation sensors can provide significant advantages for target identification and detection. The two bands -- 0.6 - 0.9 or 1 - 2 micrometer reflected light and 8 - 12 micrometer emitted radiation -- offer the greatest contrast, since those bands have the lowest correlation and hence the greatest amount of combined information for infrared imaging. Data from fused imaging systems are presented for optical overlay as well as digital pixel fusion. Advantages of the digital fusion process are discussed, as well as the advantages of having both bands present for military operations. Finally, perception test results are presented that show how color can significantly enhance target detection. A factor-of-two reduction in minimum resolvable temperature difference is postulated from perception tests in the chromaticity plane. Although initial results do not yet validate this finding, it is expected that with the right fusion algorithms and displays this important result will be proven shortly.
This paper outlines a generalized image reconstruction approach that improves the resolution of an Electro-Optic (EO) imaging sensor based on multiple frame exposures during a temporal window of video. Such an approach is innovative in that it does not depend on controlled micro dithering of the camera, nor require the set of exposures to maintain any strictly defined transformation. It suffices to assume such video is physically captured by a focal plane array, and loosely requires some relative motion between sensor and subject.
Current target acquisition models address monochrome imaging systems (single detector). The increasing interest in multispectral infrared systems and color daylight imagers highlights the need for models that describe the target acquisition process for color systems (two or more detectors). This study investigates the detection of simple color targets in a noisy color background.
A multi-spectral imaging system can be defined as a combination of electro-optic imagers that are mechanically constrained to view the same scene. Subsequent processing of the output imagery invariably requires a spatial registration of one spectral band image to geometrically conform to the imagery from a different sensor. This paper outlines a procedure to leverage motion estimation of a pair of video sequences to determine a transformation that minimizes the disparity in optical flow between the sequences.
Sensor fusion of up to three disparate imagers can readily be achieved by assigning each component video stream to a separate channel of any standard RGB color monitor, such as with television or personal computer systems. Provided the component imagery is pixel-registered, such a straightforward system can provide improved object-background separation, yielding quantifiable human-factors performance improvement compared to viewing monochrome imagery from a single sensor. Consideration is given to appropriate dynamic range management of the available color gamut, and appropriate color saturation in the presence of imager noise.
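The channel-assignment scheme above can be sketched directly. This is an illustrative sketch with synthetic data, including one plausible form of the dynamic range management the text mentions: each registered band is percentile-stretched into the display's 8-bit range before being assigned to an R, G, or B channel.

```python
import numpy as np

def stretch(img, lo_pct=1.0, hi_pct=99.0):
    """Map an image into [0, 255], clipping the histogram tails."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-9) * 255.0, 0, 255)

def fuse_rgb(band_r, band_g, band_b):
    """Assign three pixel-registered band images to the R, G, B channels."""
    return np.dstack([stretch(b) for b in (band_r, band_g, band_b)]).astype(np.uint8)

# Three synthetic, pixel-registered sensor images fused into one display frame.
rng = np.random.default_rng(3)
rgb = fuse_rgb(*(rng.normal(100.0, 10.0, (32, 32)) for _ in range(3)))
```

Per-channel stretching is the simplest dynamic range policy; managing saturation under imager noise, as the abstract notes, requires more care than this sketch shows.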
Increases in the power of personal computers and the availability of infrared focal plane array cameras allow new options in the development of real-time color fusion systems for human visualization. This paper describes on-going development of an inexpensive, real-time PC-based infrared color visualization system. The hardware used in the system is all COTS, making it relatively inexpensive to maintain and modify. It consists of a dual Pentium II PC with fast digital storage and up to five PCI frame-grabber cards. The frame-grabber cards allow data to be selected from RS-170 (analog) or RS-422 (digital) cameras. Software allows the system configuration to be changed on the fly, so cameras can be swapped at will and new cameras can be added to the system in a matter of minutes. The software running on the system reads up to five separate images from the frame-grabber cards. These images are then digitally registered using a rubber-sheeting algorithm to reshape and shift the images. The registered data, from two or three cameras, are then processed by the selected fusion algorithm to produce a color-fused image, which is displayed in real time. The real-time capability of this system allows interactive laboratory testing of issues such as band selection, fusion algorithm optimization, and visualization trade-offs.
The emergence of new infrared sensor technologies and the availability of powerful, inexpensive computers have made many new imaging applications possible. Researchers working in traditional image processing are showing increased interest in working with infrared imagery. However, because of the inherent differences between infrared and visible phenomenology, a number of fundamental problems arise when trying to apply traditional processing methods to the infrared. Furthermore, the technologies required to image in the infrared are currently much less mature than comparable camera technologies used in visible imaging. Infrared sensors also need to capture six to eight bits of dynamic range beyond the eight normally used for visible imaging. Over the past half-century, visible cameras have become highly developed and can deliver images that meet engineering standards compatible with image displays. Similar image standards are often not possible in the infrared for a number of technical and phenomenological reasons. The purpose of this paper is to describe some of these differences and discuss a related topic known as image preprocessing. This is an area of processing that lies roughly between traditional image processing and image generation; because the camera images are less than ideal, additional processing is needed to perform necessary functions such as dynamic range management, non-uniformity correction, resolution enhancement, or color processing. A long-range goal for the implementation of these algorithms is to move them on-chip as analog retina-like or cortical-like circuits, thus achieving extraordinary reductions in power dissipation, size, and cost. Because this area of research is relatively new and still evolving, the discussion in this paper is limited to a partial overview of the topic.
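The dynamic range problem above (14+ bit sensor data versus 8-bit displays) is a concrete example of the preprocessing the text describes. One common, illustrative approach (not necessarily the one the paper uses) is histogram equalization of the 14-bit data into the 8-bit display range:

```python
import numpy as np

def equalize_14_to_8(img14):
    """Histogram-equalize 14-bit data into the 8-bit display range."""
    hist = np.bincount(img14.ravel(), minlength=1 << 14)
    cdf = np.cumsum(hist).astype(float)
    # Normalize so the darkest present value maps to 0 and the brightest to 255.
    lo = cdf[img14.min()]
    cdf = (cdf - lo) / max(cdf[-1] - lo, 1)
    lut = np.clip(cdf * 255.0, 0, 255).astype(np.uint8)
    return lut[img14]

# Synthetic 14-bit infrared frame occupying only part of its range.
rng = np.random.default_rng(4)
ir = rng.integers(2000, 12000, size=(32, 32), dtype=np.int64)
out = equalize_14_to_8(ir)
```

Equalization spends display levels where the scene histogram has mass, which is why it is a common first pass at dynamic range management for infrared imagery; it can, however, exaggerate noise in flat regions.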
Multispectral sensors are increasingly being employed in military applications. Just as in satellite imagery of the earth, multispectral data is required in order to extract the maximum amount of information from a scene. The advantages of image fusion have been postulated for navigation, surveillance, fire control, and missile guidance to improve accuracy and contribute to mission success. The fusion process is a critical element of each of these applications. Imagery from various sensors must be calibrated, enhanced, and spatially registered in order to achieve the desired 'fusion' of information into a single 'picture' for rapid assessment. In a tactical military environment this fusion of data must be presented to the end user in a timely and ergonomic fashion. The end user (e.g., a combat pilot) may already be operating at maximum sensory input capacity. Does he or she really need another cockpit display?
The concept of multi-band infrared color vision is discussed in terms of combining two or more bands of infrared imagery into a single composite color image. This work is motivated by emerging new technologies in which two or more infrared bands are simultaneously imaged for improved discrimination of objects from backgrounds. One of the current objectives of this work is to quantify the improvement obtained over single band infrared imagery to detect dim targets in clutter. Methods are discussed for mapping raw image data into an appropriate color space and then processing it to achieve an intuitively meaningful color display for a human viewer. In this regard, the final imagery should provide good color contrast between objects and backgrounds and consistent colors regardless of environmental conditions such as solar illumination and variations in surface temperature. Initial performance measures show that infrared color can improve discrimination significantly over single band imaging.