We report on a technique for reducing the image degradation introduced by viewing through deep turbulence. The approach uses a variable aperture designed to maintain the telescope's theoretical resolving power. The technique combines the variable-aperture sensor with post-processing to form a turbulence-restored image. Local wavefront tilt is corrected using local image registration. Lucky-look processing performed in the frequency domain combines the best aspects of each image in a sequence of frames to form the final image product. The approach was demonstrated on imagery of targets of opportunity on the Boston skyline observed through a 55-mile, nearly horizontal path from Pack Monadnock in southern New Hampshire. Quantitative assessment of image quality is based on the MTF, which is estimated from edges within the images. This is performed for imagery acquired with and without the variable aperture, and the effectiveness of the approach is evaluated by comparing the results. In most cases, the reduced aperture is found to improve performance significantly relative to the full aperture.
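The frequency-domain lucky-look step can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a common selection rule in which, at each spatial frequency, the Fourier component with the largest magnitude across the co-registered frame stack is kept:

```python
import numpy as np

def lucky_fusion(frames):
    """Frequency-domain 'lucky look' fusion of co-registered frames:
    at each spatial frequency, keep the Fourier component with the
    largest magnitude across the sequence (an assumed selection rule,
    not necessarily the paper's exact criterion)."""
    specs = np.fft.fft2(np.asarray(frames, dtype=float), axes=(-2, -1))
    best = np.argmax(np.abs(specs), axis=0)   # winning frame per frequency
    rows, cols = np.indices(best.shape)
    return np.real(np.fft.ifft2(specs[best, rows, cols]))

# toy usage: the stronger of two flat frames wins at every frequency
fused = lucky_fusion([np.ones((8, 8)), 2.0 * np.ones((8, 8))])
```

On real data the selection would follow the local tilt correction, and practical systems often weight or threshold the per-frequency components rather than taking a hard maximum.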
Multi-spectral sensor systems that record spatially and temporally registered image video have a variety of applications
depending on the spectral band employed and the number of colors available. The colors can be selected to highlight
physically meaningful portions of the image, and the resulting imagery can be used to decode relevant phenomenology.
For example, the images can be in spectral bands that identify materials that are intrinsic to the target while uncommon
in the background, providing an anomaly detection cue. These multi-spectral video sensor engines can also be employed
in conjunction with conventional fore-optics such as astronomical telescopes or microscopes to exploit useful
phenomenology at dissimilar scales. Here we explore the relevance of multi-spectral video in a space application. This
effort coupled a terrestrial multispectral video camera to an astronomical telescope. Data from a variety of objects in
Low Earth Orbit (LEO) were collected and analyzed both temporally, using light curves, and spectrally, using principal
component analysis (PCA). We find the spectral information is correlated with temporal information, and that the
spectral analysis adds the most value when the light curve period is long. The value of spectral-temporal signatures,
where the signature is the difference in either the harmonics or phase of the spectral light curves, is investigated with
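The PCA step of the spectral analysis can be sketched as follows; the light curves here are synthetic stand-ins (phase-shifted sinusoids), not the LEO data of the study, and the band count and period are arbitrary:

```python
import numpy as np

# Synthetic stand-in for spectral light curves: rows are time samples,
# columns are spectral bands (phase-shifted sinusoids plus noise).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
bands = np.stack([np.sin(2.0 * np.pi * t / 3.0 + p)
                  for p in (0.0, 0.1, 0.2, 0.3)], axis=1)
bands += 0.05 * rng.standard_normal(bands.shape)

# PCA via SVD of the mean-removed band matrix: the fraction of variance
# captured by the first component measures how strongly the spectral
# bands are correlated with a single temporal behavior.
X = bands - bands.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
```

With nearly identical band-to-band behavior, almost all variance lands in the first principal component, which is the sense in which the spectral and temporal information are correlated.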
An MWIR spectral imaging sensor based on a dual direct vision prism (DVP) architecture is described. This sensor represents the third generation of the Chromotomographic Hyperspectral Imaging Sensor (CTHIS). In the new sensor, a direct vision prism is synthesized by the vector addition of the spectral responses of two matched, but independently aligned, DVPs. The resulting sensor dispersion varies from zero to twice the single-prism dispersion, as a function of the
angle between the dispersion axes of the two prisms. The number of resolved channels, and the related signal strength per channel, also adapts with this angle. The "synthesized prism" projects a spectral image onto the focal plane array of an infrared camera. The prism is rotated on the camera axis and the resulting spectral information is employed to form an image cube (x, y, λ), using tomographic techniques. The sensor resolves from 1 to 105 spectral channels between 3.0 μm and 5.2 μm wavelength. Spectral image data and image reconstructions are provided for standard test sources and scenes.
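The vector addition of two equal prism dispersions reduces to a simple relation: for dispersions of magnitude d whose axes are separated by angle θ, the net magnitude is 2d·cos(θ/2), sweeping from twice the single-prism dispersion down to zero. A one-function sketch:

```python
import numpy as np

def net_dispersion(d, theta_deg):
    """Magnitude of the vector sum of two equal prism dispersions d whose
    dispersion axes are separated by theta_deg degrees:
    |d1 + d2| = 2 * d * cos(theta / 2).
    Ranges from 2d (axes aligned) down to 0 (axes opposed)."""
    return 2.0 * d * np.cos(np.radians(theta_deg) / 2.0)
```

Because channel count and per-channel signal strength scale with this net dispersion, rotating one prism relative to the other trades spectral resolution against signal, as the abstract describes.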
A novel spectral imaging sensor based on dual direct vision prisms is described. The prisms project a spectral image onto
the focal plane array of an infrared camera. The prism set is rotated on the camera axis and the resulting spectral
information is extracted as an image cube (x, y, λ), using tomographic techniques. The sensor resolves more than 40
spectral bands (channels) at wavelengths between 1.2 μm and 2.5 μm. The sensor dispersion characteristic is
determined by the vector sum of the dispersions of the two prisms. The number of resolved channels, and the related
signal strength per channel, varies with the angle between the prism dispersion axes. This is a new capability for this
class of spectral imaging sensor. Reconstructed short-wave imagery and spectral data are presented for field and
laboratory scenes and for standard test sources.
The detection, determination of location, and identification of unknown and uncued energetic events within a large field of view represents a common operational requirement for many staring sensors. The traditional imaging approach involves forming an image of an extended scene and then rejecting background clutter. However, some important targets can be limited to a class of energetic, transient, point-like events, such as explosions, that embed key discriminants within their emitted, temporally varying spectra; for such events it is possible to create an alternative sensor architecture tuned specifically to these objects of interest. The resulting sensor operation, called pseudo imaging, includes: optical components designed to encode the scene information such that the spectral-temporal signature from the event and its location are easily derived; and signal processing intrinsic to the sensor to declare the presence of an event, locate the event, extract the event spectral-temporal signature, and match the signature to a library in order to identify the event.
This treatise defines pseudo imaging, including formal specifications and requirements. Two examples of pseudo imaging sensors are presented: a sensor based on a spinning prism, and a sensor based on an optical element called a Crossed Dispersion Prism. The sensors are described, including how the sensors fulfill the definition of pseudo imaging, and measured data is presented to demonstrate functionality.
Spectral imaging is the art of quantifying the spectral and spatial characteristics of a scene. The current state of the art in spectral imaging comprises a wide range of applications and sensor designs. At the extremes are spectrometers with high spectral sampling over a limited number of imaging pixels and those with little spectral sampling over a large number of pixels. The predominant technical issue concerns the acquisition of the three-dimensional spectral imagery (X, Y, λ) using an inherently two-dimensional imaging array; consequently, some form of multiplexing must be implemented. This paper will discuss a new class of sensors, broadly referred to as Spectral Temporal Sensors (STS), which capture the position and spectra of uncued point sources anywhere in the optical field. These sensors have large numbers of pixels (>512x512) and colors (>50). They can be used to sense explosions, combustion, rocket plumes, LASERs, LEDs, LASER/LED excitations, and the outputs of fiber optic cables. This paper will highlight recent developments on an STS that operates in a Pseudo-imaging (PI) mode, where the location of an uncued dynamic event and its spectral evolution in time are the data products. Here we focus on the sensor's ability to locate the event to within approximately 1/20th of a pixel; however, we will also discuss its capability to fully characterize the event spectral-temporal signature at rates greater than 100 Hz over a large field of view (greater than 30°).
A very simple and fast technique for clustering/segmenting hyperspectral images is described. The technique is based on the histogram of divergence images; namely, single image reductions of the hyperspectral data cube whose values reflect spectral differences. Multi-value thresholds are set from the local extrema of such a histogram. Two methods are identified for combining the information of a pair of divergence images: a dual method of combining thresholds generated from 1D histograms; and a true 2D histogram method. These histogram-based segmentations have a built-in fine to coarse clustering depending on the extent of smoothing of the histogram before determining the extrema. The technique is useful at the fine scale as a powerful single image display summary of a data cube or at the coarser scales as a quick unsupervised classification or a good starting point for an operator-controlled supervised classification. Results will be shown for visible, SWIR, and MWIR hyperspectral imagery.
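A minimal sketch of the histogram-extrema idea, assuming thresholds are placed at the local minima of a smoothed histogram of a single divergence image (the bin count and smoothing width are illustrative, and the paper's exact extremum rule may differ):

```python
import numpy as np

def histogram_segments(divergence_img, bins=256, smooth=5):
    """Segment a divergence image by thresholding at local minima of its
    smoothed histogram. Heavier smoothing merges extrema, giving the
    coarser clusterings described in the text."""
    hist, edges = np.histogram(divergence_img, bins=bins)
    h = np.convolve(hist, np.ones(smooth) / smooth, mode='same')
    # interior local minima of the smoothed histogram become thresholds
    minima = [i for i in range(1, len(h) - 1)
              if h[i] < h[i - 1] and h[i] <= h[i + 1]]
    if not minima:
        return np.zeros(np.shape(divergence_img), dtype=int)
    return np.digitize(divergence_img, edges[np.array(minima)])

# toy usage: two well-separated value clusters fall into different segments
rng = np.random.default_rng(1)
values = np.concatenate([rng.normal(0.0, 0.01, 500),
                         rng.normal(1.0, 0.01, 500)])
labels = histogram_segments(values)
```

The dual-threshold and true 2-D histogram variants for pairs of divergence images extend the same idea to two dimensions.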
Recently, a new approach to hyperspectral imaging, relying on the theory of computed tomography, was proposed by researchers at the Air Force Research Laboratory. The approach allows all photons to be recorded and therefore increases the robustness of the imaging system to noise and focal plane array non-uniformities. However, as with all computed tomography systems, the approach suffers from the limited angle problem, which obstructs reconstruction of the hyperspectral information. In this work we present a direct, one-step algorithm for reconstruction of the unknown information based on a priori knowledge about the hyperspectral image.
This paper reports on the design, performance, and signal processing of a visible/near-infrared (VIS-NIR) chromotomographic hyperspectral imaging sensor. The sensor consists of a telescope, a direct vision prism, and a framing video camera. The direct vision prism is a two-prism set, arranged such that one wavelength passes undeviated while the other wavelengths are dispersed along a line. The prism is mounted on a bearing so that it can be rotated on the optical axis of the telescope. As the prism is rotated, the projected image is multiplexed on elements of the focal plane array. Computational methods are used to reconstruct the scene at each wavelength, an approach similar to the limited-angle tomography techniques used in medicine. The sensor covers the visible through near-infrared spectrum of silicon photodiodes. The sensor weighs less than 6 pounds, occupies under 300 in³, and requires 20 watts. It produces image cubes, with 64 spectral bands, at rates up to 10 Hz. By operating in a relatively fast framing mode, the sensor allows characterization of transient events. We will describe the sensor configuration and method of operation. We also present examples of sensor spectral image data.
We present a new algorithm for chromotomographic image restoration. The main stage of the algorithm employs the iterative method of projections onto convex sets, utilizing a new constraint operator. The constraint takes advantage of hyperspectral data redundancy and information compacting ability of singular value decomposition to reduce noise and artifacts. Results of experiments on both in-house and AVIRIS data demonstrate that the algorithm converges rapidly and delivers high image fidelity.
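The constraint operator can be sketched as an SVD truncation of the band-unfolded cube, alternated with a data-consistency projection. The data projection below is a caller-supplied placeholder, since the actual instrument model is not specified in this summary; the rank and iteration counts are illustrative:

```python
import numpy as np

def svd_truncate(cube, rank):
    """Project a hyperspectral cube (ny, nx, nbands) onto the set of
    spectrally low-rank cubes: unfold to (pixels x bands), truncate the
    SVD, and fold back. This exploits hyperspectral data redundancy."""
    ny, nx, nb = cube.shape
    U, s, Vt = np.linalg.svd(cube.reshape(ny * nx, nb), full_matrices=False)
    s[rank:] = 0.0
    return ((U * s) @ Vt).reshape(ny, nx, nb)

def pocs_restore(measured, project_data, rank=5, iters=20):
    """Alternate a data-consistency projection (instrument model, supplied
    by the caller) with the low-rank spectral constraint."""
    est = measured.copy()
    for _ in range(iters):
        est = svd_truncate(project_data(est), rank)
    return est

# toy usage: a rank-1 cube corrupted by noise, identity data projection
rng = np.random.default_rng(2)
cube = rng.random((8, 8))[:, :, None] * rng.random(16)[None, None, :]
noisy = cube + 0.01 * rng.standard_normal(cube.shape)
restored = pocs_restore(noisy, lambda c: c, rank=1, iters=3)
```

The information-compacting property of the SVD is what suppresses noise and artifacts: most of the scene's spectral content lives in a few singular components, while broadband noise spreads across all of them.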
This paper addresses the issue of identifying conducting objects based on their response to low frequency magnetic fields -- an area of research referred to by some as magnetic singularity identification (MSI). Real-time identification was carried out on several simple geometries. The low frequency transfer function of these objects was measured for both cardinal and arbitrary orientations of the magnetic field with respect to the planes of symmetry of the objects (i.e., different polarizations). Distinct negative real axis poles (singularities) associated with each object form the basis for our real-time identification algorithm. Recognizing this identification problem as one of inference from incomplete information, application of Bayes' theorem leads to a generalized likelihood ratio test (GLRT) as a solution to the M-ary hypothesis testing problem of interest here. Best performance, measured through Monte Carlo simulation and presented in terms of percent correct identification versus signal-to-noise ratio, was obtained with a single pole per object orientation.
In an earlier conference, we introduced a powerful class of temporal filters, which have outstanding signal to clutter gains in evolving cloud scenes. The basic temporal filter is a zero-mean damped sinusoid, implemented recursively. Our final algorithm, a triple temporal filter, consists of a sequence of two zero-mean damped sinusoids followed by an exponential averaging filter along with an edge suppression factor. The algorithm was designed, optimized, and tested using a real world database. We applied the Simplex algorithm to a representative subset of our database to find an improved set of filter parameters. Analysis led to two improved filters: one dedicated to benign clutter conditions and the other to cloud clutter-dominated scenes. In this paper, we demonstrate how a fused version of the two optimized filters further improves performance in severe cloud clutter scenes. The performance characteristics of the filters will be detailed by specific examples and plots. Real time operation has been demonstrated on laboratory IR cameras.
Chromotomographic spectral imaging techniques offer high spatial resolution, moderate spectral resolution, and high optical throughput. However, the performance of chromotomographic systems has historically been limited by the artifacts introduced by a cone of missing information. The recent successful application of principal component analysis to spectral imagery indicates that spectral imagery is inherently redundant. We have developed an iterative technique for filling in the missing cone that relies on this redundancy. We demonstrate the effectiveness of our approach on measured data, and compare the results to those obtained with a scanned slit configuration.
To realize the potential of modern staring IR technology as the basis for an improved IRST, one requires better algorithms for detecting unresolved targets moving at fractions of a pixel per frame time. While available algorithms for such targets in white noise are reasonably good, they have high false alarm rates in non-stationary clutter, such as evolving clouds. We review here a new class of temporal filters which have outstanding signal to clutter gains in evolving clouds and still retain good signal to temporal noise sensitivity in blue sky or night data. The generic temporal filter is a damped sinusoid, implemented recursively. Our final algorithm, a triple temporal filter (TTF) based on six parameters, consists of a sequence of two damped sinusoids followed by an exponential averaging filter, with an edge suppression feature. Initial tests of the TTF filter concept demonstrated excellent performance in evolving cloud scenes. Three 'trackers' based on the TTF operate in real-time hardware on laboratory IR cameras including: an empirical initial version; and two recent forms identified by an optimization routine. The latter two operate best in the two distinct realms: one for evolving cloud clutter, the other for temporal noise-dominated scenes such as blue sky or stagnant clouds. Results are presented both as specific examples and metric plots over an extensive database of local scenes with targets of opportunity.
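The recursive damped-sinusoid building block of the TTF can be sketched as a second-order IIR filter applied to each pixel's time series. The coefficients below give the impulse response h[n] = rⁿ·sin(nω); the values of r and ω are illustrative, not the optimized six-parameter set, and the zero-mean, second-stage, exponential-averaging, and edge-suppression refinements are omitted:

```python
import numpy as np

def damped_sinusoid_filter(x, r=0.9, omega=0.3):
    """Second-order recursive (IIR) filter whose impulse response is the
    damped sinusoid h[n] = r**n * sin(n * omega). Parameter values are
    illustrative only; the full TTF cascades two such stages with an
    exponential averaging filter and edge suppression."""
    a1, a2 = 2.0 * r * np.cos(omega), -(r * r)
    b1 = r * np.sin(omega)
    y = np.zeros(len(x))
    for n in range(len(x)):
        xm1 = x[n - 1] if n >= 1 else 0.0
        ym1 = y[n - 1] if n >= 1 else 0.0
        ym2 = y[n - 2] if n >= 2 else 0.0
        y[n] = b1 * xm1 + a1 * ym1 + a2 * ym2
    return y

# impulse response check: output should equal r**n * sin(n * omega)
impulse = np.zeros(50)
impulse[0] = 1.0
h = damped_sinusoid_filter(impulse)
```

The recursive form needs only two delayed outputs and one delayed input per pixel, which is what makes real-time hardware implementation on consecutive frames practical.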
We describe an infrared stereo imaging method for the 3D target tracking of distant moving point sources. In scenes which are typically lacking in significant features, correspondence between the two camera images is simplified by the application of a triple temporal filter to consecutive frames of data. This filter simultaneously accentuates the target and suppresses background clutter. We apply this stereo tracking technique to experimental range measurements in which the target is tracked with sub-pixel precision.
The responsivity of large scale platinum silicide arrays, having small pixels, is low compared to the responsivity of large area test diodes fabricated on the same wafer. Often, the responsivity loss is described by assigning a lower Fowler emission coefficient to the detectors. We find the reduced responsivity to be the direct result of a reduction in the effective active area of the detector. This reduction in effective active area becomes more pronounced as the detector cell size is reduced. We provide a simple model for the area reduction in terms of modulation of detector Schottky potential by the underlying depletion region of the detector guard ring. We also suggest changes in the detector array unit cell design, which will maximize responsivity.
The problem of detection of aircraft at long range in a background of evolving cloud clutter is treated. A staring infrared camera is favored for this application due to its passive nature, day/night operation, and rapid frame rate. The rapid frame rate increases the frame-to-frame correlation of the evolving cloud clutter; cloud-clutter leakage is a prime source of false alarms. Targets of opportunity in daytime imagery were used to develop and compare two algorithm approaches: banks of spatio-temporal velocity filters followed by dynamic-programming-based stage-to-stage association, and a simple recursive temporal filter arrived at from a singular-value decomposition analysis of the data. To quantify the relative performance of the two approaches, we modify conventional metrics for signal-to-clutter gains in order to make them more germane to consecutive frame real data processing. The temporal filter, in responding preferentially to pixels influenced by moving point targets over those influenced by drifting clouds, achieves impressive cloud-clutter suppression without requiring subpixel frame registration. The velocity filter technique is roughly half as effective in clutter suppression but is twice as sensitive to weak targets in white noise (close to blue sky conditions). The real-time hardware implementation of the temporal filter is far more practical.
In the companion paper, two algorithms for tracking point targets in consecutive frame staring IR imagery with evolving cloud clutter are described and compared by using representative example scenes. Here, our total database of local airborne scenes with targets of opportunity is used for a more quantitative and comprehensive comparison. The use of real world data, as well as our focus on temporal filtering over large numbers of consecutive frames, triggered a search for more relevant metrics than those available. We present two new metrics which have most of the attributes sought. In each metric, gain is taken as a ratio of output to input signal to clutter. Maximum values rather than statistical measures are used for clutter. In the variation metric (VM), a temporal standard deviation for each pixel over 95 consecutive frames is computed and the maximum non-target result is taken as the input clutter. The input signal, a real target moving with sub-pixel velocity through sampled imagery, is estimated by a reference mean technique. Output signal and clutter are taken as maximum target and clutter affected pixels in algorithm filtered outputs. In the second metric, the use of an anti-median filter (AM) provides symmetric treatment of input and output as well as signal and clutter. The maximum target and non-target response to the AM filter on input frames and output frames defines the signal and clutter measures. Our set of real-world data is plotted as output versus input signal to clutter for each metric and each algorithm, and the pros and cons of each metric are discussed. With either metric, the signal to clutter gain ratios are approximately 5 - 6 dB greater with the temporal filter algorithm than with the velocity filter algorithm.
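A simplified sketch of the variation-metric gain computation. For brevity the input signal is taken from the same per-pixel temporal standard deviation as the clutter, rather than the reference mean technique of the paper, and the frame-stack and mask interfaces are assumed:

```python
import numpy as np

def variation_metric_gain(in_frames, out_frames, target_mask):
    """Variation-metric style gain: input clutter is the maximum non-target
    per-pixel temporal standard deviation; output signal and clutter are
    maxima of the filtered output over target / non-target pixels.
    (Input signal via temporal std is a simplification of the paper's
    reference mean technique.)"""
    in_std = np.asarray(in_frames).std(axis=0)
    in_ratio = in_std[target_mask].max() / in_std[~target_mask].max()
    out_peak = np.abs(np.asarray(out_frames)).max(axis=0)
    out_ratio = out_peak[target_mask].max() / out_peak[~target_mask].max()
    return out_ratio / in_ratio

# toy usage: a filter that passes the target but halves a flickering
# clutter pixel yields a gain greater than one
t = np.arange(10)
in_frames = np.zeros((10, 4, 4))
in_frames[:, 0, 0] = t                                   # hypothetical target pixel
in_frames[:, 1, 1] = np.where(t % 2 == 0, 1.0, -1.0)     # hypothetical clutter pixel
out_frames = in_frames.copy()
out_frames[:, 1, 1] *= 0.5                               # filter suppresses clutter
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
gain = variation_metric_gain(in_frames, out_frames, mask)
```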
We treat the problem of long range aircraft detection in the presence of evolving cloud clutter. The advantages of a staring infrared camera for this application include passive performance, day and night operation, and rapid frame rate. The latter increases frame correlation of evolving clouds and favors temporal processing. We used targets of opportunity in daytime imagery, which had sub-pixel velocities from 0.1 - 0.5 pixels per frame, to develop and assess two algorithmic approaches. The approaches are: (1) banks of spatio-temporal velocity filters followed by dynamic programming based stage-to-stage association, and (2) a simple recursive temporal filter suggested by a singular value decomposition of the consecutive frame data. In this paper, we outline the algorithms, present representative results in a pictorial fashion, and draw general conclusions on the relative performance. In a second paper, we quantify the relative performance of the two algorithms by applying newly developed metrics to extensive real world data. The temporal filter responds preferentially to pixels influenced by moving point targets over those influenced by drifting clouds and thus achieves impressive cloud clutter suppression without requiring sub-pixel frame registration. It is roughly twice as effective in clutter suppression when results are limited by cloud evolution. However when results are limited by temporal noise (close to blue sky conditions), the velocity filter approach is roughly twice as sensitive to weak targets in our velocity range. Real-time hardware implementation of the temporal filter is far more practical and is underway.
A spectral imager constructs a 3D (two spatial and one spectral) image from a series of 2D images. This paper discusses a technique for spectral imaging that multiplexes the spatial and spectral information on the focal plane, then demultiplexes the resulting imagery to obtain the spectral image. The resulting spectral image consists of 184 × 184 spatial pixels and 40 spectral bands. The current implementation operates over the 3-5 μm band, but can easily be applied to other spectral regions. A hardware description, the mathematical development, and experimental results are presented.
The performance of staring PtSi infrared cameras is characterized based on estimating their spatial frequency response. Applying a modified knife-edge technique, we arrive at an estimate of the edge spread function (ESF), which is used to obtain a profile through the center of the 2-D modulation transfer function (MTF). Using this technique, the complete system MTF in the horizontal and vertical direction is measured for various imaging systems. The influence of charge transfer efficiency (CTE) on the knife-edge measurement and resulting MTF is also modeled and discussed. An estimate of the CTE can actually be obtained from the shape of the ESF in the horizontal direction. In addition, we demonstrate that this technique can be used as a field measurement. By applying the technique at long range, the MTF of the atmosphere can be measured.
Our algorithm development for point target surveillance is closely meshed to our laboratory IR cameras. The two-stage approach falls into the category of `track before detect' and incorporates dynamic programming optimization techniques. The first stage generates merit scores for each pixel and suppresses clutter by spatial/temporal subtractions from N registered frames of data. The higher the value of the merit score, the more likely that a target is present. In addition to the merit score, the best track associated with each score is stored; together they comprise the merit function. In the second stage, merit functions are associated and dynamic programming techniques are used to create combined merit functions. Nineteen and thirteen frames of data are used to accumulate merit functions. Results using a total of 38 and 39 frames of data are presented for a set of simulated targets embedded in white noise. The result is a high probability of detection and low false alarm rate down to a signal to noise ratio of about 2.0. Preliminary results for some real targets (extracted from real scenes and then re-embedded in white noise) show a graceful degradation from the results obtained on simulated targets.
This work focuses on characterizing the performance of various staring PtSi infrared cameras, based on estimating their spatial frequency response. Applying a modified knife edge technique, we arrive at an estimate of the edge spread function (ESF), which is used to obtain a profile through the center of the two-dimensional Modulation Transfer Function (MTF). The MTF of various cameras in the horizontal and vertical direction is measured and compared to the ideal system MTF. The influence of charge transfer efficiency (CTE) on the knife edge measurement and resulting MTF is also modeled and discussed. An estimate of the CTE can actually be obtained from the shape of the ESF in the horizontal direction. The effect of pixel fill factor on the estimated MTF in the horizontal and vertical directions is compared and explained.
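The knife-edge chain (edge image → ESF → LSF → MTF profile) can be sketched in a few lines, assuming an already-extracted, supersampled ESF; the paper's modified technique includes additional steps (edge registration and binning) that are omitted here:

```python
import numpy as np

def mtf_from_edge(esf):
    """Knife-edge MTF estimate: differentiate the edge spread function to
    get the line spread function, Fourier transform it, and normalize so
    the zero-frequency response is 1."""
    lsf = np.diff(esf)                 # LSF = derivative of the ESF
    spec = np.abs(np.fft.rfft(lsf))
    return spec / spec[0]

# sanity check: an ideal step edge gives a flat MTF of 1 at all frequencies
ideal_esf = np.concatenate([np.zeros(32), np.ones(32)])
mtf = mtf_from_edge(ideal_esf)
```

A real camera's ESF is gradual rather than a perfect step, and asymmetry in the horizontal ESF is what carries the CTE signature discussed in the abstract.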
This paper summarizes the discussion of the MRT working group that was held last year at this meeting. As is the case with most working groups, consensus was difficult to obtain. One thing that the group was in agreement about was that MRT is an imperfect figure of merit. In this paper, I present many of the issues that were discussed at the meeting along with my views about what the future holds in store for MRT.
The development of staring infrared focal plane arrays has forced potential users to consider the effect of spatial noise on the performance of infrared sensors. In this work, we varied the amount of spatial noise present in infrared imagery and measured its effect on the value of the minimum resolvable temperature (MRT). A mathematical model for including the effects of spatial noise on image quality is presented and compared to experimental data.
The platinum silicide power spectrum found on p-type silicon Schottky diodes was measured for the diodes available on an IR FPA. The noise from the diodes is shown to have a white power spectrum even at frequencies below 3.0 × 10⁻⁵ Hz. The data generated from each pixel were digitized into twelve bits and transferred by a GPIO bus to a computer. The unit cell, camera system response, low frequency drift and mutual drift compensation techniques, and optimization of the charge transfer efficiency are explained. The modeled response and a sample of the observed power spectrum for three diodes are presented. 1/f noise is characterized as ubiquitous in nature and nonuniformity correction techniques are effective, but the inconsistency with current 1/f models elicits a discussion of potential flaws in the experiment. Sensitivities to two terms are found in the measurement technique, and if the product of the terms is more than the diode power spectrum, the estimates of the power spectrum of an individual diode cannot be accurate. It is concluded that 1/f noise may be completely absent from PtSi Schottky diodes.
Twelve-bit digitized images taken with PtSi Schottky barrier detector arrays have been processed on Sun workstations. Two techniques for 8-bit global display are compared: the standard method of histogram equalization and a newly devised technique of histogram projection. The latter assigns equal dynamic range to each occupied level, while the former
does so according to the density of the occupied levels. The projection technique generally gives distinctly superior results based on an extensive
set of indoor, outdoor, day, and night imagery. For cases in which the two algorithms have complementary advantages, the techniques can be combined in effect by a weighting of their distribution functions, which often gives the desirable features of each. The new projection algorithm also can be used as a powerful and robust local contrast enhancement technique.
An alternative method of contrast enhancement, a global algorithm based on modular (sawtooth) displays, affords a comparable degree of enhancement at less computational cost.
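The histogram projection rule (equal output range per occupied input level, regardless of how many pixels occupy it) can be sketched as:

```python
import numpy as np

def histogram_projection(img, out_levels=256):
    """Histogram projection display mapping: every occupied input level
    receives an equal share of the output dynamic range, independent of
    its pixel count (unlike histogram equalization, which allocates
    range by occupancy density)."""
    occupied = np.unique(img)                 # sorted occupied levels
    ranks = np.searchsorted(occupied, img)    # rank among occupied levels
    span = max(len(occupied) - 1, 1)
    return (ranks * (out_levels - 1) // span).astype(np.uint8)

# toy 12-bit image with three occupied levels, spread evenly over 8 bits
img12 = np.array([[0, 100], [100, 4095]])
disp = histogram_projection(img12)
```

Because sparse, widely separated levels get the same share as dense ones, a few bright outliers cannot compress the displayed contrast of the rest of the scene, which is the behavior the comparison above favors.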