Measurements characterizing spatial mode filtering of mid-infrared (mid-IR) laser beams using hollow core fiber optics
are presented. The mode filtering depends strongly on the fiber diameter, with effective filtering demonstrated for
bore diameters of d = 200 μm and 300 μm. In addition to mode filtering, beam profile measurements also
demonstrate the strong dependence of the mode quality on the fiber coupling conditions. As predicted, optimal coupling
is achieved using relatively slow optics that produce focused spots that nearly fill the fiber bore. Examples of how
hollow-fiber mode filtering can improve molecular spectroscopy experiments are also discussed.
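The coupling condition noted above (best coupling when the focused spot nearly fills the bore) can be reproduced with a short numerical overlap calculation. This is an illustrative sketch, not the authors' analysis: it assumes an EH11-like fundamental mode profile J0(2.405 r/a) for a hollow circular bore, and all grids and numbers are invented for the example.

```python
import numpy as np

def J0(x):
    """Bessel J0 via its integral representation (avoids SciPy)."""
    t = np.linspace(0.0, np.pi, 1001)
    return np.cos(np.outer(np.atleast_1d(x), np.sin(t))).mean(axis=-1)

a = 1.0                              # bore radius (normalized)
r = np.linspace(1e-6, a, 2000)       # radial grid across the bore
mode = J0(2.405 * r / a)             # EH11-like fundamental mode profile

def coupling(w):
    """Power coupling of a centered Gaussian of waist w into the mode."""
    g = np.exp(-r**2 / w**2)
    num = (g * mode * r).sum() ** 2               # overlap integral squared
    den = (g**2 * r).sum() * (mode**2 * r).sum()  # normalization
    return num / den

waists = np.arange(0.30, 1.00, 0.005)
effs = np.array([coupling(w) for w in waists])
w_opt = waists[effs.argmax()]
print(round(float(w_opt), 2))        # optimum waist in units of the bore radius
```

Under these assumptions the optimum lands near the often-quoted w ≈ 0.64a, i.e., a focused spot whose diameter is a substantial fraction of the bore, consistent with the "slow optics" finding above.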
The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant
extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of
hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS," uses a
single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging
spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in
each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has
300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional
1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral
data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements
every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a
wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea"
mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context
camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data
(e.g., the Computed Tomography Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time
display and analysis. The system concept has a range of applications including biomedical imaging, missile defense,
infrared counter measure (IRCM) threat characterization, and ground based remote sensing.
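The "linear remapping" that distinguishes the 4×4DIS from computationally heavy snapshot schemes can be illustrated as a single table lookup per frame. The sketch below is hypothetical: the detector format and lookup table are invented, and in the real system the calibration procedure determines the actual mapping.

```python
import numpy as np

NX, NY, NB = 44, 40, 300            # spatial grid and spectral bands (from the text)
FPA_H, FPA_W = 1200, 1600           # hypothetical detector format

rng = np.random.default_rng(0)
# Hypothetical calibration: a flat FPA index for every (x, y, band) sample.
lut = rng.integers(0, FPA_H * FPA_W, size=(NX, NY, NB))

def remap(frame, lut):
    """Linear remapping: one indexed gather per frame, no deconvolution."""
    return frame.ravel()[lut]       # -> cube of shape (NX, NY, NB)

frame = rng.random((FPA_H, FPA_W))
cube = remap(frame, lut)
print(cube.shape)                   # (44, 40, 300)
```

Because the remap is a fixed gather, it runs comfortably at video rates, which is what enables real-time display and analysis.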
We describe the development and testing of hollow core glass waveguides (i.e., fiber optics) for use in Mid-Wave
Infrared (MWIR) and Long-Wave Infrared (LWIR) spectroscopy systems. Spectroscopy measurements in these
wavelength regions (i.e., from 3 to 14 μm) are useful for detecting trace chemical compounds for a variety of security
and defense related applications, and fiber optics are a key enabling technology needed to improve the utility and
effectiveness of detection and calibration systems. Hollow glass fibers have the advantage over solid-core fibers (e.g.,
chalcogenide) in that they are less fragile, do not produce cladding modes, do not require angle cleaving or antireflection
coatings to minimize laser feedback effects, and effectively transmit deeper into the infrared. This paper
focuses on recent developments in hollow fiber technology geared specifically for infrared spectroscopy, including
single mode beam delivery with relatively low bending loss. Results are presented from tests conducted using both
Quantum Cascade Lasers (QCL) and CO2 lasers operating in the LWIR wavelength regime. Single-mode waveguides
are shown to effectively deliver beams with relatively low loss (~ 1 dB/m) and relatively high beam quality. The fibers
are also shown to effectively mode-filter the "raw" multi-mode output from a QCL, in effect damping out the higher
order modes to produce a circularly symmetric Gaussian-like beam profile.
A fully functional, prototype night vision camera system is described which produces true-color imagery, using a
visible/near-infrared (VNIR) color EMCCD camera, fused with the output from a thermal long-wave infrared (LWIR)
microbolometer camera. The fusion is performed in a manner that displays the complementary information from both
sources without destroying the true-color information. The system can run in true-color mode from daylight down to about
1/4-moon conditions; below this light level, the system can function in a monochrome VNIR/LWIR fusion mode. An
embedded processor is used to perform the fusion in real-time at 30 frames/second and produces both digital and analog
color video outputs. The system can accommodate a variety of modifications to meet specific user needs, and various
additional fusion algorithms can be incorporated, making the system a test-bed for real-time fusion technology under a
variety of conditions.
Cameras operating in the thermal infrared (mid-wave and long-wave IR) use a cold stop that is designed to match the
exit pupil of the optics and thus avoid parasitic radiation or vignetting. For years, range operators have been using
reflective telescopes, usually with photo-documentation film cameras. Along with the need to shift operation into the
infrared comes a problem that (i) these telescopes do not have an exit pupil located at the IR camera cold stop, and
(ii) most IR cameras have an f/2 or f/4 cold stop, while the telescope is typically f/7 or slower. These mismatches cause a
significant deterioration of the system performance and picture quality. A similar need arises when using zoom optics
with IR cameras where, as the field of view changes, so does the optics f/#, creating a mismatch with the camera that has
a fixed aperture. The OKSI/WSMR team has demonstrated two implementations of a patented continuous variable
aperture / cold stop (CVA/CS or VariAp®) for operating IR cameras with different f/# optics. Two systems were built:
(1) an optical relay assembly with an external CVA/CS, and (2) a custom 1024×1024 pixel MWIR camera with a built-in
CVA/CS and the proper relay optics to match the telescope optics to the camera. The first optical relay with the
VariAp® is a retrofit for legacy IR cameras for operations with reflective telescopes. The camera with the built-in
VariAp® can function with both reflective (using an additional external relay) and refractive (with no additional relay)
telescopes. The paper describes the two systems that open new possibilities in IR imaging for various ranges.
This paper describes True-Color Night Vision cameras that are sensitive to the visible to near-infrared (V-NIR) portion
of the spectrum allowing for the "true-color" of scenes and objects to be displayed and recorded under low-light-level
conditions. As compared to traditional monochrome (gray or green) night vision imagery, color imagery has increased
information content and has proven to enable better situational awareness, faster response time, and more accurate target
identification. Urban combat environments, where rapid situational awareness is vital, and marine operations, where
there is inherent information in the color of markings and lights, are example applications that can benefit from True-Color Night Vision technology. Two different prototype cameras, employing two different true-color night vision
technological approaches, are described and compared in this paper. One camera uses a fast-switching liquid crystal
filter in front of a custom Gen-III image intensified camera, and the second camera is based around an EMCCD sensor
with a mosaic filter applied directly to the sensor. In addition to visible light, both cameras utilize NIR to (1) increase
the signal and (2) enable the viewing of laser aiming devices. The performance of the true-color cameras, along with
that of standard (monochrome) night vision cameras, is reported and compared under various operating conditions in
the lab and in the field. In addition to subjective criteria, figures of merit designed specifically for the
objective assessment of such cameras are used in this analysis.
The 4D-IS concept was driven by the need to adequately resolve all four dimensions of data (2D spatial, spectral, and temporal) with a single, radiometrically calibrated sensor. Very fast-changing phenomena are of interest, including missile exhaust plumes, missile intercept events, lightning strikes, and hypervelocity impacts. Present sensor capabilities are limited to imaging sensors (producing a spatial image), spectrometers (producing a mean signature over an entire field of view with no spatial resolution), radiometers (producing in-band radiance over an entire FOV), or imaging spectrometers (hyperspectral sensors of the tunable filter, pushbroom scanning, imaging Fourier transform, Fabry-Perot, or CTHIS type) that produce a data cube containing spatial/spectral information but suffer from the fact that the cube acquisition may take longer than the time scale over which the event changes. The Computed Tomography Imaging Spectrometer (CTIS) is another sensor capable of 4D data collection; however, its inversion process is computationally intensive, and data processing time may be an issue in real-time applications. Hence, the 4D-IS concept, with its ability to capture a full image cube in a single exposure and provide real-time data processing, offers a new and enhanced capability over present sensors.
The 4D-IS uses a fiber optic reformatter to map a 2D image to a linear array that serves as the input slit to an imaging spectrometer. The paper describes three such instruments: a VNIR, a MWIR, and a dual-band MW/LWIR system. The sensors' architecture, mapping, calibration procedures, and the remapping of the FPA plane into an image cube are described. Real-time remapping software used to aid the operator in aligning the sensor is also described. Sample data are shown for rocket motor firings and other events.
Major spin-offs from NASA's multi- and hyperspectral imaging remote sensing technology, developed for Earth resources monitoring, are creative techniques that combine and integrate spectral with spatial methods. Such techniques are finding use in medicine, agriculture, manufacturing, forensics, and an ever-expanding list of other applications. Many such applications are easier to implement using a sensor design different from the pushbroom or whiskbroom air- or space-borne counterparts. This need is met by using a variety of electronically tunable filters mounted in front of a monochrome camera to produce a stack of images at a sequence of wavelengths, forming the familiar 'image cube'. The combined spectral/spatial analysis offered by such image cubes takes advantage of tools borrowed from spatial image processing, chemometrics and specifically spectroscopy, and new custom exploitation tools developed specifically for these applications. Imaging spectroscopy is particularly useful for non-homogeneous samples or scenes; examples include spatial classification based on spectral signatures, use of spectral libraries for material identification, mixture composition analysis, plume detection, etc. This paper reviews available tunable filters, system design considerations, general analysis techniques for retrieving the intrinsic scene properties from the measurements, and applications and examples.
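The acquisition loop for a tunable-filter imaging spectrometer of the kind described above amounts to tuning, grabbing, and stacking. A minimal sketch, with `set_wavelength` and `grab_frame` as hypothetical stand-ins for a real filter/camera API:

```python
import numpy as np

def set_wavelength(nm):
    """Hypothetical stand-in: tune the filter (LCTF, AOTF, ...) to `nm`."""
    pass

def grab_frame(shape=(64, 64)):
    """Hypothetical stand-in: read one monochrome frame from the camera."""
    return np.zeros(shape)

wavelengths = np.arange(450, 701, 10)    # e.g., 450-700 nm in 10 nm steps
frames = []
for w in wavelengths:
    set_wavelength(w)                    # settle the filter on one band
    frames.append(grab_frame())          # then expose the monochrome camera
cube = np.stack(frames, axis=-1)         # (rows, cols, bands) image cube
print(cube.shape)                        # (64, 64, 26)
```

Each pixel of the resulting cube holds a full spectrum, which is what the spectral/spatial analysis tools operate on.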
Time-critical search and rescue (S&R) operations often require the detection of small objects in a vast area. While an airborne search can cover the area, no operational instrumental tools currently exist to actually replace the human operator. By producing the spectral signature of each pixel in a spatial image, multi- and hyperspectral imaging (HSI) sensors provide a powerful capability for automated detection of subpixel-size objects that are otherwise unresolved in conventional imagery. This property of HSI naturally lends itself to S&R operations. A lost hiker, a skier, a life raft adrift in the ocean, a downed pilot, or small aircraft wreckage can be detected from relatively high altitude based on their unique spectral signatures. Moreover, the spectral information obtained allows the search craft to operate at substantially reduced spatial resolution, thereby increasing scene coverage without a significant loss in detection sensitivity. The paper demonstrates the detection of objects as small as 1/10 of an image pixel from a sensor flying at over 6 km altitude. A subpixel object detection algorithm using HSI, based on local image statistics without reliance on spectral libraries, is presented. The technique is amenable to fast signal processing, and the requisite hardware can be built using inexpensive off-the-shelf technology. This makes HSI a highly attractive tool for real-time, autonomous, instrument-based implementation. It can complement current visual-based S&R operations or emerging synthetic aperture radar sensors that are much more expensive.
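One standard way to detect subpixel objects from local image statistics alone, with no spectral library, is the RX anomaly detector: the Mahalanobis distance of each pixel's spectrum from the scene's own mean and covariance. The paper does not specify its exact algorithm, so the following is a generic sketch of that class of technique on toy data:

```python
import numpy as np

def rx_anomaly(cube):
    """RX anomaly detector: Mahalanobis distance of each pixel's spectrum
    from the scene mean, using image-derived statistics only."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularize
    icov = np.linalg.inv(cov)
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, icov, d)         # per-pixel distance
    return scores.reshape(rows, cols)

# Toy scene: Gaussian background plus one pixel with a distinct spectrum.
rng = np.random.default_rng(1)
cube = rng.normal(size=(32, 32, 8))
cube[5, 7] += 6.0                         # anomalous "subpixel" target
scores = rx_anomaly(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (5, 7)
```

Because the statistics come from the scene itself, the detector needs no prior signature, which matches the library-free property described above.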
Neural networks (NN) have been applied to hyperspectral image classification when traditional linear statistical classifiers have proven inadequate. The nonlinear and non-parametric properties of NN have often been cited for their apparent success. It is also known that data preprocessing techniques such as principal component analysis (PCA) greatly improve classification accuracy. While PCA finds the axes of maximum variance in the data, it does not guarantee increased separation between an arbitrary pair of classes. A transformation that is sensitive to class structure is obtained by solving the generalized eigenvalue problem of the among-class and within-class covariance matrices of the data. Using this transformation, we demonstrate a case where the performance of linear statistical classifiers is comparable to that of NN classifiers for hyperspectral image classification.
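The class-sensitive transformation described above can be sketched directly: form the within-class and among-class scatter matrices and solve the generalized eigenvalue problem, here reduced to the ordinary eigenproblem of Sw⁻¹Sb. This is a generic Fisher-discriminant sketch on toy data, not the authors' exact implementation:

```python
import numpy as np

def class_separation_transform(X, y):
    """Solve  Sb v = lam Sw v  for the among-class (Sb) and within-class
    (Sw) scatter matrices, yielding a class-separating projection."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # scatter within class c
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)        # scatter of class means
    # Reduce to an ordinary eigenproblem of Sw^{-1} Sb (Sw lightly regularized).
    evals, evecs = np.linalg.eig(np.linalg.inv(Sw + 1e-9 * np.eye(d)) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order]

# Toy two-class data, separated along the first feature axis.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
               rng.normal([5.0, 0.0, 0.0], 1.0, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = class_separation_transform(X, y)
proj = X @ W[:, 0]                             # leading discriminant direction
print(abs(proj[:50].mean() - proj[50:].mean()))  # large inter-class separation
```

Unlike PCA, the leading eigenvector here is chosen to separate the labeled classes, not merely to capture variance.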
The TIRIS is a pushbroom long-wave infrared imaging spectrometer designed to operate in the 7.5 to 14.0 micrometer spectral region from an airborne platform, using uncooled optics. The focal plane array is a 64 by 20 extrinsic Si:As detector operating at 10 K, providing 64 spectral bands with 0.1 micrometer spectral resolution and 20 spatial pixels with 3.6 milliradian spatial resolution. A custom linear variable filter mounted over the focal plane suppresses near-field radiation from the uncooled external optics. This dual-use sensor is being developed to demonstrate the detection of plumes of toxic gases and pollutants in a downlooking mode.
A novel feedforward neural network is used to classify hyperspectral data from the AVIRIS sensor. The network applies an alternating direction singular value decomposition technique to achieve rapid training times. Very few samples are required for training, and 100 percent accurate classification is obtained on test data sets. The methodology combines this rapid-training neural network with data reduction and maximal feature separation techniques, such as principal component analysis and simultaneous diagonalization of covariance matrices, for rapid and accurate classification of large hyperspectral images. The results are compared to those of standard statistical classifiers.
A methodology is described for an airborne, downlooking, longwave infrared imaging spectrometer-based technique for the detection and tracking of plumes of toxic gases. Plumes can be observed in emission or absorption, depending on the thermal contrast between the vapor and the background terrain. While the sensor is currently undergoing laboratory calibration and characterization, a radiative exchange phenomenology model has been developed to predict sensor response and to facilitate the sensor design. An inverse problem model has also been developed to obtain plume parameters from sensor measurements. These models, the sensor, and ongoing activities are described.
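The emission-versus-absorption behavior follows from a single-layer radiative model: the observed radiance is the transmitted background plus the plume's own emission, so the sign of the spectral contrast tracks the thermal contrast. A toy illustration with made-up numbers (not the authors' phenomenology model):

```python
import numpy as np

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # h, c, k_B (SI)

def planck(lam_um, T):
    """Blackbody spectral radiance [W m^-2 sr^-1 um^-1]."""
    lam = lam_um * 1e-6
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T)) * 1e-6

def observed(lam_um, tau, T_plume, T_bg):
    """Single-layer plume: transmitted background plus plume self-emission."""
    return tau * planck(lam_um, T_bg) + (1 - tau) * planck(lam_um, T_plume)

lam, tau, T_bg = 10.0, 0.8, 290.0  # hypothetical band, transmittance, terrain
warm = observed(lam, tau, T_plume=310.0, T_bg=T_bg)  # plume warmer than terrain
cool = observed(lam, tau, T_plume=280.0, T_bg=T_bg)  # plume cooler than terrain
print(warm > planck(lam, T_bg))  # emission: positive contrast -> True
print(cool < planck(lam, T_bg))  # absorption: negative contrast -> True
```

The contrast term is (1 - tau)(B(T_plume) - B(T_bg)), which is exactly why a plume at the background temperature disappears regardless of its concentration.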
This paper reviews the activities at OKSI related to imaging spectroscopy, presenting current and future applications of the technology. We discuss the development of several systems, including hardware, signal processing, data classification algorithms, and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process into the algorithms. Pixel signatures are classified using techniques such as principal component analysis, generalized eigenvalue analysis, and novel, very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the intelligent missile seeker (IMS) demonstration project for real-time target/decoy discrimination and the thermal infrared imaging spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and crop monitoring are also under development.
Synthetic hyperspectral signatures representing an airborne target's engine radiation, a decoy flare, and the engine plume radiation are used to demonstrate computational techniques for discriminating between such objects. Excellent discrimination is achieved for a 'single look' at an SNR of -10 dB. Since the atmospheric transmittance perturbs the signatures of all objects in an identical fashion, the transmittance is equivalent to a modulation of the target radiance (in the spectral domain). The proper spectral signal decomposition may, therefore, recover the original unperturbed signature accurately enough to allow discrimination. The algorithms described here, and in two accompanying papers, have been tested over a spectral range that includes the VNIR and MWIR and are most appropriate for intelligent, autonomous, air-to-air or surface-to-air guided munitions. With additional enhancements, the techniques apply to ground targets and other dual-use applications.
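The stated invariance (atmospheric transmittance multiplies every object's signature identically, acting as a common spectral modulation) can be checked with a toy example; the signatures below are random stand-ins, not real data, and the ratio trick is only one simple way to exploit the invariance:

```python
import numpy as np

rng = np.random.default_rng(3)
bands = 64
target = rng.uniform(0.5, 1.5, bands)   # stand-in engine signature
decoy = rng.uniform(0.5, 1.5, bands)    # stand-in flare signature
tau = rng.uniform(0.2, 1.0, bands)      # unknown atmospheric transmittance

meas_t = tau * target                   # both objects are seen through
meas_d = tau * decoy                    # the same atmosphere

# A band-wise ratio cancels the common transmittance exactly, so any
# decomposition built on such ratios is insensitive to the atmosphere.
print(np.allclose(meas_t / meas_d, target / decoy))  # -> True
```

This is the sense in which a suitable spectral decomposition can recover discriminating structure without knowing the transmittance.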