Review of snapshot spectral imaging technologies
Abstract
Within the field of spectral imaging, the vast majority of instruments used are scanning devices. Recently, several snapshot spectral imaging systems have become commercially available, providing new functionality for users and opening up the field to a wide array of new applications. A comprehensive survey of the available snapshot technologies is provided, and an attempt has been made to show how the new capabilities of snapshot approaches can be fully utilized.

1. Introduction

Spectral imaging sensors sample the spectral irradiance I(x,y,λ) of a scene and thus collect a three-dimensional (3-D) dataset typically called a datacube (see Fig. 1). Since datacubes are of a higher dimensionality than the two-dimensional (2-D) detector arrays currently available, system designers must resort to either measuring time-sequential 2-D slices of the cube or simultaneously measuring all elements of the datacube by dividing it into multiple 2-D elements that can be recombined into a cube in postprocessing. These two techniques are described here as scanning and snapshot.

Fig. 1

The portions of the datacube collected during a single detector integration period for (a) scanning and (b) snapshot devices.


The use of imaging spectrometers was rare before the arrival of 2-D CCD arrays in the 1980s, but grew steadily as detector technology advanced. Over the following 30 years, better optical designs, improved electronics, and advanced manufacturing together improved performance by over an order of magnitude. But the underlying optical technology has not really changed: modified forms of the classic Czerny-Turner, Offner, and Michelson spectrometer layouts remain standard. Snapshot spectral imagers, on the other hand, use optical designs that differ greatly from these standard forms in order to boost light collection capacity by up to three orders of magnitude. In the discussion below, we provide what we believe is the first overview of snapshot spectral imaging implementations. After providing background and definitions of terms, we present a historical survey of the field and summarize each individual measurement technique. The variety of instruments available can be a source of confusion, so we use our direct experience with a number of these technologies [computed tomography imaging spectrometer (CTIS), coded aperture snapshot spectral imager (CASSI), multiaperture filtered camera (MAFC), image mapping spectrometry (IMS), snapshot hyperspectral imaging Fourier transform (SHIFT) spectrometer, and multispectral Sagnac interferometer (MSI), each described in Sec. 4 below] to provide comparisons among them, listing some of their advantages and disadvantages.

1.1. Definitions and Background

The field of spectral imaging is plagued with inconsistent use of terminology, beginning with the field’s name itself. One often finds spectral imaging, imaging spectrometry (or imaging spectroscopy), hyperspectral imaging, and multispectral imaging used almost interchangeably. Some authors make a distinction between systems with few versus many spectral bands (spectral imaging versus imaging spectrometry), or with contiguous versus spaced spectral bands (hyperspectral versus multispectral imaging). In the discussion below, we use spectral imaging to refer simply to any measurement attempting to obtain an I(x,y,λ) datacube of a scene, in which the spectral dimension is sampled by more than three elements. In addition, we use the term snapshot as a synonym for nonscanning—i.e., systems in which the entire dataset is obtained during a single detector integration period. Thus, while snapshot systems can often offer much higher light collection efficiency than equivalent scanning instruments, snapshot by itself does not mean high throughput if the system architecture includes spatial and/or spectral filters. When describing a scene as dynamic or static, rather than specifying the rate of change in absolute units for each case, we simply mean to say that a dynamic scene is one that shows significant spatial and/or spectral change during the measurement period of the instrument, whether that period is a microsecond or an hour. Since snapshot does not by itself imply fast, a dynamic scene can blur the image obtained using either a snapshot or a scanning device, the difference being that whereas motion induces blur in a snapshot system, in a scanning system, it induces artifacts. In principle, blurring and artifacts are on a similar footing, but in practice one finds that artifacts prove more difficult to correct in postprocessing.

When describing the various instrument architectures, “pixel” can be used to describe an element of the 2-D detector array or a single spatial location in the datacube (i.e., a vector describing the spectrum at that location). While some authors have tried introducing “spaxel” (spatial element) to describe the latter,1 this terminology has not caught on, so we simply use “pixel” when describing a spatial location whose spectrum is not of interest, and “point spectrum” when it is. While many authors refer to the spectral elements of a datacube as bands, we use “channel” to refer to individual spectral elements and reserve “band” for broad spectral regions [such as the visible band or the longwave IR (LWIR) band]. It is useful to have a term for a single horizontal plane of the datacube (the image taken over a single spectral channel), so we refer to this as a “channel image.” A single element of the datacube is referred to as a “voxel” (volume element). When describing the dimensions of the datacube, we use Nx, Ny, and Nw as the number of sample elements along the (x,y) spatial and λ spectral axes, respectively, so that the total number of datacube elements is given by NxNyNw. We try to avoid describing the datacube in terms of resolution, since some authors use this to refer to the number of sampling elements, while others use it for the number of resolvable elements. At the Nyquist sampling rate, these two differ by a factor of two, but if one always refers to the number of samples, then the meaning is clear. One can also note that it is problematic to quote a single value for resolution when discussing computational sensors, since the number of resolvable elements for these systems varies with the scene—some scenes are easier to resolve than others.
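To make this terminology concrete, the following sketch (Python with NumPy; the array shapes are arbitrary examples, not tied to any instrument) shows how each term maps onto a slice of a datacube array:

```python
import numpy as np

# Illustrative datacube: Nx x Ny spatial samples, Nw spectral channels.
Nx, Ny, Nw = 256, 256, 32
cube = np.zeros((Ny, Nx, Nw))      # I(x, y, lambda), indexed as [y, x, w]

channel_image  = cube[:, :, 10]    # "channel image": the scene in one spectral channel
point_spectrum = cube[128, 64, :]  # "point spectrum": all channels at one spatial location
voxel          = cube[128, 64, 10] # "voxel": a single element of the datacube

n_elements = Nx * Ny * Nw          # total number of datacube elements, NxNyNw
```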

For time-resolved (video imaging) systems, the data dimensions assume the form (Nx, Ny, Nw, Nt), where Nt is the number of frames captured during a video sequence. We refer to this dataset as a “hypercube”.

The amount of light collected by a given instrument is an important quantity, so we often refer to a given sensor’s throughput or efficiency. Whereas the former refers specifically to the AΩ product (or “étendue”), the latter is a ratio of the sensor’s throughput with respect to a reference sensor that can collect light from the entire datacube during the full measurement period and that also has ideal quantum efficiency. Whereas the ideal instrument for any given application always has an efficiency of 1, its throughput varies with different applications.
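As a rough numerical illustration of the efficiency concept (an idealized geometry that ignores optical losses and quantum efficiency), one can compare the fraction of the datacube that each architecture observes at any instant:

```python
# Idealized efficiencies relative to a reference sensor that collects light
# from the entire datacube during the full measurement period.
Nx, Ny, Nw = 256, 256, 32

eff_snapshot  = 1.0              # ideal snapshot: all voxels, all the time
eff_pushbroom = 1.0 / Ny         # slit sees 1 of Ny spatial lines at a time
eff_filter    = 1.0 / Nw         # filter camera sees 1 of Nw channels at a time
eff_point     = 1.0 / (Nx * Ny)  # point scanner sees 1 of Nx*Ny positions at a time
```

Under these idealizations, the scanning penalty ranges from a factor of Nw up to NxNy, which is the origin of the up to three orders of magnitude light-collection advantage mentioned in Sec. 1.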

Also, many authors make a distinction between spectrometer, spectrograph, and spectroscope, with distinctions varying among researchers. We make no distinction here and generally stick to using spectrometer, except where this clashes with a given field’s nomenclature.

2. Historical Overview

As in so much of optics, one can trace the beginnings of spectral imaging back into the nineteenth century, where one finds the astronomer P. J. C. Janssen using a double-slit monochromator to view the solar corona.2,3 The double-slit monochromator (at the time termed a spectroscope, or in this case a spectrohelioscope) was the only means of obtaining narrow-band spectra, and an image was obtained by spinning the device rapidly while viewing the exit slit with the eye. By adjusting the relative position of the exit slit with respect to the prism dispersion, one could thus view the same scene at different wavelengths. Although an important4 and clever setup, it was regarded by other researchers as clumsy.5 Some three decades later, Fabry and Perot developed their interferometric filter, which for the first time allowed astronomers to both view a full scene over a narrow spectral band and tune the filter wavelength.6–8 The tunable filter thus represented an important advance, giving scientists access to information that was previously difficult to obtain. This allowed them to build (x,y,λ) representations of the object in view, albeit laboriously.

An additional advantage of the Fabry–Perot interferometer was that it allowed a much higher light throughput than the double-slit monochromator, enabling users to view dimmer objects. This opened up a number of discoveries, but, as a spectral imager, it still suffered from two problems that simple cameras do not: motion artifacts and poor light collection efficiency. These two issues have plagued the field ever since. In order to overcome these problems, astronomers began looking for nonscanning instruments that could obtain the full 3-D dataset in a single measurement period—snapshot spectral imagers. The first published example of a snapshot spectral imaging system was not far behind. Bowen developed his image slicer9 by placing a series of tilted mirrors to slice the image into thin strips, translating each strip to form a single long slit. Walraven later took this concept and created a design that was easier to manufacture, using only a thick glass plate (with a 45 deg angle cut into one end) cemented to a wedge-cut prism.10 The resulting device is shown in Fig. 2. Once a beam of light enters the glass plate, the beam reflects due to total internal reflection, except in those regions where the plate meets the prism edge, where the beam transmits. The succession of partial reflections and partial transmissions transforms the input beam into a long slit, which can then be used as the input to a slit spectrometer. Over the next few decades after Bowen’s initial invention, a number of astronomers adapted this slicing technique to telescopes around the world,11–14 but the method still provided only modest gains over existing instrumentation. It was with the 3-D instrument on the William Herschel Telescope that the image slicing technique provided a large leap forward in performance, allowing for a snapshot measurement of a 16×16×2100 datacube (5.4×10^5 voxels).15 For the first time, precision manufacturing, large-format detector arrays, and computers had advanced to the point that snapshot approaches could display capabilities going well beyond their scanning counterparts.

Fig. 2

Various views of a Bowen-Walraven image slicer, illustrating how the glass plate and wedge-cut prism combine to slice the optical beam into a long slit. Shapes shown in yellow indicate the light passing through the slicer; the beam reflecting within the top plate is not shown for clarity. (Adapted from Fig. 1 of Ref. 16.)


During this development, in 1958 Kapany introduced the concept of placing a coherent fiber bundle at the image plane and then reformatting the fiber output into a long thin line for easy adaptation to one or more slit spectrometers.17 But it appears not to have been implemented until 1980, when such a device was developed for the Canada-France-Hawaii telescope (CFHT) on Mauna Kea.18 Adaptations by other astronomers soon followed.19,20

A third snapshot spectral imaging technique was developed by Courtes in 1960, in which a lenslet array is used to create an array of demagnified pupils.21,22 These pupils fill only a small portion of the available space at the image, so with the proper geometry one can reimage them through a disperser to fill in the unused space with dispersed spectra. This concept was adapted to the CFHT and data began to be collected in 1987, producing datacubes with Nx×Ny=271 spatial elements and up to Nw=2200 spectral samples.23

These three techniques for snapshot spectral imaging—image slicing, fiber reformatting, and lenslet-based pupil array dispersion—are now widely described in astronomy as integral field spectroscopy (IFS), so we label the three techniques here as IFS-M, IFS-F, and IFS-L. The integral spectroscopy nomenclature appears to originate from the dissertation work of Chabbal,24 but the first publication in which one can find it used in the title is Ref. 25.

Outside of astronomy, one finds similar uses of slit spectrometers and tunable filter cameras, but the vast majority of uses did not require imaging. These include measurements such as mapping the spectrum of atomic emission/absorption lines, measuring the chemical composition of the atmosphere,26 and measuring the quantity of physical compounds.27 Outside of astronomy, the coupling of imaging with spectrometry did not take hold until much later, with the beginning of airborne remote sensing. Here spectral imaging was first used for agricultural assessment and management around 1966 (Refs. 28 and 29) and received a large boost with the launch of the Landsat remote sensing satellite in 1972.30 With the launch of Landsat also came the development of the field-portable spectral imaging instruments needed to calibrate Landsat’s measurements. As spectral imaging became more widely used, researchers faced the same obstacles of scanning artifacts and poor light throughput as those faced by the astronomers, so that they too began to explore new methods, leading to a variety of new instruments. As single-pixel detectors gave way to linear and then 2-D detector arrays, system design options expanded, allowing for new approaches. The first of these derived from the natural idea of using multiple beamsplitters, in which the beam is split into independent spectral channels, with each channel directed to an independent camera. While this was a common choice for imaging in three spectral bands, especially for wideband measurements (such as visible plus near-IR), it did not take hold for more than four spectral channels. Hindrances included the difficulty of making large beamsplitters of high enough quality and the limited ability to reduce the bulk and weight of the resulting system. With the increasing availability of thin-film filters, another natural choice involved using an array of filters coupled to an array of lenses. This, too, did not progress far, perhaps because of the difficulty of making large arrays of lenses with sufficient quality and correcting for parallax effects. (Ref. 31 notes that the first good thin-film filters became available in the late 1930s, so we can expect that they did not become commercially available until the 1940s or 1950s.)

With the arrival of advanced computers, a computational sensing32 approach became feasible. The first of these new approaches was a computational sensor later named the CTIS. This device used a 2-D disperser to project a spectrally dispersed scene directly onto a detector array, allowing the spectral and spatial elements to multiplex. (One can experience the same effect by donning a pair of the diffraction grating glasses given out at baseball games and fireworks displays.) The resulting data on the detector array are equivalent to tomographic projections of the datacube taken at multiple view angles, so that tomographic reconstruction techniques can be used to estimate the 3-D datacube from the set of 2-D projections. A compressive sensing approach33 to snapshot spectral imaging was also developed, promising to allow snapshot measurement of datacubes with more voxels than there are pixels on the detector array.34 As of this writing, a significant amount of research remains to be done on these computational techniques, as they have not yet shown performance that can compete with their noncomputational counterparts.35,36

3. Scanning Imaging Spectrometer Architectures

In parallel with the development of snapshot methods, scanning techniques for spectral imaging also advanced. The development of linear digital sensor arrays allowed researchers to collect light from a number of voxels simultaneously. This involved either imaging a linear spatial region through a sequential set of spectral filters or, more commonly, collecting light over a full set of spectral channels while imaging one spatial location and then scanning in two dimensions over the full field of view—a point scanning spectrometer. However, unless used with very wide spectral channels (such as on the Landsat satellites), the poor light collection of these devices often made it difficult to use them outside the lab.

Once 2-D detector arrays became available in the 1980s, these were rapidly adopted in imaging spectrometers, providing a large boost in light collection capacity.37 For the first time, researchers had available systems that could collect light emitted by thousands (and eventually millions) of voxels simultaneously. As array detectors advanced in size and performance, instrument designers took advantage of the new capability by increasing spatial and spectral resolution. Using a configuration in which the 2-D detector array is mapped to a spectral plus one-dimensional spatial (x,λ) dataset—a pushbroom spectrometer—made possible the first imaging spectrometers without moving or electrically tuned parts. If the sensor is placed on a steadily moving platform (such as a high-flying aircraft or an Earth-observing satellite), or if the object moves on a constant-motion conveyor, the second spatial dimension of the cube is obtained simply by scene motion across the instrument’s entrance slit. The removal of moving parts greatly improved the robustness of these devices and reduced their overall size, allowing imaging spectrometry to develop into a mainstream technology for Earth-observing remote sensing. Other architectures are also available for scanning systems, as summarized below. A good review of these architectures and their relative SNRs is Ref. 38. Other surveys of scanning-based approaches include Refs. 39, 40, and 41. (Note that Ref. 40’s snapshot is equivalent to what we refer to here as a wavelength-scanned system or filtered camera.)

3.1. Point Scanning Spectrometer

The input spectrum is dispersed across a line of detector elements, allowing for very fast readout rates. The scene is scanned across the instrument’s input aperture with the use of two galvo mirrors (or just one if the instrument platform is itself moving), allowing for collection of a full 3-D datacube.

3.2. Pushbroom Spectrometer

The input aperture is a long slit whose image is dispersed across a 2-D detector array, so that all points along a line in the scene are sampled simultaneously. To fill out the spatial dimension orthogonal to the slit, the scene is scanned across the entrance aperture. This can take the form of objects moving along a conveyor belt, the ground moving underneath an airborne or spaceborne platform, or the scene scanned across the entrance slit by a galvo mirror.
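The resulting data assembly can be sketched in a few lines (Python with NumPy; the shapes and the one-frame-per-slit-width assumption are illustrative only):

```python
import numpy as np

def assemble_pushbroom(frames):
    """Stack pushbroom frames into a datacube.

    Each frame is a 2-D (slit position x wavelength) array captured while
    the scene advances one slit width across the entrance slit; the frame
    index supplies the second spatial dimension of the cube.
    """
    return np.stack(frames, axis=0)

# e.g., 480 frames of a 640-element slit dispersed over 50 channels:
frames = [np.zeros((640, 50)) for _ in range(480)]
cube = assemble_pushbroom(frames)   # datacube of shape (480, 640, 50)
```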

3.3. Tunable Filter Camera

A tunable filter camera uses an adjustable filter (such as a filter wheel) or an electrically tunable filter, such as a mechanically tuned Fabry–Perot etalon,42,43 a liquid-crystal tunable filter (LCTF),44 or an acousto-optic tunable filter (AOTF).45 Response/switching times of the various approaches range from about 1 s for the filter wheel, to 50 to 500 ms for the LCTF and mechanically tuned Fabry–Perot, to 10 to 50 μs for the AOTF.
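The acquisition logic shared by all of these filter types can be sketched as below; `tunable_filter.tune` and `camera.grab` are hypothetical stand-ins for whatever a real device driver provides, and the settle time would be chosen from the response times quoted above:

```python
import time
import numpy as np

def acquire_wavelength_scan(camera, tunable_filter, wavelengths_nm, settle_s):
    """Wavelength-scanned acquisition loop (hypothetical device API)."""
    frames = []
    for wl in wavelengths_nm:
        tunable_filter.tune(wl)       # ~1 s (wheel) down to ~10 us (AOTF)
        time.sleep(settle_s)          # wait out the filter's response time
        frames.append(camera.grab())  # one channel image per filter setting
    return np.stack(frames, axis=-1)  # datacube of shape (Ny, Nx, Nw)
```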

3.4. Imaging Fourier Transform Spectrometer

An imaging Fourier transform spectrometer scans one mirror of a Michelson interferometer in order to obtain measurements at multiple optical path difference (OPD) values—the Fourier domain equivalent of a tunable filter camera.46,47 A more recent alternative method here is the birefringent Fourier-transform imaging spectrometer developed by Harvey and Fletcher-Holmes, which has the advantage of being less vibration sensitive due to its common path layout.48
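In postprocessing, the spectrum at each pixel is recovered by Fourier transforming that pixel’s interferogram along the OPD axis. A minimal sketch (assuming uniformly spaced OPD samples and ignoring apodization and phase correction) is:

```python
import numpy as np

def ifts_to_spectrum(frames, d_opd):
    """Recover per-pixel spectra from an imaging FTS frame stack.

    frames: (Ny, Nx, N_opd) array of images at uniformly spaced OPD steps
    of size d_opd. Subtracting the mean removes the DC term; the magnitude
    of the FFT along the OPD axis gives the spectrum at each pixel.
    """
    ac = frames - frames.mean(axis=-1, keepdims=True)
    spectrum = np.abs(np.fft.rfft(ac, axis=-1))         # (Ny, Nx, N_opd//2+1)
    sigma = np.fft.rfftfreq(frames.shape[-1], d=d_opd)  # wavenumber axis
    return sigma, spectrum
```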

3.5. Computed Tomography Hyperspectral Imaging Spectrometer

This is a scanning device closely related to the CTIS snapshot technique mentioned above and has the advantage of having a conventional disperser design and of being able to collect many more projection angles so that the reconstructed datacube has fewer artifacts. Its main disadvantage is that the detector is not used efficiently in comparison to alternative methods.49

3.6. Coded Aperture Line-Imaging Spectrometer

Although coded aperture spectrometry began as a method of scanning a coded aperture across the entrance slit of a conventional dispersive spectrometer, in Refs. 50 and 51 it was adapted to modern 2-D detector arrays, allowing for improved SNR at the cost of using larger pixel count detector arrays.

Each of these architectures uses passive measurement methods, in which no control is required over the illumination in order to resolve the incident light spectrum. Several of these techniques can be used in reverse to produce spectrally encoded illumination systems. In this approach, the object is illuminated with a well-defined spectral pattern, such that the imaging side no longer needs to have spectral resolution capability. For example, one can illuminate a scene with a broadband light source transmitted through a tunable filter and time the image acquisition to coincide with steps in the filter wavelength to produce a datacube of light emitted or reflected by the scene.

Finally, when comparing scanning and snapshot devices, we can note that the division between the two is not as black and white as one might expect. For example, designers have produced sensor architectures that mix both snapshot and scanning techniques, so that the number of scans required to gather a complete dataset is significantly reduced. The earliest example of this of which we are aware (although it seems likely that astronomers had tried this well before then) is a patent by Busch,52 in which the author illustrates a method for coupling multiple optical fibers such that each fiber is mapped to its own entrance slit within a dispersive spectrometer’s field of view. More recently, we can find examples such as Chakrabarti et al., who describe a grating spectrometer in which the entrance slit is actually four separate slits simultaneously imaged by the system.53 The respective slits are spaced apart such that the dispersed spectra do not overlap at the image plane. This setup can be used to improve light collection by a factor of four, at the expense of either increasing the detector size or reducing the spectral resolution by a factor of four. Ocean Optics’ SpectroCam is another example of a mixed approach, in which a spinning filter disk is combined with a pixel-level spectral filter array (more detail on the latter is given in Sec. 4.8) to improve the speed of multispectral image acquisition.

In addition, the fact that a given instrument is snapshot does not in itself imply that the device is fast. Scanning devices can have very short measurement times, and snapshot devices can potentially have very long ones. The essential difference is that snapshot devices collect data during a single detector integration period, and whether this is short or long depends on the application. For large-format snapshot spectral imagers in particular, the frame readout time can be rather long in comparison to the exposure time, so that a video sequence (or hypercube) can be time-aliased due to poor sampling if the two are not better matched.

4. Snapshot Spectral Imaging Technologies

Before attempting a comparison of existing methods for performing snapshot imaging spectrometry, we go through the menagerie of instrument architectures and attempt to summarize the basic measurement principles of each one in turn. Previous surveys (such as Refs. 54 and 55) have focused on instruments developed for astronomy, while we attempt here to discuss all of the techniques of which we are aware. In describing the various instruments, an inevitable flood of acronyms results. A selection of these is summarized in Table 1, together with the section discussing each, the figure illustrating a typical system layout, and the first reference to the technique.

Table 1

Location of summaries discussing each technology.

Name    Section    Figure    Date    Reference
IFS-M   4.1        3         1938    9
IFS-F   4.2        4         1958    17
IFS-L   4.3        5         1960    21
MSBS    4.4        6         1978    56
CTIS    4.5        7         1991    57
MAFC    4.6        8         1994    58
TEI     4.7        9         2000    59
SRDA    4.8        10        2001    60
IRIS    4.9        11        2003    61
CASSI   4.10       12        2007    34
IMS     4.11       13        2009    62
SHIFT   4.12       14        2010    63
MSI     4.13       15        2010    64

4.1. Integral Field Spectrometry with Faceted Mirrors (IFS-M, 1938)

In astronomy, the most common approaches to snapshot imaging spectrometry are the integral field techniques (based on mirror arrays, fiber arrays, and lenslet arrays: IFS-M, IFS-F, and IFS-L)—so called because each individual measurement of a datacube voxel results from integrating over a region of the field (the object). IFS-M was the first snapshot spectral imaging technology to emerge, beginning with Bowen’s image slicer. As originally conceived, this device was both difficult to manufacture (due to precision alignment of an array of small mirrors) and offered only a modest gain of 5× over existing slit spectrometers. In 1972, Walraven found a way to modify the design to use a prism-coupled plane parallel plate in place of the mirror array.10,16 This made the device much easier to align and assemble, but it was still not very widely used, partly because its use was primarily limited to slow beams with f-numbers above 30 (Ref. 65).

It was not until the “3-D” instrument was completed on the William Herschel Telescope that image-slicer-type IFS could point to results that were difficult to obtain with existing instruments but that were readily obtained with the 3-D: performing spectroscopy and mapping the spatial distributions of dim extended objects (such as nebulae or distant galaxies).15 Figure 3 shows a view of the image slicer subsystem for an IFS-M. The 3-D instrument design, however, had several limitations: in order to keep the pupils separated from one another, the image slicer’s facet tilts had to be large, thus inducing some defocus at the ends of the facets. Furthermore, it was difficult to optimize the volume to increase the number of facets and therefore the number of spatial elements in the datacube. In 1997, Content showed that allowing the microfacets to have curvature greatly reduces these constraints, allowing the designer to reduce the facet tilts and to place the pupils closer together—an approach he named “advanced image slicing.”66,67 While adding curvature eases system design constraints, it substantially complicates manufacture of the slicing mirror, and researchers began a period of investigating how to improve manufacturing techniques.68

Fig. 3

The system layout (a) for an integral field spectroscopy with faceted mirrors (IFS-M), and closeup (b) of the slicer mirror. For clarity, the layout only shows the chief rays corresponding to each mirror facet, and the spectrometer optics behind each pupil have been omitted. (These back-end optics are located behind each pupil in the array and include a collimating lens, disperser, reimaging lens, and detector array. If the mirror facets are given curvature, then the collimating lens is unnecessary.)


Because of its all-mirror approach, the IFS-M technique is well suited to measurements in the IR. Although scatter induced by manufacturing artifacts, and by diffraction at facet edges, is a serious problem at visible wavelengths, it is much less so in the near-IR and shortwave IR, and the image slicing method has been shown to excel in these spectral bands.69 As confidence in manufacturing techniques increased, Content later introduced the concept of microslicing (or IFS-μ), a technique that combines design elements of IFS-M with IFS-L. This enables one to measure many more spatial elements in the datacube, at the expense of reduced spectral sampling.70 The basic idea of microslicing is to use the same slicing mirror as IFS-M, but with larger facets. The slicer allows the various strips in the image to be physically separated, and each is then passed through an anamorphic relay, such that one axis is stretched. This gives some extra space so that further down the optical path, the stretched image is relayed through an anamorphic lens array, which simultaneously unstretches and samples the image. The result is a format in which the various spatial elements of the image take the form of narrow strips that are sufficiently separated at the detector array that they can be spectrally dispersed without adjacent strips overlapping. The concept requires a mechanically elaborate design, but promises to achieve an impressive datacube size of 1200×1200×600 or 1500×1500×200.70

4.2. Integral Field Spectrometry with Coherent Fiber Bundles (IFS-F, 1958)

With the invention of coherent fiber bundles, it was quickly realized that one can image through the circular bundle on one end and squeeze the other end into a long thin line (creating what Kapany describes as a light funnel) to fit into the long entrance slit of a dispersive spectrometer.17 It was quite some time after this, however, that a working instrument was deployed. Instead, one first finds examples in the literature of systems that manipulate single fibers to positions within the field of view—for example, placing one fiber at each of a number of stars within the image.71 Rather than spectral imaging per se, this is better termed multiobject spectroscopy. Allowing true spectral imaging through a fiber bundle required first overcoming several hurdles, the first of which was manufacturability. The process of reformatting the exit face of the fiber bundle into a long slit generally produced a lot of broken fibers, and replacing them was laborious. Another drawback with the use of fibers was that it could be quite difficult to couple light efficiently into them, so that a significant amount of light was lost at the incident face of the fiber. Moreover, if the cladding of the fibers is insufficiently thick, the fibers show significant crosstalk, which quickly degrades the measurement quality. Finally, early fiber-based systems were also restricted by the limited spectral range transmitted by available fibers. Improvements in manufacturing and assembly techniques have steadily reduced each of these problems. The use of multimode fibers increased coupling efficiency,72 as did the use of lenslets registered to each fiber so that the light is focused onto the fiber cores and not in regions with poor coupling efficiency. (Lee et al. mention that these fiber-coupling lenslets should be used at f/4 to f/6.73) In addition, multimode fibers are also more robust, so that it became easier to format them into a line without breaking. With these and other advances, fiber-based IFS systems were successfully deployed on telescopes.74 Figure 4 shows an example layout for an IFS-F system.

Fig. 4

The system layout for an integral field spectrometer with coherent fiber bundles (IFS-F): the object is imaged onto the face of a coherent fiber bundle. At the opposite end of the bundle, the fibers are splayed out (reformatted) into a linear array, which is compatible with the input of a standard slit spectrometer. At the input and output faces of the fiber bundle, there may be lenslets coupled to each fiber in order to improve light throughput.


A drawback with the fiber-based approach, it was learned, was that the light emerging from the exit face of the fiber was always faster (lower f-number) than it was at the input face—a phenomenon termed focal ratio degradation.19,69,75 Another phenomenon discovered when operating fiber-based systems at high spectral resolution was modal noise.76,77 In the midst of all of these developments, it was realized that the fiber-based approach also allowed astronomers to do away with the free-space dispersive spectrometers that have been the mainstay of so much of astronomical spectroscopy. If one can make use of components developed in the photonics industry, which has optimized miniaturization and manufacturing efficiency, then it should be possible to minimize instrument size and cost in modern astronomical telescopes.78 Since photonic components are designed primarily for single-mode inputs, this approach first required the development of a device that can convert a single multimode input into a series of single-mode outputs—what has been called a photonic lantern.79 One of the great advantages of this integrated photonic spectrograph approach80 is that it also enables the use of high-precision optical Bragg filters to filter out unwanted light from the atmosphere (often called OH suppression).81

In parallel with the developments in astronomy, advances in the technology also made it possible to build a commercial spectral imager based on the same concepts. The first nonastronomical instrument of which we are aware is that of Matsuoka et al., which delivered datacubes of dimensions 10×10×151 at 0.2 fps.82 This was followed by systems developed by other research groups.83–87

4.3. Integral Field Spectroscopy with Lenslet Arrays (IFS-L, 1960)

The first discussion of using lenslet arrays for integral field spectroscopy appears to be that of Courtes in 1960, in which he proposed to use a lenslet array placed at the telescope’s focal plane. Such a configuration generates an array of pupil images—each mapped to a different field position.21,88 (The first review of this concept in English appears to be Meaburn’s in 1975, in what he called an insect-eye Fabry–Perot spectrograph.89) This is the basis of the lenslet-based integral field approach. The lenslet array, placed near the image plane, creates a series of pupil images—one for each lenslet. Since the lenslets are at or near an image plane, each pupil image comprises all of the light integrated across the field positions corresponding to the spatial extent of the lenslet. The advantage here is that one can allow the lenslets to create faster beams (with lower f-number) than the original input, so that void space is created between the pupil images. One can then take advantage of this space by dispersing the light across it, allowing for detection of the spectrum. Figure 5 shows an example layout for an IFS-L system.

Fig. 5

The system layout for an integral field spectrometer with lenslet arrays (IFS-L).


The modern form of the IFS-L was first presented by Courtes in 1980,22,23 but the first published data from an instrument did not follow until 1988, when Courtes et al. presented a system providing datacube dimensions of 44×35×580.90 As the concept spread, a number of other astronomers began creating designs for different telescopes.73,91,92 Borrowing from terminology used in fiber-based integral field spectrometry, one difficulty with the lenslet approach is focal ratio degradation: the beam behind the lenslet array must have a smaller f-number than the beam in front of it, placing higher étendue requirements on the back-end optics. One way of mitigating this issue is to use pinholes in place of or in tandem with the lenslet array.93 The tradeoff in doing this, of course, is that spatial filtering reduces the system’s light throughput. In fact, if one replaces the lenslets with pinholes (so that one is sampling field positions rather than integrating across them), then the light throughput of the system becomes no better than that of a scanning approach.

While the IFS-L technique began in astronomy, its success brought it notice outside the field, and it was eventually adapted to other spectral imaging applications, with the first publication demonstrating a system achieving datacube dimensions of 180×180×20, measured at 30 fps and f/1.8 using a 1280×1024 CCD.94,95

4.4. Multispectral Beamsplitting (MSBS, 1978)

The idea of using multiple beamsplitters for color imaging has been around for quite some time.56 In this setup, three cemented beamsplitter cubes split incident light into three color bands, with each band observed by independent cameras [see Figs. 6(a) and 6(b)].96 While one can change the beamsplitter designs to adjust the measured spectral bands, it is not easy to divide the incident light into more than four beams without compromising the system performance. (Murakami et al., for example, limit themselves to four beamsplitters and attempt to use filters to increase the number of spectral channels.97) Thus, four spectral channels appear to be the practical limit of this approach. A closely related method is to use thin-film filters instead of the bulkier beamsplitter cubes/prisms to split the light,98 but this approach is still probably limited to about five or six spectral channels due to space limitations and cumulative transmission losses through successive filters. The space limitation can be overcome by using a single stack of tilted spectral filters operating in double-pass, which allows for the entire set of spectral images to be collected on a single detector array.99,100 (This is an approach we have previously termed filter stack spectral decomposition.101) Although more compact than the previous methods, since the filters are now operating in double-pass mode, transmission losses are doubled as well, so this method is limited to Nw<6.

Fig. 6

System layouts for four different multispectral beamsplitter (MSBS) implementations, using (a) monolithic beamsplitter blocks, (b) a sequence of spectral filters/beamsplitters, (c) a volume holographic element splitter, and (d) a stack of tilted spectral filters (filter stack spectral decomposition).


A fifth implementation is to perform spectral splitting with a volume holographic element (VHE). Matchett et al. have shown that they can manufacture a VHE to split an incident beam in three, with each of the three spectrally filtered beams reflected at a different angle.102 The VHE has the advantage of being a compact element with good reflection efficiency over a reasonable range of field angles. But it appears to be difficult to design the VHE for more than three channels. For example, Matchett et al. divided the system pupil into four, using a different VHE for each of the four sections, in order to produce a system that can measure 12 spectral channels. They state that this system has achieved 60 to 75% throughput across the visible spectrum.

4.5. Computed Tomography Imaging Spectrometry (CTIS, 1991)

As with every other snapshot spectral imaging technology, CTIS can be regarded as a generalization of a scanning approach—in this case a slit spectrometer. If one opens wide the slit of a standard slit spectrometer, spectral resolution suffers in that spatial and spectral variations across the width of the slit become mixed at the detector. However, if instead of a linear disperser one uses a 2-D dispersion pattern, then the mixing of spatial and spectral data can be made to vary at different positions on the detector. This allows tomographic reconstruction techniques to be used to estimate the datacube from its multiple projections at different view angles. Figure 7 shows the CTIS system layout. The CTIS concept was invented by Okamoto and Yamaguchi57 in 1991 and independently by Bulygin and Vishnyakov103 in 1991/1992, and was soon further developed by Descour, who also discovered CTIS’s missing cone problem.35,104,105 The instrument was subsequently improved with a custom-designed kinoform disperser and adapted for use in the IR bands.106 The first high-resolution CTIS, however, was not available until 2001, providing a 203×203×55 datacube on a 2048×2048 CCD camera.107 Although the CTIS layout is almost invariably shown using a transmissive disperser, Johnson et al. successfully demonstrated a reflective design in 2005.108
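A toy forward model (Python with NumPy; integer one-pixel-per-channel shifts and no diffraction efficiency terms, so purely illustrative) shows how one diffraction order multiplexes the datacube into a tomographic projection:

```python
import numpy as np

def ctis_projection(cube, direction):
    """One CTIS diffraction order as a shift-and-sum projection.

    cube: (Ny, Nx, Nw) datacube. Each spectral channel is shifted along
    `direction` in proportion to its channel index and then summed on the
    detector, mixing spatial and spectral content as in a tomographic
    projection; direction (0, 0) gives the undispersed zero order.
    """
    Ny, Nx, Nw = cube.shape
    pad = Nw
    det = np.zeros((Ny + 2 * pad, Nx + 2 * pad))
    for w in range(Nw):
        dy = int(round(direction[0] * w))
        dx = int(round(direction[1] * w))
        det[pad + dy : pad + dy + Ny, pad + dx : pad + dx + Nx] += cube[:, :, w]
    return det
```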

Fig. 7

The system layout for a computed tomography imaging spectrometer (CTIS).


A major advantage of the CTIS approach is that the system layout can be made quite compact, but a major disadvantage has been the difficulty in manufacturing the kinoform dispersing elements. Moreover, since its inception, CTIS has had to deal with problems surrounding its computational complexity, calibration difficulty, and measurement artifacts. These form a common theme among many computational sensors, and the gap they create between ideal measurement and field measurements forms the difference between a research instrument and a commercializable one. While CTIS has shown a lot of progress on bridging this gap, it has not shown the ability to achieve a performance level sufficient for widespread use.

4.6. Multiaperture Filtered Camera (MAFC, 1994)

An MAFC uses an array of imaging elements, such as an array of cameras or a monolithic lenslet array, with a different filter placed at each element in order to collect portions of the full spectral band (see Fig. 8). The first MAFC implementation of which we are aware is the Fourier transform spectrometer approach by Hirai et al.58 Surprisingly, it was not until 2004 that we find an implementation like that shown in Fig. 8(a), by Shogenji et al.,109 after which we find other research groups following the same approach.110,111 This layout uses an array of lenses coregistered to an array of spectral filters, with the entire set coupled to a monolithic detector array. (Note that the SHIFT system described in Sec. 4.12 uses a similar, but filterless, Fourier-domain approach.)

Fig. 8

Layouts for a multiaperture filtered camera (MAFC) system, using (a) the Shogenji design, (b) the IMEC112 design, or (c) the Levoy/Horstmeyer design. (The color balls image is from http://commons.wikimedia.org/wiki/File:Toy_balls_with_different_Colors.jpg#.)


Another approach was first suggested by Levoy113 and implemented by Horstmeyer et al.114 This involves adapting a light field camera with a pupil plane filter: a lenslet array is placed at the objective lens’ image plane, so that the detector array lies at a pupil plane (as imaged by the lenslets). The image behind each lenslet is an image of the filter array, modulated by the scene’s average spectral distribution across the lenslet. While more complex and less compact than the Shogenji design, this second layout has the distinct advantage of being able to use a variety of objective lenses, so that zooming, refocusing, and changing focal lengths are easier to achieve. Mitchell and Stone developed a similar technique in which the linear variable filter is placed at the lenslet plane.115

While the MAFC is arguably the most conceptually simple approach to multispectral imaging, it does place some requirements on the scene’s light distribution in order to work well. For finite-conjugate imaging, it requires that the object irradiance is reasonably uniform in angle, so that the individual filters and lenslets sample approximately the same relative light distribution as do all of their counterparts. Specular objects are thus problematic at finite conjugates, and angular variations in irradiance will be mistakenly measured as spectral variations.

4.7. Tunable Echelle Imager (TEI, 2000)

The tunable echelle imager (TEI) can be considered a modification of an echelle spectrometer to allow imaging. To make this possible, a Fabry-Perot etalon is placed into the optical train so that the input spectrum is sampled by the Fabry-Perot’s periodic transmission pattern.59,116 This produces gaps in the spatial pattern of the dispersed spectrum, allowing one to fill the gaps with a 2-D image (see Fig. 9). The light transmitted by the etalon is passed into a cross-disperser (for example, a grating whose dispersion is out of the plane of the page as shown in Fig. 9) and then into an in-plane disperser. The result is a characteristic 2-D echelle dispersion pattern, where the pattern is no longer composed of continuous stripes, but rather a series of individual images, each one of which is a monochromatic slice of the datacube (i.e., an image at an individual spectral channel). Under the assumption that the spectrum is smooth (i.e., bandlimited to the sampling rate of the instrument), this achieves a snapshot measurement of the datacube. However, the main tradeoff is that the system throughput is quite low: not only does the etalon reflect most of the input light, but the crossed-grating format is also inefficient. Moreover, for cases in which the object’s spectrum does not satisfy the bandlimit assumptions, the measurements are prone to severe aliasing unless scanning is used to measure the gaps in the spectral data.

Fig. 9

The system layout for the tunable echelle imager (TEI). The box numbers in the raw data simulation shown here indicate wavelengths of subimages in nm; the dashed boxes indicate a replicate order of the Fabry-Perot etalon. (Figure adapted from Figs. 3 and 6 of Ref. 59.)


4.8. Spectrally Resolving Detector Arrays (SRDA, 2001)

With the development of Bayer filter array cameras in the late 1970s, it became possible to produce pixel-level spectral filtering.117 The generalization from color imaging to multispectral imaging by increasing the number of filters is a small step (see Fig. 10), and there have been numerous such proposals.60,97,118–126 The resulting instruments are extremely compact, since all of the spectral filtering is performed at the detection layer, but for several reasons this method has not been widely accepted in the spectral imaging community. The primary reasons are undoubtedly that manufacturing these pixel-level filters is difficult and that each pattern is highly specific, so that one cannot easily adjust the system in order to change spectral range or resolution. The multispectral approach generally calls for detector arrays with higher pixel counts, which exacerbates the manufacturability problem. Another drawback is that, as with any filtering technique, an increase in spectral resolution produces a corresponding loss in light throughput.

Fig. 10

The system layout for a pixel-level filter array camera (one implementation of SRDA).


Although Bayer-type filter-array approaches are compact, convenient, and robust to perturbations such as temperature changes and vibration, they do have the disadvantage of requiring that the image is spatially bandlimited to the Nyquist limit of the filter array (a limit that is typically several times stricter than the Nyquist limit of the underlying pixel array). Without satisfying this assumption, the resulting reconstructed spectral images may show substantial aliasing effects, in which spatial variations in the scene will couple into erroneous spectral variations in the measured datacube. These effects can be minimized by defocusing the image in order to satisfy the bandlimit constraint.
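The subsampling behind this aliasing constraint is visible in a toy channel-extraction routine (Python with NumPy; the 4×4 mosaic is an arbitrary example):

```python
import numpy as np

def extract_mosaic_channels(raw, tile=4):
    """Split a mosaic-filtered raw frame into tile**2 channel images.

    Models an SRDA-style sensor in which a tile x tile pattern of distinct
    filters repeats across the detector: each channel image is sampled at
    1/tile of the raw pixel pitch in each axis, so scene content above
    that Nyquist limit aliases into spurious spectral structure.
    """
    channels = [raw[i::tile, j::tile] for i in range(tile) for j in range(tile)]
    return np.stack(channels, axis=-1)   # (Ny/tile, Nx/tile, tile**2)
```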

The spatial/spectral filter-based approach is one way of giving detector arrays a spectrally resolving capability. A number of other approaches are also under development, which do not incorporate filters and thus have the potential for increased detection efficiency. Although development of spectrally resolved detector arrays with more than three spectral bands has been underway for over 40 years,127,128 doing this in snapshot detection mode with more than two spectral channels has only been pursued recently. The first steps in this direction involved dual-band focal plane arrays [such as midwave IR (MWIR)/LWIR FPAs],129,130 but more recently it has involved elements such as cavity-enhanced multispectral photodetectors,131 elements composed of sandwiched electrodes and multiple detection layers,132 multilayer quantum-well infrared photodetectors (QWIPs),133 and transverse field detectors.134

The cavity-enhanced multispectral photodetector is designed by sandwiching several thin layers of amorphous silicon (used as the detection layers) in a resonance-enhanced cavity.135 Sun et al.131 and Wang et al.136 report on using this approach to measure two narrow spectral bands—one centered at 632 nm and another at 728 nm. Parrein et al. follow a closely related approach in which the detection element consists of layers of thin films sandwiched with multiple transparent collection electrodes.132 This measurement method combines the use of wavelength-dependent absorption depth with interference filters to create a stack of sensors having strong wavelength-dependent signal collection. The implementation of Ref. 132 so far allows only three spectral channels to be resolved per pixel, but the approach shows promise to allow resolution of more spectral channels.

Multilayer QWIPs are an alternative approach that has seen considerable research.130 Mitra et al., for example, present an IR detector consisting of a stack of multiple quantum well absorbers coupled through a diffractive resonant cavity.133 So far, this technique has been limited to three spectral channels, though the concept is generalizable to more.

Transverse field detection is a concept recently developed by Longoni et al.,134,137 allowing for depth-resolved detection of absorbed photons within the detection layer. Sensor electrodes spaced along the surface of the detector array are biased to different voltages in order to generate the transverse electric fields needed for each electrode to preferentially collect photocarriers produced at different depths. While this trades off spatial resolution for depth resolution, it provides a flexible method for depth-resolved detection.

In general, for multispectral detection (>3 spectral channels), each of the filterless approaches is still under development, and thus considerable work remains before they can be deployed for practical use.

4.9. Image-Replicating Imaging Spectrometer (IRIS, 2003)

Lyot invented his tunable filter in 1938 based on the idea of using polarizers to turn the wavelength dependence of retardation in thick waveplates into a wavelength dependence in transmission.138 Although the instrument was refined by others to use a different layout139 and to allow wider fields of view,140,141 it could never measure more than one spectral channel at once. In 2003, Harvey and Fletcher-Holmes described a generalization of Lyot’s filter in which the polarizers are replaced with Wollaston beamsplitting polarizers.61 By splitting each incident beam in two, this technique allows one to view a second spectral channel in parallel. By incorporating N Wollaston polarizers into the system, one can view 2^N scenes simultaneously. The resulting layout, for a setup using three Wollaston polarizers, is shown in Fig. 11.
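A heavily simplified numerical model of such a cascade (Lyot-type cos²/sin² transmission factors; the thicknesses and birefringence below are illustrative values, not a real design, and polarization aberrations are ignored) shows how N stages generate 2^N distinct spectral transmissions:

```python
import numpy as np
from itertools import product

def iris_band_transmissions(wavelengths_um, thicknesses_um, delta_n=0.01):
    """Spectral transmissions of the 2**N outputs of an IRIS-like cascade.

    Stage k is a waveplate of thickness t_k and birefringence delta_n; each
    Wollaston routes the cos**2 and sin**2 components of that stage into
    separate beams, so N stages yield 2**N subimages.
    """
    bands = []
    for signs in product((0, 1), repeat=len(thicknesses_um)):
        t = np.ones_like(wavelengths_um)
        for s, thick in zip(signs, thicknesses_um):
            phase = np.pi * delta_n * thick / wavelengths_um
            t = t * (np.cos(phase) ** 2 if s == 0 else np.sin(phase) ** 2)
        bands.append(t)
    return np.array(bands)               # shape (2**N, len(wavelengths))

wl = np.linspace(0.45, 0.65, 500)        # visible band, in micrometers
curves = iris_band_transmissions(wl, thicknesses_um=[20, 40, 80])  # 8 outputs
```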

Fig. 11

The system layout for an image-replicating imaging spectrometer (IRIS). (The object simulated here is from http://en.wikipedia.org/wiki/File:Colouring_pencils.jpg.)


The IRIS approach is an elegant solution that makes highly efficient use of the detector array pixels. So far, the IRIS approach has only been shown to operate with up to eight spectral bands,142 and it seems likely that the difficulty of obtaining large-format Wollaston polarizers that have sufficient birefringence and that can correct for polarization-dependent chromatic aberrations may limit this approach to about 16 spectral channels.143

4.10. Coded Aperture Snapshot Spectral Imager (CASSI, 2007)

CASSI was the first spectral imager attempting to take advantage of compressive sensing theory for snapshot measurement. Compressive sensing developed out of the work of Emmanuel Candès, Terence Tao, and David Donoho, and typically involves the use of L1-norm reconstruction techniques to reconstruct data that would be deemed insufficiently sampled by the Nyquist criterion. The phrase is not intended to refer to the broad category of reconstruction algorithms (such as computed tomography) that can sometimes be said to permit compressive measurement.

The concept for CASSI developed from a generalization of coded aperture spectrometry.34 Coded aperture spectrometers replace the entrance slit of a dispersive spectrometer with a much wider field stop, inside which is inserted a binary-coded mask (typically encoding an S-matrix pattern or a row-doubled Hadamard matrix,144 see Fig. 12). This mask creates a transmission pattern at each column within the slit such that each column’s transmission code is orthogonal to that of every other column, a property that follows directly from the orthogonality of the columns of a Hadamard matrix. The encoded light, transmitted by the coded mask within the field stop, is then passed through a standard spectrometer back-end (i.e., collimating lens, disperser, reimaging lens, and detector array). Because the columns of the coded mask are orthogonalizable, when they are smeared together by the disperser and multiplexed on the detector array, they can be demultiplexed during postprocessing. The resulting setup allows the system to collect light over a wide aperture without sacrificing the spectral resolution that one would lose by opening wide the slit of a standard slit spectrometer. The tradeoff is a factor of two in light loss at the coded mask and some noise enhancement due to the signal processing.

Fig. 12

Top: the system layout for a coded aperture snapshot spectral imager (CASSI), showing only the single-disperser configuration. Bottom: the pattern on the detector array due to imaging a coded aperture mask through a disperser, for an object that emits only three wavelengths (the wavelengths used in the example image here are the shortest, middle, and longest wavelengths detected by the system).


The theory of orthogonal codes only requires that the light is uniformly distributed in one axis; the other axis can be used for imaging. This is analogous to a slit spectrometer, which can image along its entrance slit. Using an anamorphic objective lens, one can achieve this by imaging the entrance pupil onto the field stop in one axis, while imaging the object onto the field stop along the orthogonal axis. Although one can consult Refs. 145, 146, and 147 for further details on coded aperture spectral imaging, none of these sources mention the anamorphic front optics needed to achieve line imaging with snapshot coded aperture spectrometry.

Compressive sensing allows one to take a similar procedure and apply it to snapshot spectral imaging, measuring (x,y,λ) in a snapshot and not just (x,λ). The primary differences from the slit imaging case are that the aperture code is no longer an orthogonal matrix but a random binary matrix and that the reconstruction algorithm becomes much more complex. The generalization proceeds as follows. If one replaces the anamorphic objective lens with a standard one, and images the object directly onto the coded aperture mask, then the irradiance projected onto the detector array after passing through the disperser will be a mix of spatial and spectral elements of the datacube (see Fig. 12). The spatial-spectral projection at the detector array is modulated by the binary mask in such a way that each wavelength of the datacube experiences a shifted modulation code. If this code satisfies the requirements of compressive sensing, then this is all one needs in order to use compressive sensing reconstruction algorithms to estimate the object datacube.34 The resulting system layout is not only extremely compact, but also uses only a modest size detector array, so it is capable of imaging at high frame rates. Wagadarikar et al. showed the ability to capture 248×248×33 datacubes at 30 fps, though postprocessing of the raw data to produce a hypercube (video datacube sequence) consumed many hours of computer time.148 More recent algorithms are much faster.36
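The single-disperser forward model just described reduces to a masked shift-and-sum, sketched below (Python with NumPy, assuming exactly one pixel of dispersion per spectral channel):

```python
import numpy as np

def cassi_forward(cube, mask):
    """Single-disperser CASSI measurement (toy model).

    cube: (Ny, Nx, Nw) datacube; mask: (Ny, Nx) binary coded aperture.
    Every channel sees the same mask, is sheared by one pixel per channel
    by the disperser, and all channels sum on the detector.
    """
    Ny, Nx, Nw = cube.shape
    det = np.zeros((Ny, Nx + Nw - 1))
    for w in range(Nw):
        det[:, w : w + Nx] += cube[:, :, w] * mask
    return det

mask = (np.random.rand(64, 64) > 0.5).astype(float)  # random binary code
g = cassi_forward(np.random.rand(64, 64, 16), mask)  # (64, 79) detector frame
```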

Though compressive sensing holds out great promise for future instrument development, designers have not yet succeeded in creating an architecture that replicates its basic requirements well. In rough form, one can summarize the requirements as follows. The most basic feature is that the measured object has to be compressible in some space. For example, for an image represented in a wavelet space, it is well known that one can almost always use fewer coefficients than the number of pixels one wishes to reconstruct in the pixel domain—this is a compressible space for a typical image. If one were to measure the object such that each measurement value was the projection of the object onto a basis function in this compressive space, then one would need far fewer measurements than if the system measured all of a datacube’s voxels directly. Unfortunately, one generally cannot design an optical system such that the object can be measured in the compressive space directly (such as measuring an object’s wavelet coefficients). Compressive sensing, however, provides an alternative that is almost as good. Measurement vectors (we avoid calling them basis functions because they do not satisfy the general definition of a basis) that are composed of columns within a random incoherent measurement matrix have been shown to replicate the properties of measuring in a compressive basis with very high probability. Using this type of measurement matrix, however, comes with some additional requirements. First, the measurement vectors and the compressible basis functions must be mutually incoherent, which means that any element in one cannot be expressed as a sparse linear combination of elements from the other.149 One can think of this in rough terms as having measurement vectors that are highly spread out when expressed in the basis vectors of the chosen compressible space, and vice versa. Also, the measurement vectors must satisfy isotropy, which means that they have unit variance and are uncorrelated.150 Orthogonal matrices are one example of systems that satisfy this property.

Once a measurement is completed, the user applies a reconstruction algorithm to estimate the object datacube. The typical procedure is to choose the compressible basis in which to work (that is, one must know a priori a basis in which the object is compressible) and apply an algorithm that estimates both the magnitudes of the coefficients and the set of measurement vectors comprising the space described by the product of the sensing and signal-compression matrices. The algorithm estimates the coefficients and basis vectors by choosing an object representation that best approximates the actual measurement while penalizing representations with a larger number of coefficients (i.e., less sparse ones). For CASSI, a common choice of basis has been total variation space in the x-y dimensions (i.e., the object's gradient image is assumed to be highly compressible). While it is possible to adapt the reconstruction algorithm to search for an optimal compressible basis (so that the user need not know this a priori), doing so greatly burdens an already computationally intensive problem.
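As a sketch of this estimation loop, the following continues the forward-model example above and runs a plain iterative soft-thresholding (ISTA) reconstruction. For brevity it imposes sparsity directly on the voxel values, whereas published CASSI work more often penalizes total variation; the step size, threshold, and iteration count are arbitrary illustrative values.

```python
import numpy as np  # cassi_sd_forward is defined in the sketch above

def cassi_sd_adjoint(det, mask, nw):
    """Adjoint of the forward model: un-shear each band and re-apply the code."""
    ny, ncols = det.shape
    nx = ncols - nw + 1
    cube = np.zeros((ny, nx, nw))
    for k in range(nw):
        cube[:, :, k] = mask * det[:, k:k + nx]
    return cube

def ista_reconstruct(g, mask, nw, tau=0.05, step=0.05, iters=200):
    """Minimize ||g - A f||^2 + tau * ||f||_1 by iterative soft thresholding."""
    f = cassi_sd_adjoint(g, mask, nw)                   # initialize with A^T g
    for _ in range(iters):
        resid = cassi_sd_forward(f, mask) - g           # data-fit residual
        f -= step * cassi_sd_adjoint(resid, mask, nw)   # gradient step
        f = np.sign(f) * np.maximum(np.abs(f) - step * tau, 0.0)  # shrinkage
    return f
```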

The mathematics underlying this measurement approach has advanced rapidly in the last decade, but implementing its requirements in hardware has been challenging. One of the main obstacles is that, in order to obtain sufficient compression, the feature sizes used to create the coded projections are near the scale of the optical resolution. Not only does this mean that characterizing the system measurement matrix requires a great deal of care, but since current compressive sensing reconstruction methods are also sensitive to perturbations of the system matrix, the system is prone to artifacts. Because of these issues, subsequent implementations of CASSI have used linear scanning, such that a 640×480×53 datacube was reconstructed from a set of 24 frames (a full collection time of about 2 s).151 In comparison with equivalent scanning instruments, this final result is disappointing, as even the authors admitted (see the concluding remarks of Ref. 36), so it appears that considerable work remains before we can take full advantage of the compressive sensing paradigm.

4.11.

Image Mapping Spectrometry (IMS, 2009)

Image slicing, as accomplished by the IFS-M technique discussed in Sec. 4.1, is best suited for measurements with low spatial and high spectral resolution. For many applications such as microscopy, however, spatial sampling is the more important quantity, and spectral sampling with only 10 to 40 elements is more common. This makes the IFS-M an impractical approach in this field. While the microslicing implementation (IFS-μ) is capable of achieving much higher spatial sampling, this comes at the cost of a serious increase in system design complexity. An alternative approach is IMS. Like IFS-M, a microfaceted mirror is placed at an image plane. Unlike image slicing, however, many of the mirror facets share the same tilt angle, so that multiple slices of the image are mapped to each individual pupil. The resulting pattern, as seen by the detector array, resembles a scene viewed through a picket fence. If there are nine individual pupils in the system, then the visible gaps between the fence's slats span 1/9th of the fence period (see Fig. 13). Each facet image shows only thin slices of the scene, but there are nine such images on the detector array, each shifted by one gap width relative to the others, so assembling all nine subimages replicates the original scene (a toy model of this remapping is sketched below). The advantage of obtaining these facet images is that the elements of the scene are separated enough to leave space for dispersion. By allowing each pupil to be shared among many mirror facets, the system design becomes much more compact and allows for higher spatial resolution.
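The sketch below mimics the picket-fence mapping with plain row slicing (every n-th image row goes to the same pupil) and verifies that interleaving the subimages recovers the scene. It ignores dispersion and all optical effects, and the function names are ours.

```python
import numpy as np

def ims_map(image, n_pupils=9):
    """Toy image mapper: facets sharing a tilt angle send every n-th
    row (one 'slice') of the image to the same pupil."""
    return [image[p::n_pupils, :] for p in range(n_pupils)]

def ims_reassemble(subimages):
    """Interleave the picket-fence subimages to rebuild the scene."""
    n = len(subimages)
    rows = sum(s.shape[0] for s in subimages)
    out = np.zeros((rows, subimages[0].shape[1]))
    for p, sub in enumerate(subimages):
        out[p::n, :] = sub
    return out

scene = np.arange(81 * 10, dtype=float).reshape(81, 10)
assert np.allclose(ims_reassemble(ims_map(scene)), scene)
```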

Fig. 13

The image mapping spectrometer (IMS) system layout. Three different raw images are shown corresponding to a setup in which (a) the lorikeet is being imaged through the full system (as shown), (b) the lorikeet is being imaged in a system in which the prism array has been removed, and (c) a spectrally and spatially uniform object is being imaged through the full system. (For clarity, in all three examples of raw detector data shown here, the data are shown in color as if imaged by a color detector array, even though a monochromatic array is used in existing instruments.) (The lorikeet image is from http://commons.wikimedia.org/wiki/File:Rainbow_Lorikeet_MacMasters.jpg.)

OE_52_9_090901_f013.png

The first IMS instrument (called an ISS at the time) provided a 100×100×25 datacube using a large-format CCD array,62 but this was later improved to 350×350×46.152

As with image slicing (IFS-M), the primary drawback of the IMS is the very high precision required to cut the image mapper, which is the central element of the system. Current ultraprecision lathes have advanced to the point where it is possible to make these elements on monolithic substrates, though considerable care is involved.

4.12.

Snapshot Hyperspectral Imaging Fourier Transform Spectrometer (SHIFT, 2010)

The SHIFT spectrometer63,153 performs its spectral measurement in the time domain and acquires image information using a division-of-aperture approach. Conceptually, the idea is an extension of an earlier formulation—the multiple-image Fourier transform spectrometer (MIFTS) developed by Hirai in 1994.58 However, while the original MIFTS was based on a Michelson interferometer and lens array, the SHIFT spectrometer is based on a pair of birefringent Nomarski prisms behind a lenslet array. As depicted in Fig. 14, an N×M lenslet array images a scene through two linear polarizers surrounding a pair of Nomarski prisms, forming N×M subimages on a detector array. Rotating the prisms by a small angle δ relative to the detector array exposes each of the subimages to a different OPD. A 3-D interferogram cube can therefore be assembled by sequentially extracting each of the subimages, and Fourier transformation along the OPD axis of the interferogram cube reconstructs the 3-D datacube. This prism-based design allows for a reduced system volume and improved robustness to vibration.
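The postprocessing chain (cut out the lenslet subimages, stack them along the OPD axis, transform) can be sketched as follows. The 8×8 grid and the raster OPD ordering are illustrative assumptions; a real instrument must calibrate the OPD of every subimage.

```python
import numpy as np

def shift_reconstruct(raw, n=8, m=8):
    """Cut the detector frame into an n-by-m grid of lenslet subimages,
    stack them along the OPD axis (assumed raster order), and Fourier
    transform along that axis to estimate the spectra."""
    rows, cols = raw.shape[0] // n, raw.shape[1] // m
    interferogram = np.stack(
        [raw[i * rows:(i + 1) * rows, j * cols:(j + 1) * cols]
         for i in range(n) for j in range(m)], axis=-1)
    spectra = np.fft.rfft(interferogram, axis=-1)  # transform along OPD
    return np.abs(spectra)[..., 1:]                # drop the DC term

frame = np.random.default_rng(1).random((8 * 30, 8 * 40))
cube = shift_reconstruct(frame)
print(cube.shape)  # (30, 40, 32)
```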

Fig. 14

The system layout for a snapshot hyperspectral imaging Fourier transform spectrometer (SHIFT).

OE_52_9_090901_f014.png

As with the original MIFTS system, the SHIFT can be considered as the Fourier transform analog of the MAFC, and it offers many of the same advantages, such as compactness. Unlike the MAFC, however, it also offers continuously sampled spectra and is more easily fabricated due to its use of birefringent prisms. On the other hand, it also shares the MAFC’s disadvantage of suffering from parallax effects.

4.13.

Multispectral Sagnac Interferometer (MSI, 2010)

The MSI64 is an extension of channeled imaging polarimetry154 to imaging spectroscopy. The idea was demonstrated conceptually using the MSI depicted in Fig. 15. In this interferometer, incident light is divided by a beamsplitter into two counter-propagating components. The component initially reflected by the beamsplitter begins its propagation in the z direction, where it is diffracted away from the optical axis by grating G2. Reflection off mirrors M2 and M1 guides the beam to grating G1, where it is diffracted in the opposite direction. The component initially transmitted by the beamsplitter takes the converse path, so that both beams exit the interferometer collimated and dispersed laterally. The lateral dispersion induced by the gratings produces a lateral shear S between the emerging wavefronts, where S depends linearly on the free-space wavelength λ. Additionally, gratings G1 and G2 are multiple-order diffractive structures; i.e., the blaze contains deep grooves that impose more than one wave of OPD. When these high-order sheared beams converge through an objective lens onto a detector array, a 2-D spatial interference pattern is generated. The spatial frequency of this interference is directly proportional to the diffraction order and, therefore, directly related to the given order's spectral transmission.

Fig. 15

The system layout for a multispectral Sagnac interferometer (MSI).

OE_52_9_090901_f015.png

These interference fringes are measured by the detector array as a superposition of coincident amplitude-modulated spatial carrier frequencies. Taking a 2-D Fourier transform of the raw data allows each amplitude-modulated channel to be windowed and filtered in the frequency domain; inverse Fourier transformation then yields the 2-D spatial data corresponding to the unique spectral passband generated by each grating order.
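A minimal sketch of this window-and-invert demodulation for a single channel is given below; the rectangular frequency window, the carrier location, and the synthetic one-carrier fringe pattern are all illustrative assumptions.

```python
import numpy as np

def demodulate_channel(raw, carrier, halfwidth):
    """Isolate one amplitude-modulated carrier in the 2-D frequency
    domain and return its spatial envelope (one spectral passband).

    carrier: (fy, fx) channel center in FFT index units.
    halfwidth: half-size of a crude rectangular window."""
    F = np.fft.fftshift(np.fft.fft2(raw))
    cy, cx = raw.shape[0] // 2, raw.shape[1] // 2
    win = np.zeros_like(F)
    fy, fx = carrier
    win[cy + fy - halfwidth:cy + fy + halfwidth + 1,
        cx + fx - halfwidth:cx + fx + halfwidth + 1] = 1.0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * win)))

# Synthetic frame carrying a single x-direction carrier at 10 cycles/frame:
y, x = np.mgrid[0:128, 0:128]
raw = 0.5 * (1 + np.cos(2 * np.pi * 10 * x / 128))
band = demodulate_channel(raw, carrier=(0, 10), halfwidth=3)
```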

This system is essentially a multispectral approach in which many unique spectral slices are measured simultaneously on coincident interference fields. Its advantages include inherent spatial coregistration between the bands and simple postprocessing. Its disadvantages lie in its implementation: the spectral bands must correspond to the grating's diffraction orders, and only one dimension in the Fourier space can be used to modulate spatial and spectral information. Therefore, more work must be done to make this technique a viable competitor to the other methods mentioned here.

5.

Technology Comparisons

There are many ways to compare the various snapshot implementations, such as compactness, speed, manufacturability, ease of use, light efficiency, and cost. While these are all important, different system designers weigh each of these factors differently, so that any discussion can quickly devolve into an argument. In an attempt to avoid explicitly taking sides, we have opted to compare the various technologies on a more fundamental level—the efficiency with which they make use of their detector elements. Snapshot spectral imagers generally make use of large detector arrays and can push the limits of existing detector technology, so that their efficiency in using detectors correlates closely with other important issues such as compactness, speed, and cost. Allington-Smith1 has previously termed this metric the specific information density Q: the product of the optical efficiency η (i.e., average optical transmission times the detector quantum efficiency) with what can be called the detector utilization ζ. The utilization is the number of Nyquist-resolved elements R in the imaging spectrometer datacube divided by the number of detection elements M (pixels) required to Nyquist-sample those voxels. Here R = R_x R_y R_w, where R_x, R_y, and R_w denote the datacube resolution elements in the x, y, and λ directions. We modify the definition of ζ slightly from that of Allington-Smith so that the numerator in ζ instead represents the number of voxel samples N required to achieve R. Thus, for a Nyquist-sampled system, the two definitions for Q differ by a factor of two in each dimension: Allington-Smith's ideal value for Q is 1/8, whereas the ideal value under our definition is Q = 1. Letting M_u and M_v denote the 2-D detector sampling elements, we have

\[
Q = \eta \, \frac{N_x N_y N_w}{M_u M_v}
\]
for optical efficiency η. Allington-Smith also obtains specific formulas for Q for each instrument in terms of system design parameters such as the aperture diameter and system magnification. In order to show that the value of Q among technologies stems from even more fundamental considerations than these, we assume ideal conditions for each instrument type and derive the detector utilization from the required margins at the focal plane needed to prevent significant crosstalk among elements of the datacube. Here crosstalk is defined as the condition in which multiple voxels within the measured datacube each collect a significant amount of signal from the same voxel in the true object datacube while not being physically adjacent to one another in the datacube. For voxels satisfying this condition but that are physically adjacent, we call the effect blur rather than crosstalk.
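In code form the metric is just a ratio; the following helper (our own naming, for illustration) evaluates Q for a given cube size, efficiency, and pixel count, and is reused conceptually in the comparisons below.

```python
def q_metric(eta, nx, ny, nw, m_pixels):
    """Specific information density Q = eta * N / M, where N = Nx*Ny*Nw
    is the number of datacube samples and M the detector pixels used
    (ideal value Q = 1 under the definition adopted here)."""
    return eta * nx * ny * nw / m_pixels

print(q_metric(1.0, 350, 350, 46, 350 * 350 * 46))  # ideal case -> 1.0
```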

For the optical efficiency estimates η of each technology, we assume ideal components: lenses, mirrors, prisms, and gratings have no losses (100% transmission or reflectivity), and all detectors have an external quantum efficiency of 1.

One of the reasons why we choose the detector utilization ζ to define a metric for comparing technologies is that it is in many ways a proxy for other important measures such as manufacturability and system size. The connection arises because, in various ways, all of the snapshot techniques encode the spectral information by expanding the system étendue. If all things are held constant except the wavelength dimension of the cube, then, in every instance, increasing N_w requires increasing étendue. This quickly runs into difficult design constraints: for high-performance systems, one can only increase étendue by using larger and more expensive optics (i.e., larger-diameter optical elements that can also handle a wide range of angles). Thus, snapshot systems with lower ζ will generally reach this design ceiling before higher-ζ systems will, and either system size or the angular acceptance of the optics must compensate for the difference in ζ.

The basic premise from which we derive the detector utilization ζ for each technology is that each technique requires a margin around each subsection of the datacube, without which blurring will cause significant crosstalk. For some technologies, smaller margins are easier to achieve than for others, but this factor is ignored here. Those technologies that minimize the number of marginal pixels make the most efficient use of a given detector array (have the highest utilization ζ), but the actual value of ζ depends on the aspect ratios of the datacube dimensions. For example, from Fig. 16 we can see that the IFS-L and IFS-F technologies use a similar format of projecting elements of the datacube onto a 2-D detector array: each individual spectrum is dispersed, and because its neighbor spectrum is not necessarily a neighboring spatial element, a margin must be used around each spectrum to minimize crosstalk. If each spectrum is allowed a margin of s pixels, then the number of detector pixels M needed to capture an (N_x, N_y, N_w) datacube can be determined as follows. For each individual spectrum in an IFS-L or IFS-F with s = 1, Fig. 16 shows that we need N_w pixels for the spectrum itself, 2N_w pixels for the margins above and below it, and 3 pixels for the gap separating it from the next spectrum along the dispersion direction (each such gap is shared between neighboring spectra, so it is counted only once). Doing the same calculation for s > 1, we see that each spectrum uses a rectangle on the detector array of (N_w + s) × (2s + 1) pixels. Multiplying this by the total number of spectra in the datacube, N_x N_y, we have

\[
M_{\text{IFS-F}} = N_x N_y (N_w + s)(2s + 1).
\]

Fig. 16

Diagrams showing how the detector utilization formulas are calculated for each architecture, given the basic layout of how the datacube is projected onto the two-dimensional detector array. Each square shown here represents a single pixel on the detector array. For clarity, each subfigure assumes N_x = N_y = 5, N_w = 16, and s = 1. This value for the margin s is a practical minimum; working instruments use s ≥ 1 in order to prevent serious problems with crosstalk. The MSI data are not shown here because of its complex layout.

OE_52_9_090901_f016.png

The value for ζ ≡ N/M follows directly from this equation as

\[
\zeta_{\text{IFS-F}} = \frac{N_x N_y N_w}{N_x N_y (N_w + s)(2s + 1)} = \frac{N_w}{(N_w + s)(2s + 1)}.
\]

If the system architecture requires two pixels to measure each voxel in the datacube, then the utilization is ζ=0.5.

For the IFS-M, IFS-μ, and IMS technologies, an N_y × N_w swath is measured in a contiguous region on the detector array, so that each swath requires a rectangular space of (N_w + 2s)(N_y + 2s) pixels. Multiplying by the total number of x-resolution elements in the datacube gives

\[
M_{\text{IFS-M}} = N_x (N_y + 2s)(N_w + 2s).
\]

For the IRIS, TEI, MSBS, and MAFC technologies, each single-channel slice of the datacube is measured as a contiguous region, so that each wavelength requires a rectangular space of (N_x + 2s)(N_y + 2s) pixels, and the total number of pixels needed is

\[
M_{\text{IRIS}} = (N_x + 2s)(N_y + 2s)\, N_w.
\]

For the filter-array implementation of SRDA, each pixel samples an individual voxel, so that the utilization is inherently equal to 1. In the case of CASSI, we find that M_CASSI = (N_x + N_w − 1) N_y < N; that is, the utilization is ζ > 1. In fact, the greater the number of wavelengths in the datacube, the greater the utilization for CASSI. Note that, due to the difficulty of achieving the micron-scale imaging required to map code elements 1:1 onto detector pixels, existing CASSI instruments map each code element onto 2×2 detector pixels, so that they use about four times as many detector pixels as the theoretical value given here, i.e., M_CASSI(practical) = 4(N_x + N_w − 1) N_y. The sketch below evaluates these pixel-count formulas side by side.
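To compare the formulas numerically, the short script below (our own construction) evaluates M and ζ = N/M for a 350×350×46 datacube with s = 1, following the expressions derived above and collected in Table 2.

```python
def pixels_required(tech, nx, ny, nw, s=1):
    """Detector pixels M needed for an (Nx, Ny, Nw) datacube with
    margin s, per the formulas derived in the text."""
    if tech in ("IFS-F", "IFS-L"):
        return nx * ny * (nw + s) * (2 * s + 1)
    if tech in ("IFS-M", "IFS-u", "IMS"):
        return nx * (ny + 2 * s) * (nw + 2 * s)
    if tech in ("IRIS", "TEI", "MSBS", "MAFC", "SHIFT"):
        return (nx + 2 * s) * (ny + 2 * s) * nw
    if tech == "SRDA":
        return nx * ny * nw
    if tech == "CASSI":
        return (nx + nw - 1) * ny
    raise ValueError(tech)

nx, ny, nw = 350, 350, 46
for tech in ("IFS-F", "IMS", "MAFC", "SRDA", "CASSI"):
    m = pixels_required(tech, nx, ny, nw)
    print(f"{tech:6s} M = {m:9d}   zeta = {nx * ny * nw / m:.2f}")
```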

There are two architectures used for CASSI: a single-disperser design (CASSI-SD) and a dual-disperser configuration (CASSI-DD). Ref. 155, in describing the CASSI-DD measurement principle, contains errors implying that CASSI-DD is even more compressive than CASSI-SD, so that M_CASSI-DD = N_x N_y, achieving a detector utilization equal to N_w. The error can be found by a careful look at Fig. 1 in Ref. 151, as well as in the mathematical description of Sec. 2 there. Whereas the authors indicate that the form of the data at the detector array is a cube, the architecture shows that it must in fact be a skewed cube (an "oblique cuboid"). This error also implies that the results reported in the paper are in fact corrupted with spatial misregistration errors.

Table 2 summarizes the η and M values used to calculate Q for each technology. In the table, note that for the computational sensors (CTIS and CASSI), the number of datacube voxels is related to the number of resolution elements N not through the Nyquist sampling limit but through more complex criteria. When calibrating these computational sensors, M is technically an arbitrary value, but in practice one finds little value in allowing M to exceed the values shown in the table. In addition, the SRDA row in Table 2 assumes that the implementation uses the filter-array camera. From Table 2 we can see that the MAFC/MSBS technologies offer the highest Q for high-spatial/low-spectral resolution datacubes (squat cubes), whereas the IFS-M/IFS-μ/IMS options offer the highest Q for low-spatial/high-spectral resolution datacubes (tall cubes). The latter do especially well when the spatial dimensions of the datacube are rectangular, with N_y exceeding N_x. As indicated in Table 2, the IRIS approach behaves exactly as the MAFC/MSBS technologies do, but loses a factor of two due to its need to work with polarized input. The IFS-L/IFS-F approaches suffer a 3× loss in Q relative to the mirror-based IFS technologies due to the extra factor of (2s + 1) in the formula for M given in Table 2, arising from the need to separate all spatial elements from one another to avoid crosstalk.

Table 2

The classification type of each technology, and ideal values for the optical efficiency η and the number of detector pixels used (M = N/ζ) for each snapshot technology.

Technology   Class   η       M (pixels used)
IFS-F        F       1       N_x N_y (N_w + s)(2s + 1)
IFS-L        F       1       N_x N_y (N_w + s)(2s + 1)
IFS-M        F       1       N_x (N_y + 2s)(N_w + 2s)
IFS-μ        F       1       N_x (N_y + 2s)(N_w + 2s)
IMS          F       1       N_x (N_y + 2s)(N_w + 2s)
IRIS         A       1/2     (N_x + 2s)(N_y + 2s) N_w
MAFC         P       1       (N_x + 2s)(N_y + 2s) N_w
MSBS         A       1       (N_x + 2s)(N_y + 2s) N_w
MSI          F       1/4     N_x N_y (2N_w + 1)
SHIFT        P       1/4     (N_x + 2s)(N_y + 2s) N_w
SRDA         F       1       N_x N_y N_w
TEI          A+F     1/N_w   (N_x + 2s)(N_y + 2s) N_w
CTIS         A*      1/3     N
CASSI        X*      1/2     (N_x + N_w − 1) N_y
Note: classifications are division of amplitude (A), division of field (F), division of pupil (P), and randomly encoded (X); an asterisk (*) marks a computational sensor.

Each of the technologies listed in Table 2 is also classified according to the method used to divide the light into voxel elements. The majority of technologies use division of field (F), also called division of focal plane, in which the light is either filtered or divided into separate beams according to its placement within the image. Division of amplitude (A) is the next most common method, in which the light is divided into separate beams by allocating a portion of the light into each beam, as a simple cube beamsplitter does. Only two other methods exist: division of pupil (P), also called division of aperture, and the random encoding (X) used for compressive sensing.

5.1.

Comments on Instrument Throughput

Fellgett156 and Jacquinot157 were the first researchers to compare light collection efficiency across spectrometer technologies, and their work led to the categorization of what are now commonly referred to as the Fellgett (multiplex) advantage and the Jacquinot (throughput) advantage, both of which are widely associated with Fourier transform and Fabry-Perot spectroscopy.158 More recently, researchers have argued that with the advance of detectors to 2-D array formats, and with the majority of optical detector arrays used from the ultraviolet to the MWIR now being shot-noise-limited, neither advantage still provides the improvement in SNR that it once did.159 Sellar and Boreman, on the other hand, argue that while this appears to be true for the Fellgett advantage, imaging Fourier transform spectrometers (imaging FTS, or IFTS) retain the Jacquinot advantage not because of their higher étendue but because they are able to maintain a longer dwell time on each datacube voxel than alternative technologies can.38 The authors also provide a convincing case that the Jacquinot advantage can be considered as freedom from the requirement of an entrance slit, while the Fellgett advantage can be considered as freedom from the requirement of an exit slit. For filterless snapshot imaging spectrometers, both of the traditional advantages are automatically satisfied: no exit slit is used (Fellgett), and the instrument dwell time on every voxel equals the full measurement period (Jacquinot).

It is useful to note that FTS approaches to scanning and snapshot spectral measurement suffer from sampling efficiency losses at low spectral resolution, since a substantial portion of the reconstructed spectrum (located at very low wavenumbers) lies outside the system's true spectral response range and must be discarded. For example, an instrument whose response spans roughly 400 to 700 nm reconstructs spectral samples from zero wavenumber upward, yet only the samples above about 14,300 cm^-1 carry signal. In detector-noise-limited applications, this effect is mitigated by the fact that while a fixed percentage of these samples do not contribute to actual spectral samples after Fourier transformation, they do contribute to improving SNR in the measured spectrum.160,161 Spatial heterodyne interferometry can be used to overcome the FTS sampling limitation at low spectral resolution.162–165

In a previous publication, we tried to steer the throughput comparison away from its historical focus on étendue,101 since the complexity of modern snapshot instruments makes any fundamental limits on étendue difficult to determine. Moreover, it has also been argued that, for scanning instruments at least, the differences in étendue among different technologies are not large.38 Rather, we focus on a more important factor—the portion of datacube voxels that are continuously visible to the instrument. For scanning systems, this portion can be quite low (often <0.01), while filterless snapshot systems can achieve a value of 1 (i.e., all voxels are continuously sensed during the measurement period). This creates a large difference in light collection—a difference we have termed the snapshot advantage. While a snapshot instrument's absence of motion artifacts and ability to work without moving parts are both important, the snapshot advantage in light collection is the difference from scanning systems that holds the most promise for opening up new applications.

While not all snapshot implementations can be considered equal, Table 2 indicates that all but one technology (TEI) have optical efficiency values within a factor of four of one another. For most of the technologies summarized in the table, the efficiency values shown are straightforward to obtain and are generally not subject to major disagreement. Perhaps surprisingly, the MAFC is the exception: since the argument leading to our choice of η = 1 for the MAFC requires a lengthy discussion, it has been moved to the Appendix.

5.2.

Using Snapshot Instruments in Scanning Applications

Pushbroom-configuration spectral imagers have long been used on moving platforms in remote sensing because their view geometry is well suited to the measurement geometry: linear motion along one axis provides the scanning needed to fill out the third dimension of the data, so that no moving parts are required in the system. Until now, snapshot spectral imaging systems have been absent from environmental remote sensing, but computational speeds and data transmission rates have now reached a level at which one can fully utilize the snapshot advantage in light collection to improve SNR. Because these measurements take place from a moving platform, achieving the snapshot advantage requires performing what one can call video-rate software time delay integration (TDI). That is, with each acquired frame, the system must coregister the new datacube with the previous set and add them, building a single high-SNR datacube from the entire sequence of data. Figure 17 shows how this works. Unlike hardware TDI,166–169 where the data are not digitized until they have been fully summed, software TDI performs the summing after digitization and so is more prone to detector and digitization noise. In the regime of shot-noise-limited data, however, these effects are small. While it is possible, in principle, to design specialized detector arrays capable of performing hardware TDI for a given snapshot imaging spectrometer, such arrays would be highly specialized and thus expensive and difficult to obtain.
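A whole-pixel sketch of the coregister-and-add step is shown below; real systems must resample to subpixel accuracy using calibrated platform motion, so the integer along-track shifts here are a simplifying assumption, and the function name is ours.

```python
import numpy as np

def software_tdi(datacubes, shifts):
    """Coregister each frame's datacube by its known along-track pixel
    shift and accumulate, averaging where frames overlap.

    datacubes: list of (Ny, Nx, Nw) arrays from successive frames.
    shifts: along-track offset (pixels) of each frame."""
    ny, nx, nw = datacubes[0].shape
    span = nx + max(shifts)
    acc = np.zeros((ny, span, nw))
    hits = np.zeros((ny, span, 1))
    for cube, dx in zip(datacubes, shifts):
        acc[:, dx:dx + nx, :] += cube
        hits[:, dx:dx + nx, :] += 1
    return acc / np.maximum(hits, 1)

frames = [np.random.default_rng(i).random((32, 64, 8)) for i in range(5)]
mosaic = software_tdi(frames, shifts=[0, 1, 2, 3, 4])
print(mosaic.shape)  # (32, 68, 8)
```

For shot-noise-limited data, averaging k overlapping frames in this way improves the SNR of each voxel by a factor of √k.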

Fig. 17

A diagram showing the two types of time delay integration (TDI): (a) hardware TDI and (b) software TDI. In conventional (hardware) TDI, the pixel data are shuttled across the detector at the same rate as the image motion. The sequence shown here consists of five frames in which the image moves diagonally across the array. In (b), the same basic concept is employed, but instead of shuttling charges across the array, the datacubes are first digitized and then registered and added in postprocessing. (The lorikeet image is from http://commons.wikimedia.org/wiki/File:Rainbow_Lorikeet_MacMasters.jpg.)

OE_52_9_090901_f017.png

As an illustration, a snapshot system capable of collecting a 200×500×200 datacube at standard frame rates (on the order of 100 fps) can use software TDI to dwell on a given spatial region 200 times longer than an equivalent pushbroom spectrometer can, allowing a factor of √200 ≈ 14 improvement in SNR for shot-noise-limited data. Recent advances in data transmission formats (such as multilane CoaXPress, Camera Link HS, and SNAP12 fiber optics) have shown that the transmission rates required by such a setup are now achievable in commercially available hardware. Moreover, because of the parallel nature of the software TDI operation on datacubes, recent GPUs can process this data stream at high enough rates to keep up. Together, these developments make the full snapshot advantage realizable even for moving platforms that are nominally optimized for pushbroom operation.

5.3.

Disadvantages of Snapshot

Snapshot approaches are not without their tradeoffs. The system design is generally more complex than for scanning systems and makes use of recent technology such as large FPAs, high-speed data transmission, advanced manufacturing methods, and precision optics. Moreover, the snapshot advantage in light collection often can be fully realized only by tailoring the design to its application. For example, we have taken outdoor measurements with an (N_x, N_y, N_w) = (490, 320, 32) snapshot imaging spectrometer that reads out at 7 fps, but used an exposure time of only 6 ms to avoid saturation. This exposure time is poorly matched to the readout rate (a duty cycle of only about 4%), so that most of the snapshot system's light collection advantage is thrown away. An application for which these are much better matched would require a much dimmer scene or a much faster readout rate.

It is also important to recognize that there are measurement configurations for which snapshot spectral imaging is actually impossible to realize. Confocal microscopy is a clear example: here the light is confined by a small aperture in order to reject light emerging from unwanted regions of the sample (i.e., outside the focal volume).170 This method of rejecting unwanted light means that only one spatial point is in view at any given time, and one must raster-scan the optics across the sample to generate a complete (x,y,λ) spectral image, which by its nature prevents a snapshot implementation. On the other hand, for volumetric imaging microscopy there exists an alternative technique—structured illumination microscopy—that is compatible with widefield imaging, and thus with snapshot spectral imagers, and that in some cases can achieve better SNR than confocal microscopy.152,171

An additional difficulty with snapshot systems is the sheer volume of data that must be handled in order to take full advantage of them. Only recently have commercial data transmission formats become fast enough to fully utilize a large-format snapshot imaging spectrometer for daylight scenes (multilane CoaXPress is an example of such a format). There are ways of reducing the data glut. For moving platforms, performing software TDI prior to transmitting the data allows high-SNR data without requiring any bandwidth beyond that used by scanning systems. For target detection and tracking systems, one can run detection algorithms onboard prior to transmission, so that rather than the full cube, one only needs to transmit the detection algorithm's result. For transmitting complete datacubes, one can also resort to onboard compression.172

6.

Conclusions

Over the past 30 years, scanning techniques have seen an impressive improvement in performance parameters, including calibration stability, SNR, and spatial, spectral, and temporal resolution. This trend can be attributed to larger detector arrays, reduced detector noise, improved system design, and better optical/optomechanical manufacturing; but the underlying technology and concepts have not changed significantly in this time period.

The advent of large-format (>4 megapixel) detector arrays some 20 years ago brought with it the capability to measure millions of voxels simultaneously, and it is this large-scale measurement capacity that makes snapshot spectral imaging practical and useful. Almost all research in snapshot spectral imagers uses novel 2-D multiplexing schemes, each of which involves fundamental tradeoffs in detector pixel utilization, optical throughput, etc. While many advantages can be realized for these snapshot systems over their temporally scanned counterparts, it is only by making use of large arrays of detector elements that these advantages can be achieved. And it is only in the past 10 years that the spatial and spectral resolution achieved by snapshot imaging systems has become sufficient for the devices to be commercially viable. We can anticipate that the snapshot advantage will open up a number of new applications that leverage the improvements in light collection, temporal resolution, or ruggedness. The next 10 years should see further improvements in the technologies reviewed here, with continued advancements in detector array technology, optical fabrication, and computing power.

Appendices

Appendix:

Optical Efficiency of MAFC Systems

In order to explain our choice of η=1 for MAFC’s efficiency factor, we attempt to provide an argument from two sides and explain why we feel one perspective should be given more weight.

One way to view the optical throughput is from a voxel's view of the system. That is, we consider a voxel emitting light as a Lambertian source, fully and uniformly illuminating the instrument's pupil. When comparing each instrument, if we set the pupil area and system focal length (and thus the f-number) to be the same for all instruments, then the efficiency is simply the fraction of light entering the pupil that reaches the detection layer. For the MAFC, light emitted by the object voxel illuminates the system pupil, but only one of the N_w lenses in the MAFC objective lens array can transmit the voxel's light to the detection layer. The remaining N_w − 1 lenses must reject this voxel's light, so from the voxel perspective, the MAFC is effectively performing spatial filtering of the pupil, and the transmitted light flux is reduced by 1/N_w. Thus this perspective argues for giving the MAFC an efficiency η = 1/N_w.

A second way to view the optical efficiency is to ask how the system's optical efficiency scales with a change in the number of wavelengths N_w. For the MAFC, increasing N_w means increasing the number of lenses in the objective lens array, and with them the number of filters as well. When scaling a lens array like this, if one momentarily ignores the filters, it can readily be observed that the irradiance on the detector is invariant to scale when viewing extended objects. This contradicts the voxel view above—a fact that can be explained as follows.

When we increase the number of lenses in the array by a scale factor S² (e.g., S = 2 increases the number of lenses by four), the lens focal lengths drop by the factor S. If the scaled, smaller lenslets have the same f-number that the unscaled, larger lenslets had, then the irradiance at the focal plane (ignoring filters) is independent of the number of lenslets when imaging an extended object. This would appear to conflict with the voxel view expressed above. The feature that is easy to miss is that by scaling the lenses we have changed the magnification, so that the voxel we were imaging in the unscaled system is now 1/S² of a voxel in the scaled system. The scaled version is effectively integrating across larger regions of the object. One can also explain this as an increase in the étendue of the pupil. That is, after scaling the system, the maximum range of angles passed by the pupil (what may be called the pupil acceptance angle) has increased by S in each axis. And this is the difference between the two perspectives on how to measure the system's efficiency: the pupil étendue of the scaled system is S² times that of the unscaled system, effectively cancelling the 1/N_w optical efficiency factor. Thus, although increasing N_w means that the spectral filters in the system transmit a smaller fraction of the incident light, the shorter focal length lenses allow a higher étendue, and the two effects cancel to give η = 1.
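The bookkeeping of this cancellation can be written out in one line; the assumption (ours, for illustration) is that the array grows from a single lens to N_w lenslets at fixed f-number, so that S² = N_w:

\[
\eta_{\text{net}}
= \underbrace{\frac{1}{N_w}}_{\text{filter loss}} \times \underbrace{S^{2}}_{\text{etendue gain}}
= \frac{S^{2}}{N_w} = 1
\qquad \text{when } S^{2} = N_w .
\]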

At this point, neither of the two views can be described as the correct one—they simply describe different properties of the system. So we make use of some empirical observations. In general, one finds that MAFC systems have a substantially larger system pupil (the pupil of each individual objective lens multiplied by the number of lenses in the array) than do the objective lenses of other instruments such as the CTIS, IRIS, IMS, etc. Moreover, one can observe that the MAFC also has a larger pupil acceptance angle than do the other systems. However, if we compare the MAFC's pupil étendue with, say, the IMS's étendue—measured not at the IMS objective lens pupil but rather at the system pupil of the IMS's back-end lenslet array—then we obtain comparable values. Thus, the limiting étendue of these systems is generally determined not by their monolithic front optics but rather by their optical arrays. Although this stretches the definition of optical efficiency from a system performance perspective, it is clear that we should choose the system-scaling view over the voxel view, so that the MAFC's efficiency factor is η = 1 and not 1/N_w.

References

1. J. Allington-Smith, "Basic principles of integral field spectroscopy," New Astron. Rev. 50(4–5), 244–251 (2006). http://dx.doi.org/10.1016/j.newar.2006.02.024

2. P. J. C. Janssen, "Sur la méthode qui permet de constater la matière protubérantielle sur tout le contour du disque solaire [On a method to see the entire solar corona]," Comptes Rendus Acad. Sci. 68, 713–715 (1869).

3. P. J. C. Janssen, "Sur l'étude spectrale des protubérances solaires [Spectral study of solar prominences]," Comptes Rendus Acad. Sci. 68, 93–95 (1869).

4. A. A. Michelson, "On the conditions which affect the spectro-photography of the sun," Astrophys. J. 1(1), 1–10 (1895). http://dx.doi.org/10.1086/140001

5. G. E. Hale, "The spectrohelioscope and its work. Part I. History, instruments, adjustments, and methods of observation," Astrophys. J. 70(5), 265–327 (1929). http://dx.doi.org/10.1086/143226

6. C. Fabry and A. Pérot, "Sur les franges des lames minces argentées et leur application à la mesure de petites épaisseurs d'air [On the fringes of thin layers of silver and their application to the measurement of small thicknesses of air]," Annales de Chimie et de Physique 12, 459–501 (1897).

7. A. Perot and C. Fabry, "Sur l'application de phénomènes d'interférence à la solution de divers problèmes de spectroscopie et de métrologie [On the application of interference phenomena to the solution of various problems in spectroscopy and metrology]," Bulletin Astronomique, Serie I, 16, 5–32 (1899).

8. C. Fabry and A. Perot, "On a new form of interferometer," Astrophys. J. 13, 265–272 (1901). http://dx.doi.org/10.1086/140817

9. I. S. Bowen, "The image slicer, a device for reducing loss of light at slit of stellar spectrograph," Astrophys. J. 88(2), 113–124 (1938). http://dx.doi.org/10.1086/143964

10. T. Walraven and J. H. Walraven, "Some features of the Leiden radial velocity instrument," in Auxiliary Instrumentation for Large Telescopes, 175–183, European Southern Observatory (1972).

11. A. K. Pierce, "Construction of a Bowen image slicer," Publ. Astron. Soc. Pac. 77(456), 216–219 (1965). http://dx.doi.org/10.1086/128199

12. E. H. Richardson, "An image slicer for spectrographs," Publ. Astron. Soc. Pac. 78(465), 436–437 (1966). http://dx.doi.org/10.1086/128382

13. E. H. Richardson, "The spectrographs of the dominion astronomical observatory," J. Roy. Astron. Soc. Canada 62(6), 313–330 (1968).

14. D. Hunten, "Reshaping and stabilization of astronomical images," in Astrophysics Optical and Infrared, Series: Methods of Experimental Physics, 193–220, Academic Press, New York (1974).

15. L. Weitzel et al., "3D: the next generation near-infrared imaging spectrometer," Astron. Astrophys. Suppl. Ser. 119, 531–546 (1996). http://dx.doi.org/10.1051/aas:1996266

16. G. Avila and C. Guirao, "A Bowen-Walraven image slicer with mirrors," http://spectroscopy.wordpress.com/2009/05/12/inexpensive-image-slicer-with-mirrors/ (January 2013).

17. N. S. Kapany, "Fiber optics," in Concepts of Classical Optics, 553–579, Dover, Mineola, NY (2004).

18. C. Vanderriest, "A fiber-optics dissector for spectroscopy of nebulosities around quasars and similar objects," Publ. Astron. Soc. Pac. 92(550), 858–862 (1980). http://dx.doi.org/10.1086/130764

19. S. C. Barden and R. A. Wade, "DensePak and spectral imaging with fiber optics," in Fiber Optics in Astronomy, Vol. 3 of Astronomical Society of the Pacific Conference Series, 113–124, Astronomical Society of the Pacific (1988).

20. S. Arribas, E. Mediavilla, and J. L. Rasilla, "An optical fiber system to perform bidimensional spectroscopy," Astrophys. J. 369(1), 260–272 (1991). http://dx.doi.org/10.1086/169757

21. G. Courtès, "Méthodes d'observation et étude de l'hydrogène interstellaire en émission [Methods of observation and study of interstellar hydrogen in emission]," Annales d'Astrophysique 23(2), 115–244 (1960).

22. G. Courtes, "Le télescope spatial et les grands télescopes au sol [The space telescope and the large ground-based telescopes]," in Application de la Photométrie Bidimensionelle à l'Astronomie, 241–269, Astropresse, Toulouse (1980).

23. R. Bacon et al., "The integral field spectrograph TIGER," in Very Large Telescopes and Their Instrumentation, 1185–1194 (1988).

24. R. Chabbal, "Recherches expérimentales sur la généralisation de l'emploi du spectromètre Fabry-Perot aux divers domaines de la spectroscopie [Experimental research on widening the use of the Fabry-Perot spectrometer to the various fields of spectroscopy]," University of Paris (1958).

25. J. M. Helbert, P. Laforie, and P. Miche, "Nouveau spectromètre intégral haute résolution à étalon Fabry-Pérot utilisable dans tout le domaine visible et proche U.V. [New high-resolution integral spectrometer with a Fabry-Pérot etalon usable throughout the visible and near-UV]," Rev. Phys. Appl. (Paris) 12(3), 511–522 (1977). http://dx.doi.org/10.1051/rphysap:01977001203051100

26. C. Fabry and H. Buisson, "L'absorption de l'ultraviolet par l'ozone et la limite du spectre solaire [The ultraviolet absorption by ozone and the limit of the solar spectrum]," J. Physique 3, 196–206 (1913).

27. C. V. Raman, "The nature of the liquid state," Current Sci. 11, 303–310 (1942).

28. J. Braithwaite, "Dispersive multispectral scanning," (1966).

29. D. A. Landgrebe, "Multispectral land sensing: where from, where to?," IEEE Trans. Geosci. Rem. Sens. 43(3), 414–421 (2005). http://dx.doi.org/10.1109/TGRS.2004.837327

30. A. F. H. Goetz, "Three decades of hyperspectral remote sensing of the earth: a personal view," Rem. Sens. Environ. 113, S5–S16 (2009). http://dx.doi.org/10.1016/j.rse.2007.12.014

31. H. A. Macleod, Thin Film Optical Filters, 3rd ed., Institute of Physics, Philadelphia (2001).

32. T. Kanade and R. Bajcsy, "Computational sensors," San Mateo, CA (1993).

33. R. G. Baraniuk, "Compressive sensing," IEEE Sign. Proc. Mag. 24(4), 118–124 (2007). http://dx.doi.org/10.1109/MSP.2007.4286571

34. M. E. Gehm et al., "Single-shot compressive spectral imaging with a dual-disperser architecture," Opt. Express 15(21), 14013–14027 (2007). http://dx.doi.org/10.1364/OE.15.014013

35. N. Hagen and E. L. Dereniak, "Analysis of computed tomographic imaging spectrometers. I. Spatial and spectral resolution," Appl. Opt. 47(28), F85–F95 (2008). http://dx.doi.org/10.1364/AO.47.000F85

36. Q. Zhang et al., "Joint segmentation and reconstruction of hyperspectral data with compressed measurements," Appl. Opt. 50(22), 4417–4435 (2011). http://dx.doi.org/10.1364/AO.50.004417

37. A. F. H. Goetz et al., "Imaging spectrometry for Earth remote sensing," Science 228(4704), 1147–1153 (1985). http://dx.doi.org/10.1126/science.228.4704.1147

38. R. G. Sellar and G. D. Boreman, "Comparison of relative signal-to-noise ratios of different classes of imaging spectrometer," Appl. Opt. 44(9), 1614–1624 (2005). http://dx.doi.org/10.1364/AO.44.001614

39. M. T. Eismann, Hyperspectral Remote Sensing, SPIE Press, Bellingham, WA (2012).

40. A. R. Harvey et al., "Technology options for imaging spectrometry," Proc. SPIE 4132, 13–24 (2000). http://dx.doi.org/10.1117/12.406592

41. X. Prieto-Blanco et al., "Optical configurations for imaging spectrometers," Comput. Intell. Rem. Sens. 133, 1–25 (2008).

42. P. D. Atherton et al., "Tunable Fabry-Perot filters," Opt. Eng. 20(6), 806–814 (1981). http://dx.doi.org/10.1117/12.7972819

43. J. Antila et al., "Spectral imaging device based on a tuneable MEMS Fabry-Perot interferometer," Proc. SPIE 8374, 83740F (2012). http://dx.doi.org/10.1117/12.919271

44. N. Gupta, "Hyperspectral imager development at Army Research Laboratory," Proc. SPIE 6940, 69401P (2008). http://dx.doi.org/10.1117/12.777110

45. S. Poger and E. Angelopoulou, "Multispectral sensors in computer vision," (2001).

46. A. E. Potter, "Multispectral imaging system," U.S. Patent No. 3702735 (1972).

47. M. R. Descour, "The throughput advantage in imaging Fourier-transform spectrometers," Proc. SPIE 2819, 285–290 (1996). http://dx.doi.org/10.1117/12.258075

48. A. R. Harvey and D. W. Fletcher-Holmes, "Birefringent Fourier-transform imaging spectrometer," Opt. Express 12(22), 5368–5374 (2004). http://dx.doi.org/10.1364/OPEX.12.005368

49. J. M. Mooney, "Angularly multiplexed spectral imager," Proc. SPIE 2480, 65–77 (1995). http://dx.doi.org/10.1117/12.210909

50. C. Fernandez et al., "Longwave infrared (LWIR) coded aperture dispersive spectrometer," Opt. Express 15(9), 5742–5753 (2007). http://dx.doi.org/10.1364/OE.15.005742

51. M. E. Gehm et al., "High-throughput, multiplexed pushbroom hyperspectral microscopy," Opt. Express 16(15), 11032–11043 (2008). http://dx.doi.org/10.1364/OE.16.011032

52. K. W. Busch, "Multiple entrance aperture optical spectrometer," U.S. Patent No. 4375919 A (1985).

53. S. Chakrabarti et al., "High-throughput and multislit imaging spectrograph for extended sources," Opt. Eng. 51(1), 013003 (2012). http://dx.doi.org/10.1117/1.OE.51.1.013003

54. P. Connes and E. le Coarer, "3-D spectroscopy: the historical and logical viewpoint," in 3D Optical Spectroscopic Methods in Astronomy, 38–49, Astronomical Society of the Pacific, San Francisco (1995).

55. M. A. Bershady, "3D spectroscopic instrumentation," in XVII Canary Island Winter School of Astrophysics, 87–125, Cambridge University Press, Cambridge, England (2009).

56. J. Stoffels et al., "Color splitting prism assembly," U.S. Patent No. 4084180 A (1978).

57. T. Okamoto and I. Yamaguchi, "Simultaneous acquisition of spectral image information," Opt. Lett. 16(16), 1277–1279 (1991). http://dx.doi.org/10.1364/OL.16.001277

58. A. Hirai et al., "Application of multiple-image Fourier transform spectral imaging to measurement of fast phenomena," Opt. Rev. 1(2), 205–207 (1994). http://dx.doi.org/10.1007/BF03254863

59. I. K. Baldry and J. Bland-Hawthorn, "A tunable echelle imager," Publ. Astron. Soc. Pac. 112(774), 1112–1120 (2000). http://dx.doi.org/10.1086/pasp.2000.112.issue-774

60. G. L. Bilbro, "Technology options for multi-spectral infrared cameras," (2001).

61. A. R. Harvey and D. W. Fletcher-Holmes, "High-throughput snapshot spectral imaging in two dimensions," Proc. SPIE 4959, 46–54 (2003). http://dx.doi.org/10.1117/12.485557

62. L. Gao, R. T. Kester, and T. S. Tkaczyk, "Compact image slicing spectrometer (ISS) for hyperspectral fluorescence microscopy," Opt. Express 17(15), 12293–12308 (2009). http://dx.doi.org/10.1364/OE.17.012293

63. M. W. Kudenov and E. L. Dereniak, "Compact snapshot birefringent imaging Fourier transform spectrometer," Proc. SPIE 7812, 781206 (2010). http://dx.doi.org/10.1117/12.864703

64. M. W. Kudenov et al., "White-light Sagnac interferometer for snapshot multispectral imaging," Appl. Opt. 49(21), 4067–4075 (2010). http://dx.doi.org/10.1364/AO.49.004067

65. O. Cardona, A. Cornejo-Rodríguez, and P. C. García-Flores, "Star image shape transformer for astronomical slit spectroscopy," Rev. Mex. Astron. Astrof. 46(2), 431–438 (2010).

66. R. Content, "A new design for integral field spectroscopy with 8-m telescopes," Proc. SPIE 2871, 1295–1305 (1997). http://dx.doi.org/10.1117/12.269020

67. R. Content, "Image slicer for integral field spectroscopy with NGST," Proc. SPIE 3356, 122–133 (1998). http://dx.doi.org/10.1117/12.324521

68. F. Laurent, "Etude et modelisation des performances de systemes decoupeurs d'images pour l'astronomie: application a l'instrumentation du JWST et du VLT [Modelling image slicer performance for astronomy: application to JWST and VLT instrumentation]," Jean Monnet University at Saint Etienne (2006).

69. D. Ren and J. Allington-Smith, "On the application of integral field unit design theory for imaging spectroscopy," Publ. Astron. Soc. Pac. 114(798), 866–878 (2002). http://dx.doi.org/10.1086/pasp.2002.114.issue-798

70. R. Content, S. Morris, and M. Dubbeldam, "Microslices and low cost spectrographs for million element integral field spectroscopy," Proc. SPIE 4842, 174–182 (2003). http://dx.doi.org/10.1117/12.456693

71. J. M. Hill et al., "Multiple object spectroscopy: the Medusa spectrograph," Astrophys. J. 242(2), L69–L76 (1984).

72. J. Bland-Hawthorn et al., "Hexabundles: imaging fiber arrays for low-light astronomical applications," Opt. Express 19(3), 2649–2661 (2011). http://dx.doi.org/10.1364/OE.19.002649

73. D. Lee et al., "Characterization of lenslet arrays for astronomical spectroscopy," Publ. Astron. Soc. Pac. 113(789), 1406–1419 (2001). http://dx.doi.org/10.1086/pasp.2001.113.issue-789

74. S. C. Barden, J. A. Arns, and W. S. Colburn, "Volume-phase holographic gratings and their potential for astronomical applications," Proc. SPIE 3355, 866–876 (1998). http://dx.doi.org/10.1117/12.316806

75. J. Allington-Smith and R. Content, "Sampling and background subtraction in fiber-lenslet integral field spectrographs," Publ. Astron. Soc. Pac. 110(752), 1216–1234 (1998). http://dx.doi.org/10.1086/pasp.1998.110.issue-752

76. C.-H. Chen, R. O. Reynolds, and A. Kost, "Origin of spectral modal noise in fiber-coupled spectrographs," Appl. Opt. 45(3), 519–527 (2006). http://dx.doi.org/10.1364/AO.45.000519

77. U. Lemke et al., "Modal noise prediction in fibre spectroscopy—I. Visibility and the coherent model," Mon. Not. Roy. Ast. Soc. 417(1), 689–697 (2011). http://dx.doi.org/10.1111/mnr.2011.417.issue-1

78. J. Allington-Smith and J. Bland-Hawthorn, "Astrophotonic spectroscopy: defining the potential advantage," Mon. Not. Roy. Ast. Soc. 404(1), 232–238 (2010).

79. S. G. Leon-Saval, A. Argyros, and J. Bland-Hawthorn, "Photonic lanterns: a study of light propagation in multimode to single-mode converters," Opt. Express 18(8), 8430–8439 (2010). http://dx.doi.org/10.1364/OE.18.008430

80. N. Cvetojevic et al., "Characterization and on-sky demonstration of an integrated photonic spectrograph for astronomy," Opt. Express 17(21), 18643–18650 (2009). http://dx.doi.org/10.1364/OE.17.018643

81. J. Bland-Hawthorn, "In search of first light: new technologies and new ideas," New Astron. Rev. 50(1–3), 75–83 (2006). http://dx.doi.org/10.1016/j.newar.2005.11.005

82. H. Matsuoka et al., "Single-cell viability assessment with a novel spectro-imaging system," J. Biotech. 94(3), 299–308 (2002). http://dx.doi.org/10.1016/S0168-1656(01)00431-X

83. D. W. Fletcher-Holmes and A. R. Harvey, "Real-time imaging with a hyperspectral fovea," J. Optics A 7(6), S298–S302 (2005). http://dx.doi.org/10.1088/1464-4258/7/6/007

84. N. Gat et al., "Development of four-dimensional imaging spectrometers (4D-IS)," Proc. SPIE 6302, 63020M (2006). http://dx.doi.org/10.1117/12.678082

85. J. Kriesel et al., "Snapshot hyperspectral fovea vision system (hypervideo)," Proc. SPIE 8390, 83900T (2012). http://dx.doi.org/10.1117/12.918643

86. R. M. Wentworth et al., "Standoff Raman hyperspectral imaging detection of explosives," in Laser Applications to Chemical, Security and Environmental Analysis, 4925–4928 (2008).

87. M. P. Nelson and P. J. Treado, "Raman imaging instrumentation," in Raman, Infrared, and Near-Infrared Chemical Imaging, 23–54, Wiley, Hoboken, NJ (2010).

88. G. Monnet, "Application des méthodes interférentielles à la mesure des vitesses radiales. I. Montages optiques [Interference methods for the measurements of radial velocities. I. Optical mountings]," Astron. Astrophys. 9(3), 420–435 (1970).

89. J. Meaburn, "Versatile nebular insect-eye Fabry-Perot spectrograph," Appl. Opt. 14(2), 465–469 (1975). http://dx.doi.org/10.1364/AO.14.000465

90. G. Courtes et al., "A new device for faint objects high resolution imagery and bidimensional spectrography—first observational results with TIGER at CFHT 3.6-meter telescope," in Instrumentation for Ground-Based Optical Astronomy, 266, Springer, New York (1988).

91. R. Bacon et al., "The Sauron project—I. The panoramic integral-field spectrograph," Mon. Not. Roy. Ast. Soc. 326(1), 23–35 (2001). http://dx.doi.org/10.1046/j.1365-8711.2001.04612.x

92. R. F. Peletier et al., "SAURON: integral-field spectroscopy of galaxies," New Astron. Rev. 45(1–2), 83–86 (2001). http://dx.doi.org/10.1016/S1387-6473(00)00134-2

93. H. Sugai et al., "The Kyoto tridimensional spectrograph II on Subaru and the University of Hawaii 88-in telescopes," Publ. Astron. Soc. Pac. 122(887), 103–118 (2010). http://dx.doi.org/10.1086/648997

94. A. Bodkin et al., "Snapshot hyperspectral imaging—the hyperpixel array camera," Proc. SPIE 7334, 73340H (2009). http://dx.doi.org/10.1117/12.818929

95. A. Bodkin et al., "Video-rate chemical identification and visualization with snapshot hyperspectral imaging," Proc. SPIE 8374, 83740C (2012). http://dx.doi.org/10.1117/12.919202

96. B. A. Spiering, "Multi spectral imaging system," U.S. Patent No. 5900942 (1999).

97. Y. Murakami, M. Yamaguchi, and N. Ohyama, "Hybrid-resolution multispectral imaging using color filter array," Opt. Express 20(7), 7173–7183 (2012). http://dx.doi.org/10.1364/OE.20.007173

98. W. E. Ortyn and D. A. Basiji, "Imaging and analyzing parameters of small moving objects such as cells," U.S. Patent No. 6211955 B1 (2003).

99. T. C. George et al., "Distinguishing modes of cell death using the imagestream multispectral imaging flow cytometer," Cytometry A 59(2), 237–245 (2004). http://dx.doi.org/10.1002/(ISSN)1097-0320

100. W. E. Ortyn et al., "Blood and cell analysis using an imaging flow cytometer," U.S. Patent No. 7925069 B2 (2009).

101. N. Hagen et al., "Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems," Opt. Eng. 51(11), 111702 (2012). http://dx.doi.org/10.1117/1.OE.51.11.111702

102. J. D. Matchett et al., "Volume holographic beam splitter for hyperspectral imaging applications," Proc. SPIE 6668, 66680K (2007). http://dx.doi.org/10.1117/12.733778

103. F. V. Bulygin and G. N. Vishnyakov, "Spectrotomography—a new method of obtaining spectrograms of two-dimensional objects," Proc. SPIE 1843, 315–322 (1992). http://dx.doi.org/10.1117/12.131904

104. M. R. Descour, "Non-scanning imaging spectrometry," University of Arizona (1994).

105. M. Descour and E. Dereniak, "Computed-tomography imaging spectrometer: experimental calibration and reconstruction results," Appl. Opt. 34(22), 4817–4826 (1995). http://dx.doi.org/10.1364/AO.34.004817

106. C. E. Volin et al., "Midwave-infrared snapshot imaging spectrometer," Appl. Opt. 40(25), 4501–4506 (2001). http://dx.doi.org/10.1364/AO.40.004501

107. B. K. Ford, M. R. Descour, and R. M. Lynch, "Large-image-format computed tomography imaging spectrometer for fluorescence microscopy," Opt. Express 9(9), 444–453 (2001). http://dx.doi.org/10.1364/OE.9.000444

108. W. R. Johnson, D. W. Wilson, and G. Bearman, "All-reflective snapshot hyperspectral imager for ultraviolet and infrared applications," Opt. Lett. 30(12), 1464–1466 (2005). http://dx.doi.org/10.1364/OL.30.001464

109. R. Shogenji et al., "Multispectral imaging using compact compound optics," Opt. Express 12(8), 1643–1655 (2004). http://dx.doi.org/10.1364/OPEX.12.001643

110. B. A. Hooper et al., "Time-series imaging of ocean waves with an airborne RGB and NIR sensor," in OCEANS, Proc. of MTS/IEEE, 1–8 (2005).

111. S. A. Mathews, "Design and fabrication of a low-cost, multispectral imaging system," Appl. Opt. 47(28), F71–F76 (2008). http://dx.doi.org/10.1364/AO.47.000F71

112. IMEC, www.imec.be (2013).

113. M. Levoy et al., "Light field microscopy," ACM Trans. Graph. 25(3), 924–934 (2006). http://dx.doi.org/10.1145/1141911

114. R. Horstmeyer et al., "Flexible multimodal camera using a light field architecture," in IEEE Int. Conf. on Computational Photography, 1–8 (2009).

115. T. A. Mitchell and T. W. Stone, "Compact snapshot multispectral imaging system," U.S. Patent No. 8027041 B1 (2011).

116. E. le Coarer et al., "PYTHEAS: a multi-channel Fabry Perot spectrometer for astronomical imaging," Astron. Astrophys. Suppl. Ser. 111(2), 359–368 (1995).

117. B. E. Bayer, "Color imaging array," U.S. Patent No. 3971065 A (1976).

118. P. E. Buchsbaum and M. J. Morris, "Method for making monolithic patterned dichroic filter detector arrays for spectroscopic imaging," U.S. Patent No. 6,638,668 B2 (2003).

119. L. Miao, H. Qi, and W. Snyder, "A generic method for generating multispectral filter arrays," in Int. Conf. on Image Processing, 3343–3346 (2004).

120. G. A. Baone and H. Qi, "Demosaicking methods for multispectral cameras using mosaic focal plane array technology," Proc. SPIE 6062, 60620A (2006). http://dx.doi.org/10.1117/12.642425

121. J. Brauers and T. Aach, "A color filter array based multispectral camera," in 12. Workshop Farbbildverarbeitung, 5–6 (2006).

122. R. Shrestha, J. Y. Hardeberg, and R. Khan, "Spatial arrangement of color filter array for multispectral image acquisition," Proc. SPIE 7875, 787503 (2011). http://dx.doi.org/10.1117/12.872253

123. S.-W. Wang et al., "Concept of a high-resolution miniature spectrometer using an integrated filter array," Opt. Lett. 32(6), 632–634 (2007). http://dx.doi.org/10.1364/OL.32.000632

124. J. Mercier, T. Townsend, and R. Sundberg, "Utility assessment of a multispectral snapshot LWIR imager," in 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, 1–5 (2010).

125. J. M. Eichenholz et al., "Real time megapixel multispectral bioimaging," Proc. SPIE 7568, 75681L (2010). http://dx.doi.org/10.1117/12.842563

126. 

N. GuptaP. R. AsheS. Tan, “Miniature snapshot multispectral imager,” Opt. Eng., 50 (3), 033203 (2011). http://dx.doi.org/10.1117/1.3552665 OPEGAR 0091-3286 Google Scholar

127. 

H. HalpertB. L. Musicant, “N-color (Hg,Cd)Te photodetectors,” Appl. Opt., 11 (10), 2157 –2161 (1972). http://dx.doi.org/10.1364/AO.11.002157 APOPAI 0003-6935 Google Scholar

128. 

A. RogalskiJ. AntoszewskiL. Faraone, “Third-generation infrared photodetector arrays,” J. Appl. Phys., 105 (9), 091101 (2009). http://dx.doi.org/10.1063/1.3099572 JAPIAU 0021-8979 Google Scholar

129. 

G. C. Gerhard, “A multispectral image sensor array,” Proc. IEEE, 59 (12), 1718 (1971). http://dx.doi.org/10.1109/PROC.1971.8535 IEEPAD 0018-9219 Google Scholar

130. 

M. N. Abedinet al., “Multicolor focal plane array detector technology: a review,” Proc. SPIE, 5152 279 –288 (2003). http://dx.doi.org/10.1117/12.505887 PSISDG 0277-786X Google Scholar

131. 

X. C. Sunet al., “Multispectral pixel performance using a one-dimensional photonic crystal design,” App. Phys. Lett., 89 (22), 223522 (2006). http://dx.doi.org/10.1063/1.2400069 APPLAB 0003-6951 Google Scholar

132. 

P. Parreinet al., “Multilayer structure for a spectral imaging sensor,” Appl. Opt., 48 (3), 653 –657 (2009). http://dx.doi.org/10.1364/AO.48.000653 APOPAI 0003-6935 Google Scholar

133. 

P. Mitraet al., “Multispectral long-wavelength quantum-well infrared photodetectors,” Appl. Phys. Lett., 82 (19), 3185 –3187 (2003). http://dx.doi.org/10.1063/1.1573354 APPLAB 0003-6951 Google Scholar

134. 

A. Longoniet al., “The transverse field detector (TFD): a novel color-sensitive CMOS device,” IEEE Electron Device Lett., 29 (12), 1306 –1309 (2008). http://dx.doi.org/10.1109/LED.2008.2006284 EDLEDZ 0741-3106 Google Scholar

135. 

K. Kishinoet al., “Resonant cavity-enhanced (RCE) photodetectors,” IEEE J. Quant. Electron., 27 (8), 2025 –2034 (1991). http://dx.doi.org/10.1109/3.83412 IEJQA7 0018-9197 Google Scholar

136. 

J. Wanget al., “Cavity-enhanced multispectral photodetector using phase-tuned propagation: theory and design,” Opt. Lett., 35 (5), 742 –744 (2010). http://dx.doi.org/10.1364/OL.35.000742 OPLEDP 0146-9592 Google Scholar

137. 

F. ZaragaG. LangfelderA. Longoni, “Implementation of an interleaved image sensor by means of the filterless transverse field detector,” J. Electron. Imaging, 19 (3), 033013 (2010). http://dx.doi.org/10.1117/1.3483905 JEIME5 1017-9909 Google Scholar

138. 

B. Lyot, “Un monochromateur à grand champ utilisant les interférences en lumière polarisée [Wide field monochromator using polarized light interference],” Comptes Rendus de l’Academie des Sciences, 197 1593 –1595 (1933). Google Scholar

139. 

A. TitleW. Rosenberg, “Research on spectroscopic imaging, vol. 2: reference literature,” Lockheed Palo Alto Research Laboratories, (1979). Google Scholar

140. 

B. Lyot, “Le filtre monochromatique polarisant et ses applications en physique solaire,” Ann. d’Astrophysique, 7 31 –79 (1944). Google Scholar

141. 

J. W. Evans, “The birefringent filter,” J. Opt. Soc. Am., 39 (3), 229 –242 (1949). http://dx.doi.org/10.1364/JOSA.39.000229 JOSAAH 0030-3941 Google Scholar

142. 

A. GormanD. W. Fletcher-HolmesA. R. Harvey, “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express, 18 (6), 5602 –5609 (2010). http://dx.doi.org/10.1364/OE.18.005602 OPEXFF 1094-4087 Google Scholar

143. 

G. WongR. PilkingtonA. R. Harvey, “Achromatization of Wollaston polarizing beam splitters,” Opt. Lett., 36 (8), 1332 –1334 (2011). http://dx.doi.org/10.1364/OL.36.001332 OPLEDP 0146-9592 Google Scholar

144. 

A. A. WagadarikarM. E. GehmD. J. Brady, “Performance comparison of aperture codes for multimodal, multiplex spectroscopy,” Appl. Opt., 46 (22), 4932 –4942 (2007). http://dx.doi.org/10.1364/AO.46.004932 APOPAI 0003-6935 Google Scholar

145. 

S. B. Mendeet al., “Hadamard spectroscopy with a two-dimensional detecting array,” Appl. Opt., 32 (34), 7095 –7105 (1993). http://dx.doi.org/10.1364/AO.32.007095 APOPAI 0003-6935 Google Scholar

146. 

S. T. McCainet al., “Coded aperture Raman spectroscopy for quantitative measurements of ethanol in a tissue phantom,” Appl. Spectrosc., 60 (6), 663 –671 (2006). http://dx.doi.org/10.1366/000370206777670693 APSPA4 0003-7028 Google Scholar

147. 

M. E. GehmD. J. Brady, “High-throughput hyperspectral microscopy,” Proc. SPIE, 6090 609007 (2006). http://dx.doi.org/10.1117/12.644828 PSISDG 0277-786X Google Scholar

148. 

A. A. Wagadarikaret al., “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express, 17 (8), 6368 –6388 (2009). http://dx.doi.org/10.1364/OE.17.006368 OPEXFF 1094-4087 Google Scholar

149. 

V. Studeret al., “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Nat. Acad. Sci. U.S.A., 109 (26), E1679 –E1687 (2012). http://dx.doi.org/10.1073/pnas.1119511109 PNASA6 0027-8424 Google Scholar

150. 

E. J. CandèsY. Plan, “A probabilistic and RIPless theory of compressed sensing,” IEEE Trans. Inf. Theory, 57 (11), 7235 –7254 (2011). http://dx.doi.org/10.1109/TIT.2011.2161794 IETTAW 0018-9448 Google Scholar

151. 

D. S. KittleD. L. MarksD. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng., 51 (7), 071403 (2012). http://dx.doi.org/10.1117/1.OE.51.7.071403 OPEGAR 0091-3286 Google Scholar

152. 

L. Gaoet al., “Depth-resolved image mapping spectrometer (IMS) with structured illumination,” Opt. Express, 19 (18), 17439 –17452 (2011). http://dx.doi.org/10.1364/OE.19.017439 OPEXFF 1094-4087 Google Scholar

153. 

M. W. KudenovE. L. Dereniak, “Compact real-time birefringent imaging spectrometer,” Opt. Express, 20 (16), 17973 –17986 (2012). http://dx.doi.org/10.1364/OE.20.017973 OPEXFF 1094-4087 Google Scholar

154. 

K. OkaT. Kaneko, “Compact complete imaging polarimeter using birefringent wedge prisms,” Opt. Express, 11 (13), 1510 –1519 (2003). http://dx.doi.org/10.1364/OE.11.001510 OPEXFF 1094-4087 Google Scholar

155. 

C. F. Cullet al., “Identification of fluorescent beads using a coded aperture snapshot spectral imager,” Appl. Opt., 49 (10), B59 –B71 (2010). http://dx.doi.org/10.1364/AO.49.000B59 APOPAI 0003-6935 Google Scholar

156. 

P. B. Fellgett, “The theory of infra-red sensitivities and its application to investigations of stellar radiation in the near infra-red,” University of Cambridge, (1951). Google Scholar

157. 

P. Jacquinot, “The luminosity of spectrometers with prisms, gratings, or Fabry-Perot etalons,” J. Opt. Soc. Am., 44 (10), 761 –765 (1954). http://dx.doi.org/10.1364/JOSA.44.000761 JOSAAH 0030-3941 Google Scholar

158. 

P. GriffithsH. J. SloaneR. W. Hannah, “Interferometers vs monochromators: separating the optical and digital advantages,” Appl. Spectrosc., 31 (6), 485 –495 (1977). http://dx.doi.org/10.1366/000370277774464048 APSPA4 0003-7028 Google Scholar

159. 

L. W. SchumannT. S. Lomheim, “Infrared hyperspectral imaging Fourier transform and dispersive spectrometers: comparison of signal-to-noise based performance,” Proc. SPIE, 4480 1 –14 (2002). http://dx.doi.org/10.1117/12.453326 PSISDG 0277-786X Google Scholar

160. 

R. G. SellarG. D. BoremanL. E. Kirkland, “Comparison of signal collection abilities of different classes of imaging spectrometers,” Proc. SPIE, 4816 389 –396 (2002). http://dx.doi.org/10.1117/12.451649 PSISDG 0277-786X Google Scholar

161. 

R. A. KellerT. S. Lomheim, “Imaging Fourier transform spectrometer (IFTS): parametric sensitivity analysis,” Proc. SPIE, 5806 267 –287 (2005). http://dx.doi.org/10.1117/12.605885 PSISDG 0277-786X Google Scholar

162. 

J. HarlanderR. J. ReynoldsF. L. Roesler, “Spatial heterodyne spectroscopy for the exploration of diffuse interstellar emission lines at far-ultraviolet wavelengths,” Astrophys. J., 396 (2), 730 –740 (1992). http://dx.doi.org/10.1086/171756 ASJOAB 0004-637X Google Scholar

163. 

J. M. Harlanderet al., “Shimmer: a spatial heterodyne spectrometer for remote sensing of Earth’s middle atmosphere,” Appl. Opt., 41 (7), 1343 –1352 (2002). http://dx.doi.org/10.1364/AO.41.001343 APOPAI 0003-6935 Google Scholar

164. 

S. Watchornet al., “Sunlight fluorescence observations at 589 nm with the SHIELDS spectrometer system: a progress report,” Proc. SPIE, 7812 781207 (2010). http://dx.doi.org/10.1117/12.863173 PSISDG 0277-786X Google Scholar

165. 

M. W. Kudenovet al., “Spatial heterodyne interferometry with polarization gratings,” Opt. Lett., 37 (21), 4413 –4415 (2012). http://dx.doi.org/10.1364/OL.37.004413 OPLEDP 0146-9592 Google Scholar

166. 

D. F. Barbe, “Time delay and integration image sensors,” Solid State Imaging, 659 –671 NATO Advanced Science Institutes (ASI), Leyden, Netherlands (1975). Google Scholar

167. 

H.-S. WongY. L. YaoE. S. Schlig, “TDI charge-coupled devices: design and applications,” IBM J. Res. Devel., 36 (1), 83 –106 (1992). http://dx.doi.org/10.1147/rd.361.0083 IBMJAE 0018-8646 Google Scholar

168. 

G. LepageJ. BogaertsG. Meynants, “Time-delay-integration architectures in CMOS image sensors,” IEEE Trans. Electron Devices, 56 (11), 2524 –2533 (2009). http://dx.doi.org/10.1109/TED.2009.2030648 IETDAI 0018-9383 Google Scholar

169. 

X.-F. HeO. Nixon, “Time delay integration speeds up imaging,” Photon. Spectra, 46 (5), 50 –53 (2012). PHSAD3 0731-1230 Google Scholar

170. 

Handbook of Biological Confocal Microscopy, 3rd ed.Springer, New York (2006). Google Scholar

171. 

N. HagenL. GaoT. S. Tkaczyk, “Quantitative sectioning and noise analysis for structured illumination microscopy,” Opt. Express, 20 (1), 403 –413 (2012). http://dx.doi.org/10.1364/OE.20.000403 OPEXFF 1094-4087 Google Scholar

172. 

J. E. Sánchezet al., “Review and implementation of the emerging CCSDS recommended standard for multispectral and hyperspectral lossless image coding,” in 2011 First Int. Conf. on Data Compression, Communications and Processing, 222 –228 (2011). Google Scholar

Biography


Nathan Hagen received his PhD in optical sciences from the University of Arizona in 2007, studying snapshot imaging spectrometry and spectropolarimetry (including CTIS). From 2007 to 2009, he worked as a postdoc at Duke University, developing imaging and spectrometry techniques (including CASSI). From 2009 to 2011, he worked as a research scientist at Rice University, continuing work on imaging and spectrometry, including development of the IMS imaging spectrometer. In 2011, he joined the newly formed Rebellion Photonics to help develop snapshot imaging spectrometers as commercial products.


Michael W. Kudenov completed his BS in electrical engineering at the University of Alaska in 2005 and his PhD in optical sciences at the University of Arizona in 2009. He is currently an assistant professor in the Electrical and Computer Engineering Department at North Carolina State University. His research focuses on the development and calibration of visible-light and thermal-infrared spectrometer, interferometer, and polarimeter systems. Applications span disease detection, chemical identification, remote sensing, fluorescence imaging, polarimetry, spectroscopy, polarization ray tracing and aberrations, and profilometry.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Nathan A. Hagen and Michael W. Kudenov "Review of snapshot spectral imaging technologies," Optical Engineering 52(9), 090901 (23 September 2013). https://doi.org/10.1117/1.OE.52.9.090901
KEYWORDS: Sensors, Detector arrays, Imaging spectroscopy, Imaging systems, Spectrometers, Optical filters, Image filtering