Open Access
Snapshot imaging spectrometry with a heterodyned Savart plate interferometer
7 June 2017
Abstract
Imaging spectrometers are frequently used in remote sensing for their increased target discrimination capabilities over conventional imaging. Increasing the spectral resolution of these sensors further enhances their ability to discriminate certain targets and adds the potential for monitoring narrow-line spectral features. We describe a high spectral resolution (Δλ = 1.1 nm full-width at half maximum) snapshot imaging spectrometer capable of distinguishing two narrowly separated bands in the red-visible spectrum. A theoretical model is provided to detail the first polarization grating-based spatial heterodyning of a Savart plate interferometer. Following this discussion, the experimental conditions of the narrow-line imaging spectrometer (NLIS) are provided. Finally, calibration and target identification methods are applied and quantified. Ultimately, it is demonstrated that in a full spectral acquisition the NLIS sensor is capable of less than 3.5% error in reconstruction. Additionally, it is demonstrated that neural networks provide a greater than 99% reduction in crosstalk when compared to pseudoinversion and expectation maximization in single target identification.

1.

Introduction

Spectral imaging sensors are regularly implemented in remote sensing environments.1–3 A comprehensive review of existing snapshot hyperspectral imaging sensors is provided in Refs. 4 and 5. Generally, these sensors represent a diverse array of optical methodologies with complex spatial, spectral, and temporal resolution trade spaces. In this work, we improve the spectral resolution trade space of a snapshot Fourier transform imaging spectrometer6 through the implementation of polarization grating (PG)-based spatial heterodyning.7 While past work represents contributions to this trade space, the narrow-line imaging spectrometer (NLIS) developed in this work demonstrates nanometer-scale spectral resolution and is the first snapshot imaging spectrometer that leverages Savart plates.

High spectral resolution imaging spectrometers are often implemented for increased discrimination and the ability to monitor atomic transitions.8 Such systems are generally implemented using dispersive or grating elements.9 In the developed system, high spectral resolution is achieved using a field-widened calcite Savart plate.10–12 While Savart plate imaging spectrometry has been demonstrated in the past,13,14 snapshot imaging architectures have not been previously described. Additionally, for the first time, spatial heterodyning of a Savart plate’s interference fringes using PGs is demonstrated.6,15–17 Spatial heterodyning improves the signal-to-noise ratio (SNR) of the system by reducing the frequency of the Savart plate’s interference fringes. When high frequency interferograms are heterodyned to a lower frequency, the contrast of the fringes is improved as a result of the sensor’s modulation transfer function (MTF), thereby increasing the measured SNR.7

Additionally, the NLIS system is experimentally demonstrated for target identification using the acquired interferograms directly, without calibrating the interferograms to the spectral domain. This process was implemented using both expectation maximization and neural networks, and the target discrimination performance of each was compared. As a basis for comparison, data from an outdoor scene containing closely spaced spectral lines from a model rocket engine were used. Similar to recent work,18 the neural network approach provides superior results. Specifically, it is demonstrated that the neural network approach eliminates crosstalk, which the expectation maximization approach suffers from. For additional sensor validation, a conventional spectral calibration is performed using expectation maximization. This method is then validated on a laboratory scene. This paper is organized as follows: a detailed model along with experimental parameters of the NLIS sensor are presented in Sec. 2, followed by calibration methodology in Sec. 3, and results in Sec. 4.

2.

System Design and Theoretical Model

Unique to this implementation is the high spectral resolution of the NLIS sensor, which is attained using a Savart plate interferometer (SPI).13,19 To reduce the frequency of the SPI’s fringes, PGs are implemented for spatial heterodyning.7,15 To develop a model for the NLIS sensor, it is advantageous to first consider its constituent components independently. A conventional SPI is illustrated in Fig. 1(a), along with a PG interferometer in Fig. 1(b). The SPI in Fig. 1(a) consists of a reimaging lens with focal length f_r and two beam displacers with fast axes at plus (BD1) and minus (BD2) 45 deg in the yz plane, respectively, separated by a half-wave plate (HWP1) with fast axis at 45 deg in the xy plane, all placed between a generating (LP1) and an analyzing (LP2) linear polarizer at 45 deg. Polarized light from LP1 enters the first beam displacer and is split into an ordinary (O) and an extraordinary (E) ray. The split beams then pass through HWP1, where they are rotated to their orthogonal linear states. Following rotation, each beam passes through BD2, where the vertical state is refracted in the y direction, producing EO and OE rays.

Fig. 1

Fundamental components of the narrow line imaging spectrometer: (a) the SPI and (b) a polarization grating interferometer.


Exiting the Savart plate apparatus are two collimated beams separated by a shear10

Eq. (1)

S_{SP} = 2 t_{SP} \frac{n_e^2 - n_o^2}{n_e^2 + n_o^2},
where n_e and n_o are the extraordinary and ordinary indices of refraction, respectively, and t_SP is the thickness of the beam displacers. When combined with the reimaging lens and focused on a focal plane array (FPA), the measured interference has the form

Eq. (2)

I_{SP}(y,\lambda) = \frac{I_0}{8}\left[1 + \cos\left(\frac{2\pi y S_{SP}}{\lambda f_r}\right)\right],
where I_0 is the incident intensity and λ is the wavelength of light. Meanwhile, the PG interferometer is considered in Fig. 1(b). This setup consists of two PGs with orthogonal grating vectors placed between two parallel linear polarizers at 45 deg. In this configuration, light transmitted by LP1 becomes linearly polarized and propagates to PG1, where it is diffracted into right- and left-circular polarization states. These diffracted rays diverge over a distance t_PG before encountering PG2, where they are retrodiffracted and emerge collimated, generating a chromatic shear of

Eq. (3)

S_{PG} = \frac{2 t_{PG} \lambda}{\Lambda},
where Λ is the period of the PGs. When these sheared rays are focused using a lens with focal length f_r, interference fringes are generated with the intensity profile

Eq. (4)

I_{PG}(y,\lambda) = \frac{I_0}{8}\left[1 + \cos\left(\frac{4\pi t_{PG} y}{\Lambda f_r}\right)\right].

Thus, interference fringes are produced that have no wavelength dependence, making the stacked PGs ideal for introducing a wavelength-independent frequency offset to perform heterodyning. Combining the systems from Figs. 1(a) and 1(b) yields the NLIS’s operational concept, shown in Fig. 2, which demonstrates a heterodyned SPI.
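As a numerical sanity check on Eqs. (1)–(4), the following sketch (illustrative only, written in Python rather than anything used by the authors) evaluates the Savart plate shear and the fringe periods of both interferometers, using the component values quoted later in Sec. 3.

```python
# Component values quoted in Sec. 3 (calcite indices at 767.5 nm)
n_o, n_e = 1.6499, 1.4824
t_SP = 13.4e-3       # beam displacer thickness [m]
t_PG = 35.2e-3       # spacing between PG groupings [m]
Lam  = 25.6e-6       # PG period [m]
f_r  = 58e-3         # reimaging lens focal length [m]
lam  = 767.5e-9      # wavelength [m]

# Eq. (1): Savart plate shear (magnitude)
S_SP = 2 * t_SP * abs(n_e**2 - n_o**2) / (n_e**2 + n_o**2)

# Eq. (2): SPI fringe period on the FPA is lambda * f_r / S_SP
period_SP = lam * f_r / S_SP

# Eq. (3): PG shear grows linearly with wavelength
S_PG = 2 * t_PG * lam / Lam

# Eq. (4): PG fringe period, Lambda * f_r / (2 t_PG), has no lambda dependence
period_PG = Lam * f_r / (2 * t_PG)

print(f"S_SP = {S_SP * 1e3:.3f} mm")
print(f"SPI fringe period = {period_SP * 1e6:.2f} um")
print(f"PG fringe period  = {period_PG * 1e6:.2f} um (wavelength independent)")
```

Note that the SPI fringe period varies with wavelength while the PG fringe period does not, which is exactly the property exploited for heterodyning.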

Fig. 2

Polarization grating heterodyned SPI, showing the heterodyne wavelength ray path.


As shown in Fig. 2, the sheared beams from the Savart plate encounter a quarter-wave plate (QWP), where they are converted from vertical and horizontal polarization states into right and left circular polarization states, respectively. This polarization conversion prevents the states from being split again when encountering the PGs. These sheared circularly polarized beams then encounter the PG group, where they are diffracted toward the optical axis. Interference generated with this system can be modeled as

Eq. (5)

I_{SPH}(y,\lambda) = \frac{I_0}{8}\left[1 + \cos\left(\frac{2\pi y S_{SP}}{\lambda f_r} - \frac{4\pi t_{PG} y}{\Lambda f_r}\right)\right],
where the heterodyne wavenumber σ_h, defined as the wavenumber at which the fringe frequency goes to zero, is given by

Eq. (6)

\sigma_h = \frac{2 t_{PG}}{\Lambda S_{SP}}.

The final system is shown in Fig. 3 and follows directly from the heterodyned SPI in Fig. 2. In this final implementation, the shearing from the PGs and the Savart plate occurs simultaneously, instead of consecutively as shown in Fig. 2. Additionally, to extend this methodology to division-of-aperture snapshot imaging spectrometry,20 the interferometer system is now considered with imaging components. A fore optic couples light into the lenslet array,21 where it is collimated and passed into the interferometer. Diffracted light from PG1 and PG2 is converted from circular to linear states by QWP1 and is then sheared by the Savart plate elements. Linearly polarized, sheared beams exiting the Savart plate are converted to right and left circular polarization states via QWP2 and are then retrodiffracted by PG3 and PG4, emerging parallel to the optical axis. Finally, the circular states exiting the second PG grouping transmit through an analyzing polarizer, LP2. These beams are then focused by a reimaging lens, with focal length f_r, to produce a focal plane that is coincident with the fringe localization plane.

Fig. 3

Final narrow line imaging spectrometer design based on a field widened Savart plate, with four polarization gratings for spatial heterodyning, along with imaging components.


An additional attribute of the final system is that it incorporates two sets of PGs, adding a degree of freedom for tuning the grating period to adjust for tolerancing errors in the beam displacers. Placing two PGs in direct contact and counter-rotating them enables us to modify the effective grating period such that

Eq. (7)

\Lambda_{\mathrm{eff}} = (\Lambda_1 + \Lambda_2)\cos\left(\frac{\theta_{PG}}{2}\right),
where Λ_1 and Λ_2 are the periods of each grating, and θ_PG is the angle between their grating vectors.22

Assuming all four PGs have the same period Λ_PG, and that the angle between PG1 and PG2 equals the angle between PG3 and PG4, the interference on the detector has the distribution

Eq. (8)

I_{SPH}(y,\lambda) = \frac{I_0}{8}\left[1 + \cos\left(\frac{2\pi y S_{SP}}{\lambda f_r} - \frac{4\pi t_{PG} y}{\Lambda_{\mathrm{eff}} f_r}\right)\right],
where the heterodyne wavenumber can be calculated as

Eq. (9)

\sigma_h = \frac{2 t_{PG}}{\Lambda_{\mathrm{eff}} S_{SP}}.
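The effect of the heterodyne term in Eq. (8) can be illustrated numerically. The sketch below (Python, illustrative only; component values are taken from Sec. 3) compares the spatial fringe frequency with and without the wavelength-independent PG offset across the passband, showing the frequency reduction that improves fringe contrast through the sensor's MTF.

```python
# Values from Sec. 3
n_o, n_e = 1.6499, 1.4824
t_SP, t_PG = 13.4e-3, 35.2e-3       # [m]
Lam_eff = 43.1e-6                   # effective PG period from Eq. (7) [m]
f_r = 58e-3                         # [m]

S_SP = 2 * t_SP * abs(n_e**2 - n_o**2) / (n_e**2 + n_o**2)   # Eq. (1)

def fringe_freq(lam, heterodyned=True):
    """Spatial fringe frequency [cycles/m] from the cosine argument of Eq. (8)."""
    f_savart = S_SP / (lam * f_r)        # wavelength-dependent SPI term
    f_pg = 2 * t_PG / (Lam_eff * f_r)    # wavelength-independent PG offset
    return abs(f_savart - f_pg) if heterodyned else f_savart

for lam in (763e-9, 775e-9):
    f0, fh = fringe_freq(lam, heterodyned=False), fringe_freq(lam)
    print(f"{lam * 1e9:.0f} nm: {f0 / 1e3:.1f} -> {fh / 1e3:.1f} cycles/mm")
```

Across the 763 to 775 nm passband the heterodyned fringe frequency is substantially lower than the bare SPI fringe frequency.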

3.

Experimental Setup

Based on the design shown in Fig. 3, a system was constructed around an Allied Vision Technologies GX2750 camera, utilizing a 6-megapixel Sony ICX694 FPA. In the system, shown in Fig. 4, the fore optic consists of a Nikkor F/1.2, 50-mm focal length objective that focuses light onto a 25×18 mm field stop coincident with a fiber faceplate, which is used to eliminate parallax. Light from the faceplate is collimated by a Thorlabs AC254-050-B achromatic doublet (f_c1 = 50 mm) into the 5×5 lenslet array, which was constructed from two stacked arrays with 1.5-mm focal lengths each. The subimages formed by the lenslet array were collimated by a Nikkor F/1.4, 50-mm collimation lens (f_c2 = 50 mm). An Omega Optical 50-mm-diameter bandpass filter, with 10% transmission cutoffs at 763 and 775 nm, was used to limit the spectral pass band. Light then transmits through the PGs and Savart plate, where it undergoes beam displacement. Lastly, the reimaging lens is a Nikkor F/1.4, 58-mm focal length lens. This experimental configuration creates subimages that are 500×380 pixels, which is equal to the spatial resolution of the reconstructed data.

Fig. 4

NLIS sensor schematic with a fore optic, as well as system optomechanics, and element placement.


In this system, the Savart plate was constructed from two calcite beam displacers, each with a thickness of t_SP = 13.4 mm. The high spectral resolution of the system is a manifestation of the Savart plate’s thickness, the reimaging lens focal length, and the lateral extent of the detector, where the full-width at half maximum (FWHM) spectral resolution can be calculated as

Eq. (10)

\Delta\sigma = \frac{0.6 f_r}{S_{SP} y_{\mathrm{max}}},
where f_r = 58 mm and y_max is the FPA’s maximum sampling distance from the optical axis. For the Sony ICX694, y_max = 6.37 mm, accounting for the coordinate system’s rotation. For calcite at λ = 767.5 nm, n_o = 1.6499 and n_e = 1.4824.23 From this, the resolution can be calculated as Δσ = 19.11 cm⁻¹, or Δλ = 1.12 nm.
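The quoted resolution follows directly from Eq. (10); a short check (Python, illustrative only):

```python
# Values from Sec. 3
n_o, n_e = 1.6499, 1.4824                     # calcite at 767.5 nm
t_SP, f_r, y_max = 13.4e-3, 58e-3, 6.37e-3    # [m]
lam = 767.5e-9                                # band center [m]

S_SP = 2 * t_SP * abs(n_e**2 - n_o**2) / (n_e**2 + n_o**2)   # Eq. (1)

d_sigma = 0.6 * f_r / (S_SP * y_max)    # Eq. (10), [1/m]
d_sigma_cm = d_sigma / 100              # [1/cm]
d_lambda = lam**2 * d_sigma             # narrow-interval conversion to [m]

print(f"Delta_sigma = {d_sigma_cm:.2f} cm^-1, Delta_lambda = {d_lambda * 1e9:.2f} nm")
```

The computed values reproduce the 19.11 cm⁻¹ (about 1.12 nm) resolution stated in the text.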

Finally, the orientations of the polarization elements’ eigenvectors or grating vectors have been referenced to the Savart plate’s fast axis (or shearing direction), which is oriented at an angle δ from the global x-axis. For an N×M lenslet array, δ is given as20

Eq. (11)

\delta = \tan^{-1}\left(\frac{1}{M}\right).
For the 5×5 lenslet array used, δ = 11.3 deg. Thus, PG1 and PG3 have grating vectors oriented at 32.6 deg, while PG2 and PG4 have grating vectors oriented at −32.6 deg; all four PGs were fabricated with a period of Λ = 25.6 μm. Utilizing Eq. (7), the effective grating period is Λ_eff = 43.1 μm. Finally, the spacing between the PG groupings is t_PG = 35.2 mm.
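Both geometry numbers follow from Eqs. (7) and (11); a quick check (Python, illustrative only):

```python
import math

# Eq. (11): shear direction for the 5x5 lenslet array (M = 5)
M = 5
delta = math.degrees(math.atan(1 / M))

# Eq. (7): effective period of the counter-rotated PG pair; all gratings share
# Lam = 25.6 um, and the angle between grating vectors is 2 x 32.6 = 65.2 deg
Lam1 = Lam2 = 25.6e-6
theta_PG = math.radians(65.2)
Lam_eff = (Lam1 + Lam2) * math.cos(theta_PG / 2)

print(f"delta = {delta:.1f} deg, Lam_eff = {Lam_eff * 1e6:.1f} um")
```

This reproduces δ = 11.3 deg and Λ_eff = 43.1 μm as stated.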

Lastly, a summary of the optical performance parameters is provided in Table 1.

Table 1

Optical performance parameters for the narrow line imaging spectrometer.

Spectral pass-band: 763 to 775 nm
Spectral resolution: 19.11 cm⁻¹
Half-angle FOV (on diagonal): 16 deg
Datacube size: 500×380×25

4.

Calibration

With the constructed NLIS sensor, two means of spectral calibration were pursued. One approach used linear unmixing-based and neural network-based methods for direct target identification using interferogram data. Alternatively, a conventional spectral calibration using expectation maximization was performed for additional sensor validation.

4.1.

Target Identification

Initially, instead of transforming the system’s measurements to the spectral domain, acquired data were used directly for target identification. The system’s interferograms are unique to the spectral profile of the scene, and thus it is possible to discriminate scene objects based on their spectral distribution. Related approaches, such as the “smashed filter” in compressive sensing, likewise operate directly on multiplexed sensor measurements.24 This process is guided by linear unmixing given by

Eq. (12)

g=Hf,
where H is a 25×2 system matrix whose columns contain the basis interferograms, g is an interferogram from an unknown scene, and f is a vector containing abundance coefficients for the basis interferograms.

As a consequence of the narrow band nature of the system, only two principal components are required: H_K, the narrow line or target component, and H_B, the background component. Using this methodology, Eq. (12) can be expanded to

Eq. (13)

g = H_K f_K + H_B f_B,
where f_K and f_B are the mixing, or abundance, coefficients25 for the narrow line and background interferograms, respectively. This methodology assumes that the target and background adequately represent the spectral content of unknown scenes. While the narrow line and background images may form an incomplete basis, particularly if other lines exist in the waveband, such lines are not expected in the environments of interest. In the event additional lines become a concern, such shortcomings could be accommodated by adding additional basis images.
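A minimal sketch of the two-component model of Eq. (13) is shown below (Python, illustrative only; the basis interferograms are hypothetical synthetic stand-ins, whereas the real H_K and H_B come from the calibration measurements described next). Abundances are recovered here by pseudoinversion.

```python
import numpy as np

# Synthetic 25-sample basis interferograms (hypothetical, for illustration):
# a narrow line yields persistent fringes; a broadband source yields fringes
# localized near zero OPD.
opd = np.linspace(0, 1, 25)
H_K = 1 + np.cos(40 * opd)          # narrow-line basis
H_B = 1 + np.sinc(8 * opd)          # broadband basis
H = np.column_stack([H_K, H_B])     # 25 x 2 system matrix, Eq. (12)

# Simulate a measurement with known abundances, Eq. (13)
f_true = np.array([0.3, 0.7])
g = H @ f_true

# Recover the abundances with the Moore-Penrose pseudoinverse
f_hat = np.linalg.pinv(H) @ g
print(f_hat)    # recovers [0.3, 0.7] in this noise-free case
```

In the noise-free case the recovery is exact; the comparison of pseudoinversion, expectation maximization, and neural networks on real data is deferred to Sec. 5.1.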

To obtain the H matrix for single-target detection, three integrating sphere images were required: a flatfield image, an image of a tungsten-halogen source, and an image of a high pressure sodium (HPS) source. The raw data frames for each image are shown in Fig. 5, and the setup for obtaining each image is depicted in Fig. 6.

Fig. 5

NLIS raw data image of a (a) flatfield, (b) tungsten source, and (c) HPS source.


Fig. 6

Experimental setup used for acquiring calibration data for linear unmixing-based target identification.


Additionally, to ensure similarity of the HPS lamp to the desired target source, the potassium lines in the lamp were measured using a high-resolution optical spectrum analyzer. This spectrum is shown in Fig. 7.

Fig. 7

Potassium lines in the high-pressure sodium lamp measured with an optical spectrum analyzer.


As shown in Fig. 7, the lines have an FWHM of 0.1 nm, measured using Gaussian fitting, which is well below the resolution of the sensor. Thus, the effect of high-pressure line broadening is negligible.

To acquire calibration images of the tungsten or HPS sources, light from either lamp was directed into an integrating sphere using a fiber. Additionally, a flatfield was constructed using two images of the tungsten source in which LP2 was rotated by 90 deg between exposures to produce two frames with fringes 180 deg out of phase. The two flatfield images were averaged to produce a single fringeless frame, as shown in Fig. 5(a).
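The fringe cancellation behind this flatfield follows from one line of algebra: the two frames are proportional to 1 ± cos φ, so their average is the fringe-free envelope. A small demonstration (Python, illustrative only; the envelope and fringe phase are arbitrary stand-ins):

```python
import numpy as np

y = np.linspace(-1, 1, 501)
phi = 2 * np.pi * 10 * y                # arbitrary fringe phase (illustrative)
envelope = np.exp(-y**2)                # stand-in for vignetting/illumination

frame_a = envelope * (1 + np.cos(phi))  # LP2 at original orientation
frame_b = envelope * (1 - np.cos(phi))  # LP2 rotated 90 deg: fringe phase flips
flatfield = 0.5 * (frame_a + frame_b)   # fringes cancel exactly

print(np.allclose(flatfield, envelope))  # True
```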

Using the images in Fig. 5, each basis image was divided by the flatfield image to remove the influence of vignetting. Following flatfield division, static image registration coefficients were applied to the tungsten and HPS images to produce two calibration datacubes. The tungsten and HPS datacubes were used to generate HB and HK at each object point, respectively. These two datacubes were the basis of the neural network training methodology as shown in Fig. 8.

Fig. 8

H-matrix-based neural network training. Training data are constructed by randomly weighting the basis vectors in H by values f and summing all weighted vectors together to produce g. Doing so conditions the network to take a measurement from an unknown scene and produce abundance values for the basis vectors.


In this context, target detection is commonly accomplished by solving Eq. (12) using pseudoinversion such that

Eq. (14)

f = H^{+} g,

where H^{+} denotes the Moore-Penrose pseudoinverse (H is nonsquare, so a true inverse does not exist).

In addition to pseudoinversion, in this work, Eq. (12) was also solved using expectation maximization and neural networks. A comparison of the performance of all three methods is provided in Sec. 5.1.

Neural network-based spectral calibration has been performed in past work.6,18 Unique to this work, training data were constructed by leveraging random linear superpositions of interferograms, instead of measuring random spectral distributions directly. To construct the training data, the NLIS sensor’s system measurement matrix H was used as shown in Fig. 8. This was done by generating a series of random abundance vectors f and assembling them into an array to form f_train, which served as the output (target) values during training. From this, establishing the input values for training, g_train, was a matter of multiplying by H, such that

Eq. (15)

g_{\mathrm{train}} = H f_{\mathrm{train}}.
After constructing the training matrices g_train and f_train, the MATLAB® neural network toolbox was used to train a 16-node cascade-forward neural network using scaled conjugate gradient backpropagation. It should be noted that, by default, MATLAB® normalizes the training arrays g_train and f_train to the range [−1, 1].
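The construction of the training set can be sketched as follows (Python/NumPy rather than the MATLAB® toolbox used by the authors; the per-pixel H matrix here is a hypothetical stand-in, and the normalization mimics the default [−1, 1] mapping described above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 25x2 per-pixel system matrix (columns: basis interferograms)
opd = np.linspace(0, 1, 25)
H = np.column_stack([1 + np.cos(40 * opd), 1 + np.sinc(8 * opd)])

# Random abundance vectors form f_train; inputs follow from Eq. (15)
N = 10_000
f_train = rng.uniform(0, 1, size=(2, N))   # network targets
g_train = H @ f_train                      # network inputs, 25 x N

# Per-feature normalization to [-1, 1], mimicking MATLAB's default mapminmax
def mapminmax(a):
    lo = a.min(axis=1, keepdims=True)
    hi = a.max(axis=1, keepdims=True)
    return 2 * (a - lo) / (hi - lo) - 1

g_n, f_n = mapminmax(g_train), mapminmax(f_train)
print(g_n.shape, f_n.shape)    # (25, 10000) (2, 10000)
```

The normalized pairs (g_n, f_n) would then be fed to the network trainer of choice.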

To calibrate the entire field of view, a unique H matrix was constructed for each pixel in the field and used to train a corresponding network. For an unknown scene, each interferogram was calibrated by its corresponding network, thus producing two abundance images, I_K and I_B.

Following this procedure, each network was adapted to take a sensor measurement g, from an unknown scene, and produce relative abundance values f, for the basis spectra HK and HB. Calibrating the raw spectral data in this way enables us to operate directly on the system interferograms to determine the localization of the target. The use of interferograms directly for target identification, with neural networks, represents a unique aspect of this work.

Results comparing pseudoinversion, an EM-based technique, and the neural network approach are included in Sec. 5.1, but first a traditional spectral calibration with EM is considered.

4.2.

Full Spectral Calibration

To expand the capabilities of the NLIS system beyond single target detection, we present an EM-based conventional spectral calibration. The methodology is similar to target identification in that calibration is modeled using Eq. (12). However, for the full spectral acquisition, H was populated with a series of monochromatic interferograms, H_1 through H_n, such that n spectra were collected across the sensor’s bandpass. Each monochromatic interferogram was obtained using the monochromator setup shown in Fig. 9, in which light from a xenon arc lamp was passed through a monochromator, and the monochromatic output was passed to an integrating sphere, which was then measured with the NLIS.

Fig. 9

Monochromator-based experimental setup for acquiring interferograms to be used in the EM-based spectral calibration.


Using this formalism for the H matrix, Eq. (12) can be expanded to

Eq. (16)

g = H_1 f_1 + H_2 f_2 + \cdots + H_n f_n,
where f_1 to f_n are the abundance coefficients for the monochromatic interferograms. By solving Eq. (16), each f_n can be determined, and subsequently the spectral distribution of an unknown scene. To produce a spectral image, this process is repeated for each pixel in the scene.
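The paper does not spell out its exact EM update; one standard choice for nonnegative unmixing under a Poisson noise model is the multiplicative (Richardson-Lucy style) iteration sketched below (Python, illustrative only, with a random nonnegative stand-in for the monochromatic H matrix).

```python
import numpy as np

def em_unmix(H, g, n_iter=100):
    """Multiplicative EM update for nonnegative f in g = H f (one common form;
    the paper's exact EM formulation is not stated)."""
    H = np.asarray(H, dtype=float)
    f = np.full(H.shape[1], g.sum() / H.shape[1])   # flat nonnegative start
    col_sums = H.sum(axis=0)
    for _ in range(n_iter):
        ratio = g / np.maximum(H @ f, 1e-12)        # data / current estimate
        f *= (H.T @ ratio) / col_sums               # multiplicative correction
    return f

# Toy system: 25-sample interferograms, 20 monochromatic basis columns
rng = np.random.default_rng(2)
H = rng.uniform(0.1, 1.0, size=(25, 20))
f_true = rng.uniform(0, 1, size=20)
g = H @ f_true

f_em = em_unmix(H, g, n_iter=100)
print(np.linalg.norm(H @ f_em - g) / np.linalg.norm(g))   # small residual
```

The multiplicative form guarantees nonnegative abundance estimates at every iteration, which suits the monochromatic-basis model of Eq. (16).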

Utilizing the setup in Fig. 9, 20 monochromatic images were acquired, linearly spaced in wavenumber, spanning 762 to 778 nm with an FWHM resolution of 0.8 nm. In Fig. 10(a), the raw data frames of the monochromatic integrating sphere images are depicted, along with an example H matrix, showing H_1 through H_20, in Fig. 10(b).

Fig. 10

(a) Raw integrating sphere data with monochromatic illumination and (b) an example H matrix, used for traditional spectral calibration.


Implementing the above methodology, results using 100 iterations of expectation maximization are presented in Sec. 5.2.

5.

Results

For the NLIS sensor, tests were performed for both full spectral acquisition and target identification. The results for these experiments are discussed in the next two sections.

5.1.

Target Detection Results

To test the target detection calibration technique, an outdoor scene was constructed using a tungsten-halogen lamp and a model rocket engine, which generates narrow-line spectra. Measurements of the tungsten lamp and the HPS lamp were used as basis measurements. A separate measurement of the rocket combustion was used to verify the spectral similarity between the rocket and the HPS lamp in the sensor’s spectral band. In this test, the tungsten lamp was aimed directly into the sensor to provide an additional near point source comparable in brightness to the rocket. In the combustion process, the motor generates narrow emission lines at 766.48 and 769.89 nm. In Fig. 11(a), a visible light photo of the scene is shown, along with a panchromatic image from the NLIS sensor in Fig. 11(b).

Fig. 11

(a) Outdoor scene of an Estes model rocket and a tungsten-halogen lamp acquired with a visible light color camera. (b) A panchromatic image of the same scene from the NLIS sensor.


Following data acquisition, the calibration was tested using both the EM and neural network approaches as developed in Sec. 4.1. In Figs. 12 and 13, color fused results of the calibration of a video sequence are shown for the expectation maximization approach and the neural network approach, respectively; the contrast has been stretched to show detail in the color fusion.

Fig. 12

Expectation maximization-based unmixing of several frames from the model rocket scene. Regions where narrow lines are present are highlighted with red color fusion.26,27 With this method, false detections are noted in regions where areas of interest are not present.


Fig. 13

Neural network-based unmixing of several frames from the model rocket scene. Regions with narrow line responses are highlighted with red color fusion.


To further quantify these results, the narrow line channel I_K, generated from pseudoinversion, expectation maximization, and neural networks, is considered. To do so, an error metric, crosstalk, is defined such that

Eq. (17)

\mathrm{crosstalk} = \frac{\sum_{x=1}^{X}\sum_{y=1}^{Y} I_K(x,y)\, W(x,y)}{I_{\mathrm{rocket}}},
where X and Y define the abundance image size, W is a windowing function, x and y are pixel coordinates, and I_rocket is the peak on-rocket abundance value. To eliminate the contribution of the source in the error metric, W is defined such that

Eq. (18)

W(x,y) = \begin{cases} 0, & (x,y)\ \text{on rocket} \\ 1, & \text{otherwise}. \end{cases}

This metric assumes there are no sources of narrow line bands anywhere other than at the motor, which for a scene of mostly vegetation is a reasonable assumption. In Fig. 14, the windowed abundance images for each calibration technique are presented.
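Equations (17) and (18) amount to summing the off-target narrow-line abundance and normalizing by the peak on-target value; a compact implementation (Python, illustrative only, with a toy abundance image and mask):

```python
import numpy as np

def crosstalk(I_K, on_rocket):
    """Eq. (17): off-target narrow-line abundance, normalized by the peak
    on-rocket value. W from Eq. (18) is 0 on the rocket and 1 elsewhere."""
    W = (~on_rocket).astype(float)
    return (I_K * W).sum() / I_K[on_rocket].max()

# Toy abundance image: a bright 3x3 "rocket" plus weak off-target leakage
I_K = np.full((20, 20), 0.01)
on_rocket = np.zeros((20, 20), dtype=bool)
on_rocket[8:11, 8:11] = True
I_K[on_rocket] = 1.0

print(f"crosstalk = {crosstalk(I_K, on_rocket):.2f}")
```

Lower values indicate less narrow-line abundance leaking into regions with no target.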

Fig. 14

Windowed narrow line images IK for the (a) pseudoinversion, (b) expectation maximization, and (c) neural network-based calibration techniques.


Using the images from Fig. 14, crosstalk is calculated for each method and the results are presented in Table 2.

Table 2

Crosstalk comparison for each of the three target identification-based calibration methods.

Pseudoinverse: 6561.9
Expectation maximization: 22,398
Neural networks: 9.032

Comparing the three methods demonstrates that the neural network approach is superior in preventing channel crosstalk. Using the neural network approach, 99.86% and 99.96% decreases in crosstalk are demonstrated when compared to pseudoinversion and expectation maximization, respectively. The color-fused results from the neural network approach showed no narrow spectral line localization except where the rocket engine was located. By comparison, the expectation maximization method showed multiple regions where narrow line detection is present. In detection, the neural network approach would therefore pose less risk of false-positive signatures.

One possible explanation for the increase in performance could be that the neural network is better equipped to reject crosstalk due to stray light. However, this is unlikely, since all surfaces and elements were either index matched or antireflection coated, the sensor was contained in a matte black housing, and optical grade calcite was used. A more likely explanation is that detection with raw interferograms results in poor conditioning of the measurement matrix H, to which the linear inversion methods are more sensitive.6,18

5.2.

Full Spectral Calibration Results

Utilizing the techniques developed in Sec. 4.2, a conventional spectral calibration was performed and a scene with various illumination sources and regions of shadowing was measured. A pictorial representation of the measured scene is shown in Fig. 15.

Fig. 15

Lab-based scene for testing the measurement matrix-based full spectral calibration. The scene was created to have regions with purely tungsten-halogen illumination, purely HPS illumination, and a combination of the two.


For each of the illumination regions from Fig. 15, the measured spectrum is illustrated along with a panchromatic image of the scene in Fig. 16.

Fig. 16

Results of the expectation maximization-based calibration showing (a) band integrated scene image and (b) spectra from each of the three illumination regions.


From this experiment, it is possible to measure the spectral resolution of the system; the peaks in Fig. 16 have an FWHM of 1.1 nm, measured using Gaussian fitting, which validates the model. Owing to the high spectral resolution of the NLIS sensor, it is possible to resolve the narrow lines, which are spaced 3.41 nm apart.

For further analysis, the HPS spectra measured using the NLIS sensor and an Ocean Optics HR4000 spectrometer are compared to NIST data in Fig. 17.

Fig. 17

HPS lamp spectrum measured using an Ocean Optics HR4000 spectrometer and the NLIS sensor, compared to the NIST truth spectrum.


Taking the NIST potassium lines at 1.1-nm FWHM resolution as truth, RMS errors of 0.1461 and 0.1754 are realized with the NLIS and the Ocean Optics spectrometer, respectively.

As another metric, the absolute spacing between the dual peaks of the spectra, as obtained by the NLIS and as documented in the literature, is compared. According to literature values, the spacing between the peaks is expected to be 3.41 nm. However, using the spacing between the local maxima of each peak, a spacing of 3.3 nm was measured with the NLIS, resulting in an error of 3.23%, which is within our uncertainty of 0.65 nm due to the granularity of the H samples. Additionally, a spacing of 3.4 nm was measured with the Ocean Optics spectrometer, resulting in an error of 0.29%.

It is also useful to examine the relative radiometric accuracy of the sensor. Per Ref. 28, the relative peak intensity, defined as the ratio of the 770-nm line intensity to the 766-nm line intensity, is 0.96. As shown in Fig. 17, the NLIS sensor measures a ratio of 0.83 and the Ocean Optics spectrometer measures a ratio of 0.67, resulting in percent errors of 13.5% and 30.2%, respectively.

6.

Conclusions

A narrow band, high spectral resolution imaging Fourier transform spectrometer capable of narrow line discrimination has been developed. The prototype system demonstrates a spatial resolution of 500×380 pixels with a spectral resolution of 19.11 cm⁻¹. Additionally, in system calibration, an error of 3.23% in the peak separation of the two narrow spectral lines was demonstrated. Lastly, similar to past work, it was shown that neural networks provide superior performance in calibration; specifically, neural networks provided a greater than 99% reduction in crosstalk in single target detection.

Acknowledgments

This work acknowledges support from SA Photonics Inc. under Air Force Research Laboratory (AFRL), United States Air Force Contract Number: FA8650-13-C-1589. NCSU Released by AFRL/RYMT for public distribution, <3/3/16>, v2. Additionally, LB and ME acknowledge the partial support of the National Science Foundation (CAREER award ECCS-0955127) in this work.

References

1. 

M. Eismann, Hyperspectral Remote Sensing, PM210 SPIE Press, Bellingham, Washington (2012). Google Scholar

2. 

J. M. Bioucas-Dias et al., “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag., 1 (2), 6 –36 (2013). http://dx.doi.org/10.1109/MGRS.2013.2244672 Google Scholar

3. 

G. A. Blackburn, “Hyperspectral remote sensing of plant pigments,” J. Exp. Bot., 58 855 –867 (2007). http://dx.doi.org/10.1093/jxb/erl123 JEBOA6 1460-2431 Google Scholar

4. 

N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng., 52 090901 (2013). http://dx.doi.org/10.1117/1.OE.52.9.090901 Google Scholar

5. 

L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Rep., 616 1 –37 (2016). http://dx.doi.org/10.1016/j.physrep.2015.12.004 PRPLCM 0370-1573 Google Scholar

6. 

B. D. Maione et al., “Spatially heterodyned snapshot imaging spectrometer,” Appl. Opt., 55 8667 –8675 (2016). http://dx.doi.org/10.1364/AO.55.008667 APOPAI 0003-6935 Google Scholar

7. 

M. W. Kudenov et al., “Polarization spatial heterodyne interferometer: model and calibration,” Opt. Eng., 53 044104 (2014). http://dx.doi.org/10.1117/1.OE.53.4.044104 Google Scholar

8. 

S. S. Vogt et al., “HIRES: the high-resolution echelle spectrometer on the Keck 10-m telescope,” Proc. SPIE, 2198 362 –375 (1994). http://dx.doi.org/10.1117/12.176725 PSISDG 0277-786X Google Scholar

9. 

Y. Ji et al., “Analytical design and implementation of an imaging spectrometer,” Appl. Opt., 54 517 (2015). http://dx.doi.org/10.1364/AO.54.000517 APOPAI 0003-6935 Google Scholar

10. 

D. Malacara, Optical Shop Testing, John Wiley & Sons, Hoboken, New Jersey (2007). Google Scholar

11. 

J. Li, J. Zhu and X. Hou, “Field-compensated birefringent Fourier transform spectrometer,” Opt. Commun., 284 1127 –1131 (2011). http://dx.doi.org/10.1016/j.optcom.2010.11.029 OPCOB8 0030-4018 Google Scholar

12. 

M. Françon and S. Mallick, Polarization Interferometers: Applications in Microscopy and Macroscopy, Wiley-Interscience, Hoboken, New Jersey (1971). Google Scholar

13. 

C. Zhang et al., “A static polarization imaging spectrometer based on a Savart polariscope,” Opt. Commun., 203 21 –26 (2002). http://dx.doi.org/10.1016/S0030-4018(01)01726-6 OPCOB8 0030-4018 Google Scholar

14. 

C. Zhang et al., “Birefringent laterally sheared beam splitter-Savart polariscope,” Proc. SPIE, 6150 615001 (2006). http://dx.doi.org/10.1117/12.677951 PSISDG 0277-786X Google Scholar

15. 

J. Kim et al., “Fabrication of ideal geometric-phase holograms with arbitrary wavefronts,” Optica 2, 958 (2015). http://dx.doi.org/10.1364/OPTICA.2.000958

16. 

J. Harlander, R. J. Reynolds, and F. L. Roesler, “Spatial heterodyne spectroscopy for the exploration of diffuse interstellar emission lines at far-ultraviolet wavelengths,” Astrophys. J. 396, 730–740 (1992). http://dx.doi.org/10.1086/171756

17. 

M. W. Kudenov et al., “Spatial heterodyne interferometry with polarization gratings,” Opt. Lett. 37, 4413 (2012). http://dx.doi.org/10.1364/OL.37.004413

18. 

D. Luo and M. W. Kudenov, “Neural network calibration of a snapshot birefringent Fourier transform spectrometer with periodic phase errors,” Opt. Express 24, 11266 (2016). http://dx.doi.org/10.1364/OE.24.011266

19. 

M. Richartz, “An improvement of Savart’s polariscope,” J. Opt. Soc. Am. 38, 623 (1948). http://dx.doi.org/10.1364/JOSA.38.000623

20. 

M. W. Kudenov and E. L. Dereniak, “Compact real-time birefringent imaging spectrometer,” Opt. Express 20, 17973 (2012). http://dx.doi.org/10.1364/OE.20.017973

21. 

N. F. Borrelli et al., “Imaging and radiometric properties of microlens arrays,” Appl. Opt. 30, 3633 (1991). http://dx.doi.org/10.1364/AO.30.003633

22. 

C. Oh et al., “High-throughput continuous beam steering using rotating polarization gratings,” IEEE Photonics Technol. Lett. 22, 200–202 (2010). http://dx.doi.org/10.1109/LPT.2009.2037155

23. 

G. Ghosh, “Dispersion-equation coefficients for the refractive index and birefringence of calcite and quartz crystals,” Opt. Commun. 163, 95–102 (1999). http://dx.doi.org/10.1016/S0030-4018(99)00091-7

24. 

M. A. Davenport et al., “The smashed filter for compressive classification and target recognition,” Proc. SPIE 6498, 64980H (2007). http://dx.doi.org/10.1117/12.714460

25. 

C.-I. Chang, Hyperspectral Data Processing: Algorithm Design and Analysis, Wiley-Interscience, Hoboken, New Jersey (2013).

26. 

J. S. Tyo, E. N. Pugh, and N. Engheta, “Colorimetric representations for use with polarization-difference imaging of objects in scattering media,” J. Opt. Soc. Am. A 15, 367 (1998). http://dx.doi.org/10.1364/JOSAA.15.000367

27. 

S. Chaudhuri and K. Kotwal, Hyperspectral Image Fusion, Springer, New York (2013).

28. 

S. Falke et al., “Transition frequencies of the D lines of 39K, 40K, and 41K measured with a femtosecond laser frequency comb,” Phys. Rev. A 74, 032503 (2006). http://dx.doi.org/10.1103/PhysRevA.74.032503

Biography

Bryan Maione completed his BS degree in electrical engineering at the University at Buffalo in 2013. Following his undergraduate studies, he moved to North Carolina to attend NCSU and work under Michael Kudenov, where he earned his PhD in electrical engineering. He now works for Aqueti in Durham, North Carolina, USA, developing array cameras. His primary research interests include hyperspectral imaging, computational imaging, and machine learning.

Leandra Brickson is a PhD student in electrical engineering with a focus in photonics and nanofabrication. After some initial work in nanofabrication, she worked in the GPL group under Dr. Michael Escuti at North Carolina State University (NCSU), fabricating and designing liquid crystal polarization gratings and apodization phase plates. Her research interests include polymerization mechanisms, light transport modeling, and deep learning. She is currently a PhD student at Stanford University.

Michael Escuti is currently a professor of electrical engineering at NCSU, Raleigh, North Carolina, where he pursues research topics in photonics, optoelectronics, diffractive optics, and liquid crystals. He has been recognized with the 2016 NCSU Innovator of the Year Award, the 2010 Presidential Early Career Award for Scientists and Engineers (NSF), the Glenn H. Brown Award (2004) from the International Liquid Crystal Society, and the OSA/New Focus Student Award (2002) from the Optical Society of America.

Michael Kudenov received his BS degree in electrical and computer engineering from the University of Alaska Fairbanks, Fairbanks, Alaska, in 2005 and his PhD degree in optical sciences from the University of Arizona, Tucson, Arizona, in 2009. He is an assistant professor of ECE at NCSU in Raleigh, North Carolina. His lab researches compact high-speed hyperspectral, polarimetric, and interferometric sensors and sensing systems within multidisciplinary applications spanning remote sensing, defense, process monitoring, and biological imaging.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) 0091-3286/2017/$25.00
Bryan Maione, Leandra Brickson, Michael Escuti, and Michael Kudenov "Snapshot imaging spectrometry with a heterodyned Savart plate interferometer," Optical Engineering 56(8), 081806 (7 June 2017). https://doi.org/10.1117/1.OE.56.8.081806
Received: 5 December 2016; Accepted: 15 May 2017; Published: 7 June 2017
KEYWORDS: Interferometers, Imaging spectrometry, Spectrometers, Sensors, Spectral resolution, Target recognition, Calibration


CHORUS Article. This article was made freely available starting 07 June 2018
