The use of optical frequency combs (OFCs) for multi-heterodyne spectroscopy has enabled unprecedented measurement capabilities for spectroscopic sensing, including rapid acquisition, high resolution, and high sensitivity [1,2]. The development of field-deployable OFC sources that are widely tunable in the chemically important fingerprint region of the long-wavelength infrared (LWIR) remains a major research challenge. In this paper, we report our recent efforts towards developing an LWIR comb source for the IARPA SILMARILS (Standoff ILluminator for Measuring Absorbance and Reflectance Infrared Light Signatures) program. LGS has developed fiber-optic sources producing spectral combs in the SWIR (1.52 to 1.56 μm and 1.7 to 2.0 μm) and in the LWIR (7.7 to 12.1 μm) regions. The spectral combs in the LWIR are generated by difference-frequency mixing one OFC centered around 1.54 μm with a second OFC, whose center wavelength is tunable between 1.7 and 2.0 μm, in a nonlinear optical crystal. The average power of the generated LWIR light is 1.2-12 mW, and the instantaneous spectral breadth of the combs is > 80 cm⁻¹, sufficiently broad to cover multiple molecular absorption peaks. We demonstrate standoff sensing of chemical targets at surface concentrations as low as 12 μg/cm² by measuring LWIR transflectance spectra with the comb source.
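The wavelength produced by difference-frequency mixing follows directly from energy conservation: the generated frequency is the difference of the two input frequencies, so 1/λ_DFG = 1/λ₁ - 1/λ₂. A minimal sketch of this relation using the comb wavelengths quoted above (the function name is illustrative, not from the paper):

```python
def dfg_wavelength_um(lam1_um: float, lam2_um: float) -> float:
    """Idler wavelength (in um) generated by difference-frequency mixing
    two sources at lam1_um and lam2_um, from 1/lam3 = 1/lam1 - 1/lam2."""
    return 1.0 / (1.0 / lam1_um - 1.0 / lam2_um)

# Mixing the 1.54 um comb with the tunable comb set to 1.90 um
# yields light near 8.13 um, inside the LWIR fingerprint region.
print(round(dfg_wavelength_um(1.54, 1.90), 2))
```

Sweeping the second comb's center wavelength within its 1.7-2.0 μm tuning range is what moves the generated difference frequency across the LWIR band.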
Recent years have seen numerous algorithms for learning a sparse synthesis or analysis model from data. Recently, a generalized analysis model called the 'transform model' has been proposed. Data following the transform model are approximately sparsified when acted on by a linear operator called a sparsifying transform. While existing transform learning algorithms can learn a transform for any vectorized data, they are most often used to learn a model for overlapping image patches. However, these approaches do not exploit the redundancy of such data and scale poorly with the dimensionality of the data and the size of the patches. We propose a new sparsifying transform learning framework in which the transform acts on entire images rather than on patches. We illustrate the connection between existing patch-based transform learning approaches and the theory of block transforms, then develop a new transform learning framework in which the transforms have the structure of an undecimated filter bank with short filters. Unlike previous work on transform learning, the filter length can be chosen independently of the number of filter bank channels. We apply our framework to accelerated magnetic resonance imaging: we simultaneously learn a sparsifying filter bank while reconstructing an image from undersampled Fourier measurements. Numerical experiments show that our new model yields higher-quality images than previous patch-based sparsifying transform approaches.
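The transform model itself is easy to illustrate with a hand-picked (not learned) operator: a linear map W sparsifies a signal x when W @ x has few nonzero entries. A minimal sketch, assuming a first-order finite-difference transform and a piecewise-constant test signal (transform *learning*, as in the paper, would instead adapt W to the data):

```python
import numpy as np

# Transform model: W @ x = z with z sparse.  Here W is a fixed
# finite-difference operator, which exactly sparsifies
# piecewise-constant signals.
n = 16
W = np.eye(n) - np.eye(n, k=1)          # first-order difference transform
x = np.repeat([1.0, 3.0, 2.0, 5.0], 4)  # piecewise-constant test signal

z = W @ x                                # transform-domain representation
nonzeros = int(np.sum(np.abs(z) > 1e-8))
print(nonzeros, "of", n, "coefficients are nonzero")  # prints "4 of 16 ..."
```

The same difference operation, viewed as convolution with the short filter [1, -1] applied at every shift, is a one-channel example of the undecimated-filter-bank structure the abstract describes: the filter is short regardless of the signal length.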
Model-based iterative reconstruction algorithms are capable of reconstructing high-quality images from low-dose CT measurements. The performance of these algorithms depends on the ability of a signal model to characterize the signals of interest. Recent work has shown the promise of signal models that are learned directly from data. We propose a new method for low-dose tomographic reconstruction that combines adaptive sparsifying transform regularization with a statistically weighted constrained optimization formulation. The new formulation removes the need to tune a regularization parameter. We propose an algorithm to solve this optimization problem based on the Alternating Direction Method of Multipliers (ADMM) and the FISTA proximal gradient algorithm. Numerical experiments on the FORBILD head phantom illustrate the utility of the new formulation and show that adaptive sparsifying transform regularization outperforms competing dictionary learning methods at speeds rivaling total-variation regularization.
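The FISTA building block mentioned above can be sketched on a toy statistically weighted sparse regression problem. This is a generic illustration of FISTA with an l1 prox under an invented setup (the matrix A, weights w, and penalty lam are made up for the example), not the paper's CT formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: min_x 0.5*||A x - y||_W^2 + lam*||x||_1, with W = diag(w)
# playing the role of statistical weights; solved by FISTA
# (proximal gradient descent with Nesterov momentum).
m, n = 40, 20
A = rng.standard_normal((m, n))
w = rng.uniform(0.5, 2.0, m)             # per-measurement statistical weights
x_true = np.zeros(n)
x_true[[2, 7, 13]] = [1.0, -2.0, 1.5]    # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.1

def obj(x):
    r = A @ x - y
    return 0.5 * np.sum(w * r**2) + lam * np.sum(np.abs(x))

L = np.linalg.norm(A.T @ (w[:, None] * A), 2)  # Lipschitz constant of the gradient
x = np.zeros(n)
v = x.copy()
t = 1.0
for _ in range(200):
    grad = A.T @ (w * (A @ v - y))
    u = v - grad / L
    x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft-threshold prox
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    v = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
    x, t = x_new, t_new

print(obj(x) < obj(np.zeros(n)))  # the objective drops well below the zero solution
```

In the paper's setting the quadratic term would come from the statistically weighted CT data-fit inside an ADMM splitting; the sketch only shows the proximal-gradient mechanics that FISTA contributes.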