Temporal snapshot compressive imaging (SCI) allows high-dimensional temporal images to be reconstructed from a single two-dimensional (2D) set of measurements. This is valuable for capturing color-polarized video data that can be used for robust material classification, since each material polarizes reflected light in a distinctive way. In contrast to conventional commercial video cameras, which often sacrifice spatial, color, or polarization resolution to accommodate more information on the sensor, the SCI paradigm combines optics, electronics, and algorithms to produce high-resolution, high-dimensional imaging from far fewer measurements. Commercial cameras can be adapted to capture information beyond their conventional sensing range when integrated into the SCI framework. This is achieved by incorporating an intensity-modulation element that encodes and compresses the data, providing the incoherent sampling required for nonlinear reconstruction based on compressive sensing principles. In this paper, an off-the-shelf camera is modified to compressively acquire and reconstruct high-resolution spatio-temporal polarization and color data from 2D measurements, yielding color-polarized video. The camera sensor uses a Bayer filter and a polarization filter superimposed on each other, known as an RGB-P sensor. A temporally designed coded aperture (CA) is then incorporated into the optical path to modulate each Bayer-polarized frame, and the modulated frames are integrated into a single measurement at the sensor. The CA is built from spatiotemporal block/unblock elements that encode the information; its design restricts the distribution of those elements across the time dimension to reduce the redundancy of the temporal measurements and, in turn, yield better reconstructions. The compressed color-polarized video measurements are then recovered using the alternating direction method of multipliers (ADMM) reconstruction algorithm. Numerical experiments show that the temporally designed CA patterns outperform random CA structures in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), providing better-quality reconstructions of a color-polarized video of a dynamic scene.
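To make the acquisition model concrete, the following is a minimal sketch of the temporal SCI forward operator described above: each Bayer-polarized frame is modulated by its binary coded-aperture pattern and the modulated frames are summed into a single 2D snapshot. The array shapes, the toy per-pixel design constraint, and the random number generation are illustrative assumptions, not the exact CA design or ADMM solver used in the paper.

```python
import numpy as np

def sci_forward(video, masks):
    """Modulate each frame with its block/unblock pattern and integrate.

    video : (T, H, W) mosaicked Bayer-polarized frames
    masks : (T, H, W) binary coded-aperture patterns
    Returns a single (H, W) snapshot measurement.
    """
    return np.sum(masks * video, axis=0)

def designed_masks(T, H, W, open_per_pixel=2, seed=0):
    """Toy temporal design: each pixel transmits in exactly `open_per_pixel`
    of the T frames, spreading the open slots over time to limit redundancy."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((T, H, W))
    for i in range(H):
        for j in range(W):
            slots = rng.choice(T, size=open_per_pixel, replace=False)
            masks[slots, i, j] = 1.0
    return masks

# Toy example: 8 frames of 64x64 pixels compressed into one 2D measurement.
video = np.random.rand(8, 64, 64)
measurement = sci_forward(video, designed_masks(8, 64, 64))
```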
Single-pixel (SP) imaging uses coded aperture (CA) elements to capture multiple spatially modulated versions of a scene with a single detector. In practical SP implementations, spatial light modulator (SLM) devices are employed to generate the CA modulations. To obtain noiseless discrete images from the SP measurements, the number of projections must exceed the number of pixels in the target image. Alternatively, by exploiting compressive sensing theory and designed CA patterns, the SLM-based SP system reduces the number of projections needed for quality reconstructions, where the CA patterns are designed to emulate orthonormal bases that minimize the correlation between the shots. Nevertheless, in practice, the frame rate of the SP system is restricted by the SLM device, which limits real-time applications. To overcome this, the SLM-based patterns are replaced by CA structures etched on a circularly shifted mask (S-CA), which is introduced in the SP optical path for higher-frame-rate acquisition. Yet these S-CA patterns produce correlated shift modulations and, in turn, yield inaccurate reconstructions. This work introduces an iterative strategy for designing the spatial S-CA pattern structure in SP imaging. The proposed method determines the spatial entries of the S-CA by minimizing the correlation between the pattern shifts, where the number and step size of the S-CA displacements are constrained. Numerical simulations using the proposed design demonstrate an improvement in peak signal-to-noise ratio (PSNR) of up to 2.5 dB over non-designed S-CA structures at a compression ratio of 0.25.
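As an illustration of the design criterion, the sketch below measures how correlated the circular shifts of a candidate S-CA mask are and greedily flips entries to reduce that correlation. The shift count, step size, and the simple random-flip search are assumptions for illustration; they are not the paper's exact iterative design procedure.

```python
import numpy as np

def shift_correlation(mask, n_shifts, step):
    """Mean absolute normalized correlation between the shifted patterns."""
    rows = np.stack([np.roll(mask, s * step, axis=1).ravel()
                     for s in range(n_shifts)])
    rows = rows - rows.mean(axis=1, keepdims=True)
    rows = rows / (np.linalg.norm(rows, axis=1, keepdims=True) + 1e-12)
    gram = np.abs(rows @ rows.T)
    return gram[~np.eye(n_shifts, dtype=bool)].mean()

def design_s_ca(shape=(32, 32), n_shifts=16, step=2, n_iter=2000, seed=0):
    """Greedy random-flip search that lowers the correlation between shifts."""
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 2, size=shape).astype(float)
    best = shift_correlation(mask, n_shifts, step)
    for _ in range(n_iter):
        i, j = rng.integers(shape[0]), rng.integers(shape[1])
        mask[i, j] = 1.0 - mask[i, j]              # flip one entry
        score = shift_correlation(mask, n_shifts, step)
        if score < best:
            best = score                           # keep the flip
        else:
            mask[i, j] = 1.0 - mask[i, j]          # undo the flip
    return mask, best
```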
Infrared (IR) imaging systems have sensor and optical limitations that result in degraded imagery. Beyond the blurring and aliasing introduced by imperfect optics and the finite detector size, the detector fixed-pattern noise adds a significant layer of degradation to the collected imagery. Here, we propose a single-shot super-resolution method that compensates for the nonuniformity noise of long-wave IR imaging systems. The strategy combines wavefront modulation with a reconstruction methodology based on total variation and nonlocal means regularizers to recover high spatial frequencies while reducing noise. In simulations and experiments, we demonstrate a clear improvement of up to 16× in image resolution while significantly decreasing the fixed-pattern noise in the reconstructed images.
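A minimal plug-and-play sketch of the reconstruction idea is shown below: a gradient step on the data-fidelity term is alternated with total-variation and nonlocal-means denoising steps. The block-average detector model, step size, and regularization weights are illustrative assumptions; the paper's wavefront-modulation forward model and exact regularized solver are not reproduced here.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_nl_means

def downsample(x, f):
    """Block-average detector model: f x f high-res pixels per detector."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def reconstruct(y, f, n_iter=30, step=0.5):
    """Alternate a data-fidelity gradient step with TV and NLM denoising."""
    x = np.kron(y, np.ones((f, f)))                        # initial guess
    for _ in range(n_iter):
        residual = downsample(x, f) - y
        x = x - step * np.kron(residual, np.ones((f, f)))  # gradient step
        x = denoise_tv_chambolle(x, weight=0.05)           # TV prior
        x = denoise_nl_means(x, h=0.02, patch_size=3,
                             patch_distance=5)             # NLM prior
    return x

# Toy usage: recover a 128x128 image from a 32x32 low-resolution measurement.
y = np.random.rand(32, 32)
x_hat = reconstruct(y, f=4)
```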
Snapshot compressive imaging aims to capture high-resolution images using low-resolution detectors. The challenge is the generation of simultaneous optical projections that fulfill the compressed sensing reconstruction requirements. We propose the use of controlled aberrations through wavefront coding to produce point spread functions that can simultaneously code and multiplex the scene in a variety of ways. In addition to being light efficient, the approach allows us to analytically characterize the system matrix response. We explore combinations of Zernike modes and analyze the corresponding coherence parameter. Simulation results using natively sparse and natural scenes demonstrate the feasibility of using controlled aberrations for compressive imaging.
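The coherence parameter discussed above is, in essence, the mutual coherence of the system matrix. A minimal sketch of that metric follows; the random stand-in matrix is only a placeholder for the Zernike-mode, PSF-based system matrix built in the paper.

```python
import numpy as np

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns of A."""
    A = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)
    gram = np.abs(A.T @ A)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

A = np.random.randn(64, 256)        # placeholder m x n system matrix, m < n
print(mutual_coherence(A))          # lower values favor compressive recovery
```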
We present a striping noise compensation architecture for hyperspectral push-broom cameras, implemented on a Field-Programmable Gate Array (FPGA). The circuit is fast, compact, and low power, and is capable of eliminating the striping noise in-line during the image acquisition process. The architecture implements a multidimensional neural network (MDNN) algorithm for striping noise compensation previously reported by our group. The algorithm relies on the assumption that the amount of light impinging on neighboring photodetectors is approximately the same in the spatial and spectral dimensions. Under this assumption, two striping noise parameters are estimated using spatial and spectral information from the raw data. We implemented the circuit on a Xilinx ZYNQ XC7Z2010 FPGA and tested it with images obtained from a NIR N17E push-broom camera, at a frame rate of 25 fps and a band-pixel rate of 1.888 MHz. The setup consists of a loop of 320 samples of 320 spatial lines and 236 spectral bands between 900 and 1700 nm, captured under laboratory conditions with a rigid push-broom controller. The noise compensation core can run at more than 100 MHz and consumes less than 30 mW of dynamic power, using less than 10% of the logic resources available on the chip. It also uses one of the two ARM processors available on the FPGA for data acquisition and communication purposes.
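A simplified, floating-point software sketch of the neighborhood-driven gain/offset estimation is given below: each detector is nudged toward the mean of its spatial and spectral neighbors, consistent with the stated assumption. The LMS-style update, learning rate, and array layout are assumptions for illustration; the exact MDNN update rule and its fixed-point FPGA implementation are not reproduced here.

```python
import numpy as np

def mdnn_like_snc(frames, lr=0.01):
    """Estimate per-detector gain/offset striping parameters on the fly.

    frames : (scan_lines, spatial_pixels, spectral_bands) raw push-broom data
    Returns the corrected cube plus the estimated gain and offset maps.
    """
    gain = np.ones(frames.shape[1:])
    offset = np.zeros(frames.shape[1:])
    corrected = np.empty_like(frames, dtype=float)
    for t in range(frames.shape[0]):
        y = gain * frames[t] + offset
        # Desired value: mean of the spatial and spectral neighbors,
        # which are assumed to receive approximately the same light.
        target = 0.25 * (np.roll(y, 1, axis=0) + np.roll(y, -1, axis=0) +
                         np.roll(y, 1, axis=1) + np.roll(y, -1, axis=1))
        error = y - target
        gain -= lr * error * frames[t]        # LMS-style parameter updates
        offset -= lr * error
        corrected[t] = gain * frames[t] + offset
    return corrected, gain, offset
```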
A non-contact, infrared-imaging-based measurement technique is applied to quantify the enzymatic reaction of glucokinase. The method is implemented with a long-wave (8-12 [μm]) infrared microbolometer imaging array and a germanium-based infrared optical vision system adjusted to the size of a small biological sample. The enzymatic reaction is carried out by the glucokinase enzyme, which is representative of the internal dynamics of the cell. Such reactions produce a spontaneous exothermal release of energy that is detected by the infrared imaging system as a non-contact measurement. It is shown by stoichiometry computations and infrared thermal resolution metrics that the infrared imaging system can detect the energy release in the [mK] range. This makes it possible to quantify the spontaneity of the enzymatic reaction in a three-dimensional (surface and time), single, non-contact, real-time measurement. The camera is characterized to determine its sensitivity, and the fixed-pattern noise is compensated by a two-point calibration method. The glucokinase enzyme is isolated from Pyrococcus furiosus. The experiment is carried out by manual injection, with graduated micropipettes, of 40 [μl] of glucokinase onto the surface of the substrate contained in an Eppendorf tube. For recording, the infrared camera is focused at 25.4 [mm] from the surface level of the substrate. The obtained values of energy release are 139 ± 22 [mK] at room temperature and 274 ± 22 [mK] for a bath temperature of 334 [K].
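The fixed-pattern noise correction mentioned above follows the standard two-point calibration scheme. A minimal sketch of that per-pixel linear correction is shown below; the reference temperatures, target values, and array shapes are illustrative, not the values used in the experiment.

```python
import numpy as np

def two_point_nuc(raw, ref_cold, ref_hot, target_cold, target_hot):
    """Per-pixel linear correction derived from two uniform reference frames."""
    gain = (target_hot - target_cold) / (ref_hot - ref_cold)
    offset = target_cold - gain * ref_cold
    return gain * raw + offset

# Toy usage with two uniform blackbody reference frames (synthetic counts).
ref_cold = 100.0 + np.random.randn(240, 320)    # counts at the cold reference
ref_hot = 220.0 + np.random.randn(240, 320)     # counts at the hot reference
raw = 160.0 + np.random.randn(240, 320)         # raw frame to be corrected
frame = two_point_nuc(raw, ref_cold, ref_hot, target_cold=100.0, target_hot=220.0)
```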
In this paper, a prior-knowledge model is proposed in order to increase the effectiveness of a multidimensional striping noise compensation (SNC) algorithm. This is accomplished by considering an optoelectronic approach, thereby generating a more accurate mathematical representation of the hyperspectral acquisition process. The proposed model includes knowledge of the system spectral response, which can be obtained by means of an input with known spectral radiation. Further, the model also considers the dependence of the noise structure on the analog-to-digital conversion process; that is, schemes such as the active-pixel sensor (APS) and the passive-pixel sensor (PPS) have been considered. Finally, the model takes advantage of the degree of crosstalk between consecutive bands in order to determine how much of this spectral information contributes to the readout data obtained in a particular band. All prior knowledge is obtained from a series of experimental analyses and then integrated into the model. After estimating the required parameters, the applicability of the multidimensional SNC is illustrated by compensating for striping noise in hyperspectral images acquired using an experimental setup. A laboratory prototype, based on both a Photonfocus Hurricane hyperspectral camera and a Xeva Xenics NIR hyperspectral camera, has been implemented to acquire data in the ranges of 400-1000 [nm] and 900-1700 [nm], respectively. Also, a mobile platform has been used to simulate and synchronize the scanning procedure of the cameras, and a uniform tungsten lamp has been installed to ensure an equal spectral radiance across the different bands for calibration purposes.
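To illustrate how the inter-band crosstalk enters the acquisition model, the sketch below mixes a fraction of the adjacent bands into each band before applying per-detector gain and offset. The tridiagonal mixing matrix, leakage coefficient, and gain/offset values are placeholders, not the experimentally estimated parameters of the proposed model.

```python
import numpy as np

def readout_with_crosstalk(radiance, gain, offset, alpha=0.1):
    """Forward model of one scan line with adjacent-band crosstalk.

    radiance : (spatial_pixels, spectral_bands) true input signal
    gain, offset : per-detector striping parameters, same shape as radiance
    alpha : fraction of signal leaking in from each neighboring band
    """
    n_bands = radiance.shape[1]
    K = ((1 - 2 * alpha) * np.eye(n_bands)
         + alpha * np.eye(n_bands, k=1)
         + alpha * np.eye(n_bands, k=-1))   # tridiagonal crosstalk mixing
    mixed = radiance @ K.T
    return gain * mixed + offset

# Toy usage for a line of 320 pixels and 236 bands.
radiance = np.random.rand(320, 236)
gain = 1.0 + 0.05 * np.random.randn(320, 236)
offset = 0.02 * np.random.randn(320, 236)
readout = readout_with_crosstalk(radiance, gain, offset)
```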
KEYWORDS: Cameras, Staring arrays, Near infrared, Thermal modeling, Imaging systems, Temperature metrology, Cooling systems, Data modeling, Systems modeling, Hyperspectral imaging
Our group has developed a Planck physics-based model for the input/output behavior of near infrared (NIR)
hyperspectral cameras. During the validation of the model, experiments conducted using an NIR hyperspectral
camera have shown that, when thermal radiation is used as the camera input and no illumination is present,
the output offset happens to be thermally dependent, yet independent of the wavelengths in the NIR band. In
this work, the effect of the incident temperature on the amount of output offset in NIR hyperspectral cameras
has been experimentally studied and incorporated into our previous model for such cameras. The experimental
study has been conducted using an NIR hyperspectral camera in the range of 900 to 1700 [nm] and a controlled
illumination set-up, while different input temperatures have been controlled by means of black-body radiator
sources. The thermal-dependent offset is modeled phenomenologically from experimental data. Initial results
have shown a non-linear dependence between the offset and the temperature. This thermal-offset dependence
can be used to generate new NIR hyperspectral models and new non-linear calibration procedures, and to establish a
basis for the study of time-dependent variations of the NIR thermal offset.
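As an example of the phenomenological modeling, the sketch below fits a candidate nonlinear offset-versus-temperature law to synthetic blackbody data. The exponential form, the coefficients, and the data are assumptions for illustration only; the work reports a nonlinear dependence without prescribing this particular form.

```python
import numpy as np
from scipy.optimize import curve_fit

def offset_model(temperature, a, b, c):
    """Candidate nonlinear law for the thermally dependent output offset."""
    return a * np.exp(b * temperature) + c

# Synthetic blackbody temperatures [K] and offsets [DN] generated from the model.
T = np.linspace(300.0, 500.0, 9)
rng = np.random.default_rng(0)
offset = offset_model(T, 0.05, 0.015, 95.0) + rng.normal(0.0, 0.5, T.size)

params, _ = curve_fit(offset_model, T, offset, p0=(0.05, 0.01, 90.0))
```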
The accuracy achieved by applications employing hyperspectral data collected by hyperspectral cameras depends
heavily on a proper estimation of the true spectral signal. Beyond question, proper knowledge of the sensor
response is key in this process. It is argued here that the common first-order representation of hyperspectral
NIR sensors does not accurately capture their thermal, wavelength-dependent response, hence calling for more
sophisticated and precise models. In this work, a wavelength-dependent, nonlinear model for a near infrared
(NIR) hyperspectral camera is proposed based on its experimental characterization. Experiments have shown
that when temperature is used as the input signal, the camera response is almost linear at low wavelengths,
while as the wavelength increases the response becomes exponential. This wavelength-dependent behavior is
attributed to the nonlinear responsivity of the sensors in the NIR spectrum. As a result, the proposed model
considers different nonlinear input/output responses at different wavelengths. To complete the representation,
both the nonuniform response of neighboring detectors in the camera and the time-varying behavior of the input
temperature have also been modeled. The experimental characterization and the proposed model assessment
have been conducted using an NIR hyperspectral camera in the range of 900 to 1700 [nm] and a black-body
radiator source. The proposed model was utilized to successfully compensate for both (i) the nonuniformity
noise inherent to the NIR camera, and (ii) the striping noise induced by the nonuniformity and the scanning
process of the camera while rendering hyperspectral images.
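The following sketch illustrates the kind of wavelength-dependent input/output response described above: nearly linear at short NIR wavelengths and increasingly exponential toward 1700 nm. The blending scheme and coefficients are assumed for illustration and are not the fitted model from the experimental characterization.

```python
import numpy as np

def camera_response(temperature, wavelength_nm):
    """Blend a linear and an exponential response as wavelength increases."""
    w = (wavelength_nm - 900.0) / (1700.0 - 900.0)   # 0 at 900 nm, 1 at 1700 nm
    linear = 0.5 * temperature
    exponential = 5.0 * np.exp(0.01 * temperature)
    return (1.0 - w) * linear + w * exponential

# Same input temperature, very different responses at the two band edges.
print(camera_response(350.0, 950.0), camera_response(350.0, 1650.0))
```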
Algorithms for striping noise compensation (SNC) for push-broom hyperspectral cameras (PBHCs) are primarily
based on image processing techniques. These algorithms rely on the spatial and temporal information available
in the readout data; however, they disregard the large amount of spectral information also available in the data.
In this paper this shortcoming is addressed and a multidimensional approach to SNC is proposed. The main
assumption of the proposed approach is the short-term stationary behavior of the spatial, spectral, and temporal
input information. This assumption is justified after analyzing the optoelectronic sampling mechanism carried
out by PBHCs. Namely, when the wavelength-resolution of hyperspectral cameras is high enough with respect
to the target application, the spectral information at neighboring photodetectors in adjacent spectral bands can
be regarded as a stationary input. Moreover, when the temporal scanning of hyperspectral information is fast
enough, consecutive temporal and spectral data samples can also be regarded as a stationary input at a single
photodetector. The strength and applicability of the multidimensional approach presented here are illustrated by
compensating for striping noise in real hyperspectral images. To this end, a laboratory prototype, based on a
Photonfocus Hurricane hyperspectral camera, has been implemented to acquire data in the range of 400-1000
[nm], at a wavelength resolution of 1.04 [nm]. A mobile platform has also been constructed to simulate and
synchronize the scanning procedure of the camera. Finally, an image-processing-based SNC algorithm has been
extended yielding an approach that employs all the multidimensional information collected by the camera.
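A compact sketch of the multidimensional idea follows: under the short-term stationarity assumption, the spatial, spectral, and temporal neighbors of a detector see roughly the same radiance, so a per-detector striping estimate can be taken from the deviation of each detector's readout with respect to that neighborhood mean. The window sizes and the offset-only correction are simplifying assumptions rather than the extended algorithm described in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multidimensional_snc(cube):
    """cube: (scan_lines, spatial_pixels, spectral_bands) push-broom data."""
    # Local mean over the temporal, spatial, and spectral neighborhoods.
    neighborhood = uniform_filter(cube, size=(5, 3, 3))
    # Per-detector deviation, averaged over the scan, taken as the stripe offset.
    stripe_offset = (cube - neighborhood).mean(axis=0)
    return cube - stripe_offset[None, :, :]

# Toy usage on a synthetic push-broom cube.
cube = np.random.rand(100, 320, 236)
clean = multidimensional_snc(cube)
```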