Single-shot spectral-volumetric compressed ultrafast photography (Open Access, 18 June 2021)
Abstract

In ultrafast optical imaging, it is critical to obtain the spatial structure, temporal evolution, and spectral composition of the object with snapshots in order to better observe and understand unrepeatable or irreversible dynamic scenes. However, so far, there are no ultrafast optical imaging techniques that can simultaneously capture the spatial–temporal–spectral five-dimensional (5D) information of dynamic scenes. To break the limitation of the existing techniques in imaging dimensions, we develop a spectral-volumetric compressed ultrafast photography (SV-CUP) technique. In our SV-CUP, the spatial resolutions in the x, y and z directions are, respectively, 0.39, 0.35, and 3 mm with an 8.8  mm  ×  6.3  mm field of view, the temporal frame interval is 2 ps, and the spectral frame interval is 1.72 nm. To demonstrate the excellent performance of our SV-CUP in spatial–temporal–spectral 5D imaging, we successfully measure the spectrally resolved photoluminescent dynamics of a 3D mannequin coated with CdSe quantum dots. Our SV-CUP brings unprecedented detection capabilities to dynamic scenes, which has important application prospects in fundamental research and applied science.

1. Introduction

Acquiring the spatial (x,y,z), temporal (t), and spectral (λ) information of an object is very important in natural science exploration. Multi-dimensional optical imaging, as a visualization method, can provide information covering space, time, and spectrum.1 So far, multi-dimensional optical imaging has played an irreplaceable role in exploring the unknown world and decrypting natural mysteries such as light–matter interactions,2 light scattering in tissues,3 and physical or biochemical reactions.4–6 Scanning-based multi-dimensional optical imaging must be operated sequentially, so its imaging speed is restricted to hundreds of frames per second (fps) by the limited data readout speed and on-chip storage of charge-coupled devices or complementary metal-oxide semiconductors (CMOSs).7 Therefore, snapshot multi-dimensional optical imaging has aroused great interest among researchers because of its ability to capture dynamic scenes at imaging speeds of up to a billion or even a trillion fps, corresponding to temporal frame intervals on the picosecond or femtosecond scale. To capture as much spatial–temporal–spectral (x,y,z,t,λ) information as possible, various multi-dimensional optical imaging techniques have been developed. For example, spectral imaging techniques, including coded aperture snapshot spectral imaging,8 adaptive optics spectral-domain optical coherence tomography,9 volume holographic spatial–spectral imaging,10 and compressive spectral time-of-flight (ToF) imaging,11 can capture spatial–spectral four-dimensional (4D) (x,y,z,λ) information, but they provide no temporal information.
In contrast, ultrafast imaging techniques such as compressed ultrafast photography (CUP),12–15 sequentially timed all-optical mapping photography,16 and single-shot femtosecond time-resolved optical polarimetry17 can record spatial–temporal three-dimensional (3D) (x,y,t) information, but both the depth (i.e., z) and spectral information are missing. Some improved techniques have been developed to further extend the imaging dimensions of CUP, such as hyperspectrally compressed ultrafast photography (HCUP)18 and compressed ultrafast spectral photography,19 which can capture spatial–temporal–spectral 4D (x,y,t,λ) information, but they still lack the depth information. Recently, a stereo-polarimetric compressed ultrafast photography method was able to detect spatial–temporal–polarization five-dimensional (5D) (x,y,z,t,ψ) information,20 but the spectral information could not be detected. Consequently, until now there has been no optical imaging technique that can capture the whole spatial–temporal–spectral 5D (x,y,z,t,λ) information in a single exposure.

To break the detection limitation of the existing snapshot multi-dimensional optical imaging in the whole spatial, temporal, and spectral dimensions, we develop a spectral-volumetric compressed ultrafast photography (SV-CUP) technique to realize the spatial–temporal–spectral 5D (x,y,z,t,λ) imaging of the dynamic scenes. Here SV-CUP combines our previous HCUP and ToF-CUP.21 HCUP captures the spatial–temporal–spectral 4D (x,y,t,λ) information of the dynamic scenes, and ToF-CUP extracts the spatial 3D (x,y,z) information of the dynamic scenes. The 3D (x,y,z) information in ToF-CUP is coupled to the 4D (x,y,t,λ) information in HCUP and forms 5D (x,y,z,t,λ) information by image processing. Using SV-CUP, we experimentally demonstrate the spectrally resolved photoluminescent dynamics of a 3D mannequin coated with CdSe quantum dots, which confirms the reliability of SV-CUP.

2. SV-CUP’s Configuration and Principle

A schematic diagram of SV-CUP is shown in Fig. 1(a). A laser pulse (400-nm central wavelength, 50-fs pulse duration) transmits through an engineered diffuser (Thorlabs, ED1-S20-MD) and then irradiates a 3D object. The laser pulse excites the matter on the surface of the 3D object, and the laser-induced optical signal (such as fluorescence) is collected by a camera lens (Nikon, AF Nikkor 35 mm), together with the backscattered optical signal of the laser pulse from the surface of the 3D object. Here the laser-induced optical signal is used to study the dynamic behavior of the laser–matter interaction, and the backscattered optical signal is used to obtain the spatial structure of the 3D object. Both optical signals are divided into two components by a beam splitter (BS1). One component is reflected to an external CMOS camera (Andor, ZYLA 4.2), and the other is imaged onto a digital micromirror device (DMD, Texas Instruments, DLP LightCrafter 3000) through a 4f imaging system. The two optical signals are encoded with a static pseudorandom binary pattern on the DMD and then retroreflected through the same 4f imaging system. The two encoded optical signals are then split again into two components by another beam splitter (BS2): one enters an HCUP subsystem,18 and the other enters a ToF-CUP subsystem.21 In ToF-CUP, the laser-induced optical signal is filtered out by a bandpass filter, and only the backscattered optical signal is sent into a streak camera SC1 (XIOPM, 5200). In HCUP, the backscattered optical signal is filtered out, and the laser-induced optical signal is sent to a grating (Thorlabs, GT25-03) for horizontal deflection and then to another streak camera SC2 (Hamamatsu, C7700) for vertical deflection and integral imaging. The external CMOS camera and the two streak cameras are precisely synchronized by a digital delay generator (Stanford Research Systems, DG645).
In this experiment, the unencoded and undeflected image measured by the external CMOS camera is used to provide the spatial and intensity threshold constraint in the subsequent image reconstruction.22

Fig. 1

SV-CUP’s configuration and principle. (a) System configuration of SV-CUP: M1 and M2, mirrors; ED, engineered diffuser; DS, dynamic scene; CL, camera lens; BS1 and BS2, beam splitters (reflection/transmission: 50/50); F1 and F2, filters; G, diffraction grating; L1 and L2, lenses; DMD, digital micromirror device; CMOS, complementary metal-oxide semiconductor camera; and SC1 and SC2, streak cameras. (b) Working principle of SV-CUP: C, spatial encoding operator; T, temporal shearing operator; K, spatial-temporal integration operator; S, spectral shearing operator; and M, spatial–temporal–spectral integration operator.


Mathematically, the SV-CUP system contains two imaging subsystems, i.e., HCUP and ToF-CUP. As can be seen in Fig. 1(b), the original 5D dynamic scene I(x,y,z,t,λ), involving spatial 3D, temporal 1D, and spectral 1D information, is first encoded and then divided into two components for imaging: ToF-CUP is used to capture the spatial 3D (x,y,z) information21 and HCUP is used to record the spatial–temporal–spectral 4D (x,y,t,λ) information.18

In ToF-CUP, the backscattered optical signal is sheared in the temporal domain. According to the arrival time t_ToF of the received photons and the intensity reflectivity α(x,y,z) of the 3D object, the backscattered optical signal I1(x,y,z) can be described as

Eq. (1)

$$I_1(x,y,z) = I_s\,\alpha(x,y,z),$$

where Is is the intensity of the illuminating optical signal and z is the depth, with z = c·t_ToF/2; here c is the speed of light. Thus, the compressed image measured by ToF-CUP can be formulated as

Eq. (2)

$$E_1(m,n) = \mathbf{K}\mathbf{T}\mathbf{C}\,I_s\alpha(x,y,z),$$

where C, T, and K represent, respectively, the spatial encoding operator, the temporal shearing operator, and the spatial–temporal integration operator, and E1(m,n) denotes the measured intensity on a two-dimensional (2D) array sensor.
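To make the operator notation concrete, the forward model of Eq. (2) can be sketched numerically. The following NumPy sketch is illustrative only (it is not the actual SV-CUP code) and assumes a discrete datacube with a temporal shear of one pixel row per slice:

```python
import numpy as np

def tof_cup_forward(scene, mask):
    """Sketch of the ToF-CUP forward model E1 = K T C [Is * alpha].

    scene: (nz, ny, nx) datacube Is*alpha(x, y, z), one slice per depth bin
    mask:  (ny, nx) static pseudorandom binary code (operator C)
    """
    nz, ny, nx = scene.shape
    encoded = scene * mask[None, :, :]      # C: spatial encoding
    sheared = np.zeros((nz, ny + nz, nx))   # T: temporal shearing,
    for k in range(nz):                     # one row of shift per slice
        sheared[k, k:k + ny, :] = encoded[k]
    return sheared.sum(axis=0)              # K: integration on the sensor
```

Each depth slice lands on the sensor shifted one row further than the previous one, so distinct depths overlap in the single compressed frame E1(m,n); this overlap is why a regularized inverse problem such as Eq. (3) is needed to undo the measurement.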

To recover the spatial 3D information, i.e., I^1(x,y,z), an augmented Lagrangian (AL) algorithm based on compressed sensing is employed to solve the minimization problem,18 and it is given by

Eq. (3)

$$\hat{I}_1(x,y,z) = \arg\min\left\{\Phi_{\mathrm{TV}}\!\left[I_s\alpha(x,y,z)\right] - \gamma^{T}\!\left[E_1(m,n) - \mathbf{KTC}\,I_s\alpha(x,y,z)\right] + \frac{\xi}{2}\left\|E_1(m,n) - \mathbf{KTC}\,I_s\alpha(x,y,z)\right\|_2^2\right\},$$

where Φ_TV(·) is the total-variation regularizer, γ is the Lagrange multiplier vector, ξ is the penalty parameter, and ‖·‖2 denotes the l2 norm. Note that all the operators in Eq. (3) are linear and differentiable.
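The structure of this minimization can be illustrated with a heavily simplified solver. The sketch below is not the reconstruction code used in the paper: purely for brevity, it replaces the augmented-Lagrangian scheme with plain gradient descent and the TV regularizer with a quadratic smoothness penalty, while keeping the same split between a data-fidelity term and a regularization term:

```python
import numpy as np

def al_reconstruct(E, A, At, shape, lr=0.1, smooth_w=1e-3, n_iter=100):
    """Simplified gradient-descent stand-in for the solver of Eq. (3).

    E:      measured 2D compressed image
    A, At:  the measurement operator (e.g., K T C) and its adjoint
    shape:  shape of the unknown datacube/image to recover
    """
    I = np.zeros(shape)
    for _ in range(n_iter):
        grad = At(A(I) - E)                       # data-fidelity gradient
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
               np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
        I -= lr * (grad - smooth_w * lap)         # smoothness regularization
    return I
```

With the true AL algorithm, the multiplier γ and penalty ξ additionally enforce the measurement constraint at each iteration; the fixed-point being sought is the same, an image consistent with E that is also piecewise smooth.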

In HCUP, the laser-induced optical signal (such as fluorescence) is sheared in both the temporal and spectral domains. Similarly, the compressed image recorded by HCUP can be written as

Eq. (4)

$$E_2(m,n) = \mathbf{M}\mathbf{T}\mathbf{S}\mathbf{C}\,I_2(x,y,t,\lambda),$$

where S is the spectral shearing operator and M is the spatial–temporal–spectral integration operator.
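The HCUP forward model differs from the ToF-CUP one only by the extra spectral shear. A companion NumPy sketch (again illustrative, assuming one pixel column of shift per spectral bin and one pixel row per temporal bin):

```python
import numpy as np

def hcup_forward(scene, mask):
    """Sketch of the HCUP forward model E2 = M T S C I2.

    scene: (nt, nl, ny, nx) datacube I2(x, y, t, lambda)
    mask:  (ny, nx) static pseudorandom binary code (operator C)
    """
    nt, nl, ny, nx = scene.shape
    E2 = np.zeros((ny + nt, nx + nl))
    for t in range(nt):
        for l in range(nl):
            coded = scene[t, l] * mask        # C: spatial encoding
            # S shifts columns by wavelength, T shifts rows by time,
            # and M accumulates everything on the 2D sensor
            E2[t:t + ny, l:l + nx] += coded
    return E2
```

Because time shears vertically (streak camera) and wavelength shears horizontally (grating), the two dimensions are multiplexed along orthogonal sensor axes, which is what lets the reconstruction of Eq. (5) disentangle them.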

To retrieve the spatial–temporal–spectral 4D information, i.e., I^2(x,y,t,λ), the AL algorithm is also used to solve the inverse problem of Eq. (4) and it is given by

Eq. (5)

$$\hat{I}_2(x,y,t,\lambda) = \arg\min\left\{\Phi_{\mathrm{TV}}\!\left[I_2(x,y,t,\lambda)\right] - \gamma^{T}\!\left[E_2(m,n) - \mathbf{MTSC}\,I_2(x,y,t,\lambda)\right] + \frac{\xi}{2}\left\|E_2(m,n) - \mathbf{MTSC}\,I_2(x,y,t,\lambda)\right\|_2^2\right\}.$$

Based on Eqs. (3) and (5), I^1(x,y,z) in ToF-CUP and I^2(x,y,t,λ) in HCUP can be individually reconstructed; here the penalty parameter ξ is 0.25 in Eq. (3) and 0.001 in Eq. (5). By coupling I^1(x,y,z) to I^2(x,y,t,λ), the spatial–temporal–spectral 5D information, i.e., I^(x,y,z,t,λ), can be extracted by image processing. According to the time relation between ToF-CUP and HCUP, the coupling operation can be expressed as

Eq. (6)

$$\hat{I}(x,y,z,t,\lambda) = H\!\left[\hat{I}_1(x,y,z)\right] \circ \hat{I}_2(x,y,t,\lambda), \quad \text{s.t.}\ z = ct/2,$$

where H(x) is a threshold filter with H(x)=0 for x<xs and H(x)=1 for x≥xs, xs denotes the intensity threshold that eliminates the noise, and ∘ represents the Hadamard product of 2D (i.e., x, y) matrices. In the coupling process, I^1(x,y,z) is filtered by the threshold filter; it contains the spatial slices along the depth z and thus offers only the spatial outline of the 3D object. Based on Eq. (6), the sequential depth information of I^2(x,y,t,λ) in HCUP is obtained by the Hadamard product of I^2(x,y,t,λ) and the thresholded I^1(x,y,z). Thus, the total spatial–temporal–spectral information I^(x,y,z,t,λ) is fully retrieved.
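The coupling of Eq. (6) amounts to a per-depth binary masking. A minimal sketch, assuming the HCUP time axis and the ToF-CUP depth axis have already been brought onto the same grid via z = ct/2 (so the two cubes share a slice index):

```python
import numpy as np

def couple_5d(I1, I2, threshold):
    """Sketch of the coupling step in Eq. (6).

    I1: (nz, ny, nx) reconstructed ToF-CUP cube (one slice per depth bin)
    I2: (nt, nl, ny, nx) reconstructed HCUP cube; nt is assumed equal to nz,
        with frame t matching depth slice z = c*t/2
    """
    H = (I1 >= threshold).astype(float)   # threshold filter H: outline of the object
    # Hadamard product of each (y, x) depth mask with every spectral frame
    return H[:, None, :, :] * I2          # -> (nz, nl, ny, nx) 5D estimate
```

The thresholded ToF cube acts purely as a geometric gate: it tells every HCUP frame which (x, y) pixels belong to which depth, while the intensity, time, and wavelength content all come from I^2.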

3. SV-CUP’s Depth Resolution Characterization

SV-CUP is composed of ToF-CUP and HCUP; thus, the technical specifications of SV-CUP are determined by the two subsystems. Only HCUP provides the temporal and spectral information, so it determines the temporal and spectral frame intervals of SV-CUP. The spatial resolutions in the x and y directions are related to both ToF-CUP and HCUP, but HCUP has the lower spatial resolution because of its higher data compression ratio; thus, the spatial resolutions in the x and y directions of SV-CUP are also determined by HCUP. The spatial resolution in the z direction, however, is related only to ToF-CUP, which therefore determines the depth resolution of SV-CUP. In our previous work, the related technical parameters of HCUP were characterized.18 The spatial resolutions in the x and y directions are, respectively, 0.39 and 0.35 mm with an 8.8 mm × 6.3 mm field of view (FOV), corresponding to 1.26 and 1.41 line pairs per millimeter (lp/mm) in that report. The temporal frame interval is 2 ps, and the spectral frame interval is 1.72 nm. However, the spatial resolution in the z direction (i.e., the depth resolution), set by ToF-CUP, still needs to be characterized here.

The experimental arrangement for characterizing the depth resolution of SV-CUP is shown in Fig. 2(a). A ladder-structured model is used as the measured object, and an ultrashort laser pulse irradiates it. The backscattered optical signal from the ladders at different heights is collected by SV-CUP. The dimensions of the ladder-structured model are shown in Fig. 2(b). Based on these dimensions, the temporal intervals between adjacent ladders can be calculated as 10, 20, 30, and 40 ps, respectively, and the total time window is 100 ps. Three representative reconstructed images are shown in Fig. 2(c). As can be seen, the first two ladders are observed simultaneously at 8 ps and are indistinguishable. At 32 ps, however, the first two ladders have completely disappeared, and only the third ladder is observed. Similarly, only the fifth ladder appears at 104 ps. From these observations, the height difference of 3 mm between the second and third ladders can be taken as the depth resolution of SV-CUP. The reconstructed 3D ladder-structured model is given in Fig. 2(d), which is consistent in size with the actual object. Here the zero position in the z direction refers to the green dashed line in Fig. 2(b), and the height difference between the first two ladders cannot be perfectly retrieved owing to the limited depth resolution.
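The relation z = c·t_ToF/2 behind these intervals is easy to check numerically. The step heights used below are back-computed from the quoted 10–40 ps delays for illustration and are not taken from Fig. 2(b):

```python
C_MM_PER_PS = 0.2998  # speed of light, in mm per picosecond

def tof_delay_ps(height_mm):
    """Round-trip ToF delay of a step of the given height (from z = c*t/2)."""
    return 2.0 * height_mm / C_MM_PER_PS

# Illustrative correspondences (heights are hypothetical, back-computed):
#   1.5 mm -> ~10 ps,  3.0 mm -> ~20 ps,  4.5 mm -> ~30 ps,  6.0 mm -> ~40 ps
```

In particular, the 3-mm depth resolution quoted above corresponds to a round-trip delay of about 20 ps, i.e., an order of magnitude above the 2-ps temporal frame interval, consistent with the depth resolution being limited by the streak camera rather than by the frame spacing alone.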

Fig. 2

SV-CUP’s depth resolution characterization: (a) schematic diagram of the experimental setup; (b) the actual size of the ladder-structured model along the x and z axes, 25 mm along the y axis; (c) the selected reconstructed images at the times of 8, 32, and 104 ps; and (d) the retrieved 3D (x,y,z) data cube from Fig. 2(c).


4. SV-CUP’s 5D Imaging

To demonstrate the excellent performance of SV-CUP in spatial–temporal–spectral 5D imaging, we used SV-CUP to measure the photoluminescent dynamics of a 3D mannequin coated with CdSe quantum dots; the experimental arrangement is shown in Fig. 3(a). CdSe has a strong optical absorbance at 400 nm,23,24 which matches the laser central wavelength. The reconstructed data cube of the 3D mannequin is shown in Fig. 3(b). One can see that the reconstructed mannequin matches the real mannequin in spatial distribution. Figure 3(c) shows the reconstructed images of the 3D mannequin at some representative times and wavelengths. The fluorescence intensity evolution in both the temporal and spectral dimensions can be clearly observed. In the spectral dimension, the central wavelength of the fluorescence spectrum is 532 nm, and the whole spectral range is about 64 nm. In the temporal dimension, the right hand, body, and left hand of the 3D mannequin appear in turn owing to their difference in spatial depth, and the whole mannequin is observed at 480 ps. The fluorescence intensities at all the measured wavelengths almost reach their maximal values about 8 ns after excitation, and the duration of the whole photoluminescent process is about 50 ns. To verify the reconstruction accuracy in the temporal and spectral dimensions, we calculate the fluorescence intensities in the temporal and spectral domains from Fig. 3(c) and compare them with independent measurements: the fluorescence intensity in the temporal domain (i.e., the photoluminescent dynamics) is measured by a streak camera, and the fluorescence intensity in the spectral domain (i.e., the fluorescence spectrum) is measured by a spectrometer. Both the calculated and experimental results are shown in Figs. 3(d) and 3(e) for comparison. The reconstruction results are in good agreement with the experimental measurements.
Additionally, we extract the time-resolved fluorescence spectroscopy from Fig. 3(c); the calculated result is shown in Fig. 3(f). All the fluorescence spectral components exhibit the same temporal evolution: a fast rise followed by a slow decay in intensity. For a more intuitive view, we calculate the fluorescence lifetimes of some selected spectral components from Fig. 3(f), as shown in Fig. 3(g). These spectral components have similar lifetimes, which indicates that all the fluorescence spectral components originate from the relaxation of the same excited states in the CdSe quantum dots.
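The lifetime extraction in Fig. 3(g) can be approximated by a single-exponential model. The sketch below is a simplification (the paper does not specify its fitting procedure): it estimates τ from I(t) = I0·exp(−t/τ) by a log-linear least-squares fit, assuming the trace passed in is the decay portion with the fast rise already excluded:

```python
import numpy as np

def fit_lifetime(t, intensity):
    """Estimate a fluorescence lifetime tau from a single-exponential decay.

    t:         time samples (e.g., in ns) over the decay portion of the trace
    intensity: decay-portion intensities, I(t) = I0 * exp(-t / tau)
    """
    valid = intensity > 0                   # log() requires positive values
    # log I(t) = log I0 - t/tau is linear in t; the slope gives -1/tau
    slope, _ = np.polyfit(t[valid], np.log(intensity[valid]), 1)
    return -1.0 / slope
```

Applying such a fit per spectral channel of the reconstructed I^2(x,y,t,λ) cube is one way to obtain a lifetime-versus-wavelength curve like Fig. 3(g); similar lifetimes across channels then point to a common emitting state, as concluded above.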

Fig. 3

SV-CUP’s 5D imaging: (a) experimental arrangement for imaging the photoluminescent dynamics of a 3D mannequin coated with CdSe quantum dots; (b) reconstructed data cube of the 3D mannequin; (c) selected reconstructed images of the 3D mannequin at some representative times and wavelengths; (d) photoluminescent dynamics calculated from (c) (blue line) and measured by a streak camera (red line); (e) fluorescence spectrum calculated from (c) (blue line) and measured by a spectrometer (red line); (f) time-resolved spectroscopy extracted from (c); and (g) calculated fluorescence lifetimes at some selected spectral components (Video 1, MP4, 1.3 MB [URL: https://doi.org/10.1117/1.AP.3.4.045001.1]).


As shown in Fig. 3, SV-CUP demonstrates a powerful capability for detecting fluorescence lifetimes. An important application of SV-CUP is therefore fluorescence lifetime imaging (FLI).25,26 Unlike traditional FLI, which can only display planar 2D (x,y) information, SV-CUP further provides the depth (i.e., z) information,27 which may enable higher discrimination between different materials on a 3D object. Likewise, given its 5D imaging capability, SV-CUP is well suited for biomedical imaging,28–30 where it can provide more information about the chemical composition and functional evolution of biological tissues. Moreover, SV-CUP employs a computational imaging method (i.e., the AL algorithm) to recover the original information. In this way, the image encoding and decoding in SV-CUP can provide computational security during the transmission of image information, so SV-CUP also shows promise for information and communication security.31

5. Discussion and Conclusions

In SV-CUP, the spatial resolutions in the x and y directions depend on the camera lens in the imaging system. If a high numerical aperture (NA) objective lens is used, both horizontal spatial resolutions can be further improved, although they cannot surpass the optical diffraction limit. The spatial resolution in the z direction is limited by the temporal resolution of the streak camera; if a cutting-edge femtosecond streak camera (Hamamatsu, C6138) is employed, the depth resolution can reach the submillimeter scale. The temporal frame interval is likewise limited by the temporal resolution of the streak camera; with a femtosecond streak camera, a temporal frame interval of a few hundred femtoseconds can be achieved experimentally. The spectral frame interval is determined by the grating groove density: the higher the groove density, the smaller the spectral frame interval. In our experimental experience, a spectral frame interval of several hundred wavenumbers is typically available. In future studies, SV-CUP’s system parameters in the spatial, temporal, and spectral dimensions can be greatly improved by optimizing the streak camera, grating, and camera lens.

As shown above, SV-CUP provides a well-established tool to capture the spatial–temporal–spectral 5D (x,y,z,t,λ) information of dynamic scenes. However, SV-CUP has a technical limitation in practical applications: it cannot measure rapid changes in the 3D spatial structure of a dynamic scene, because the 3D spatial information is obtained by coupling ToF-CUP and HCUP, and ToF-CUP can only give a fixed spatial structure of the 3D object. One solution is to employ a multiple-exposure strategy, but then the temporal resolution is limited by the refresh rate of the streak camera, which is usually on the submillisecond scale. Another solution is to integrate a standard stereoscope20,32 into HCUP; this configuration needs only one HCUP system and one streak camera, but it yields a lower spatial resolution in the depth direction, because HCUP has a higher data compression ratio than CUP and the streak camera must be divided into two imaging regions for detection.

To summarize, we have developed an SV-CUP technique that can simultaneously capture the spatial–temporal–spectral 5D information of dynamic scenes in a single exposure. This technique advances snapshot optical imaging from four to five dimensions. In our SV-CUP, the spatial resolution is 0.39 mm in the x direction, 0.35 mm in the y direction, and 3 mm in the z direction with an 8.8 mm × 6.3 mm FOV; the spectral frame interval is 1.72 nm; and the temporal frame interval is 2 ps. Using our SV-CUP, we have successfully captured the spectrally resolved photoluminescent dynamics of a 3D mannequin coated with CdSe quantum dots. Given its 5D imaging capability, SV-CUP should have a significant impact on many related applications.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 91850202, 11774094, 12074121, 11804097, 11727810, and 12034008), the Science and Technology Commission of Shanghai Municipality (Grant Nos. 19560710300 and 20ZR1417100), and Ministère des Relations internationales et de la Francophonie du Québec.

References

1. J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5(9), 1113–1127 (2018). https://doi.org/10.1364/OPTICA.5.001113

2. V. Anand et al., “Spatio-spectral-temporal imaging of fast transient phenomena using a random array of pinholes,” Adv. Photon. Res. 2(2), 2000032 (2020). https://doi.org/10.1002/adpr.202000032

3. Z. A. Steelman et al., “Light scattering methods for tissue diagnosis,” Optica 6(4), 479–489 (2019). https://doi.org/10.1364/OPTICA.6.000479

4. J. C. Jing, X. Wei, and L. V. Wang, “Spatio-temporal-spectral imaging of non-repeatable dissipative soliton dynamics,” Nat. Commun. 11(1), 2059 (2020). https://doi.org/10.1038/s41467-020-15900-x

5. J. Yi et al., “Visible-light optical coherence tomography for retinal oximetry,” Opt. Lett. 38(11), 1796–1798 (2013). https://doi.org/10.1364/OL.38.001796

6. Y. Zhao et al., “Evaluation of burn severity in vivo in a mouse model using spectroscopic optical coherence tomography,” Biomed. Opt. Express 6(9), 3339–3345 (2015). https://doi.org/10.1364/BOE.6.003339

7. M. El-Desouki et al., “CMOS image sensors for high speed applications,” Sensors 9(1), 430–444 (2009). https://doi.org/10.3390/s90100430

8. A. A. Wagadarikar et al., “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17(8), 6368–6388 (2009). https://doi.org/10.1364/OE.17.006368

9. Y. Zhang et al., “High-speed volumetric imaging of cone photoreceptors with adaptive optics spectral-domain optical coherence tomography,” Opt. Express 14(10), 4380–4394 (2006). https://doi.org/10.1364/OE.14.004380

10. S. Vyas, Y. H. Chia, and Y. Luo, “Volume holographic spatial-spectral imaging systems [Invited],” J. Opt. Soc. Am. A 36(2), A47–A58 (2019). https://doi.org/10.1364/JOSAA.36.000A47

11. H. Rueda et al., “Single aperture spectral + ToF compressive camera: toward hyperspectral + depth imagery,” IEEE J. Sel. Top. Signal Process. 11(7), 992–1003 (2017). https://doi.org/10.1109/JSTSP.2017.2737784

12. L. Gao et al., “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014). https://doi.org/10.1038/nature14005

13. J. Liang, L. Zhu, and L. V. Wang, “Single-shot real-time femtosecond imaging of temporal focusing,” Light Sci. Appl. 7(1), 42 (2018). https://doi.org/10.1038/s41377-018-0044-7

14. D. Qi et al., “Single-shot compressed ultrafast photography: a review,” Adv. Photonics 2(1), 014003 (2020). https://doi.org/10.1117/1.AP.2.1.014003

15. J. Yao et al., “Multichannel-coupled compressed ultrafast photography,” J. Opt. 22(8), 085701 (2020). https://doi.org/10.1088/2040-8986/aba13b

16. K. Nakagawa et al., “Sequentially timed all-optical mapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014). https://doi.org/10.1038/nphoton.2014.163

17. X. Wang et al., “High-frame-rate observation of single femtosecond laser pulse propagation in fused silica using an echelon and optical polarigraphy technique,” Appl. Opt. 53(36), 8395–8399 (2014). https://doi.org/10.1364/AO.53.008395

18. C. Yang et al., “Hyperspectrally compressed ultrafast photography,” Phys. Rev. Lett. 124(2), 023902 (2020). https://doi.org/10.1103/PhysRevLett.124.023902

19. P. Wang, J. Liang, and L. V. Wang, “Single-shot ultrafast imaging attaining 70 trillion frames per second,” Nat. Commun. 11(1), 2091 (2020). https://doi.org/10.1038/s41467-020-15745-4

20. J. Liang et al., “Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution,” Nat. Commun. 11(1), 5252 (2020). https://doi.org/10.1038/s41467-020-19065-5

21. J. Liang et al., “Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography,” Sci. Rep. 5(1), 15504 (2015). https://doi.org/10.1038/srep15504

22. L. Zhu et al., “Space- and intensity-constrained reconstruction for compressed ultrafast photography,” Optica 3(7), 694–697 (2016). https://doi.org/10.1364/OPTICA.3.000694

23. Z. Pan et al., “Highly efficient inverted type-I CdS/CdSe core/shell structure QD-sensitized solar cells,” ACS Nano 6(5), 3982–3991 (2012). https://doi.org/10.1021/nn300278z

24. K. Gong et al., “Radiative lifetimes of zincblende CdSe/CdS quantum dots,” J. Phys. Chem. C 119(4), 2231–2238 (2015). https://doi.org/10.1021/jp5118932

25. C. Dysli, S. Wolf, and M. S. Zinkernagel, “The lowdown on fluorescence lifetime imaging ophthalmoscopy,” Retin. Today 6, 58–60 (2017).

26. J. L. Lagarto et al., “Real-time multispectral fluorescence lifetime imaging using single photon avalanche diode arrays,” Sci. Rep. 10(1), 8116 (2020). https://doi.org/10.1038/s41598-020-65218-3

27. A. Bhandari, C. Barsi, and R. Raskar, “Blind and reference-free fluorescence lifetime estimation via consumer time-of-flight sensors,” Optica 2(11), 965 (2015). https://doi.org/10.1364/OPTICA.2.000965

28. O. Holub et al., “Fluorescence lifetime imaging (FLI) in real-time: a new technique in photosynthesis research,” Photosynthetica 38(4), 581–599 (2000). https://doi.org/10.1023/A:1012465508465

29. B. Kanber et al., “High-dimensional detection of imaging response to treatment in multiple sclerosis,” NPJ Digit. Med. 2(1), 49 (2019). https://doi.org/10.1038/s41746-019-0127-8

30. L. Marcu, “Fluorescence lifetime techniques in medical applications,” Ann. Biomed. Eng. 40(2), 304–331 (2012). https://doi.org/10.1007/s10439-011-0495-y

31. C. Yang et al., “Compressed 3D image information and communication security,” Adv. Quantum Technol. 1(2), 1800034 (2018). https://doi.org/10.1002/qute.201800034

32. M. Gosta and M. Grgić, “Accomplishments and challenges of computer stereo vision,” in Proc. ELMAR, 57–64 (2010).
CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Pengpeng Ding, Yunhua Yao, Dalong Qi, Chengshuai Yang, Fengyan Cao, Yilin He, Jiali Yao, Chengzhi Jin, Zhengqi Huang, Li Deng, Lianzhong Deng, Tianqing Jia, Jinyang Liang, Zhenrong Sun, and Shian Zhang "Single-shot spectral-volumetric compressed ultrafast photography," Advanced Photonics 3(4), 045001 (18 June 2021). https://doi.org/10.1117/1.AP.3.4.045001
Received: 16 March 2021; Accepted: 25 May 2021; Published: 18 June 2021
Keywords: 3D image processing; ultrafast imaging; photography; luminescence; spatial resolution; optical imaging; streak cameras
