Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture nonrepeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusions with other modalities, and its unique applications in multiple research fields.

## 1.

## Introduction

Researchers and photographers have long sought to unravel transient events on an ultrashort time scale using ultrafast imaging. From the early observations of a galloping horse^{1} to capturing the electronic motions in nonequilibrium materials,^{2} this research area has continuously developed for over 140 years. Currently, with the aid of subfemtosecond (fs, ${10}^{-15}\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{s}$) lasers^{3}^{,}^{4} and highly coherent electron sources,^{5}^{,}^{6} it is possible to simultaneously achieve attosecond (as, ${10}^{-18}\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{s}$) temporal resolution and subnanometer (nm, ${10}^{-9}\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{m}$) spatial resolution.^{7}^{,}^{8} Ultrafast imaging holds great promise for advancing science and technology, and it has already been widely used in both scientific research and industrial applications.

Ultrafast imaging approaches can be classified into stroboscopic and single-shot categories. For transient events that are highly repeatable, reliable pump-probe schemes are used to explore the underlying mechanisms. Unfortunately, this strategy becomes ineffective in circumstances with unstable and even irreversible dynamics, such as optical rogue waves,^{9} irreversible structural dynamics in chemical reactions,^{10}^{,}^{11} and shock waves in inertial confinement fusion.^{12} To overcome this technical limitation, a variety of single-shot ultrafast imaging techniques with the ability to visualize the evolution of two-dimensional (2-D) spatial information have been proposed.^{13} Based on their methods of image formation, these imaging techniques can be further divided into two categories. One is the direct imaging without the aid of computational processing, such as ultrafast framing/sampling cameras,^{14} femtosecond time-resolved optical polarimetry,^{15} and sequentially timed all-optical mapping photography (STAMP).^{16} The other category is reconstruction imaging, in which dynamic scenes are extracted or recovered from the detected results by specific computational imaging algorithms, including holography,^{17} tomography,^{18} and compressed sensing (CS)-based photography.^{19}^{,}^{20} As summarized in Ref. 13, although direct imaging methods are still important and reliable for capturing transient events in real time, an increasing number of reconstruction imaging approaches have achieved substantial progress in various specifications, such as imaging speed, number of pixels per frame, and sequence depth (i.e., frames per shot).

Among the various reconstruction imaging modalities, compressed ultrafast photography (CUP) advantageously combines the super-high compression ratio of sparse data achieved by applying CS and the ultrashort temporal resolution of streak camera techniques. CUP has achieved a world record imaging speed of 10 trillion frames per second (Tfps), as well as a sequence depth of hundreds of frames simultaneously with only one shot.^{19} Moreover, a series of ultrafast diffractive and microscopic imaging schemes with electron and x-ray sources have been proposed to extend the modality from optics to other domains. In recent years, CUP has emerged as a promising candidate for driving next-generation single-shot ultrafast imaging.

Covering recent research outcomes in CUP and its related applications since its first appearance in 2014,^{19} this review introduces and discusses state-of-the-art imaging techniques, including their principles and applications. The subsequent sections are arranged as follows. In Sec. 2, we describe the working principle of CUP and discuss mathematical models of the data acquisition and the image reconstruction processes. In Secs. 3 and 4, we review technical improvements and extensions of this technique so far, respectively. The technical improvements are explained with regard to the data acquisition and the image reconstruction, whereas the technical extensions of CUP combine a variety of techniques. In Sec. 5, related applications of CUP are described and discussed, not only for optical measurements but also for information security protection. Finally, we conclude this review in Sec. 6 and speculate on future research directions.

## 2.

## Working Principle of CUP

A CUP experiment can be completed in two steps: data acquisition and image reconstruction. A simple experimental diagram for data acquisition is shown in Fig. 1.^{19} A dynamic scene is first imaged on a digital micromirror device (DMD) by a camera lens and a $4f$ imaging system consisting of a tube lens and a microscope objective, and then it is encoded in the spatial domain by the DMD. Subsequently, the encoded dynamic scene reflected from the DMD is collected by the same $4f$ imaging system. Finally, it is deflected and measured by a streak camera.

A DMD consists of hundreds of thousands of micromirrors, each of which can be individually rotated by $\pm 12\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{deg}$ to represent an on or off state.^{21} When a pseudorandom binary code is loaded onto a DMD, these micromirrors are turned on or off accordingly, so a dynamic scene projected onto the DMD can be spatially encoded. In CUP acquisition, the entrance slit of the streak camera is fully opened ($\sim 5\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{mm}$), and a scanning control module in the streak camera provides a sweeping voltage that linearly deflects the photoelectrons induced by the dynamic scene according to their arrival times. The temporally sheared image produced by the streak camera is captured by a CCD in a single exposure. In the CUP experiment, the pseudorandom binary code on the DMD is fixed. Data acquisition can thus be described by a forward model comprising three steps. As shown in Fig. 2(a), this procedure can be mathematically described as follows: the three-dimensional (3-D) dynamic scene $I(x,y,t)$ is first imaged onto an intermediate plane, on which the intensity distribution of the intermediate image is identical to that of the original scene under the assumption of ideal optical imaging with unit magnification. The intermediate image is then encoded by a mask containing pseudorandomly distributed, square, binary-valued elements at the intermediate image plane. The image intensity distribution after this operation is formulated as

## Eq. (1)

$${I}_{c}(x,y,t)=\sum _{i,j}I(x,y,t){C}_{i,j}\mathrm{rect}[\frac{x}{d}-(i+\frac{1}{2}),\frac{y}{d}-(j+\frac{1}{2}\left)\right].$$Here, ${C}_{i,j}$ is an element of the binary mask and $d$ is the mask pixel size. The encoded scene is then temporally sheared along the $y$ axis by the streak camera:

## Eq. (2)

$${I}_{s}(x,y,t)={I}_{c}(x,y-vt,t),$$where $v$ is the temporal shearing velocity of the streak camera. Each CCD pixel finally integrates the sheared scene over space and time:

## Eq. (3)

$$E({x}^{\prime},{y}^{\prime})=\int \mathrm{d}t\int \mathrm{d}x\int \mathrm{d}y{I}_{s}(x,y,t)\mathrm{rect}[\frac{x}{d}-({x}^{\prime}+\frac{1}{2}),\frac{y}{d}-({y}^{\prime}+\frac{1}{2}\left)\right].$$Discretizing the dynamic scene into voxels gives

## Eq. (4)

$$I(x,y,t)\approx \sum _{i,j,\tau}{I}_{i,j,\tau}\mathrm{rect}[\frac{x}{d}-(i+\frac{1}{2}),\frac{y}{d}-(j+\frac{1}{2}),\frac{t}{{\mathrm{\Delta}}_{t}}-(\tau +\frac{1}{2}\left)\right],$$so that the measured optical energy can be written in discrete form as

## Eq. (5)

$$E({x}^{\prime},{y}^{\prime})=\frac{{d}^{3}}{v}\sum _{\tau =0}^{{y}^{\prime}-1}{C}_{{x}^{\prime},{y}^{\prime}-\tau}{I}_{{x}^{\prime},{y}^{\prime}-\tau ,\tau},$$where ${C}_{{x}^{\prime},{y}^{\prime}-\tau}$ and ${I}_{{x}^{\prime},{y}^{\prime}-\tau ,\tau}$ denote elements of the discretized mask and scene, respectively. It is noteworthy that, given a coded mask with dimensions of ${N}_{x}\times {N}_{y}$, the input scene $I(x,y,t)$ can be voxelized into a matrix form with dimensions ${N}_{x}\times {N}_{y}\times {N}_{t}$ under the assumption of ideal optical imaging with unit magnification, where ${N}_{x}$, ${N}_{y}$, and ${N}_{t}$ are the numbers of voxels along $x$, $y$, and $t$, respectively. Therefore, the measured $E({x}^{\prime},{y}^{\prime})$ has dimensions ${N}_{x}\times ({N}_{y}+{N}_{t}-1)$. The spatial resolution of CUP is mainly determined by ${N}_{x}$ and ${N}_{y}$ or the mask pixel size, $d$, while the temporal resolution is restricted by ${N}_{t}$, which is related to the shearing velocity of the streak camera, $v$. In matrix form, Eqs. (1)–(5) can be combined as

## Eq. (6)

$$E({x}^{\prime},{y}^{\prime})=\mathbf{O}I(x,y,t)=\mathbf{T}\mathbf{S}\mathbf{C}I(x,y,t),$$where $\mathbf{C}$, $\mathbf{S}$, and $\mathbf{T}$ are the spatial encoding, temporal shearing, and spatiotemporal integration operators, respectively, and $\mathbf{O}$ denotes the combined measurement operator.
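The discrete forward model can be sketched numerically in a few lines. In the toy code below, the scene size, the pseudorandom binary mask, and the one-pixel-per-frame shear are illustrative assumptions rather than parameters of an actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Ny, Nt = 8, 8, 4                      # illustrative voxel counts along x, y, t

I = rng.random((Nt, Ny, Nx))              # dynamic scene I(x, y, t)
C = rng.integers(0, 2, size=(Ny, Nx)).astype(float)  # pseudorandom binary mask (operator C)

def cup_forward(I, C):
    """Encode (C), shear one pixel per frame along y (S), integrate over t (T)."""
    Nt, Ny, Nx = I.shape
    E = np.zeros((Ny + Nt - 1, Nx))       # streak image: Nx x (Ny + Nt - 1) pixels
    for t in range(Nt):
        E[t:t + Ny, :] += C * I[t]        # frame t lands t rows lower on the CCD
    return E

E = cup_forward(I, C)
print(E.shape)                            # → (11, 8), i.e., (Ny + Nt - 1, Nx)
```

Because the shear displaces each encoded frame before the time integration, the single 2-D measurement retains (compressed) information about all $N_t$ frames.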

Given the prior knowledge of the forward model, the image reconstruction tries to estimate the unknown dynamic scene $I(x,y,t)$ from the captured 2-D image $E({x}^{\prime},{y}^{\prime})$ by solving the linear inverse problem of Eq. (6). The number of elements in the 3-D dynamic scene $I(x,y,t)$ is approximately two orders of magnitude larger than that in the 2-D image $E({x}^{\prime},{y}^{\prime})$.^{19} The inverse problem of Eq. (6) is therefore severely underdetermined, and a traditional approach based on the operators $\mathbf{T}$, $\mathbf{S}$, and $\mathbf{C}$ alone cannot reliably reconstruct the true $I(x,y,t)$ from $E({x}^{\prime},{y}^{\prime})$. CUP introduces CS theory to solve this problem.^{22}^{,}^{23} Here, CS makes full use of the sparsity of $I(x,y,t)$ in a certain domain to recover the original scene; sparsity in a domain means that most elements are zeros, whereas only a few are nonzero. Consider the case where $I(x,y,t)$ has $n$ elements in the original domain and $s$ nonzero elements in the sparse domain, whereas $E({x}^{\prime},{y}^{\prime})$ has $m$ elements, with $n\gg s$ and $n>m>s$. The fact that $m$ is larger than $s$ makes it possible to solve the inverse problem of Eq. (6). In practice, CUP finds the best $I(x,y,t)$ using a CS algorithm in a certain sparse domain under the condition of Eq. (6), which is shown as

## Eq. (7)

$$\underset{I}{\mathrm{min}\text{\hspace{0.17em}}}\mathrm{\Phi}[I(x,y,t)]\phantom{\rule[-0.0ex]{1.0em}{0.0ex}}\text{subject to}\text{\hspace{0.17em}\hspace{0.17em}}E({x}^{\prime},{y}^{\prime})=\mathbf{O}I(x,y,t),$$where $\mathrm{\Phi}[\cdot]$ is a regularization function that promotes sparsity. According to CS theory,^{23}^{–}^{25} the original $I(x,y,t)$ can be completely recovered when

## Eq. (8)

$$m\ge f{\mu}^{2}s,$$where $f$ is a constant that is correlated with the number of elements $n$, and $\mu $ is the mutual coherence between the sparse basis of $I(x,y,t)$ and the measurement matrix that is dependent on the operators $\mathbf{C}$, $\mathbf{S}$, and $\mathbf{T}$. To recover $I(x,y,t)$, CUP first sets its initial guess as a point in an $n$-dimensional space, denoted as ${I}^{0}$. Starting from ${I}^{0}$, the CS algorithm searches for the destination point ${I}^{L}$; in this optimization process, the intermediate point ${I}^{i}$ is updated in each iteration until it reaches the proximity of ${I}^{L}$, as shown in Fig. 2(b). The search paths must obey Eq. (7), but they differ among CS algorithms. For these search paths, there exist at least five major classes of computational techniques: greedy pursuit, convex relaxation, Bayesian framework, nonconvex optimization, and brute force; the details can be found in Ref. 26. Different CS algorithms arrive at different final points ${I}^{L}$, which indicates that an optimal CS algorithm exists, and the difference between ${I}^{L}$ and the original $I$ can be utilized as the standard for judging an algorithm’s quality. Last but not least, noise always exists in experimental data. Moreover, Eq. (8) is often unsatisfied due to the large compression ratio caused by transforming the 3-D data cube $I(x,y,t)$ into the 2-D data $E({x}^{\prime},{y}^{\prime})$, as shown in Fig. 2(a). Therefore, Eq. (7) can be further written as

## Eq. (9)

$$\underset{I}{\mathrm{min}}\text{\hspace{0.17em}}\mathrm{\Phi}[I(x,y,t)]\phantom{\rule[-0.0ex]{1.0em}{0.0ex}}\text{subject to}\text{\hspace{0.17em}\hspace{0.17em}}{\Vert E({x}^{\prime},{y}^{\prime})-\mathbf{O}I(x,y,t)\Vert}_{2}<\delta ,$$where $\delta $ is a tolerance determined by the noise level. Based on these single-shot data acquisition and image reconstruction procedures, the CUP system with the configuration shown in Fig. 1 can achieve an imaging speed as high as 100 billion fps and a sequence depth of 350 frames. For each frame, the spatial resolution is $\sim 0.4$ line pairs per mm in a $50\text{-}\mathrm{mm}\times 50\text{-}\mathrm{mm}$ field of view (FOV). CUP has thus demonstrated outstanding performance as a single-shot ultrafast imaging technique, and many further technical improvements have emerged.
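To make the inverse problem concrete, a bare-bones reconstruction can be run on a small discrete model (one-pixel-per-frame shear, random binary mask, all sizes illustrative). The projected-gradient loop below enforces only data fidelity and nonnegativity; it is a minimal stand-in for the TV-regularized solvers discussed in Sec. 3.2, not the actual CUP algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Ny, Nt = 8, 8, 4
C = rng.integers(0, 2, size=(Ny, Nx)).astype(float)  # illustrative binary mask

def forward(I):                            # O = T S C with a one-pixel-per-frame shear
    E = np.zeros((Ny + Nt - 1, Nx))
    for t in range(Nt):
        E[t:t + Ny] += C * I[t]
    return E

def adjoint(E):                            # transpose operator O^T
    return np.stack([C * E[t:t + Ny] for t in range(Nt)])

I_true = np.zeros((Nt, Ny, Nx))
I_true[:, 2:6, 2:6] = 1.0                  # a simple sparse scene
E = forward(I_true)

I = np.zeros_like(I_true)
for _ in range(200):
    # gradient step on 0.5*||E - O I||^2, then projection onto I >= 0
    I = np.clip(I - 0.2 * adjoint(forward(I) - E), 0.0, None)

print(np.linalg.norm(forward(I) - E) < np.linalg.norm(E))  # → True
```

Even this crude prior shrinks the measurement residual; real CUP reconstructions additionally exploit sparsity (e.g., TV) to resolve the underdetermined ambiguity.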

## 3.

## Technical Improvements in CUP

In this section, we review recent technical improvements in the CUP technique from two aspects of its experimental operation. In Sec. 3.1, we discuss a few strategies for improving data acquisition inspired by Eq. (8), as well as the fastest CUP system with a streak camera, which has femtosecond temporal resolution. In Sec. 3.2, we review improvements in image reconstruction algorithms.

## 3.1.

### Improvements in Data Acquisition

Equation (8) holds the key to improving data acquisition. For a given dynamic scene $I(x,y,t)$, the coefficient of nonzero elements in the original domain, $f$, and the number of nonzero elements in the sparse domain, $s$, are constants. Fortunately, improvements can be realized by reducing the mutual coherence, $\mu $, or increasing the measured number of elements, $m$. Based on this principle, a few novel approaches have been proposed.
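For intuition, $\mu $ can be computed directly for small matrices. The sketch below uses an orthonormal DCT basis as a hypothetical sparse basis $\Psi$ and compares random binary sensing rows (mask-like) against spike (identity) rows; all sizes and basis choices are illustrative, not those of a CUP system:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis, one basis vector per column."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    B = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * j / (2 * n))
    B[:, 0] /= np.sqrt(2.0)
    return B

def mutual_coherence(A, Psi):
    """mu = sqrt(n) * max |<a_k, psi_j>| over unit-norm rows of A, columns of Psi."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    Psi = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)
    return np.sqrt(Psi.shape[0]) * np.abs(A @ Psi).max()

rng = np.random.default_rng(2)
n = 32
Psi = dct_basis(n)
A_binary = rng.integers(0, 2, size=(16, n)).astype(float)  # random 0/1 rows
A_spike = np.eye(n)[:16]                                   # spike (identity) rows

print(mutual_coherence(A_binary, Psi), mutual_coherence(A_spike, Psi))
```

With these illustrative choices, the spike rows are nearly incoherent with the DCT basis ($\mu $ close to $\sqrt{2}$), whereas the unoptimized binary rows overlap strongly with the DC basis vector and score much higher; $\mu $ always lies between 1 and $\sqrt{n}$, and lower values permit recovery from fewer measurements.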

## 3.1.1.

#### Reducing the mutual coherence

The parameter $\mu $ represents the mutual coherence between the sparse basis of $I(x,y,t)$ and the measurement matrix. The measurement matrix mainly depends on the encoding operator $\mathbf{C}$ (i.e., the random codes on the DMD), which indicates that CUP performance can be improved by optimizing the random codes. Yang et al. adopted a genetic algorithm (GA) to optimize the codes.^{27} The GA is designed to self-adaptively find the optimal codes in the search space and eventually obtain the global solution. Utilizing the optimized codes, CUP needs three steps to recover a dynamic scene, as shown in Fig. 3(a). First, a dynamic scene is set as the optimization target. This scene can differ from the real dynamic scene but must share the same sparse basis; it consists of the images reconstructed by CUP using random codes. Second, the GA is utilized to optimize the codes according to this optimization target, i.e., the images reconstructed in the first step. These reconstructed images constitute a simulated scene, which is then recovered by computer simulation with many sets of random codes. Each set of random codes is regarded as an individual, and these individuals constitute a group; the GA then simulates biological evolution to find the optimal codes. The details can be found in Ref. 27. Finally, using the optimal codes obtained in the second step, CUP records the dynamic scene a second time.
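The evolutionary loop can be illustrated with a toy GA. Here the fitness is the column coherence of each candidate code matrix, used as a simple proxy for the simulation-based reconstruction-error fitness of Ref. 27; the population size, mutation rate, and matrix dimensions are made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 16                 # toy measurement rows x scene pixels
pop_size, n_gen = 40, 60

def coherence(A):
    """Max absolute correlation between distinct columns (lower = better CS matrix)."""
    G = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)
    M = np.abs(G.T @ G)
    np.fill_diagonal(M, 0.0)
    return M.max()

pop = rng.integers(0, 2, size=(pop_size, m, n)).astype(float)
for _ in range(n_gen):
    fit = np.array([coherence(ind) for ind in pop])
    order = np.argsort(fit)                  # ascending: lowest coherence first
    parents = pop[order[:pop_size // 2]]     # truncation selection
    # uniform crossover plus bit-flip mutation to refill the population
    a = parents[rng.integers(0, len(parents), pop_size)]
    b = parents[rng.integers(0, len(parents), pop_size)]
    cross = rng.random((pop_size, m, n)) < 0.5
    children = np.where(cross, a, b)
    flip = rng.random((pop_size, m, n)) < 0.02
    children = np.abs(children - flip)       # flip selected bits
    children[0] = parents[0]                 # elitism: keep the best code
    pop = children

best = min(pop, key=coherence)
print(coherence(best))
```

The elitist selection guarantees that the best code never worsens across generations, mirroring the convergence behavior expected of the code-optimization step.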

Figures 3(b) and 3(c) show the results reconstructed using the optimal codes and the random codes, respectively. Here, the dynamic scene is a time- and space-evolving laser pulse with a pulse width of 3 ns and a central wavelength of 532 nm, and the laser spot is divided into two components in space by a thin wire. The result obtained with the optimal codes has less noise and a more distinct spatial profile than that obtained with the random codes. However, a total of three steps are needed for one optimization, so this method demands that the dynamic scene be repeatable twice. For nonrepeatable scenes, a similar dynamic scene should be found in advance, one that has the same sparse basis as the real dynamic scene.^{30}^{–}^{32} One point to note is that the decrease in $\mu $ achievable by optimizing the codes with the GA is somewhat limited, because the GA can only optimize the operator $\mathbf{C}$; the other operators that constitute the measurement matrix are left unchanged.

## 3.1.2.

#### Increasing the number of elements, m

As shown in Eq. (8), the parameter $m$ represents the number of elements in $E({x}^{\prime},{y}^{\prime})$. For a given dynamic scene, $m$ is a constant if the scene is encoded by a single set of random codes, as is shown in Fig. 2(a). To increase $m$, more sets of random codes can be utilized to simultaneously encode the dynamic scene: this method is called multiencoding CUP.^{29} In this method, as shown in Fig. 4, an ultrafast dynamic scene is divided into several replicas, and each replica is encoded by an independent encoding mask. Finally, these replicas are individually imaged after temporal shearing. Thus, Eq. (5) can be further formulated in matrix form as

## Eq. (10)

$$\left[\begin{array}{c}{E}_{1}({x}^{\prime},{y}^{\prime})\\ {E}_{2}({x}^{\prime},{y}^{\prime})\\ \vdots \\ {E}_{k}({x}^{\prime},{y}^{\prime})\end{array}\right]=\left[\begin{array}{c}\mathbf{T}\mathbf{S}{\mathbf{C}}_{1}\\ \mathbf{T}\mathbf{S}{\mathbf{C}}_{2}\\ \vdots \\ \mathbf{T}\mathbf{S}{\mathbf{C}}_{k}\end{array}\right]I(x,y,t),$$where ${E}_{k}$ and ${\mathbf{C}}_{k}$ are the measurement and the encoding operator of the $k$'th channel, respectively.

Coincidentally, the lossless encoding CUP (LLE-CUP) proposed by Liang et al.^{33} can also be regarded as a method to increase $m$. There are three views in LLE-CUP: the dynamic scene in two of the views is encoded by complementary codes and is then sheared and integrated by the streak camera, and the results are called the sheared views. The dynamic scene in the third view is simply integrated by an external CCD, and the result is called the unsheared view. Mathematically, only the sheared views contribute to extracting the 3-D datacube from the compressed 2-D image, whereas the unsheared view is used to constrain the space and intensity of the reconstructed image. In this method, the sheared views provide different codes for each acquisition channel, and each image is reconstructed by its own codes. Nevertheless, the unsheared view still improves the image reconstruction quality in some situations, and it is adopted in a few approaches.^{34}^{,}^{35}
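In code, the stacked system of Eq. (10) simply concatenates one sheared measurement per encoding channel. The sketch below uses illustrative sizes and a one-pixel-per-frame shear; it only demonstrates how the measured element count $m$ grows $k$-fold:

```python
import numpy as np

rng = np.random.default_rng(4)
Nx, Ny, Nt, k = 8, 8, 4, 3            # k encoding channels (illustrative)

I = rng.random((Nt, Ny, Nx))          # dynamic scene
masks = rng.integers(0, 2, size=(k, Ny, Nx)).astype(float)  # one mask C_k per channel

def shear_integrate(frames):          # T S: shear one pixel per frame along y, sum over t
    E = np.zeros((Ny + Nt - 1, Nx))
    for t, f in enumerate(frames):
        E[t:t + Ny] += f
    return E

# Eq. (10): each replica is encoded by its own mask C_k, then sheared and integrated
E_stack = np.stack([shear_integrate(Ck * I) for Ck in masks])

print(E_stack.shape)   # → (3, 11, 8): k times more measured elements m
```

Each channel constrains the same unknown scene through a different code, which is what makes the joint inversion better conditioned than a single-channel measurement.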

It is worth mentioning that both schemes can effectively improve CUP’s performance by increasing the sampling rate. Moreover, as demonstrated in Ref. 29, a multiencoding strategy can break through the original temporal resolution limitation of the temporal deflector (e.g., the streak camera), which was formerly considered a restriction on the frame rate of a CUP system. The reconstructed image quality is significantly improved by increasing $m$, but the spatial resolution may decrease when the CCD is divided into several subareas to image the dynamic scenes in these channels. However, if the channels are arranged judiciously or some FOV is sacrificed, a reasonable balance in spatial resolution can be reached. Moreover, synchronization between the different channels is crucial for realizing the achievable temporal resolution.

## 3.1.3.

#### Fastest CUP system

One of the most important characteristics of a CUP system is its ultrafast imaging speed, which is definitively determined by the temporal deflector. Based on a prototype of CUP, Liang et al. recently established a trillion-frame-per-second compressed ultrafast photography (T-CUP) system and realized real-time, ultrafast, passive imaging of temporal focusing with 100-fs frame intervals in a single camera exposure.^{34} A diagram of the T-CUP system is shown in Fig. 5. Similar to the first-generation CUP system (Fig. 1), it performs data acquisition and image reconstruction, but an external CCD is installed on the other side of the beam splitter. A 3-D spatiotemporal scene is first imaged by the beam splitter to form two replicas. The first replica is directly recorded by the external CCD by temporally integrating it over the entire exposure time. In addition, the other replica is spatially encoded by a DMD, and then sent to a femtosecond streak camera with the highest temporal resolution of 200 fs, where the entrance slit is fully opened, and the scene is sheared along one spatial axis and recorded by a detector. With the aid of reconstruction algorithms to solve the minimization problem, one can obtain a time-lapse video of the dynamic scene with a frame rate as high as 10 Tfps. In a CUP system, the imaging speed mainly depends on the temporal resolution of the streak camera. Therefore, by varying the temporal shearing velocity of the streak camera, the frame rate can be widely varied from 0.5 to 10 Tfps, with corresponding T-CUP temporal resolutions from 6.34 to 0.58 ps.

It is noteworthy that all scenes reconstructed using the T-CUP system are accurate to 100 fs in frame interval, with a sequence depth (i.e., number of frames per exposure) of more than 300. To the best of our knowledge, this is the world’s best combination of imaging speed and sequence depth. On the other hand, it should be noted that a streak camera needs photon-to-electron and electron-to-photon conversions for 2-D imaging, and this limitation confines the pixel count of each reconstructed image to tens of thousands.

## 3.2.

### Improvements in Image Reconstruction

Because CS theory is key to CUP, efforts to improve the reconstruction algorithms have been an important route to better performance. One example is optimizing the search path to seek a better CS algorithm, which has resulted in the proposed use of the augmented Lagrangian (AL) algorithm.^{36} An alternative scheme is confining the search path within a certain scope, which is called the space- and intensity-constrained (SIC) reconstruction method.^{37}

## 3.2.1.

#### AL-based reconstruction algorithm

Heretofore, all CS algorithms for CUP have been based on total variation (TV) minimization, which is a convex relaxation technique. TV minimization keeps the recovered images sharp by preserving boundaries accurately,^{38}^{,}^{39} which is essential for characterizing the reconstructed images. The original tool for image reconstruction in CUP was a two-step iterative shrinkage/thresholding (TwIST) algorithm.^{28} The TwIST algorithm is a quadratic penalty function method and transforms Eq. (9) into

## Eq. (11)

$$\underset{I}{\mathrm{min}}\{\mathrm{\Phi}{[I(x,y,t)]}_{\mathrm{TV}}+\frac{\beta}{2}{\Vert E({x}^{\prime},{y}^{\prime})-\mathbf{O}I(x,y,t)\Vert}_{2}^{2}\},$$where $\beta $ is the penalty parameter. Alternatively, the Lagrange multiplier method transforms Eq. (9) into

## Eq. (12)

$$\underset{I}{\mathrm{min}}\{\mathrm{\Phi}{[I(x,y,t)]}_{\mathrm{TV}}-\lambda [E({x}^{\prime},{y}^{\prime})-\mathbf{O}I(x,y,t)]\},$$where $\lambda $ is the Lagrange multiplier. However, the quadratic penalty term requires a very large $\beta $ for the measurement constraint to hold accurately, which degrades the conditioning of the problem, and the purely Lagrangian form is difficult to minimize stably.^{40} To avoid this problem, an AL function method has been proposed,^{39} presented as

## Eq. (14)

$$\underset{I}{\mathrm{min}}\{\mathrm{\Phi}[I(x,y,t)]-\gamma [E({x}^{\prime},{y}^{\prime})-\mathbf{O}I(x,y,t)]+\frac{\beta}{2}{\Vert E({x}^{\prime},{y}^{\prime})-\mathbf{O}I(x,y,t)\Vert}_{2}^{2}\},$$where $\gamma $ is the Lagrange multiplier and $\beta $ is the penalty parameter.

To further validate the improvement in the image reconstruction quality, a superluminal propagation of noninformation was recorded in Ref. 36. As shown in Fig. 6(a), a femtosecond laser pulse obliquely illuminates a transverse stripe pattern at an angle of $\sim 38\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{deg}$ with respect to the surface normal, and a CUP camera is vertically positioned for recording. Figures 6(b) and 6(c) show the experimental results reconstructed by the AL algorithm and the TwIST algorithm, respectively. Clearly, the images reconstructed by the TwIST algorithm have more artifacts, whereas those reconstructed by the AL algorithm are more faithful to the true situation. The AL algorithm opens up new approaches to solving this inverse problem, such as gradient projection for sparse reconstruction.^{41} In the near future, more studies will surely be carried out to further optimize image reconstruction algorithms.
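The AL scheme alternates between approximately minimizing the AL function in $I$ and updating the multiplier. The bare-bones sketch below drops the regularizer $\mathrm{\Phi}$ for brevity and uses a small dense random matrix as a stand-in for the flattened operator $\mathbf{O}$; all sizes and step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 30, 50
O = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the flattened T S C operator
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0  # sparse ground truth
E = O @ x_true                                 # noiseless measurement

beta, gamma = 1.0, np.zeros(m)
x = np.zeros(n)
for _ in range(100):                        # method-of-multipliers outer loop
    for _ in range(20):                     # inner: minimize the AL in x by gradient descent
        c = E - O @ x                       # constraint residual E - O x
        grad = O.T @ gamma - beta * (O.T @ c)   # d/dx of [-gamma.c + beta/2 ||c||^2]
        x -= 0.1 * grad
    gamma -= beta * (E - O @ x)             # multiplier update

print(np.linalg.norm(E - O @ x))            # measurement constraint residual
```

Unlike the pure quadratic penalty, the multiplier $\gamma$ absorbs the constraint enforcement, so $\beta$ can stay moderate while the residual is still driven toward zero.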

## 3.2.2.

#### Space- and intensity-constrained reconstruction

A scheme proposed by Zhu et al. confines the search path within certain scopes and is accordingly named the SIC reconstruction algorithm.^{37} This method operates in a spatial zone *M*, and the values of pixels outside of this region are set as zeros. The spatial zone *M* is extracted from the unsheared spatiotemporally integrated image of the dynamic scene, recorded by an external CCD, which is similar to the hardware configuration in Sec. 3.1.3. In addition, the values of pixels less than the intensity threshold $s$, even in zone *M*, are set to zero. By using the penalty function framework, the SIC reconstruction algorithm can be written as

## Eq. (16)

$$\underset{I\in M,I>s}{\mathrm{min}}\{\mathrm{\Phi}{[I(x,y,t)]}_{\mathrm{TV}}+\frac{\beta}{2}{\Vert E(x,y)-\mathbf{O}I(x,y,t)\Vert}_{2}^{2}\}.$$The spatial zone *M* is chosen by an adaptive local thresholding algorithm and a median filter. The intensity threshold is chosen from several candidates between 0 and 0.01 times the maximal pixel value. The criterion for these values is the minimal root-mean-square error between the reconstructed integrated images obtained by the algorithm and the unsheared integrated image obtained by the external CCD.
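The two constraints amount to a projection that can be applied inside each solver iteration. A minimal sketch follows, with a hypothetical external-CCD image and a fixed relative threshold; the actual method selects the threshold by the RMSE criterion described above:

```python
import numpy as np

rng = np.random.default_rng(6)
Nt, Ny, Nx = 4, 16, 16
I_est = rng.random((Nt, Ny, Nx))                 # current iterate of the solver
ext = np.zeros((Ny, Nx))
ext[4:12, 4:12] = 1.0                            # hypothetical unsheared external-CCD image

def sic_project(I, ext, rel_thresh=0.005):
    """Enforce the SIC constraints: I = 0 outside zone M and below intensity s."""
    M = ext > rel_thresh * ext.max()      # spatial zone from the integrated image
    s = rel_thresh * I.max()              # intensity threshold (assumed fixed here)
    I = np.where(M[None, :, :], I, 0.0)   # space constraint
    return np.where(I > s, I, 0.0)        # intensity constraint

I_proj = sic_project(I_est, ext)
print(I_proj[:, 0, 0].max())   # → 0.0, pixels outside M are forced to zero
```

Restricting the search to *M* shrinks the effective number of unknowns, which is why the constrained iterations converge to sharper solutions than the unconstrained ones.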

To demonstrate the advantages of the SIC reconstruction method, a picosecond laser pulse propagation was captured by a derivative of the primary CUP system, and the reconstructed results by the unconstrained (i.e., TwIST) and constrained (i.e., SIC) algorithms are shown in Figs. 7(a) and 7(b), respectively. Clearly, the SIC reconstructed image maintains sharper boundaries than the TwIST reconstructed image. Moreover, the normalized intensity profiles in Fig. 7(c) further show that the spatial and temporal resolutions are simultaneously improved using the SIC algorithm.

## 4.

## Technical Extensions of CUP

In mathematical models of CUP, CS offers a scheme that allows the underdetermined reconstruction of sparse scenes.^{42} Since CUP uses a linear and undersampled imaging system, such a model can be flexibly extended to other systems to address their limitations. Three representative works in recent years are presented here to inspire researchers. The first extension, described in Sec. 4.1, originates from the combination of CUP and STAMP to realize ultrafast spectral–temporal photography based on CS. Next, similar to its usage in the CUP system, CS is introduced into microscopic systems based on electron sources to explore ultrafast structural dynamics in a single shot, as reviewed in Sec. 4.2. Finally, a novel all-optical ultrahigh-speed imaging strategy that does not employ a streak camera is described and discussed in Sec. 4.3.

## 4.1.

### Compressed Ultrafast Spectral–Temporal Photography

As introduced in Sec. 1, both direct and computational imaging techniques have achieved remarkable progress in recent years, but they have seemed to develop independently and without intersection. In early 2019, Lu et al. proposed a new compressed ultrafast spectral–temporal (CUST) photography system^{43} by merging the modalities of CUP and STAMP. Combining the advantages of these two ultrafast imaging systems, the CUST system, shown schematically in Fig. 8, provides both an ultrahigh frame rate of 3.85 Tfps and a large number of frames. The CUST system consists of three modules: a spectral-shaping module (SSM), a pulse-stretching module (PSM), and a so-called “compressed camera.” In the SSM, a femtosecond laser pulse passes through a pair of gratings and a pulse shaping system with a $4f$ configuration. On the Fourier plane of the $4f$ system, a slit is positioned to select a designated spectrum of the femtosecond pulse. In the PSM, the femtosecond pulse is stretched by another pair of gratings to generate a stretched picosecond chirped pulse as illumination. The “compressed camera” is similar to that in the CUP system, but the main difference is that the streak camera is replaced by a grating to disperse the spatially encoded event at different wavelengths. Because the illumination pulse is chirped linearly, there is a one-to-one linear relationship between the temporal and wavelength information. Finally, a CS algorithm is employed to reconstruct the dynamic scene, much as in CUP. By recording ultrafast spectrally resolved images of an object, the CUST system can acquire 60 spectral images with a 0.25-nm spectral resolution on approximately a picosecond timescale.
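Because of the linear chirp, each detected wavelength stamps a unique arrival time, so the spectrometer axis doubles as a time axis. A one-line mapping makes this explicit; the pulse duration, center wavelength, and bandwidth below are illustrative numbers only loosely matched to the reported 60 frames at 0.25-nm resolution:

```python
# Linear time-wavelength mapping under an assumed linear chirp (illustrative values)
t_span_ps = 15.0               # assumed stretched-pulse duration
lam0_nm, bw_nm = 800.0, 15.0   # assumed center wavelength and bandwidth

def time_of(lam_nm):
    """Map a detected wavelength (nm) to its arrival time (ps)."""
    return (lam_nm - lam0_nm) * (t_span_ps / bw_nm)

# a 0.25-nm spectral step then corresponds to a 0.25-ps frame interval
print(time_of(800.25) - time_of(800.0))  # → 0.25
```

Under this mapping, the spectral resolution of the compressed camera directly sets the achievable frame interval, which is why tuning the gratings tunes the imaging speed.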

The temporal resolution of the CUST technique mainly depends on the chirping ability of the pulse-stretching system and the spectral resolution of the compressed camera; therefore, the imaging speed can be flexibly adjusted by tuning the grating components. In comparison to STAMP, CUST offers more frames. However, since the CUST system uses a chirped pulse as illumination, it cannot measure a self-emitting event, such as fluorescence, or the color of the object.

## 4.2.

### Compressed Ultrafast Electron Diffraction and Microscopy

Understanding the origins of many ultrafast microscopic phenomena requires probing technologies that simultaneously provide high spatial and temporal resolution. Owing to the inherent properties of the elementary particles used in the imaging process, photons and electrons have become the most powerful imaging tools, but the two are dissimilar in terms of the spatial and temporal domains they can access. Photons can be used for extremely high (up to attosecond) temporal studies, whereas accelerated electrons excel in forming images with the highest spatial resolution (sub-angstrom) achieved so far. In recent decades, many researchers have focused on merging conventional electron diffraction and microscopy systems with ultrafast lasers, and a variety of structurally related dynamics have been explored. Unfortunately, these systems still suffer from the limitations of multiple-shot measurements and synchronization-induced timing jitter.

To overcome the limitations in this research field, solutions have been proposed based on the methodology of CUP. Qi et al. proposed a new theoretical design, named compressed ultrafast electron diffraction imaging (CUEDI),^{44} which, for the first time, combines an ultrafast electron diffraction (UED) system with the CUP modality. As shown in Fig. 9(a), by utilizing a long-pulsed laser to generate the probe electron source and inserting an electron encoder between the sample and the streak electric field, CUEDI completes the measurement in a single shot, which eliminates the relative time jitter between the pump and probe beams. In addition, Liu et al. in 2019 added CS to a laser-assisted transmission electron microscopy (TEM) setup to create two related novel schemes, named single-shearing compressed ultrafast TEM (CUTEM) and dual-shearing CUTEM (DS-CUTEM),^{45} which are shown in Figs. 9(b) and 9(c), respectively. In each scheme, the projected transient scene experiences encoding and shearing before reaching the detector array. However, an additional pair of shearing electrodes is inserted before the encoding mask in the DS-CUTEM scheme, which is used to shear the dynamic scene in advance, so that a more incoherent measurement matrix is generated by the encoding mask. Therefore, the mutual coherence of the scene in DS-CUTEM is even smaller than that in CUTEM. Based on these analytical models and simulated results, single-shot ultrafast electron microscopy with subnanosecond temporal resolution could be realized by integrating CS-aided ultrafast imaging modalities with laser-assisted TEM.

## 4.3.

### Compressed Optical-Streaking Ultrahigh-Speed Photography

Because a streak camera is used in previous CUP systems, photon–electron–photon conversion cannot be avoided, thus deteriorating the reconstructed image quality in each frame. To overcome this limitation, Liu et al. in 2019 developed single-shot compressed optical-streaking ultrahigh-speed photography (COSUP),^{46} which is a passive-detection computational imaging modality with a 2-D imaging speed of 1.5 million fps (Mfps), a sequence depth of 500, and a pixel count of $1000\times 500$ per frame. In the COSUP system, the temporal shearing device is a galvanometer scanner (GS), not a streak camera. As shown in Fig. 10, the GS is placed at the Fourier plane of the $4f$ system, and, according to the arrival time, it temporally shears the spatially encoded frames linearly to different spatial locations along the $x$-axis of the camera. Moreover, COSUP and CUP share the same mathematical model.

Compared with CUP, the temporal resolution of the COSUP system is much lower, since it is currently limited by the linear voltage ramp rate of the GS. However, because COSUP avoids the electronic processes of a streak camera, its spatial resolution is over 20 times higher. Importantly, the ingenious optical-streaking design offers a new route to improving the spatial resolution of CUP-like systems, for example, with optical Kerr effect gates and Pockels effect gates. Moreover, its simplified components make COSUP a cost-effective alternative for ultrahigh-speed imaging. In the future, a lower-speed COSUP system combined with a microscope holds great potential for bioimaging feats such as high-sensitivity optical neuroimaging of action potential propagation and wide-field temperature sensing in tissue with nanoparticles.^{47}^{–}^{49}

## 5.

## Applications of CUP

As explained in the previous sections, by synergizing CS and streak imaging, the CUP technique can realize single-shot ultrafast optical imaging in receive-only mode. In recent years, manifold improvements in this technique have enabled the direct measurement of many complex phenomena and processes that were formerly inaccessible to ultrafast optics. Several representative areas of investigation are reviewed in this section, including capturing the flight of photons, imaging at high speed in 3-D, recording the spatiotemporal evolution of ultrashort pulses, and enhancing image information security.

## 5.1.

### Capturing Flying Photons

The capture of light during its propagation is a touchstone for ultrafast optical imaging techniques, and a variety of schemes have been proposed to accomplish it, including CUP. Using the first-generation CUP system described in Sec. 2, Gao et al. imaged, for the first time in real time, laser pulses reflecting, refracting, and racing through different media. Further, they modified the setup with a dichroic filter design to develop the spectrally resolvable CUP shown in Fig. 11(a) and successfully recorded the pulsed-laser-pumped fluorescence emission process of rhodamine.^{19} These results are shown in Fig. 11(b). With the creation of LLE-CUP, described in detail in Sec. 3.1.2, Liang et al. recorded a photonic Mach cone propagating in a scattering medium for the first time,^{33} presenting the formation and propagation images shown in Fig. 11(c). The experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation.^{50}^{–}^{52} Although the propagation of photonic Mach cones had been previously observed via pump-probe methods,^{53}^{,}^{54} this was the first single-shot, real-time observation of traveling photonic Mach cones induced by scattering. By capturing light propagation in scattering media in real time, CUP demonstrated great promise for advancing biomedical instrumentation for imaging scattering dynamics.^{55}^{–}^{57}

## 5.2.

### Recording Three-Dimensional Objects

3-D imaging is used in many applications,^{58}^{–}^{67} and numerous techniques have been developed, including structured illumination,^{68}^{,}^{69} holography,^{70} streak imaging,^{71}^{,}^{72} integral imaging,^{73} multiple camera or multiple single-pixel detector photogrammetry,^{74}^{,}^{75} and time-of-flight (ToF) detection based on Kinect sensors^{76} and single-photon avalanche diodes.^{77}^{,}^{78} Recently, these 3-D imaging techniques have been increasingly challenged to capture information at higher speeds.

ToF detection is a common 3-D imaging method based on collecting photons scattered back from objects over multiple shots, with each return carrying a timing tag. Although it offers high detection sensitivity, multiple-shot acquisition falls short in imaging fast-moving 3-D objects. To overcome this difficulty, single-shot ToF detection approaches have been developed.^{79}^{–}^{84} However, limited by the imaging speeds of CMOS cameras and the widths of the illuminating pulses, 3-D imaging speeds have been restricted to $\sim 30\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{Hz}$, with a depth resolution of $\sim 10\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{cm}$. Liang et al. developed a new 3-D imaging system, named ToF-CUP,^{35} that satisfies the single-shot requirement of ToF detection by using a CUP device. In the ToF-CUP system, the CUP camera detects the photons backscattered from a 3-D object illuminated by a laser pulse. From the round-trip ToF between the illuminated surface and the detector, the relative depths of the light incidence points on the object’s surface can be recovered. The experimental results for two static models and a dynamic two-ball rotation are shown in Fig. 12. Notably, in the dynamic case, the ToF-CUP system captured the rotation of the two-ball system by sequentially acquiring images at 75 volumes per second. Each image was reconstructed into a 3-D $(x,y,z)$ datacube, and these datacubes were further assembled into a time-lapse four-dimensional $(x,y,z,t)$ datacube.
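
The depth recovery step reduces to converting round-trip time to distance; a minimal sketch:

```python
# Relative depth from round-trip time of flight: z = c * dt / 2
# (the factor of 2 accounts for the out-and-back path).
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_tof(round_trip_s):
    return C * round_trip_s / 2.0

# A ~66.7 ps round trip corresponds to ~10 mm of relative depth,
# on the order of the depth resolution reported for ToF-CUP.
print(depth_from_tof(66.7e-12) * 1e3)  # depth in mm
```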

The ToF-CUP system is an ingenious variation of the CUP system and exhibits markedly superior performance in imaging speed (75 Hz) and depth resolution (10 mm) for single-shot 3-D imaging. The strengths of CUP can thus help push existing 3-D imaging technologies past their present bottlenecks. Building on ToF-CUP, more 3-D CUP systems may be proposed in the future, for example by combining a CUP camera with a structured illumination system or a holographic system. Given its 3-D imaging capability, ToF-CUP holds promise for wide use in bioimaging, remote sensing, machine vision, and beyond.

## 5.3.

### Measuring the Spatiotemporal Intensity of Ultrashort Laser Pulses

The spatiotemporal measurement of ultrashort laser pulses provides important reference values for studies in ultrafast physics, such as explorations of second-harmonic generation. In studying such physical processes, the characteristics of an ultrashort laser pulse are quite important, including its spectral, temporal, and spatial information and their interrelationships. However, most technologies for laser pulse measurement provide only temporal intensity information without spatial resolution, including optical autocorrelators, devices using spectral phase interferometry for direct electric-field reconstruction,^{85} and frequency-resolved optical gating devices.^{86} Because these mainstream techniques integrate directly over the transverse spatial coordinates, they can retrieve only the temporal information of ultrashort laser pulses.

To extend the information that can be obtained from a single measurement, the CUP technique was employed to simultaneously explore the spatiotemporal information of laser pulses with multiple wavelength components.^{87} A Ti:sapphire regenerative amplifier and a barium borate crystal were used to generate picosecond laser pulses with a fundamental central wavelength of 800 nm and a second harmonic at 400 nm. The spatiotemporal intensity evolution of the generated dual-color picosecond laser field was obtained as shown in Fig. 13(a). Clearly, CUP precisely captured not only the pulse durations and spatial evolution of the subpulses but also the time delay between them.

In a related effort, Liang et al. utilized the T-CUP system introduced in Sec. 3.1.3 to realize real-time, ultrafast, passive imaging of temporal focusing.^{34} Temporal focusing has two major features: the shortest pulse width occurs at the focal plane of the lens,^{88} and the angular dispersion of the grating induces a pulse front tilt.^{89} To observe these phenomena experimentally, a typical temporal focusing scenario for a femtosecond laser pulse was generated with a diffraction grating and a $4f$ imaging system. The pulse front tilt in this experiment was determined by the overall magnification ratio of the $4f$ system, the central wavelength of the ultrashort pulse, and the grating period.^{90}^{,}^{91} From the front-view and side-view detections, the T-CUP system respectively recorded the impingement of the tilted laser pulse front sweeping along the $y$-axis of the temporal focusing plane and the full evolution of the pulse propagating across this focusing plane, as shown in Fig. 13(b).
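
For the grating itself, refs. 90 and 91 give the textbook angular-dispersion relation for the tilt; the full experimental tilt additionally involves the $4f$ magnification. The numbers below (groove density, diffraction angle) are illustrative assumptions, not the experimental parameters:

```python
import numpy as np

# Textbook estimate of the pulse front tilt introduced by a diffraction
# grating: tan(gamma) = m * lambda0 / (d * cos(theta_d)), with diffraction
# order m, central wavelength lambda0, groove spacing d, and diffraction
# angle theta_d. All parameter values here are assumed for illustration.
lambda0 = 800e-9                  # central wavelength, m
d = 1e-3 / 1200                   # groove spacing of a 1200 lines/mm grating
m = 1                             # diffraction order
theta_d = np.deg2rad(20.0)        # assumed diffraction angle

tan_gamma = m * lambda0 / (d * np.cos(theta_d))
print(np.rad2deg(np.arctan(tan_gamma)))  # tilt angle in degrees
```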

Compared with other single-shot ultrafast imaging techniques,^{16}^{–}^{18}^{,} ^{92}^{–}^{96} T-CUP is currently the only technology capable of observing temporal focusing in real time. Unlike STRIPED FISH,^{97} CUP requires no reference laser pulse, which makes for a simpler measurement system. Moreover, owing to the spectral response of the streak camera, CUP can measure laser fields with multiple wavelengths covering a wide spectral range. Thus, CUP clearly reveals the complex evolution of ultrafast dynamics, paving the way for single-shot characterization of ultrashort laser fields in diverse circumstances.

## 5.4.

### Protecting Image Information Security

Information and communication security are critical for national security, enterprise operations, and personal privacy, but the advent of supercomputers and future quantum computers has made it much easier to attack digital information in repositories and in transmission. Recently, the quantum key distribution (QKD) cryptographic technique was developed to protect information and communication security,^{98} and a series of studies has demonstrated that this cryptographic technique can maintain security in a variety of research fields.^{99}^{–}^{103} In contrast to traditional cryptography methods, such as elliptic curve cryptography, the digital signature algorithm, and the advanced encryption standard, a QKD system uses quantum mechanics to guarantee secure communication by enabling two parties to produce a shared random secret key known only to them.^{98}^{,}^{104}^{,}^{105} However, the relatively low key generation rate of QKD greatly limits the information transmission bandwidth.^{106}

To ease this limitation, Yang et al. developed a new hybrid classical-quantum cryptographic scheme that combines QKD with a CS algorithm to improve the information transmission bandwidth.^{107} This approach employs the mathematical model of CUP in Fig. 2: the quantum keys generated by QKD are used to encrypt and decrypt the compressed 3-D image information, and CS theory is used to encode and decode the ciphertext. As shown in Fig. 14, the quantum key generated by the QKD system is transmitted over the quantum channel, whereas the ciphertext encoded by the CS algorithm is transmitted over the classical channel. Because exact CS decoding is a nondeterministic polynomial-time hard (NP-hard) problem while the legitimate receiver needs only an approximate solution, the CS-QKD system can achieve high encryption efficiency even under low key generation rates. Based on analyses of the normalized correlation coefficient in several attack trials, the CS-QKD scheme was shown to improve the information transmission bandwidth by a factor of approximately three and to ensure secure communication at a random code error rate of 3% and an interception rate of $\sim 19.5\%$. This scheme improves the information transmission bandwidth of both the quantum and classical channels. Meanwhile, it enables real-time evaluation of information and communication security by monitoring the QKD system. Overall, this interdisciplinary study could advance the hybrid classical-quantum cryptographic technique to a new level and find practical applications in information and communication security.
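
A minimal sketch of the idea, with hypothetical parameters: the QKD-shared key seeds the pseudo-random sensing matrix, the ciphertext is the compressed measurement, and a standard greedy CS solver (orthogonal matching pursuit here, standing in for the paper's actual decoder) recovers the sparse data only when the key, and hence the matrix, is known:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Hypothetical illustration of the CS-QKD idea (the scheme in ref. 107
# differs in detail): the shared secret key seeds the sensing matrix.
key = 42                                   # stands in for the quantum key
rng = np.random.default_rng(key)
A = rng.standard_normal((64, 256)) / np.sqrt(64)

x = np.zeros(256)                          # sparse "image" to protect
x[[5, 70, 180]] = [1.0, -0.5, 2.0]
ciphertext = A @ x                         # sent over the classical channel

A_rx = np.random.default_rng(key).standard_normal((64, 256)) / np.sqrt(64)
x_hat = omp(A_rx, ciphertext, k=3)         # receiver rebuilds A from the key
print(np.allclose(x_hat, x, atol=1e-8))    # True when recovery is exact
```

Without the key, an eavesdropper cannot regenerate `A`, so the ciphertext alone underdetermines the data; with it, decoding is a routine sparse-recovery problem.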

## 6.

## Conclusions and Prospects

In this mini-review, we have focused on recent advances in CUP. In the evolution from its first implementations to current systems that can capture ultrafast optical events at imaging speeds as high as 10 Tfps, CUP has achieved an unprecedented ability to visualize irreversible transient phenomena with single-shot detection. In addition, a variety of technical improvements in both data acquisition and image reconstruction have strengthened the capabilities of this technique. Furthermore, by extending the CS model to existing methods such as STAMP, UED/UEM, and QKD, the CUP modality has shown multiple possibilities for fusion with other techniques to achieve remarkable improvements.

In just a few years, CUP has achieved the highest sequence depth and a very high imaging speed among single-shot ultrafast imaging techniques, but it still lags in spatial resolution and pixel count per frame. To address this shortcoming, an all-optical design such as COSUP, a multiple-channel design such as LLE-CUP or multiencoding CUP, and a code optimization design such as a GA-assisted approach can be pursued. Inspired by these strategies, other schemes could be developed. For example, by combining an electro-optical deflector with the deflection angle acceleration technique, an all-optical design has a good chance of pushing the imaging speed beyond 100 billion fps. In addition, a spectrally resolved CUP scheme capable of resolving transient temporal–spatial–spectral information simultaneously could be realized by inserting spectral elements into current CUP systems. With regard to improving image quality, increasingly accurate and intelligent reconstruction algorithms are of great importance. Recently, with the continuing maturation of deep learning in artificial intelligence, this technology has been applied to computational imaging methods such as super-resolution imaging,^{108}^{,}^{109} lensless imaging,^{110}^{,}^{111} and ghost imaging.^{112} It will be a significant step forward when deep learning is employed with CUP to recover an event precisely and reliably. In addition, domains in which the scene possesses higher sparsity can be explored to further improve the efficiency and robustness of this technique. There is every reason to expect further progress and additional applications of this rising methodology in the future.
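
The point about sparsity domains can be illustrated in a few lines: a piecewise-constant scene is dense in the pixel basis but sparse under a finite-difference transform, which is what TV-regularized solvers exploit (a toy example, not drawn from the reviewed work):

```python
import numpy as np

# Domain-dependent sparsity: the same signal, counted in two bases.
scene = np.concatenate([np.full(40, 0.2), np.full(30, 1.0), np.full(30, 0.6)])

def nnz(v, tol=1e-12):
    """Number of entries whose magnitude exceeds tol."""
    return int(np.sum(np.abs(v) > tol))

print(nnz(scene))            # 100: every pixel is nonzero
print(nnz(np.diff(scene)))   # 2: only the two edges survive
```

Choosing the transform in which the scene is sparsest directly reduces the number of measurements needed for faithful CS reconstruction.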

## Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 91850202, 11774094, 11727810, 11804097, and 61720106009), the Science and Technology Commission of Shanghai Municipality (Grant Nos. 19560710300 and 17ZR146900), and the China Postdoctoral Science Foundation (Grant No. 2018M641958).

## References

- “Attosecond pump probe: exploring ultrafast electron motion inside an atom,” Phys. Rev. Lett., 96 (7), 073004 (2006). https://doi.org/10.1103/PhysRevLett.96.073004
- “Generation of intense, carrier-envelope phase-locked few-cycle laser pulses through filamentation,” Appl. Phys. B, 79 (6), 673–677 (2004). https://doi.org/10.1007/s00340-004-1650-z
- “Streaking of 43-attosecond soft-x-ray pulses generated by a passively CEP-stable mid-infrared driver,” Opt. Express, 25 (22), 27506–27518 (2017). https://doi.org/10.1364/OE.25.027506
- “Temporal lenses for attosecond and femtosecond electron pulses,” Proc. Natl. Acad. Sci. U. S. A., 106 (26), 10558–10563 (2009). https://doi.org/10.1073/pnas.0904912106
- “Mega-electron-volt ultrafast electron diffraction at SLAC National Accelerator Laboratory,” Rev. Sci. Instrum., 86 (7), 073702 (2015). https://doi.org/10.1063/1.4926994
- “Diffraction and microscopy with attosecond electron pulse trains,” Nat. Phys., 14 (3), 252–256 (2018). https://doi.org/10.1038/s41567-017-0007-6
- “Attomicroscopy: from femtosecond to attosecond electron microscopy,” J. Phys. B, 51 (3), 032005 (2018). https://doi.org/10.1088/1361-6455/aaa183
- “Optical rogue waves,” Nature, 450 (7172), 1054–1057 (2007). https://doi.org/10.1038/nature06402
- “An atomic-level view of melting using femtosecond electron diffraction,” Science, 302 (5649), 1382–1385 (2003). https://doi.org/10.1126/science.1090052
- “Imaging CF3I conical intersection and photodissociation dynamics with ultrafast electron diffraction,” Science, 361 (6397), 64–67 (2018). https://doi.org/10.1126/science.aat0049
- “Direct-drive inertial confinement fusion: a review,” Phys. Plasmas, 22 (11), 110501 (2015). https://doi.org/10.1063/1.4934714
- “Single-shot ultrafast optical imaging,” Optica, 5 (9), 1113–1127 (2018). https://doi.org/10.1364/OPTICA.5.001113
- “Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation,” Exp. Mech., 47 (4), 561–579 (2007). https://doi.org/10.1007/s11340-006-9011-y
- “High-frame-rate observation of single femtosecond laser pulse propagation in fused silica using an echelon and optical polarigraphy technique,” Appl. Opt., 53 (36), 8395–8399 (2014). https://doi.org/10.1364/AO.53.008395
- “Sequentially timed all-optical mapping photography (STAMP),” Nat. Photonics, 8 (9), 695–700 (2014). https://doi.org/10.1038/nphoton.2014.163
- “Digital light-in-flight recording by holography by use of a femtosecond pulsed laser,” IEEE J. Sel. Top. Quantum Electron., 18 (1), 479–485 (2012). https://doi.org/10.1109/JSTQE.2011.2147281
- “Single-shot ultrafast tomographic imaging by spectral multiplexing,” Nat. Commun., 3, 1111 (2012). https://doi.org/10.1038/ncomms2120
- “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature, 516 (7529), 74–77 (2014). https://doi.org/10.1038/nature14005
- “Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor,” Opt. Express, 24 (4), 4155–4176 (2016). https://doi.org/10.1364/OE.24.004155
- “Emerging digital micromirror device (DMD) applications,” Proc. SPIE, 4985, 14–25 (2003). https://doi.org/10.1117/12.480761
- “Compressed sensing for practical optical imaging systems: a tutorial,” Opt. Eng., 50 (7), 072601 (2011). https://doi.org/10.1117/1.3596602
- “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, 52 (2), 489–509 (2006). https://doi.org/10.1109/TIT.2005.862083
- “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math., 59 (8), 1207–1223 (2006). https://doi.org/10.1002/(ISSN)1097-0312
- “Near-optimal signal recovery from random projections: universal encoding strategies?,” IEEE Trans. Inf. Theory, 52, 5406–5425 (2006). https://doi.org/10.1109/TIT.2006.885507
- “Computational methods for sparse solution of linear inverse problems,” Proc. IEEE, 98 (6), 948–958 (2010). https://doi.org/10.1109/JPROC.2010.2044010
- “Optimizing codes for compressed ultrafast photography by the genetic algorithm,” Optica, 5 (2), 147–151 (2018). https://doi.org/10.1364/OPTICA.5.000147
- “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process., 16 (12), 2992–3004 (2007). https://doi.org/10.1109/TIP.2007.909319
- “Compressed ultrafast photography by multi-encoding imaging,” Laser Phys. Lett., 15 (11), 116202 (2018). https://doi.org/10.1088/1612-202X/aae198
- “Optimized projections for compressed sensing,” IEEE Trans. Signal Process., 55 (12), 5695–5702 (2007). https://doi.org/10.1109/TSP.2007.900760
- “Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process., 18 (7), 1395–1408 (2009). https://doi.org/10.1109/TIP.2009.2022459
- “On optimization of the measurement matrix for compressive sensing,” in Proc. 18th Eur. Signal Process. Conf. (EUSIPCO’10), 427–431 (2010).
- “Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse,” Sci. Adv., 3 (1), e1601814 (2017). https://doi.org/10.1126/sciadv.1601814
- “Single-shot real-time femtosecond imaging of temporal focusing,” Light Sci. Appl., 7 (1), 42 (2018). https://doi.org/10.1038/s41377-018-0044-7
- “Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography,” Sci. Rep., 5, 15504 (2015). https://doi.org/10.1038/srep15504
- “Improving the image reconstruction quality of compressed ultrafast photography via an augmented Lagrangian algorithm,” J. Opt., 21 (3), 035703 (2019). https://doi.org/10.1088/2040-8986/ab00d9
- “Space- and intensity-constrained reconstruction for compressed ultrafast photography,” Optica, 3 (7), 694–697 (2016). https://doi.org/10.1364/OPTICA.3.000694
- “An algorithm for total variation minimization and applications,” J. Math. Imaging Vis., 20 (1–2), 89–97 (2004). https://doi.org/10.1023/B:JMIV.0000011325.36760.1e
- “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Trans. Image Process., 20 (3), 681–695 (2011). https://doi.org/10.1109/TIP.2010.2076294
- “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process., 1 (4), 586–597 (2008). https://doi.org/10.1109/JSTSP.2007.910281
- “Metamaterial apertures for computational imaging,” Science, 339 (6117), 310–313 (2013). https://doi.org/10.1126/science.1230054
- “Compressed ultrafast spectral–temporal photography,” Phys. Rev. Lett., 122 (19), 193904 (2019). https://doi.org/10.1103/PhysRevLett.122.193904
- “Compressed ultrafast electron diffraction imaging through electronic encoding,” Phys. Rev. Appl., 10 (5), 054061 (2018). https://doi.org/10.1103/PhysRevApplied.10.054061
- “Single-shot real-time sub-nanosecond electron imaging aided by compressed sensing: analytical modeling and simulation,” Micron, 117, 47–54 (2019). https://doi.org/10.1016/j.micron.2018.11.003
- “Single-shot compressed optical-streaking ultra-high-speed photography,” Opt. Lett., 44 (6), 1387–1390 (2019). https://doi.org/10.1364/OL.44.001387
- “Ultrasensitive fluorescent proteins for imaging neuronal activity,” Nature, 499 (7458), 295–300 (2013). https://doi.org/10.1038/nature12354
- “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics, 5 (4), 497–509 (2016). https://doi.org/10.1515/nanoph-2016-0026
- “Luminescence nanothermometry,” Nanoscale, 4 (15), 4301–4326 (2012). https://doi.org/10.1039/c2nr30764b
- “Monte Carlo modeling of light propagation in highly scattering tissues—I. Model predictions and comparison with diffusion theory,” IEEE Trans. Biomed. Eng., 36 (12), 1162–1168 (1989). https://doi.org/10.1109/TBME.1989.1173624
- “Review of Monte Carlo modeling of light transport in tissues,” J. Biomed. Opt., 18 (5), 050902 (2013). https://doi.org/10.1117/1.JBO.18.5.050902
- “Direct visualization of collective wavepacket dynamics,” J. Phys. Chem. A, 103 (49), 10260–10267 (1999). https://doi.org/10.1021/jp9922007
- “Ultrafast imaging of terahertz Cherenkov waves and transition-like radiation in ${\mathrm{LiNbO}}_{3}$,” Opt. Express, 23 (6), 8073–8086 (2015). https://doi.org/10.1364/OE.23.008073
- “Optical coherence tomography,” Science, 254 (5035), 1178–1181 (1991). https://doi.org/10.1126/science.1957169
- “A critical review of laser Doppler flowmetry,” J. Med. Eng. Technol., 14 (5), 178–181 (1990). https://doi.org/10.3109/03091909009009955
- “Diffuse optics for tissue monitoring and tomography,” Rep. Prog. Phys., 73 (7), 076701 (2010). https://doi.org/10.1088/0034-4885/73/7/076701
- “3D lidar imaging for detecting and understanding plant responses and canopy structure,” J. Exp. Bot., 58 (4), 881–898 (2007). https://doi.org/10.1093/jxb/erl142
- “Fast and high-accuracy localization for three-dimensional single-particle tracking,” Sci. Rep., 3, 2462 (2013). https://doi.org/10.1038/srep02462
- “Toward superfast three-dimensional optical metrology with digital micromirror device platforms,” Opt. Eng., 53 (11), 112206 (2014). https://doi.org/10.1117/1.OE.53.11.112206
- “3D assisted face recognition: a survey of 3D imaging, modelling and recognition approaches,” 114–120 (2005). https://doi.org/10.1109/CVPR.2005.377
- “Mosaic generation for under vehicle inspection,” 251–256 (2002). https://doi.org/10.1109/ACV.2002.1182190
- “Robotic three-dimensional imaging system for under-vehicle inspection,” J. Electron. Imaging, 15 (3), 033008 (2006). https://doi.org/10.1117/1.2238565
- “3D imaging for army applications,” Proc. SPIE, 4377, 126–131 (2001). https://doi.org/10.1117/12.440100
- “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics, 3 (2), 128–160 (2011). https://doi.org/10.1364/AOP.3.000128
- “Fast three-step phase-shifting algorithm,” Appl. Opt., 45 (21), 5086–5091 (2006). https://doi.org/10.1364/AO.45.005086
- “Encrypted optical memory using double-random phase encoding,” Appl. Opt., 36 (5), 1054–1058 (1997). https://doi.org/10.1364/AO.36.001054
- “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun., 3, 745 (2012). https://doi.org/10.1038/ncomms1747
- “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun., 6, 6796 (2015). https://doi.org/10.1038/ncomms7796
- “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt., 52 (4), 546–560 (2013). https://doi.org/10.1364/AO.52.000546
- “3D computational imaging with single-pixel detectors,” Science, 340 (6134), 844–847 (2013). https://doi.org/10.1126/science.1234454
- “A 3-D surveillance system using multiple integrated cameras,” 1930–1935 (2010). https://doi.org/10.1109/ICINFA.2010.5512016
- “The Xbox One system on a chip and Kinect sensor,” IEEE Micro, 34 (2), 44–53 (2014). https://doi.org/10.1109/MM.2014.9
- “Detection and tracking of moving objects hidden from view,” Nat. Photonics, 10, 23–26 (2015). https://doi.org/10.1038/nphoton.2015.234
- “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature, 572 (7771), 620–623 (2019). https://doi.org/10.1038/s41586-019-1461-3
- “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express, 21 (7), 8904–8915 (2013). https://doi.org/10.1364/OE.21.008904
- “Compact laser radar and three-dimensional camera,” J. Opt. Soc. Am. A, 23 (4), 800–805 (2006). https://doi.org/10.1364/JOSAA.23.000800
- “A time-of-flight depth sensor: system description, issues and solutions,” 35–44 (2004).
- “Three-dimensional imaging in the studio and elsewhere,” Proc. SPIE, 4298, 48–55 (2001). https://doi.org/10.1117/12.424913
- “Products overview,” http://www.advancedscientificconcepts.com/products/Products.html
- “Eye-safe laser radar 3D imaging,” Proc. SPIE, 4377, 46–56 (2001). https://doi.org/10.1117/12.440125
- “Spectral phase interferometry for direct electric-field reconstruction of ultrashort optical pulses,” Opt. Lett., 23 (10), 792–794 (1998). https://doi.org/10.1364/OL.23.000792
- “Single-shot measurement of the intensity and phase of an arbitrary ultrashort pulse by using frequency-resolved optical gating,” Opt. Lett., 18 (10), 823–825 (1993). https://doi.org/10.1364/OL.18.000823
- “Single-shot spatiotemporal intensity measurement of picosecond laser pulses with compressed ultrafast photography,” Opt. Lasers Eng., 116, 89–93 (2019). https://doi.org/10.1016/j.optlaseng.2019.01.002
- “Simultaneous spatial and temporal focusing of femtosecond pulses,” Opt. Express, 13 (6), 2153–2159 (2005). https://doi.org/10.1364/OPEX.13.002153
- “Scanningless depth-resolved microscopy,” Opt. Express, 13 (5), 1468–1476 (2005). https://doi.org/10.1364/OPEX.13.001468
- “Femtosecond pulse front tilt caused by angular dispersion,” Opt. Eng., 32 (10), 2501–2504 (1993). https://doi.org/10.1117/12.145393
- “Derivation of the pulse front tilt caused by angular dispersion,” Opt. Quantum Electron., 28 (12), 1759–1763 (1996). https://doi.org/10.1007/BF00698541
- “Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation,” Opt. Express, 15 (22), 14348–14354 (2007). https://doi.org/10.1364/OE.15.014348
- “Single-shot tomographic movies of evolving light-velocity objects,” Nat. Commun., 5, 3085 (2014). https://doi.org/10.1038/ncomms4085
- “Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena,” Nature, 458 (7242), 1145–1149 (2009). https://doi.org/10.1038/nature07980
- “Single-shot 25-frame burst imaging of ultrafast phase transition of ${\mathrm{Ge}}_{2}{\mathrm{Sb}}_{2}{\mathrm{Te}}_{5}$ with a sub-picosecond resolution,” Appl. Phys. Express, 10 (9), 092502 (2017). https://doi.org/10.7567/APEX.10.092502
- “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl., 6 (9), e17045 (2017). https://doi.org/10.1038/lsa.2017.45
- “Single-frame measurement of the complete spatiotemporal intensity and phase of ultrashort laser pulses using wavelength-multiplexed digital holography,” J. Opt. Soc. Am. B, 25 (6), A25–A33 (2008). https://doi.org/10.1364/JOSAB.25.000A25
- “Theoretically efficient high-capacity quantum-key-distribution scheme,” Phys. Rev. A, 65 (3), 032302 (2002). https://doi.org/10.1103/PhysRevA.65.032302
- “Quantum cryptography,” Rev. Mod. Phys., 74 (1), 145–195 (2002). https://doi.org/10.1103/RevModPhys.74.145
- “Long-distance entanglement-based quantum key distribution over optical fiber,” Opt. Express, 16 (23), 19118–19126 (2008). https://doi.org/10.1364/OE.16.019118
- “Improved long-distance two-way continuous variable quantum key distribution over optical fiber,” FW2C.5 (2013). https://doi.org/10.1364/FIO.2013.FW2C.5
- “Introduction to post-quantum cryptography,” in Post-Quantum Cryptography, 1–14, Berlin (2009).
- “A three-party authentication for key distributed protocol using classical and quantum cryptography,” Int. J. Comput. Sci. Issues, 7 (5), 148–153 (2010).
- “Hybrid quantum private communication with continuous-variable and discrete-variable signals,” Sci. China Phys. Mech. Astron., 58 (2), 1–7 (2015). https://doi.org/10.1007/s11433-014-5632-9
- “Adaptive multicarrier quadrature division modulation for long-distance continuous-variable quantum key distribution,” Proc. SPIE, 9123, 912307 (2014). https://doi.org/10.1117/12.2050095
- “Room temperature single-photon detectors for high bit rate quantum key distribution,” Appl. Phys. Lett., 104 (2), 021101 (2014). https://doi.org/10.1063/1.4855515
- “Compressed 3D image information and communication security,” Adv. Quantum Technol., 1, 1800034 (2018). https://doi.org/10.1002/qute.v1.2
- “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell., 38, 295–307 (2015). https://doi.org/10.1109/TPAMI.2015.2439281
- “Deep learning achieves super-resolution in fluorescence microscopy,” Nat. Methods, 16, 103–110 (2019). https://doi.org/10.1038/s41592-018-0239-0
- “Lensless computational imaging through deep learning,” Optica, 4, 1117–1125 (2017). https://doi.org/10.1364/OPTICA.4.001117
- Adv. Photonics, 1 (3), 036002 (2019). https://doi.org/10.1117/1.AP.1.3.036002

*Learning-based lensless imaging through optically thick scattering media***,” Sci. Rep., 7 17865 (2017). https://doi.org/10.1038/s41598-017-18171-7 SRCEC3 2045-2322 Google Scholar**

*Deep-learning-based ghost imaging*## Biography

**Dalong Qi** received his PhD from East China Normal University, China, in 2017, with a period at the Max-Planck Institute for the Structure and Dynamics of Matter, Germany. He was a postdoctoral fellow at East China Normal University until 2019, and has been an associate professor there since then. His research interests include ultrafast optical imaging, computational imaging, and ultrafast electron diffraction.

**Shian Zhang** received his PhD from East China Normal University, China, in 2006. He was a senior engineer at Spectra-Physics, Inc., and a postdoctoral fellow at Arizona State University until 2009. He has been a professor at East China Normal University since 2012, with a period at Washington University in St. Louis as a visiting scholar. His research interests include ultrafast optical imaging, computational imaging, and nonlinear optical microscopy.

**Zhenrong Sun** received his PhD from East China Normal University in 1998. He has been a professor at East China Normal University since 2001. His research interests include ultrafast optical imaging, femtosecond quantum control, femtosecond electron diffraction, and ultrafast laser spectroscopy.

**Lihong V. Wang** is the Bren Professor of Medical and Electrical Engineering at Caltech. He has published 530 journal articles (h-index: 134, citations: 74,000) and delivered 535 keynote/plenary/invited talks. He published the first functional photoacoustic CT and 3-D photoacoustic microscopy. He has received the Goodman Book Award, NIH Director’s Pioneer Award, OSA Mees Medal and Feld Award, IEEE Technical Achievement and Biomedical Engineering Awards, SPIE Chance Award, IPPA Senior Prize, and an honorary doctorate from Lund University, Sweden. He is a member of the National Academy of Engineering.