28 February 2020 Single-shot compressed ultrafast photography: a review
Abstract

Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture nonrepeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusions with other modalities, and its unique applications in multiple research fields.

1.

Introduction

Researchers and photographers have long sought to unravel transient events on an ultrashort time scale using ultrafast imaging. From the early observations of a galloping horse1 to capturing the electronic motions in nonequilibrium materials,2 this research area has continuously developed for over 140 years. Currently, with the aid of subfemtosecond (fs, 10⁻¹⁵ s) lasers3,4 and highly coherent electron sources,5,6 it is possible to simultaneously achieve attosecond (as, 10⁻¹⁸ s) temporal resolution and subnanometer (nm, 10⁻⁹ m) spatial resolution.7,8 Ultrafast imaging holds great promise for advancing science and technology, and it has already been widely used in both scientific research and industrial applications.

Ultrafast imaging approaches can be classified into stroboscopic and single-shot categories. For transient events that are highly repeatable, reliable pump-probe schemes are used to explore the underlying mechanisms. Unfortunately, this strategy becomes ineffective in circumstances with unstable or even irreversible dynamics, such as optical rogue waves,9 irreversible structural dynamics in chemical reactions,10,11 and shock waves in inertial confinement fusion.12 To overcome this technical limitation, a variety of single-shot ultrafast imaging techniques with the ability to visualize the evolution of two-dimensional (2-D) spatial information have been proposed.13 Based on their methods of image formation, these imaging techniques can be further divided into two categories. One is direct imaging without the aid of computational processing, such as ultrafast framing/sampling cameras,14 femtosecond time-resolved optical polarimetry,15 and sequentially timed all-optical mapping photography (STAMP).16 The other category is reconstruction imaging, in which dynamic scenes are extracted or recovered from the detected results by specific computational imaging algorithms, including holography,17 tomography,18 and compressed sensing (CS)-based photography.19,20 As summarized in Ref. 13, although direct imaging methods are still important and reliable for capturing transient events in real time, an increasing number of reconstruction imaging approaches have achieved substantial progress in various specifications, such as imaging speed, number of pixels per frame, and sequence depth (i.e., frames per shot).

Among the various reconstruction imaging modalities, compressed ultrafast photography (CUP) advantageously combines the super-high compression ratio of sparse data achieved by applying CS and the ultrashort temporal resolution of streak camera techniques. CUP has achieved a world record imaging speed of 10 trillion frames per second (Tfps), as well as a sequence depth of hundreds of frames simultaneously with only one shot.19 Moreover, a series of ultrafast diffractive and microscopic imaging schemes with electron and x-ray sources have been proposed to extend the modality from optics to other domains. In recent years, CUP has emerged as a promising candidate for driving next-generation single-shot ultrafast imaging.

Covering recent research outcomes in CUP and its related applications since its first appearance in 2014,19 this review introduces and discusses state-of-the-art imaging techniques, including their principles and applications. The subsequent sections are arranged as follows. In Sec. 2, we describe the working principle of CUP and discuss mathematical models of the data acquisition and the image reconstruction processes. In Secs. 3 and 4, we review technical improvements and extensions of this technique so far, respectively. The technical improvements are explained with regard to the data acquisition and the image reconstruction, whereas the technical extensions of CUP combine a variety of techniques. In Sec. 5, related applications of CUP are described and discussed, not only for optical measurements but also for information security protection. Finally, we conclude this review in Sec. 6 and speculate on future research directions.

2.

Working Principle of CUP

A CUP experiment can be completed in two steps: data acquisition and image reconstruction. A simple experimental diagram for data acquisition is shown in Fig. 1.19 A dynamic scene is first imaged on a digital micromirror device (DMD) by a camera lens and a 4f imaging system consisting of a tube lens and a microscope objective, and then it is encoded in the spatial domain by the DMD. Subsequently, the encoded dynamic scene reflected from the DMD is collected by the same 4f imaging system. Finally, it is deflected and measured by a streak camera.

Fig. 1

CUP system configuration. CCD, charge-coupled device; DMD, digital micromirror device; V, sweeping voltage; t, time; x and y, spatial coordinates of the dynamic scene; x′ and y′, spatial coordinates of the streak camera. Since each micromirror (7.2 μm × 7.2 μm) of the DMD is much larger than the light wavelength, the diffraction angle is small (about 4 deg). With a collecting objective of numerical aperture NA = 0.16, the throughput loss caused by the DMD’s diffraction is negligible. Equipment details: camera lens, Fujinon CF75HA-1; DMD, Texas Instruments DLP LightCrafter; microscope objective, Olympus UPLSAPO 4X; tube lens, Thorlabs AC254-150-A; streak camera, Hamamatsu C7700; CCD, Hamamatsu ORCA-R2. Figure reprinted from Ref. 19.


A DMD consists of hundreds of thousands of micromirrors, each of which can be individually rotated to ±12 deg, representing an on or off state.21 When a pseudorandom binary code is loaded onto the DMD, these micromirrors are turned on or off accordingly; thus, a dynamic scene projected onto the DMD is spatially encoded. In CUP acquisition, the entrance slit of the streak camera is fully opened (5 mm), and a scanning control module in the streak camera provides a sweeping voltage that linearly deflects the photoelectrons induced by the dynamic scene according to their arrival times. The temporally sheared image produced by the streak camera is captured by a CCD in a single exposure. In the CUP experiment, the pseudorandom binary code generated by the DMD is fixed. Consequently, data acquisition is divided into three steps, which can be described by a forward model. As shown in Fig. 2(a), this procedure can be mathematically described as follows: the three-dimensional (3-D) dynamic scene I(x,y,t) is first imaged onto an intermediate plane, on which the intensity distribution of the intermediate image is identical to that of the original scene under the assumption of ideal optical imaging with unit magnification. The intermediate image is then processed by a mask containing pseudorandomly distributed, square, binary-valued elements at the intermediate image plane. The image intensity distribution after this operation is formulated as

Eq. (1)

$$I_c(x,y,t)=\sum_{i,j} I(x,y,t)\,C_{i,j}\,\operatorname{rect}\!\left[\frac{x}{d}-\left(i+\tfrac{1}{2}\right),\ \frac{y}{d}-\left(j+\tfrac{1}{2}\right)\right].$$
Here, $C_{i,j}$ is an element of the matrix representing the coded mask, $i$ and $j$ are the matrix element indices, and $d$ is the mask pixel size, equivalent to a binned DMD or CCD pixel. For both dimensions, the rectangular function (rect) is defined as
$$\operatorname{rect}(x)=\begin{cases}1, & \text{if } |x|\le \tfrac{1}{2}\\ 0, & \text{else.}\end{cases}$$
Then, the encoded dynamic scene is sheared by the streak camera in the time domain by applying a voltage ramp along the vertical y axis and can be expressed as

Eq. (2)

$$I_s(x,y,t)=I_c(x,\,y-vt,\,t),$$
where v is the shearing velocity of the streak camera. Finally, the scene is spatially and temporally integrated over each camera pixel and the exposure time, respectively, and forms a 2-D image on the detector. Thus, the optical energy E(x,y) of the integrated 2-D image is

Eq. (3)

$$E(x,y)=\int \mathrm{d}t\!\int\!\!\int \mathrm{d}x'\,\mathrm{d}y'\; I_s(x',y',t)\,\operatorname{rect}\!\left[\frac{x'}{d}-\left(x+\tfrac{1}{2}\right),\ \frac{y'}{d}-\left(y+\tfrac{1}{2}\right)\right].$$
Accordingly, voxelization of I(x,y,t) into Ii,j,τ can be expressed as

Eq. (4)

$$I(x,y,t)\approx\sum_{i,j,\tau} I_{i,j,\tau}\,\operatorname{rect}\!\left[\frac{x}{d}-\left(i+\tfrac{1}{2}\right),\ \frac{y}{d}-\left(j+\tfrac{1}{2}\right),\ \frac{t}{\Delta t}-\left(\tau+\tfrac{1}{2}\right)\right],$$
where Δt=d/v. Given the prerequisite of perfect registration of the mask elements and the camera pixels, a voxelized expression of E(x,y) can be yielded by combining Eqs. (1)–(4) as follows:

Eq. (5)

$$E(x,y)=\frac{d^{3}}{v}\sum_{\tau=0}^{y-1} C_{x,\,y-\tau}\,I_{x,\,y-\tau,\,\tau},$$
where $C_{x,\,y-\tau}I_{x,\,y-\tau,\,\tau}$ indicates a coded, sheared scene. In general, a spatial encoding operator C, a temporal shearing operator S, and a spatiotemporal integration operator T are introduced and together form a 2-D image E(x,y), which is expressed as

Eq. (6)

$$E(x,y)=TSC\,I(x,y,t)=O\,I(x,y,t).$$
Here, for simplicity, the shorthand O = TSC is introduced.
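As a concrete illustration, the chain of operators O = TSC can be sketched numerically. The toy model below assumes unit magnification, a shear of exactly one pixel row per frame along y, and frame-indexed numpy arrays; the function name cup_forward and all array sizes are illustrative assumptions, not details from Ref. 19.

```python
import numpy as np

def cup_forward(scene, mask):
    """Apply the CUP forward operator O = T S C to a scene I(x, y, t).

    scene: array of shape (Nt, Ny, Nx), one frame per time bin
    mask:  array of shape (Ny, Nx), the pseudorandom binary code C
    Returns the streak-camera measurement E of shape (Ny + Nt - 1, Nx).
    """
    nt, ny, nx = scene.shape
    encoded = scene * mask                    # C: same spatial code for every frame
    measurement = np.zeros((ny + nt - 1, nx))
    for tau in range(nt):
        # S: shear frame tau by tau rows along y; T: integrate over time
        measurement[tau:tau + ny, :] += encoded[tau]
    return measurement

# Toy example: a bright dot moving along x over 4 frames.
rng = np.random.default_rng(0)
nt, ny, nx = 4, 8, 8
scene = np.zeros((nt, ny, nx))
for tau in range(nt):
    scene[tau, 4, 2 + tau] = 1.0
mask = rng.integers(0, 2, size=(ny, nx)).astype(float)
E = cup_forward(scene, mask)
print(E.shape)  # (11, 8), i.e., (Ny + Nt - 1, Nx)
```

The output height Ny + Nt − 1 reflects the temporal shearing: later frames land lower on the detector, which is exactly why the measurement in Eq. (5) mixes space and time.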

Fig. 2

Data flow of CUP in (a) data acquisition and (b) image reconstruction.


It is noteworthy that, given a coded mask with dimensions of Nx×Ny, the input scene I(x,y,t) can be voxelized into a matrix form with dimensions Nx×Ny×Nt under the assumption of ideal optical imaging with unit magnification, where Nx, Ny, and Nt are the numbers of voxels along x, y, and t, respectively. Therefore, the measured E(x,y) has dimensions Nx×(Ny+Nt−1), and the spatial resolution of CUP is mainly determined by Nx and Ny (or the mask pixel size, d), while the temporal resolution is restricted by Nt, which is related to the shearing velocity v of the streak camera.

Given the prior knowledge of the forward model, image reconstruction tries to estimate the unknown dynamic scene I(x,y,t) from the captured 2-D image E(x,y) by solving the linear inverse problem of Eq. (6). The number of elements in the 3-D dynamic scene I(x,y,t) is approximately two orders of magnitude larger than that in the 2-D image E(x,y).19 Therefore, the inverse problem of Eq. (6) is underdetermined: the true I(x,y,t) cannot be reliably recovered from E(x,y) by a traditional approach based only on the operators T, S, and C. CUP introduces CS theory to solve this problem.22,23 Here, CS makes full use of the sparsity of I(x,y,t) in a certain domain to recover the original scene. Sparsity in a certain domain means that most elements are zeros, whereas only a few elements are nonzero. Consider the case where I(x,y,t) has n elements in the original domain and s nonzero elements in the sparse domain, whereas E(x,y) has m elements, where n ≫ s and n > m > s. The fact that m is larger than s makes it possible to solve the inverse problem of Eq. (6). To practically solve this problem, CUP finds the best I(x,y,t) using a CS algorithm in a certain sparse domain under the condition of Eq. (6), which is shown as

Eq. (7)

$$\min_{I}\ \Phi[I(x,y,t)]\quad \text{subject to}\quad E(x,y)=OI(x,y,t),$$
where Φ[I(x,y,t)] is the representation of I(x,y,t) in a sparse domain. The sparse domain can take various forms, including space, time, or both, and is chosen to make the representation as sparse as possible. Based on the theory of stable signal recovery from incomplete and inaccurate measurements,23–25 the original I(x,y,t) can be completely recovered when

Eq. (8)

$$m > f\cdot s\cdot \mu^{2},$$
where f is a constant that is correlated with the number of elements n, and μ is the mutual coherence between the sparse basis of I(x,y,t) and the measurement matrix that is dependent on the operators C, S, and T. To recover I(x,y,t), CUP first sets its initial guess as a point in an n-dimensional space, denoted as I0. Starting from the initial point I0, the CS algorithm can search for the destination point IL, and this optimization process can be described as follows: the intermediate point Ii is updated in each iteration until Ii reaches the proximity of IL, as shown in Fig. 2(b). In addition, the search paths should obey Eq. (7), but they will be different in different CS algorithms. For these search paths, there exist at least five major classes of computational techniques: greedy pursuit, convex relaxation, Bayesian framework, nonconvex optimization, and brute force. The details can be found in Ref. 26. For different CS algorithms, the final IL points are different, which indicates that an optimal CS algorithm exists. The difference between IL and the original I can be utilized as the standard for judging the algorithm’s quality. Last but not least, noise always exists in experimental data. Moreover, Eq. (8) is often unsatisfied due to the large compression ratio caused by transforming the 3-D data cube I(x,y,t) into the 2-D data E(x,y), as shown in Fig. 2(a). Therefore, Eq. (7) can be further written as

Eq. (9)

$$\min_{I}\ \Phi[I(x,y,t)]\quad \text{subject to}\quad \big\|E(x,y)-OI(x,y,t)\big\|_{2}<\delta,$$
where $\|\cdot\|_{2}$ denotes the ℓ2 norm and δ is the error tolerance.
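To make the inverse problem of Eq. (9) concrete, the sketch below reconstructs a toy scene with plain proximal-gradient (ISTA-style) iterations. An ℓ1 soft-threshold stands in for the TV regularizer used by actual CUP solvers such as TwIST, and the forward/adjoint operators, sizes, and step size are all illustrative assumptions.

```python
import numpy as np

def forward(I, mask):
    # O = T S C: encode with the mask, shear one row per frame, integrate
    nt, ny, nx = I.shape
    E = np.zeros((ny + nt - 1, nx))
    for tau in range(nt):
        E[tau:tau + ny] += I[tau] * mask
    return E

def adjoint(E, mask, nt):
    # Transpose of the forward operator, needed for gradient steps
    ny = E.shape[0] - nt + 1
    return np.stack([E[tau:tau + ny] * mask for tau in range(nt)])

def ista(E, mask, nt, lam=1e-4, iters=300):
    # Proximal-gradient iterations for min ||E - O I||^2 / 2 + lam |I|_1,
    # a simplified surrogate for the sparsity-constrained problem of Eq. (9)
    step = 1.0 / nt                    # 1 / Lipschitz bound of the data term
    I = np.zeros((nt, E.shape[0] - nt + 1, E.shape[1]))
    for _ in range(iters):
        I = I - step * adjoint(forward(I, mask) - E, mask, nt)
        I = np.sign(I) * np.maximum(np.abs(I) - lam * step, 0.0)  # soft threshold
    return I

# Toy scene: a dot moving diagonally, encoded with a random binary mask.
rng = np.random.default_rng(0)
nt, ny, nx = 4, 16, 16
truth = np.zeros((nt, ny, nx))
for tau in range(nt):
    truth[tau, 5 + tau, 5 + tau] = 1.0
mask = rng.integers(0, 2, (ny, nx)).astype(float)
mask[5, 5] = 1.0                       # guarantee the first dot is transmitted
E = forward(truth, mask)
I_hat = ista(E, mask, nt)
```

The data term is fit by gradient steps through the adjoint operator, while the shrinkage step enforces sparsity, mirroring how CS exploits the sparse domain described above.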

Based on the data acquisition and image reconstruction procedures with a single-shot operation, the CUP system with the configuration shown in Fig. 1 can achieve an imaging speed as high as 100 billion fps and a sequence depth of 350. For each frame, the spatial resolution is 0.4 line pairs per mm in a 50-mm×50-mm field of view (FOV). CUP by itself already delivers outstanding performance as a single-shot ultrafast imaging technique, and many technical improvements have since emerged.

3.

Technical Improvements in CUP

In this section, we review recent technical improvements in the CUP technique from two aspects of CUP’s experimental operation. In Sec. 3.1, we discuss a few strategies for data acquisition inspired by Eq. (8), as well as the fastest CUP system to date, which is based on a streak camera with femtosecond temporal resolution. In Sec. 3.2, we review improvements in image reconstruction algorithms.

3.1.

Improvements in Data Acquisition

Equation (8) holds the key to improving data acquisition. For a given dynamic scene I(x,y,t), the constant f, which is correlated with the number of elements n, and the number of nonzero elements in the sparse domain, s, are fixed. Fortunately, improvements can be realized by reducing the mutual coherence, μ, or increasing the number of measured elements, m. Based on this principle, a few novel approaches have been proposed.

3.1.1.

Reducing the mutual coherence

The parameter μ represents the mutual coherence between the sparse basis of I(x,y,t) and the measurement matrix. The measurement matrix mainly depends on the encoding operator C (i.e., the random codes on the DMD), which indicates that CUP performance can be improved by optimizing the random codes. Yang et al. adopted a genetic algorithm (GA) to optimize the codes.27 The GA is designed to self-adaptively find the optimal codes in the search space and eventually obtain the global solution. Utilizing the optimized codes, CUP needs three steps to recover a dynamic scene, as shown in Fig. 3(a). First, a dynamic scene is set as the optimization target. This scene can differ from the real dynamic scene but must share the same sparse basis; it consists of the images reconstructed by CUP using the random codes. Second, the GA optimizes the codes against this target: the reconstructed images from step I constitute a simulated scene, which is repeatedly recovered by computer simulation with many sets of random codes. Each set of random codes is regarded as an individual, and these individuals constitute a group. Here, the GA simulates biological evolution to find the optimal codes. The details can be found in Ref. 27. Finally, using the optimal codes obtained in step II, CUP records the dynamic scene a second time.

Fig. 3

(a) Flow chart for optimizing the encoding mask in a CUP system based on GA. (b), (c) The experimental results for a spatially modulated picosecond laser pulse evolution obtained by (b) optimal codes and (c) random codes. TwIST: two-step iterative shrinkage/thresholding.28 Figures reprinted from Ref. 29.


Figures 3(b) and 3(c) show reconstructed results using the optimal codes and random codes, respectively. Here, the dynamic scene is a time- and space-evolving laser pulse with a pulse width of 3 ns and a central wavelength of 532 nm, and the laser spot is divided into two components in space by a thin wire. The result obtained by the optimal codes has less noise and is more distinct in the spatial profile than that obtained by the random codes. However, a total of three steps must be performed for one optimization, so this method requires that the dynamic scene be repeatable at least twice. For some nonrepeatable scenes, a similar dynamic scene with the same sparse basis as the real one must be found in advance.30–32 One point to note is that the decrease in μ realized by optimizing the codes with the GA is somewhat limited, because the GA can only optimize the operator C; it has no effect on the other operators that constitute the measurement matrix.
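The three-step procedure can be mimicked in miniature. In the hypothetical sketch below, the fitness of a candidate code is the error of a simulated reconstruction of a known target scene, loosely following step II; a small ISTA-style solver replaces the TwIST reconstructions of Ref. 27, and the population size, mutation rate, and scene are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
NT, NY, NX = 3, 8, 8

def forward(scene, mask):
    E = np.zeros((NY + NT - 1, NX))
    for tau in range(NT):
        E[tau:tau + NY] += scene[tau] * mask
    return E

def adjoint(E, mask):
    return np.stack([E[tau:tau + NY] * mask for tau in range(NT)])

def reconstruct(E, mask, iters=40, lam=1e-3, step=1.0 / NT):
    # Tiny l1 proximal-gradient solver standing in for TwIST
    I = np.zeros((NT, NY, NX))
    for _ in range(iters):
        I -= step * adjoint(forward(I, mask) - E, mask)
        I = np.sign(I) * np.maximum(np.abs(I) - lam * step, 0.0)
    return I

# Known optimization-target scene (step I of the procedure).
target = np.zeros((NT, NY, NX))
for tau in range(NT):
    target[tau, 3, 2 + tau] = 1.0

def fitness(mask):
    """Lower is better: error of the simulated reconstruction with this code."""
    E = forward(target, mask)
    return np.mean((reconstruct(E, mask) - target) ** 2)

def evolve(pop=16, gens=20, p_mut=0.02):
    codes = [rng.integers(0, 2, (NY, NX)).astype(float) for _ in range(pop)]
    for _ in range(gens):
        codes.sort(key=fitness)                          # selection: keep the fittest half
        parents = codes[:pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.choice(len(parents), 2, replace=False)
            pick = rng.integers(0, 2, (NY, NX)).astype(bool)
            child = np.where(pick, parents[a], parents[b])   # uniform crossover
            flip = rng.random((NY, NX)) < p_mut              # mutation
            children.append(np.where(flip, 1.0 - child, child))
        codes = parents + children
    return min(codes, key=fitness)

best_code = evolve()
```

As in the text, the optimization never touches the shearing or integration operators; only the binary code (operator C) evolves.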

3.1.2.

Increasing the number of elements, m

As shown in Eq. (8), the parameter m represents the number of elements in E(x,y). For a given dynamic scene, m is a constant if the scene is encoded by a single set of random codes, as is shown in Fig. 2(a). To increase m, more sets of random codes can be utilized to simultaneously encode the dynamic scene: this method is called multiencoding CUP.29 In this method, as shown in Fig. 4, an ultrafast dynamic scene is divided into several replicas, and each replica is encoded by an independent encoding mask. Finally, these replicas are individually imaged after temporal shearing. Thus, Eq. (5) can be further formulated in matrix form as

Eq. (10)

$$\begin{bmatrix}E_{1}(x,y)\\ E_{2}(x,y)\\ \vdots\\ E_{k}(x,y)\end{bmatrix}=\begin{bmatrix}TSC_{1}\\ TSC_{2}\\ \vdots\\ TSC_{k}\end{bmatrix}I(x,y,t),$$
where k is the number of encoding masks, $E_{k}(x,y)$ is the k’th measured 2-D image, and $C_{k}$, S, and T denote the k’th spatial encoding operator, the temporal shearing operator, and the spatiotemporal integration operator, respectively. In this case, m is increased k times, whereas the mutual coherence μ can only be decreased to a limited extent by optimizing the codes. Compared with schemes for decreasing μ, increasing m is therefore the more effective approach.
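In code, multiencoding simply stacks k independent measurements of the same scene into one taller linear system, multiplying the number of measured elements m by k. The sketch below is a toy illustration with assumed sizes.

```python
import numpy as np

def forward(scene, mask):
    # Single-channel CUP forward model: encode, shear, integrate
    nt, ny, nx = scene.shape
    E = np.zeros((ny + nt - 1, nx))
    for tau in range(nt):
        E[tau:tau + ny] += scene[tau] * mask
    return E

rng = np.random.default_rng(2)
nt, ny, nx, k = 4, 16, 16, 3
scene = rng.random((nt, ny, nx))
masks = [rng.integers(0, 2, (ny, nx)).astype(float) for _ in range(k)]

# Each replica is encoded by its own code C_k, sheared, and integrated;
# the k measurements are stacked into one taller system, as in Eq. (10).
E_stack = np.concatenate([forward(scene, m) for m in masks], axis=0)
print(E_stack.shape[0] * E_stack.shape[1])   # m grows k-fold: k*(ny+nt-1)*nx = 912
```

The unknown I(x,y,t) is unchanged, so the ratio of measurements to unknowns, and hence the chance of satisfying Eq. (8), improves by the factor k.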

Fig. 4

A schematic diagram of data acquisition in multiencoding CUP. Here, t and t′ are time coordinates; Ck is the spatial encoding operator; S is the temporal shearing operator; T is the spatiotemporal integration operator. Figure reprinted from Ref. 29.


Coincidentally, the lossless encoding CUP (LLE-CUP) proposed by Liang et al.33 can also be regarded as a method to increase m. There are three views in LLE-CUP: the dynamic scene in two of the views is encoded by complementary codes and is then sheared and integrated by the streak camera, and the results are called the sheared views. The dynamic scene in the third view is simply integrated by an external CCD, and the result is called the unsheared view. Mathematically, only the sheared views have an effect on extracting the 3-D datacube from the compressed 2-D image, whereas the unsheared view is used to restrict the space and intensity of the reconstructed image. In this method, the sheared views provide different codes for each acquisition channel, and each image is reconstructed by its own codes. Nevertheless, the unsheared view still improves the image reconstruction quality in some situations, and it is adopted in a few approaches.34,35

It is worth mentioning that both schemes can effectively improve CUP’s performance by increasing the sampling rate. Moreover, as demonstrated in Ref. 29, a multiencoding strategy can break through the original temporal resolution limitation of the temporal deflector (e.g., the streak camera), which was formerly considered a restriction on the frame rate of a CUP system. The reconstructed image quality is significantly improved by increasing m, but the spatial resolution may decrease when the CCD is divided into several subareas to image the dynamic scenes in these channels. However, if the channels are arranged judiciously or some FOV can be sacrificed, the spatial resolution can reach a balanced value. Moreover, synchronization between the different channels is crucial for realizing the achievable temporal resolution.

3.1.3.

Fastest CUP system

One of the most important characteristics of a CUP system is its ultrafast imaging speed, which is definitively determined by the temporal deflector. Based on a prototype of CUP, Liang et al. recently established a trillion-frame-per-second compressed ultrafast photography (T-CUP) system and realized real-time, ultrafast, passive imaging of temporal focusing with 100-fs frame intervals in a single camera exposure.34 A diagram of the T-CUP system is shown in Fig. 5. Similar to the first-generation CUP system (Fig. 1), it performs data acquisition and image reconstruction, but an external CCD is installed on the other side of the beam splitter. A 3-D spatiotemporal scene is first imaged by the beam splitter to form two replicas. The first replica is directly recorded by the external CCD by temporally integrating it over the entire exposure time. The other replica is spatially encoded by a DMD and then sent to a femtosecond streak camera with a best temporal resolution of 200 fs, where the entrance slit is fully opened, and the scene is sheared along one spatial axis and recorded by a detector. With the aid of reconstruction algorithms to solve the minimization problem, one can obtain a time-lapse video of the dynamic scene with a frame rate as high as 10 Tfps. In a CUP system, the imaging speed mainly depends on the temporal resolution of the streak camera. Therefore, by varying the temporal shearing velocity of the streak camera, the frame rate can be widely varied from 0.5 to 10 Tfps, with corresponding T-CUP temporal resolutions from 6.34 to 0.58 ps.
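The quoted frame rates follow directly from the shear relation Δt = d/v introduced in Sec. 2: the frame interval equals the effective pixel size divided by the temporal shearing velocity. The numbers below are illustrative assumptions chosen to reproduce a 100-fs interval, not parameters from Ref. 34.

```python
# Frame interval in CUP is set by the shear: dt = d / v, so the frame rate is v / d.
# Illustrative numbers only (assumed): a 10-um effective pixel swept at 1e8 m/s
# gives a 100-fs frame interval, i.e., a 10-Tfps frame rate.
d = 10e-6            # effective pixel size on the streak camera (m), assumed
v = 1e8              # temporal shearing velocity (m/s), assumed
dt = d / v           # frame interval (s)
rate = 1.0 / dt      # frame rate (fps)
print(dt, rate)      # dt = 1e-13 s, i.e., a 1e13-fps (10 Tfps) frame rate
```

Slowing the sweep (smaller v) lengthens dt, which is exactly how the frame rate is tuned between 0.5 and 10 Tfps.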

Fig. 5

Schematic diagram of the T-CUP system. Inset (black dashed box): detailed illustration of the streak tube. MCP, microchannel plate. Figure reprinted from Ref. 34.


Most notably, all reconstructed scenes using the T-CUP system are accurate to 100 fs in frame interval, with a sequence depth (i.e., number of frames per exposure) of more than 300. To the best of our knowledge, this is the world’s best combination of imaging speed and sequence depth. On the other hand, it should be noted that a streak camera needs photon-to-electron and electron-to-photon conversions for 2-D imaging, and this limitation confines the pixel count of each reconstructed image to tens of thousands.

3.2.

Improvements in Image Reconstruction

Because CS theory is key to CUP, improving the reconstruction algorithms has been an important route to better performance. One example is optimizing the search path to seek a better CS algorithm, which led to the proposed use of the augmented Lagrangian (AL) algorithm.36 An alternative scheme confines the search path within a certain scope and is called the space- and intensity-constrained (SIC) reconstruction method.37

3.2.1.

AL-based reconstruction algorithm

To date, the CS algorithms used for CUP have been based on total variation (TV) minimization, which is a convex relaxation technique. TV minimization sharpens recovered images by preserving boundaries more accurately,38,39 which is essential for characterizing the reconstructed images. The original tool for image reconstruction in CUP was the two-step iterative shrinkage/thresholding (TwIST) algorithm.28 TwIST is a quadratic penalty function method and transforms Eq. (9) into

Eq. (11)

$$\min_{I}\left\{\Phi_{\mathrm{TV}}[I(x,y,t)]+\frac{\beta}{2}\,\big\|E(x,y)-OI(x,y,t)\big\|_{2}^{2}\right\},$$
where $\Phi_{\mathrm{TV}}[I(x,y,t)]$ is the TV regularizer and β is the penalty parameter, with β > 0. Alternatively, Eq. (9) can also be written in Lagrangian function form as

Eq. (12)

$$\min_{I}\left\{\Phi_{\mathrm{TV}}[I(x,y,t)]-\lambda^{\mathrm{T}}\big[E(x,y)-OI(x,y,t)\big]\right\},$$
where λ is the Lagrange multiplier vector and E(x,y)−OI(x,y,t) is written as a vector. Equating the derivatives with respect to I(x,y,t) of Eqs. (11) and (12), this relationship can be expressed as

Eq. (13)

$$E(x,y)-OI(x,y,t)=\frac{1}{\beta}\,\lambda.$$
Here, the value of E(x,y)−OI(x,y,t) is used to quantify the feasibility of the image reconstruction, with a smaller value corresponding to greater feasibility. It is easy to see from Eq. (13) that increasing β can improve the feasibility but will make the quadratic penalty function ill-conditioned.40 To avoid this problem, an AL function method has been proposed,39 presented as

Eq. (14)

$$\min_{I}\left\{\Phi_{\mathrm{TV}}[I(x,y,t)]-\gamma^{\mathrm{T}}\big[E(x,y)-OI(x,y,t)\big]+\frac{\beta}{2}\,\big\|E(x,y)-OI(x,y,t)\big\|_{2}^{2}\right\},$$
where γ is a variable Lagrange multiplier vector. By following the transformation from Eqs. (11)–(13), it is easy to obtain

Eq. (15)

$$E(x,y)-OI(x,y,t)=\frac{1}{\beta}\,(\lambda-\gamma).$$
Compared with Eq. (13), the value of E(x,y)−OI(x,y,t) in Eq. (15) depends on both β and γ. By optimizing γ through iteration, the minimum is easily found. Therefore, compared with the TwIST algorithm, the AL algorithm can provide higher image reconstruction quality.
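A minimal numerical sketch of the augmented-Lagrangian idea of Eq. (14) is given below: inner proximal-gradient steps on I alternate with an update of the multiplier γ that drives E − OI toward zero. As before, an ℓ1 soft-threshold is a stand-in for the TV regularizer, and all operators and parameters are toy assumptions rather than the implementation of Ref. 36.

```python
import numpy as np

def forward(I, mask):
    nt, ny, nx = I.shape
    E = np.zeros((ny + nt - 1, nx))
    for tau in range(nt):
        E[tau:tau + ny] += I[tau] * mask
    return E

def adjoint(E, mask, nt):
    ny = E.shape[0] - nt + 1
    return np.stack([E[tau:tau + ny] * mask for tau in range(nt)])

def al_reconstruct(E, mask, nt, beta=1.0, lam=1e-3, inner=5, outer=30):
    """Method-of-multipliers sketch for Eq. (14): a few proximal-gradient
    steps on the augmented Lagrangian in I, then an update of gamma."""
    I = np.zeros((nt, E.shape[0] - nt + 1, E.shape[1]))
    gamma = np.zeros_like(E)
    step = 1.0 / (beta * nt)
    for _ in range(outer):
        for _ in range(inner):
            r = E - forward(I, mask)                     # residual E - O I
            grad = adjoint(gamma - beta * r, mask, nt)   # gradient of the smooth AL terms
            I = I - step * grad
            I = np.sign(I) * np.maximum(np.abs(I) - lam * step, 0.0)
        gamma = gamma - beta * (E - forward(I, mask))    # multiplier update, cf. Eq. (15)
    return I
```

Because γ absorbs the residual across outer iterations, β can stay moderate, avoiding the ill-conditioning that a pure quadratic penalty would require.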

To further validate the improvement in the image reconstruction quality, a superluminal propagation of noninformation was recorded in Ref. 36. As shown in Fig. 6(a), a femtosecond laser pulse obliquely illuminates a transverse stripe pattern at an angle of 38  deg with respect to the surface normal, and a CUP camera is vertically positioned for recording. Figures 6(b) and 6(c) show the experimental results reconstructed by the AL algorithm and the TwIST algorithm, respectively. Clearly, the images reconstructed by the TwIST algorithm have more artifacts, whereas those reconstructed by the AL algorithm are more faithful to the true situation. The AL algorithm opens up new approaches to solving this inverse problem, such as gradient projection for sparse reconstruction.41 In the near future, more studies will surely be carried out to further optimize image reconstruction algorithms.

Fig. 6

(a) An experimental diagram of imaging a superluminal propagation, showing experimental results obtained by the (b) AL algorithm and (c) TwIST algorithm. Figures reprinted from Ref. 36.


3.2.2.

Space- and intensity-constrained reconstruction

A scheme proposed by Zhu et al. confines the search path within certain scopes and is accordingly named the SIC reconstruction algorithm.37 This method operates in a spatial zone M, and the values of pixels outside of this region are set as zeros. The spatial zone M is extracted from the unsheared spatiotemporally integrated image of the dynamic scene, recorded by an external CCD, which is similar to the hardware configuration in Sec. 3.1.3. In addition, the values of pixels less than the intensity threshold s, even in zone M, are set to zero. By using the penalty function framework, the SIC reconstruction algorithm can be written as

Eq. (16)

$$\min_{I\in M,\ I>s}\left\{\Phi_{\mathrm{TV}}[I(x,y,t)]+\frac{\beta}{2}\,\big\|E(x,y)-OI(x,y,t)\big\|_{2}^{2}\right\}.$$
Here, zone M is chosen by an adaptive local thresholding algorithm and a median filter. The threshold is chosen from several candidates between 0 and 0.01 times the maximal pixel value. The criterion is the minimal root-mean-square error between the reconstructed integrated image obtained by the algorithm and the unsheared integrated image obtained by the external CCD.
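The two constraints amount to a simple projection applied to the candidate solution during reconstruction. In the sketch below, sic_project is a hypothetical helper; a single fixed threshold on the unsheared image stands in for the adaptive candidate search and median filtering described above.

```python
import numpy as np

def sic_project(I, zone, s):
    """Enforce the SIC constraints of Eq. (16) on a candidate solution:
    pixels outside the spatial zone M, or at or below the intensity
    threshold s, are set to zero in every frame."""
    I = np.where(zone[None, :, :], I, 0.0)   # spatial constraint: support inside M
    return np.where(I > s, I, 0.0)           # intensity constraint: I > s

# Toy usage: derive M from a simulated unsheared (time-integrated) image.
rng = np.random.default_rng(3)
scene = np.clip(rng.normal(0.0, 0.01, (5, 32, 32)), 0.0, None)  # weak noise floor
scene[:, 10:20, 10:20] += 1.0                # bright object region
unsheared = scene.sum(axis=0)                # what the external CCD would record
zone = unsheared > 0.1 * unsheared.max()     # fixed-threshold stand-in for the adaptive search
constrained = sic_project(scene, zone, s=0.1)
```

Applying this projection inside each solver iteration shrinks the search space, which is why the SIC method sharpens boundaries relative to the unconstrained search.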

To demonstrate the advantages of the SIC reconstruction method, a picosecond laser pulse propagation was captured by a derivative of the primary CUP system, and the reconstructed results by the unconstrained (i.e., TwIST) and constrained (i.e., SIC) algorithms are shown in Figs. 7(a) and 7(b), respectively. Clearly, the SIC reconstructed image maintains sharper boundaries than the TwIST reconstructed image. Moreover, the normalized intensity profiles in Fig. 7(c) further show that the spatial and temporal resolutions are simultaneously improved using the SIC algorithm.

Fig. 7

The experimental results of imaging a picosecond laser pulse propagation. The frame at t=170  ps is shown as reconstructed by the (a) TwIST and (b) SIC algorithms; (c) image profiles along the blue lines indicated in (a) and (b). Figures reprinted from Ref. 37.


4.

Technical Extensions of CUP

In mathematical models of CUP, CS offers a scheme that allows the underdetermined reconstruction of sparse scenes.42 Since CUP uses a linear and undersampled imaging system, such a model can be flexibly extended to other systems to address their limitations. Three representative works in recent years are presented here to inspire researchers. The first extension, described in Sec. 4.1, originates from the combination of CUP and STAMP to realize ultrafast spectral–temporal photography based on CS. Next, similar to its usage in the CUP system, CS is introduced into microscopic systems based on electron sources to explore ultrafast structural dynamics in a single shot, as reviewed in Sec. 4.2. Finally, a novel all-optical ultrahigh-speed imaging strategy that does not employ a streak camera is described and discussed in Sec. 4.3.

4.1.

Compressed Ultrafast Spectral–Temporal Photography

As introduced in Sec. 1, both direct and computational imaging techniques have achieved remarkable progress in recent years, but they have largely developed independently and without intersection. In early 2019, Lu et al. proposed a new compressed ultrafast spectral–temporal (CUST) photography system43 by merging the modalities of CUP and STAMP. Combining the advantages of these two ultrafast imaging systems, the CUST system, shown schematically in Fig. 8, provides both an ultrahigh frame rate of 3.85 Tfps and a large number of frames. The CUST system consists of three modules: a spectral-shaping module (SSM), a pulse-stretching module (PSM), and a so-called “compressed camera.” In the SSM, a femtosecond laser pulse passes through a pair of gratings and a pulse-shaping system with a 4f configuration. On the Fourier plane of the 4f system, a slit is positioned to select a designated spectrum of the femtosecond pulse. In the PSM, the femtosecond pulse is stretched by another pair of gratings to generate a stretched, picosecond, chirped pulse as illumination. The “compressed camera” is similar to that in the CUP system, but the main difference is that the streak camera is replaced by a grating that disperses the spatially encoded event at different wavelengths. Because the illumination pulse is linearly chirped, there is a one-to-one linear relationship between the temporal and wavelength information. Finally, a CS algorithm is employed to reconstruct the dynamic scene, much as in CUP. By recording ultrafast spectrally resolved images of an object, the CUST system can acquire 60 spectral images with a 0.25-nm spectral resolution on approximately a picosecond timescale.

Fig. 8

A schematic diagram of the CUST technique. Figure reprinted with permission from Ref. 43.


The temporal resolution of the CUST technique mainly depends on the chirping capability of the pulse-shaping system and the spectral resolution of the compressed camera; therefore, the imaging speed can be flexibly adjusted by tuning the grating components. In comparison to STAMP, CUST offers more frames. However, since the CUST system uses a chirped pulse as illumination, it cannot measure a self-emitting event, such as fluorescence, or the color of the object.
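Because the chirp is linear, each spectral slice acts as a time stamp. The sketch below derives the wavelength-to-time mapping from the figures quoted above (0.25-nm spectral resolution at 3.85 Tfps, i.e., roughly one frame per 260 fs); the start wavelength lam0 is a hypothetical value.

```python
# The chirped illumination makes wavelength a linear clock: t = (lam - lam0) / r.
# r is set from the stated specs (0.25 nm per ~260-fs frame); lam0 is assumed.
lam0 = 780.0e-9                      # hypothetical start of the chirped spectrum (m)
r = 0.25e-9 / 260e-15                # chirp rate: 0.25 nm per 260-fs frame (m/s)

def wavelength_to_time(lam):
    """Map a spectral slice back to the instant it illuminated the scene."""
    return (lam - lam0) / r

# 60 spectral frames of 0.25 nm each span a ~15-nm bandwidth and ~15.6 ps.
span = wavelength_to_time(lam0 + 60 * 0.25e-9)
```

This mapping is why dispersing the encoded scene with a grating can replace the streak camera's temporal shearing for chirped-pulse illumination.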

4.2.

Compressed Ultrafast Electron Diffraction and Microscopy

Understanding the origins of many ultrafast microscopic phenomena requires probing technologies that simultaneously provide high spatial and temporal resolution. Owing to the inherent properties of the elementary particles used in imaging, photons and electrons have become the most powerful imaging probes, but the two are dissimilar in terms of the spatial and temporal domains they can access. Photons can be used for extremely high (up to attosecond) temporal studies, whereas accelerated electrons excel at forming images with the highest spatial resolution (sub-angstrom) achieved so far. In recent decades, many researchers have focused on merging conventional electron diffraction and microscopy systems with ultrafast lasers, and a variety of structure-related dynamics have been explored. Unfortunately, these systems still suffer from the limitations of multiple-shot measurements and synchronization-induced timing jitter.

To overcome these limitations, solutions based on the CUP methodology have been proposed. Qi et al. proposed a new theoretical design, named compressed ultrafast electron diffraction imaging (CUEDI),44 which for the first time combines an ultrafast electron diffraction (UED) system with the CUP modality. As shown in Fig. 9(a), by utilizing a long-pulsed laser to generate the probe electron source and inserting an electron encoder between the sample and the streak electric field, CUEDI completes the measurement in a single shot, which eliminates the relative timing jitter between the pump and probe beams. In addition, Liu et al. in 2019 added CS to a laser-assisted transmission electron microscopy (TEM) setup to create two related schemes, named single-shearing compressed ultrafast TEM (CUTEM) and dual-shearing CUTEM (DS-CUTEM),45 shown in Figs. 9(b) and 9(c), respectively. In each scheme, the projected transient scene is encoded and sheared before reaching the detector array. However, in the DS-CUTEM scheme, an additional pair of shearing electrodes is inserted before the encoding mask to shear the dynamic scene in advance, so that the encoding mask generates a more incoherent measurement matrix. As a result, the mutual coherence of the measurement matrix in DS-CUTEM is even smaller than that in CUTEM. Based on these analytical models and simulation results, single-shot ultrafast electron microscopy with subnanosecond temporal resolution could be realized by integrating CS-aided ultrafast imaging modalities with laser-assisted TEM.
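The benefit claimed for DS-CUTEM rests on the standard CS notion of mutual coherence: the largest normalized inner product between distinct columns of the measurement matrix, with smaller values favoring reliable recovery. A minimal sketch of this metric, using a random matrix as a stand-in for the actual encoding-plus-shearing operator of Ref. 45:

```python
import numpy as np

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns of A,
    the standard CS incoherence metric (smaller favors recovery)."""
    cols = A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm columns
    gram = np.abs(cols.T @ cols)                          # pairwise correlations
    np.fill_diagonal(gram, 0.0)                           # ignore self-correlation
    return gram.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))  # stand-in for the encoding+shearing operator
mu = mutual_coherence(A)
```

Comparing this metric for candidate operators is how one would verify, in simulation, that dual shearing yields a more incoherent measurement matrix than single shearing.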

Fig. 9

The theoretical designs of (a) CUEDI, (b) CUTEM, and (c) dual-shearing CUTEM. Figures reprinted from Refs. 44 and 45.


4.3.

Compressed Optical-Streaking Ultrahigh-Speed Photography

Because previous CUP systems use a streak camera, photon–electron–photon conversion cannot be avoided, which degrades the reconstructed image quality of each frame. To overcome this limitation, Liu et al. in 2019 developed single-shot compressed optical-streaking ultrahigh-speed photography (COSUP),46 a passive-detection computational imaging modality with a 2-D imaging speed of 1.5 million fps (Mfps), a sequence depth of 500, and a pixel count of 1000×500 per frame. In the COSUP system, the temporal shearing device is a galvanometer scanner (GS) rather than a streak camera. As shown in Fig. 10, the GS is placed at the Fourier plane of a 4f system and, according to their arrival times, linearly shears the spatially encoded frames to different spatial locations along the x-axis of the camera. COSUP and CUP thus share the same mathematical model.
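The shared CUP/COSUP forward model can be sketched in a few lines: each frame of the scene is multiplied by a static pseudo-random mask, shifted along x in proportion to its frame index (by the streak camera in CUP, by the GS in COSUP), and summed on the detector. The array sizes and mask below are illustrative, not parameters from Ref. 46.

```python
import numpy as np

def cup_forward(scene, code, shift_per_frame=1):
    """Shared CUP/COSUP forward model: encode each frame with a static mask,
    shear it along x by its frame index, and integrate on the detector.
    `scene` has shape (T, H, W); `code` has shape (H, W)."""
    T, H, W = scene.shape
    detector = np.zeros((H, W + (T - 1) * shift_per_frame))
    for t in range(T):
        s = t * shift_per_frame
        detector[:, s:s + W] += code * scene[t]  # encode, shear, accumulate
    return detector

rng = np.random.default_rng(1)
scene = rng.random((10, 16, 16))                    # toy dynamic scene
code = (rng.random((16, 16)) > 0.5).astype(float)   # binary DMD-style mask
meas = cup_forward(scene, code)
```

A CS reconstruction algorithm such as TwIST then inverts this operator to recover the (T, H, W) datacube from the single 2-D measurement.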

Fig. 10

The experimental setup of COSUP. Figure reprinted from Ref. 46.


Compared with CUP, the temporal resolution of the COSUP system is much lower, since it is currently limited by the linear rotation velocity of the GS. However, because COSUP avoids the electronic processes in a streak camera, its spatial resolution is over 20 times higher. Importantly, the ingenious design of optical streaking provides a new approach for improving the spatial resolution of CUP-like systems, for example, with optical Kerr gates and Pockels gates. Moreover, because of its simplified components, COSUP provides a cost-effective alternative for ultrahigh-speed imaging. In the future, a slower COSUP system combined with a microscope holds great potential for bioimaging applications such as high-sensitivity optical neuroimaging of action potential propagation and wide-field nanoparticle-based temperature sensing in tissue.47–49

5.

Applications of CUP

As explained in the previous sections, by synergizing CS and streak imaging, the CUP technique can realize single-shot ultrafast optical imaging in receive-only mode. In recent years, manifold improvements in this technique have enabled the direct measurement of many complex phenomena and processes that were formerly inaccessible to ultrafast optics. Several representative areas of investigation are reviewed in this section, including capturing the flight of photons, imaging at high speed in 3-D, recording the spatiotemporal evolution of ultrashort pulses, and enhancing image information security.

5.1.

Capturing Flying Photons

The capture of light during its propagation is a touchstone for ultrafast optical imaging techniques, and a variety of schemes, including CUP, have been proposed to accomplish it. Using the first-generation CUP system described in Sec. 2, Gao et al. demonstrated the basic principles of light propagation by imaging, for the first time in real time, laser pulses reflecting, refracting, and racing in different media. They then modified the setup with a dichroic filter design to develop the spectrally resolvable CUP shown in Fig. 11(a) and successfully recorded the pulsed-laser-pumped fluorescence emission process of rhodamine;19 these results are shown in Fig. 11(b). With the creation of LLE-CUP, described in detail in Sec. 3.1.2, Liang et al. recorded for the first time a photonic Mach cone propagating in a scattering material,33 presenting the formation and propagation images shown in Fig. 11(c). The experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation.50–52 Although photonic Mach cones had been previously observed via pump-probe methods,53,54 this was the first single-shot, real-time observation of traveling photonic Mach cones induced by scattering. By capturing light propagation in scattering media in real time, CUP demonstrated great promise for advancing biomedical instrumentation for imaging scattering dynamics.55–57

Fig. 11

(a) Spectral separation unit of the dual-color CUP system; (b) an ultrafast laser-induced fluorescence process revealed by dual-color CUP; (c) the photonic Mach cone dynamics obtained by LLE-CUP. Figures reprinted from Refs. 19 and 33.


5.2.

Recording Three-Dimensional Objects

3-D imaging is used in many applications,58–67 and numerous techniques have been developed, including structured illumination,68,69 holography,70 streak imaging,71,72 integral imaging,73 multiple-camera or multiple-single-pixel-detector photogrammetry,74,75 and time-of-flight (ToF) detection based on Kinect sensors76 and single-photon avalanche diodes.77,78 Recently, these 3-D imaging techniques have been increasingly challenged to capture information at ever higher speeds.

ToF detection is a common method of 3-D imaging based on collecting scattered photons from multiple shots of objects carrying a variety of tags. Although it offers high detection sensitivity, multiple-shot acquisition still falls short in imaging fast-moving 3-D objects. To overcome this difficulty, single-shot ToF detection approaches have been developed.79–84 However, limited by the imaging speeds of CMOS cameras and the illuminating pulse widths, 3-D imaging speeds have been restricted to about 30 Hz, with a depth resolution of about 10 cm. Liang et al. developed a new 3-D imaging system, named ToF-CUP,35 that satisfies the single-shot requirement of ToF detection using a CUP device. In the ToF-CUP system, the CUP camera detects the photons backscattered from a 3-D object illuminated by a laser pulse. From the round-trip ToF of the signals between the illuminated surface and the detector, the relative depths of the light incidence points on the object’s surface can be recovered. The experimental results for two static models and a dynamic two-ball rotation are shown in Fig. 12. In the dynamic experiment, the ToF-CUP system captured the rotation of the two-ball system by sequentially acquiring images at a speed of 75 volumes per second. Each image was reconstructed into a 3-D (x,y,z) datacube, and these datacubes were further assembled into a time-lapse four-dimensional (x,y,z,t) datacube.
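The depth recovery in ToF-CUP follows the usual time-of-flight relation: a point whose backscattered photons return a time dt after the earliest return lies a distance c·dt/2 deeper. A minimal sketch of this conversion (the numeric values are illustrative, not measurements from Ref. 35):

```python
# Relative depth from round-trip ToF: an extra delay dt corresponds to an
# extra round trip of c*dt, i.e., an extra depth of c*dt/2.

C = 2.998e8  # speed of light in air, m/s

def relative_depth_m(round_trip_s, reference_s):
    """Depth of a surface point relative to the nearest point, from ToF."""
    return C * (round_trip_s - reference_s) / 2.0

# A 10-mm depth resolution corresponds to resolving a round-trip time of
# 2 * 0.010 / c, roughly 67 ps.
dt_for_10mm = 2 * 0.010 / C
```

This also makes clear why the achievable depth resolution tracks the temporal resolution of the CUP camera rather than the pixel count.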

Fig. 12

3-D images of (a) static targets and (b) two-ball rotation, obtained by CUP. Figures reprinted from Ref. 35.


The ToF-CUP system is an ingenious variation of the CUP system and exhibits markedly superior performance in imaging speed (75 Hz) and depth resolution (10 mm) for single-shot 3-D imaging. These advantages suggest that CUP can push existing 3-D imaging technologies beyond present bottlenecks. Building on ToF-CUP, more 3-D CUP systems may be proposed in the future, for example, by combining a CUP camera with a structured illumination system or a holographic system. Given its 3-D imaging capability, ToF-CUP is a promising candidate for wide use in bioimaging, remote sensing, and machine vision.

5.3.

Measuring the Spatiotemporal Intensity of Ultrashort Laser Pulses

The spatiotemporal measurement of ultrashort laser pulses provides important reference data for studies in ultrafast physics, such as explorations of second-harmonic generation. In studying such physical processes, the characteristics of an ultrashort laser pulse are quite important, including its frequency, temporal, and spatial information and their interrelationships. However, most laser-pulse measurement technologies provide only temporal intensity information without spatial resolution, including optical autocorrelators, devices using spectral phase interferometry for direct electric-field reconstruction,85 and frequency-resolved optical gating devices.86 Because these mainstream techniques generally integrate directly over the transverse coordinates, they can obtain only the temporal information of ultrashort laser pulses.

To extend the information that can be obtained from a single image, the CUP technique was employed to simultaneously explore the spatiotemporal information of laser pulses with multiple wavelength components.87 A Ti:sapphire regenerative amplifier and a barium borate crystal were used to generate picosecond laser pulses with a fundamental central wavelength of 800 nm and a second harmonic at 400 nm. The spatiotemporal intensity evolution of the generated dual-color picosecond laser field was thus obtained, as shown in Fig. 13(a). Clearly, CUP precisely captured not only the pulse durations and spatial evolution of the subpulses but also the time delay between them.

Fig. 13

Spatiotemporal evolutions of (a) a dual-color picosecond laser field and (b) the temporal focusing of a femtosecond laser pulse. Figures reprinted from Refs. 34 and 87.


In a related effort, Liang et al. utilized the T-CUP system introduced in Sec. 3.1.3 to realize real-time, ultrafast, passive imaging of temporal focusing.34 Temporal focusing has two major features: the shortest pulse width is located at the focal plane of the lens,88 and the angular dispersion of the grating induces a pulse front tilt.89 To observe the phenomenon experimentally, a typical temporal focusing scenario of a femtosecond laser pulse was generated with a diffraction grating and a 4f imaging system. The pulse front tilt in this experiment was determined by the overall magnification ratio of the 4f system, the central wavelength of the ultrashort pulse, and the grating period.90,91 From the front-view and side-view detections, the T-CUP system respectively recorded the impingement of the tilted laser pulse front sweeping along the y-axis of the temporal focusing plane and the full evolution of the pulse propagation across this focusing plane, as shown in Fig. 13(b).
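As a rough guide to the second feature, the pulse front tilt imparted by a first-order grating can be estimated from the textbook relation tan γ = λ/(d cos θd), with the tilt at the image plane reduced by the transverse magnification M. The sketch below uses this common approximation with illustrative numbers only; it is not the specific configuration or formula of Refs. 34 and 90,91.

```python
import math

def pulse_front_tilt_deg(center_wavelength_m, grating_period_m,
                         diffraction_angle_rad, magnification):
    """Textbook estimate of pulse-front tilt after a first-order grating
    imaged by a 4f system: tan(gamma) = lambda / (d * cos(theta_d)),
    reduced by the transverse magnification M at the image plane."""
    tan_gamma = center_wavelength_m / (grating_period_m
                                       * math.cos(diffraction_angle_rad))
    return math.degrees(math.atan(tan_gamma / magnification))

# Illustrative values: 800-nm pulse, 1200-line/mm grating, 20-deg diffraction
# angle, magnification M = 2.
tilt = pulse_front_tilt_deg(800e-9, 1e-3 / 1200, math.radians(20.0), 2.0)
```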

Compared with other single-shot ultrafast imaging techniques,16–18,92–96 T-CUP is currently the only technology capable of observing temporal focusing in real time. Unlike STRIPED FISH,97 CUP avoids the need for a reference laser pulse, which yields a simpler measurement system. Moreover, owing to the spectral response of the streak camera, CUP can measure laser fields with multiple wavelengths covering a wide spectral range. Thus, CUP clearly reveals the complex evolution of ultrafast dynamics, paving the way for single-shot characterization of ultrashort laser fields in diverse circumstances.

5.4.

Protecting Image Information Security

Information and communication security is critical for national security, enterprise operations, and personal privacy, but the advent of supercomputers and future quantum computers has made it much easier to attack digital information in repositories and in transmission. Recently, quantum key distribution (QKD) was developed as a cryptographic technique to protect information and communication security,98 and a series of studies has demonstrated that it can maintain security in a variety of research fields.99–103 In contrast to traditional cryptographic methods, such as elliptic curve cryptography, the digital signature algorithm, and the advanced encryption standard, a QKD system uses quantum mechanics to guarantee secure communication by enabling two parties to produce a shared random secret key known only to them.98,104,105 However, the relatively low key-generation rate of QKD greatly limits the information transmission bandwidth.106

To mitigate this limitation, Yang et al. developed a new hybrid classical-quantum cryptographic scheme that combines QKD with a CS algorithm to improve the information transmission bandwidth.107 This approach employs the mathematical model of CUP in Fig. 2: the quantum keys generated by QKD are used to encrypt and decrypt compressed 3-D image information, and CS theory is used to encode and decode the ciphertext. As shown in Fig. 14, the quantum key generated by the QKD system is transmitted over the quantum channel, whereas the ciphertext encoded by the CS algorithm is transmitted over the classical channel. Because CS decoding is a nondeterministic polynomial-time hard (NP-hard) problem for which only an approximate solution is sought, the CS-QKD system can achieve higher encryption efficiency under low key-generation-rate conditions. Based on analyses of the normalized correlation coefficient in several attack trials, the CS-QKD scheme was shown to improve the information transmission bandwidth by a factor of approximately three and to ensure secure communication at a random code error rate of 3% and an interception rate of 19.5%. This scheme improves the information transmission bandwidth of both the quantum and classical channels, while enabling real-time evaluation of information and communication security by monitoring the QKD system. Overall, this interdisciplinary study could advance hybrid classical-quantum cryptography to a new level and find practical applications in information and communication security.
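The security analyses mentioned above rely on the normalized correlation coefficient between original and decrypted images. A minimal sketch of that metric on synthetic data (the images here are random stand-ins, not data from Ref. 107):

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized correlation coefficient between two images: close to 1
    for faithful recovery, near 0 for a failed decryption attempt."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(2)
original = rng.random((32, 32))  # stand-in for a transmitted image
attack = rng.random((32, 32))    # stand-in for an attacker's reconstruction
ncc_self = normalized_correlation(original, original)   # near 1
ncc_attack = normalized_correlation(original, attack)   # near 0
```

A high coefficient for the legitimate receiver and a near-zero coefficient for intercepted reconstructions is the signature of a secure scheme in such attack trials.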

Fig. 14

A schematic diagram of the experimental design of compressed 3-D image information secure communication. Figure reprinted from Ref. 107.


6.

Conclusions and Prospects

In this mini-review, we have focused on recent advances in CUP. In the evolution from its first implementations to current systems that can capture ultrafast optical events at imaging speeds as high as 10 Tfps, CUP has achieved an unprecedented ability to visualize irreversible transient phenomena with single-shot detection. In addition, a variety of technical improvements in both data acquisition and image reconstruction have strengthened the capabilities of this technique. Furthermore, by extending the CS model to existing methods such as STAMP, UED/UEM, and QKD, the CUP modality has shown multiple possibilities for fusion with other techniques to achieve remarkable improvements.

In just a few years, CUP has achieved the highest sequence depth and a very high imaging speed among single-shot ultrafast imaging techniques, but it still lags in spatial resolution and pixel count per frame. To address this shortcoming, an all-optical design such as COSUP, a multiple-channel design such as LLE-CUP or multiencoding CUP, or a code-optimization design such as a GA-assisted approach can be pursued. Inspired by these strategies, other schemes could be developed. For example, by combining an electro-optical deflector with the deflection-angle acceleration technique, an all-optical design has a good chance of pushing the imaging speed beyond 100 billion fps. In addition, a spectrally resolved CUP scheme capable of resolving transient temporal–spatial–spectral information simultaneously could be realized by inserting spectral elements into current CUP systems. With regard to improving image quality, increasingly accurate and intelligent reconstruction algorithms are of great importance. Recently, with the continuing maturation of deep learning in artificial intelligence, this technology has been applied to computational imaging methods such as super-resolution imaging,108,109 lensless imaging,110,111 and ghost imaging.112 It will be a significant step forward when deep learning is employed with CUP to recover an event precisely and reliably. In addition, domains in which the scene possesses higher sparsity can be explored to further improve the efficiency and robustness of this technique. There is every reason to expect further progress and additional applications of this rising methodology in the future.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 91850202, 11774094, 11727810, 11804097, and 61720106009), the Science and Technology Commission of Shanghai Municipality (Grant Nos. 19560710300 and 17ZR146900), and the China Postdoctoral Science Foundation (Grant No. 2018M641958).

References

1. B. Clegg, The Man Who Stopped Time: The Illuminating Story of Eadweard Muybridge – Pioneer Photographer, Father of the Motion Picture, Murderer, Joseph Henry Press, Washington, D.C. (2007).

2. S. X. Hu and L. A. Collins, "Attosecond pump probe: exploring ultrafast electron motion inside an atom," Phys. Rev. Lett. 96(7), 073004 (2006). https://doi.org/10.1103/PhysRevLett.96.073004

3. C. P. Hauri et al., "Generation of intense, carrier-envelope phase-locked few-cycle laser pulses through filamentation," Appl. Phys. B 79(6), 673–677 (2004). https://doi.org/10.1007/s00340-004-1650-z

4. T. Gaumnitz et al., "Streaking of 43-attosecond soft-x-ray pulses generated by a passively CEP-stable mid-infrared driver," Opt. Express 25(22), 27506–27518 (2017). https://doi.org/10.1364/OE.25.027506

5. S. A. Hilbert et al., "Temporal lenses for attosecond and femtosecond electron pulses," Proc. Natl. Acad. Sci. U. S. A. 106(26), 10558–10563 (2009). https://doi.org/10.1073/pnas.0904912106

6. S. P. Weathersby et al., "Mega-electron-volt ultrafast electron diffraction at SLAC National Accelerator Laboratory," Rev. Sci. Instrum. 86(7), 073702 (2015). https://doi.org/10.1063/1.4926994

7. Y. Morimoto and P. Baum, "Diffraction and microscopy with attosecond electron pulse trains," Nat. Phys. 14(3), 252–256 (2018). https://doi.org/10.1038/s41567-017-0007-6

8. M. T. Hassan, "Attomicroscopy: from femtosecond to attosecond electron microscopy," J. Phys. B 51(3), 032005 (2018). https://doi.org/10.1088/1361-6455/aaa183

9. D. R. Solli et al., "Optical rogue waves," Nature 450(7172), 1054–1057 (2007). https://doi.org/10.1038/nature06402

10. B. J. Siwick et al., "An atomic-level view of melting using femtosecond electron diffraction," Science 302(5649), 1382–1385 (2003). https://doi.org/10.1126/science.1090052

11. J. Yang et al., "Imaging CF3I conical intersection and photodissociation dynamics with ultrafast electron diffraction," Science 361(6397), 64–67 (2018). https://doi.org/10.1126/science.aat0049

12. R. S. Craxton et al., "Direct-drive inertial confinement fusion: a review," Phys. Plasmas 22(11), 110501 (2015). https://doi.org/10.1063/1.4934714

13. J. Y. Liang et al., "Single-shot ultrafast optical imaging," Optica 5(9), 1113–1127 (2018). https://doi.org/10.1364/OPTICA.5.001113

14. V. Tiwari, M. Sutton, and S. McNeill, "Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation," Exp. Mech. 47(4), 561–579 (2007). https://doi.org/10.1007/s11340-006-9011-y

15. X. Wang et al., "High-frame-rate observation of single femtosecond laser pulse propagation in fused silica using an echelon and optical polarigraphy technique," Appl. Opt. 53(36), 8395–8399 (2014). https://doi.org/10.1364/AO.53.008395

16. K. Nakagawa et al., "Sequentially timed all-optical mapping photography (STAMP)," Nat. Photonics 8(9), 695–700 (2014). https://doi.org/10.1038/nphoton.2014.163

17. T. Kakue et al., "Digital light-in-flight recording by holography by use of a femtosecond pulsed laser," IEEE J. Sel. Top. Quantum Electron. 18(1), 479–485 (2012). https://doi.org/10.1109/JSTQE.2011.2147281

18. N. H. Matlis, A. Axley, and W. P. Leemans, "Single-shot ultrafast tomographic imaging by spectral multiplexing," Nat. Commun. 3, 1111 (2012). https://doi.org/10.1038/ncomms2120

19. L. Gao et al., "Single-shot compressed ultrafast photography at one hundred billion frames per second," Nature 516(7529), 74–77 (2014). https://doi.org/10.1038/nature14005

20. F. Mochizuki et al., "Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor," Opt. Express 24(4), 4155–4176 (2016). https://doi.org/10.1364/OE.24.004155

21. D. Dudley, W. M. Duncan, and J. Slaughter, "Emerging digital micromirror device (DMD) applications," Proc. SPIE 4985, 14–25 (2003). https://doi.org/10.1117/12.480761

22. R. M. Willett, R. F. Marcia, and J. M. Nichols, "Compressed sensing for practical optical imaging systems: a tutorial," Opt. Eng. 50(7), 072601 (2011). https://doi.org/10.1117/1.3596602

23. E. J. Candès, J. K. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52(2), 489–509 (2006). https://doi.org/10.1109/TIT.2005.862083

24. E. J. Candès, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math. 59(8), 1207–1223 (2006). https://doi.org/10.1002/(ISSN)1097-0312

25. E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?," IEEE Trans. Inf. Theory 52, 5406–5425 (2006). https://doi.org/10.1109/TIT.2006.885507

26. J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proc. IEEE 98(6), 948–958 (2010). https://doi.org/10.1109/JPROC.2010.2044010

27. C. S. Yang et al., "Optimizing codes for compressed ultrafast photography by the genetic algorithm," Optica 5(2), 147–151 (2018). https://doi.org/10.1364/OPTICA.5.000147

28. J. M. Bioucas-Dias and M. A. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Process. 16(12), 2992–3004 (2007). https://doi.org/10.1109/TIP.2007.909319

29. C. Yang et al., "Compressed ultrafast photography by multi-encoding imaging," Laser Phys. Lett. 15(11), 116202 (2018). https://doi.org/10.1088/1612-202X/aae198

30. M. Elad, "Optimized projections for compressed sensing," IEEE Trans. Signal Process. 55(12), 5695–5702 (2007). https://doi.org/10.1109/TSP.2007.900760

31. J. M. Duarte-Carvajalino et al., "Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization," IEEE Trans. Image Process. 18(7), 1395–1408 (2009). https://doi.org/10.1109/TIP.2009.2022459

32. V. Abolghasemi et al., "On optimization of the measurement matrix for compressive sensing," in Proc. 18th Eur. Signal Process. Conf. (EUSIPCO), 427–431 (2010).

33. J. Liang et al., "Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse," Sci. Adv. 3(1), e1601814 (2017). https://doi.org/10.1126/sciadv.1601814

34. J. Liang, L. Zhu, and L. V. Wang, "Single-shot real-time femtosecond imaging of temporal focusing," Light Sci. Appl. 7(1), 42 (2018). https://doi.org/10.1038/s41377-018-0044-7

35. J. Y. Liang et al., "Encrypted three-dimensional dynamic imaging using snapshot time-of-flight compressed ultrafast photography," Sci. Rep. 5, 15504 (2015). https://doi.org/10.1038/srep15504

36. C. S. Yang et al., "Improving the image reconstruction quality of compressed ultrafast photography via an augmented Lagrangian algorithm," J. Opt. 21(3), 035703 (2019). https://doi.org/10.1088/2040-8986/ab00d9

37. L. Zhu et al., "Space- and intensity-constrained reconstruction for compressed ultrafast photography," Optica 3(7), 694–697 (2016). https://doi.org/10.1364/OPTICA.3.000694

38. A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imaging Vis. 20(1–2), 89–97 (2004). https://doi.org/10.1023/B:JMIV.0000011325.36760.1e

39. M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Trans. Image Process. 20(3), 681–695 (2011). https://doi.org/10.1109/TIP.2010.2076294

40. J. Nocedal and S. J. Wright, Numerical Optimization, pp. 511–513, Springer, New York (2006).

41. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). https://doi.org/10.1109/JSTSP.2007.910281

42. J. Hunt et al., "Metamaterial apertures for computational imaging," Science 339(6117), 310–313 (2013). https://doi.org/10.1126/science.1230054

43. Y. Lu et al., "Compressed ultrafast spectral–temporal photography," Phys. Rev. Lett. 122(19), 193904 (2019). https://doi.org/10.1103/PhysRevLett.122.193904

44. D. L. Qi et al., "Compressed ultrafast electron diffraction imaging through electronic encoding," Phys. Rev. Appl. 10(5), 054061 (2018). https://doi.org/10.1103/PhysRevApplied.10.054061

45. X. L. Liu et al., "Single-shot real-time sub-nanosecond electron imaging aided by compressed sensing: analytical modeling and simulation," Micron 117, 47–54 (2019). https://doi.org/10.1016/j.micron.2018.11.003

46. X. L. Liu et al., "Single-shot compressed optical-streaking ultra-high-speed photography," Opt. Lett. 44(6), 1387–1390 (2019). https://doi.org/10.1364/OL.44.001387

47. T. Chen et al., "Ultrasensitive fluorescent proteins for imaging neuronal activity," Nature 499(7458), 295–300 (2013). https://doi.org/10.1038/nature12354

48. H. Mikami, L. Gao, and K. Goda, "Ultrafast optical imaging technology: principles and applications of emerging methods," Nanophotonics 5(4), 497–509 (2016). https://doi.org/10.1515/nanoph-2016-0026

49. D. Jaque and F. Vetrone, "Luminescence nanothermometry," Nanoscale 4(15), 4301–4326 (2012). https://doi.org/10.1039/c2nr30764b

50. S. T. Flock et al., "Monte Carlo modeling of light propagation in highly scattering tissues – I. Model predictions and comparison with diffusion theory," IEEE Trans. Biomed. Eng. 36(12), 1162–1168 (1989). https://doi.org/10.1109/TBME.1989.1173624

51. C. Zhu and Q. Liu, "Review of Monte Carlo modeling of light transport in tissues," J. Biomed. Opt. 18(5), 050902 (2013). https://doi.org/10.1117/1.JBO.18.5.050902

52. L. V. Wang and H. I. Wu, Biomedical Optics: Principles and Imaging, Wiley, New Jersey (2009).

53. R. M. Koehl, S. Adachi, and K. A. Nelson, "Direct visualization of collective wavepacket dynamics," J. Phys. Chem. A 103(49), 10260–10267 (1999). https://doi.org/10.1021/jp9922007

54. Z. Wang, F. Su, and F. A. Hegmann, "Ultrafast imaging of terahertz Cherenkov waves and transition-like radiation in LiNbO3," Opt. Express 23(6), 8073–8086 (2015). https://doi.org/10.1364/OE.23.008073

55. D. Huang et al., "Optical coherence tomography," Science 254(5035), 1178–1181 (1991). https://doi.org/10.1126/science.1957169

56. A. N. Obeid et al., "A critical review of laser Doppler flowmetry," J. Med. Eng. Technol. 14(5), 178–181 (1990). https://doi.org/10.3109/03091909009009955

57. T. Durduran et al., "Diffuse optics for tissue monitoring and tomography," Rep. Prog. Phys. 73(7), 076701 (2010). https://doi.org/10.1088/0034-4885/73/7/076701

58. K. Omasa, F. Hosoi, and A. Konishi, "3D lidar imaging for detecting and understanding plant responses and canopy structure," J. Exp. Bot. 58(4), 881–898 (2007). https://doi.org/10.1093/jxb/erl142

59. S. L. Liu et al., "Fast and high-accuracy localization for three-dimensional single-particle tracking," Sci. Rep. 3, 2462 (2013). https://doi.org/10.1038/srep02462

60. B. Javidi, F. Okano, and J. Y. Son, Three-Dimensional Imaging, Visualization, and Display, Springer, New York (2009).

61. A. Koschan et al., 3D Imaging for Safety and Security, Springer, New York (2007).

62. T. Bell and S. Zhang, "Toward superfast three-dimensional optical metrology with digital micromirror device platforms," Opt. Eng. 53(11), 112206 (2014). https://doi.org/10.1117/1.OE.53.11.112206

63. J. Kittler et al., "3D assisted face recognition: a survey of 3D imaging, modelling and recognition approaches," 114–120 (2005). https://doi.org/10.1109/CVPR.2005.377

64. P. Dickson et al., "Mosaic generation for under vehicle inspection," 251–256 (2002). https://doi.org/10.1109/ACV.2002.1182190

65. S. R. Sukumar et al., "Robotic three-dimensional imaging system for under-vehicle inspection," J. Electron. Imaging 15(3), 033008 (2006). https://doi.org/10.1117/1.2238565

66. "Deliver mission critical insights," http://www.zebraimaging.com/defense/

67. C. W. Trussell, "3D imaging for army applications," Proc. SPIE 4377, 126–131 (2001). https://doi.org/10.1117/12.440100

68. J. Geng, "Structured-light 3D surface imaging: a tutorial," Adv. Opt. Photonics 3(2), 128–160 (2011). https://doi.org/10.1364/AOP.3.000128

69. P. S. Huang and S. Zhang, "Fast three-step phase-shifting algorithm," Appl. Opt. 45(21), 5086–5091 (2006). https://doi.org/10.1364/AO.45.005086

70. B. Javidi, G. Zhang, and J. Li, "Encrypted optical memory using double-random phase encoding," Appl. Opt. 36(5), 1054–1058 (1997). https://doi.org/10.1364/AO.36.001054

71. A. Velten et al., "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nat. Commun. 3(3), 745 (2012). https://doi.org/10.1038/ncomms1747

72. G. Satat et al., "Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion," Nat. Commun. 6, 6796 (2015). https://doi.org/10.1038/ncomms7796

73. X. Xiao et al., "Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]," Appl. Opt. 52(4), 546–560 (2013). https://doi.org/10.1364/AO.52.000546

74. B. Sun et al., "3D computational imaging with single-pixel detectors," Science 340(6134), 844–847 (2013). https://doi.org/10.1126/science.1234454

75. Y. Y. Chen et al., "A 3-D surveillance system using multiple integrated cameras," 1930–1935 (2010). https://doi.org/10.1109/ICINFA.2010.5512016

76. J. Sell and P. O'Connor, "The Xbox One system on a chip and Kinect sensor," IEEE Micro 34(2), 44–53 (2014). https://doi.org/10.1109/MM.2014.9

77. G. Gariepy et al., "Detection and tracking of moving objects hidden from view," Nat. Photonics 10, 23–26 (2015). https://doi.org/10.1038/nphoton.2015.234

78. X. Liu et al., "Non-line-of-sight imaging using phasor-field virtual wave optics," Nature 572(7771), 620–623 (2019). https://doi.org/10.1038/s41586-019-1461-3

79. A. McCarthy et al., "Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection," Opt. Express 21(7), 8904–8915 (2013). https://doi.org/10.1364/OE.21.008904

80. 

A. Medina, F. Gayá and F. del Pozo, “Compact laser radar and three-dimensional camera,” J. Opt. Soc. Am. A, 23 (4), 800 –805 (2006). https://doi.org/10.1364/JOSAA.23.000800 JOAOD6 0740-3232 Google Scholar

81. 

S. Gokturk, H. Yalcin and C. Bamji, “A time-of-flight depth sensor: system description, issues and solutions,” 35 –44 (2004). Google Scholar

82. 

G. J. Iddan and G. Yahav, “Three-dimensional imaging in the studio and elsewhere,” Proc. SPIE, 4298 48 –55 (2001). https://doi.org/10.1117/12.424913 PSISDG 0277-786X Google Scholar

83. 

Advanced Scientific Concepts, Inc., “Products overview,” http://www.advancedscientificconcepts.com/products/Products.html Google Scholar

84. 

R. Stettner, H. Bailey and R. D. Richmond, “Eye-safe laser radar 3D imaging,” Proc. SPIE, 4377 46 –56 (2001). https://doi.org/10.1117/12.440125 PSISDG 0277-786X Google Scholar

85. 

C. Iaconis and I. A. Walmsley, “Spectral phase interferometry for direct electric-field reconstruction of ultrashort optical pulses,” Opt. Lett., 23 (10), 792 –794 (1998). https://doi.org/10.1364/OL.23.000792 OPLEDP 0146-9592 Google Scholar

86. 

D. J. Kane and R. Trebino, “Single-shot measurement of the intensity and phase of an arbitrary ultrashort pulses by using frequency-resolved optical gating,” Opt. Lett., 18 (10), 823 –825 (1993). https://doi.org/10.1364/OL.18.000823 OPLEDP 0146-9592 Google Scholar

87. 

F. Y. Cao et al., “Single-shot spatiotemporal intensity measurement of picosecond laser pulses with compressed ultrafast photography,” Opt. Lasers Eng., 116 89 –93 (2019). https://doi.org/10.1016/j.optlaseng.2019.01.002 Google Scholar

88. 

G. H. Zhu et al., “Simultaneous spatial and temporal focusing of femtosecond pulses,” Opt. Express, 13 (6), 2153 –2159 (2005). https://doi.org/10.1364/OPEX.13.002153 OPEXFF 1094-4087 Google Scholar

89. 

D. Oron, E. Tal and Y. Silberberg, “Scanningless depth-resolved microscopy,” Opt. Express, 13 (5), 1468 –1476 (2005). https://doi.org/10.1364/OPEX.13.001468 OPEXFF 1094-4087 Google Scholar

90. 

Z. Bor et al., “Femtosecond pulse front tilt caused by angular dispersion,” Opt. Eng., 32 (10), 2501 –2504 (1993). https://doi.org/10.1117/12.145393 Google Scholar

91. 

J. Hebling, “Derivation of the pulse front tilt caused by angular dispersion,” Opt. Quantum Electron., 28 (12), 1759 –1763 (1996). https://doi.org/10.1007/BF00698541 OQELDI 0306-8919 Google Scholar

92. 

T. Kubota et al., “Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation,” Opt. Express, 15 (22), 14348 –14354 (2007). https://doi.org/10.1364/OE.15.014348 OPEXFF 1094-4087 Google Scholar

93. 

Z. Y. Li et al., “Single-shot tomographic movies of evolving light-velocity objects,” Nat. Commun., 5 3085 (2014). https://doi.org/10.1038/ncomms4085 NCAOBW 2041-1723 Google Scholar

94. 

K. Goda, K. K. Tsia and B. Jalali, “Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena,” Nature, 458 (7242), 1145 –1149 (2009). https://doi.org/10.1038/nature07980 Google Scholar

95. 

T. Suzukiet et al., “Single-shot 25-frame burst imaging of ultrafast phase transition of Ge2Sb2Te5 with a sub-picosecond resolution,” Appl. Phys. Express, 10 (9), 092502 (2017). https://doi.org/10.7567/APEX.10.092502 APEPC4 1882-0778 Google Scholar

96. 

A. Ehn et al., “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl., 6 (9), e17045 (2017). https://doi.org/10.1038/lsa.2017.45 Google Scholar

97. 

P. Gabolde and R. Trebino, “Single-frame measurement of the complete spatiotemporal intensity and phase of ultrashort laser pulses using wavelength-multiplexed digital holography,” J. Opt. Soc. Am. B, 25 (6), A25 –A33 (2008). https://doi.org/10.1364/JOSAB.25.000A25 JOBPDE 0740-3224 Google Scholar

98. 

G. L. Long and X. S. Liu, “Theoretically efficient high-capacity quantum-key-distribution scheme,” Phys. Rev. A, 65 (3), 032302 (2002). https://doi.org/10.1103/PhysRevA.65.032302 Google Scholar

99. 

N. Gisin et al., “Quantum cryptography,” Rev. Mod. Phys., 74 (1), 145 –195 (2002). https://doi.org/10.1103/RevModPhys.74.145 RMPHAT 0034-6861 Google Scholar

100. 

T. Honjo et al., “Long-distance entanglement-based quantum key distribution over optical fiber,” Opt. Express, 16 (23), 19118 –19126 (2008). https://doi.org/10.1364/OE.16.019118 OPEXFF 1094-4087 Google Scholar

101. 

L. Gyongyosi, “Improved long-distance two-way continuous variable quantum key distribution over optical fiber,” FW2C.5 (2013). https://doi.org/10.1364/FIO.2013.FW2C.5 Google Scholar

102. 

D. J. Bernstein, “Introduction to post-quantum cryptography,” Post-Quantum Cryptography, 1 –14 Berlin (2009). Google Scholar

103. 

S. Ranganathan et al., “A three-party authentication for key distributed protocol using classical and quantum cryptography,” Int. J. Comput. Sci. Issues, 7 (5), 148 –153 (2010). Google Scholar

104. 

W. Liu et al., “Hybrid quantum private communication with continuous-variable and discrete-variable signals,” Sci. China Phys. Mech. Astron., 58 (2), 1 –7 (2015). https://doi.org/10.1007/s11433-014-5632-9 SCPMCL 1674-7348 Google Scholar

105. 

L. Gyongyosi and S. Imre, “Adaptive multicarrier quadrature division modulation for long-distance continuous-variable quantum key distribution,” Proc. SPIE, 9123 912307 (2014). https://doi.org/10.1117/12.2050095 PSISDG 0277-786X Google Scholar

106. 

L. C. Comandar et al., “Room temperature single-photon detectors for high bit rate quantum key distribution,” Appl. Phys. Lett., 104 (2), 021101 (2014). https://doi.org/10.1063/1.4855515 APPLAB 0003-6951 Google Scholar

107. 

C. S. Yang et al., “Compressed 3D image information and communication security,” Adv. Quantum Technol., 1 1800034 (2018). https://doi.org/10.1002/qute.v1.2 Google Scholar

108. 

C. Dong et al., “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intel., 38 295 –307 (2015). https://doi.org/10.1109/TPAMI.2015.2439281 ITPIDJ 0162-8828 Google Scholar

109. 

H. Wang et al., “Deep learning achieves super-resolution in fluorescence microscopy,” Nat. Methods, 16 103 –110 (2019). https://doi.org/10.1038/s41592-018-0239-0 1548-7091 Google Scholar

110. 

A. Sinha et al., “Lensless computational imaging through deep learning,” Optica, 4 1117 –1125 (2017). https://doi.org/10.1364/OPTICA.4.001117 Google Scholar

111. 

M. Lyu et al., “Learning-based lensless imaging through optically thick scattering media,” Adv. Photonics, 1 (3), 036002 (2019). https://doi.org/10.1117/1.AP.1.3.036002 AOPAC7 1943-8206 Google Scholar

112. 

M. Lyu et al., “Deep-learning-based ghost imaging,” Sci. Rep., 7 17865 (2017). https://doi.org/10.1038/s41598-017-18171-7 SRCEC3 2045-2322 Google Scholar

Biography

Dalong Qi received his PhD from East China Normal University, China, in 2017, including a research period at the Max Planck Institute for the Structure and Dynamics of Matter, Germany. He was a postdoctoral fellow at East China Normal University until 2019 and has been an associate professor there since. His research interests include ultrafast optical imaging, computational imaging, and ultrafast electron diffraction.

Shian Zhang received his PhD from East China Normal University, China, in 2006. He was a senior engineer at Spectra-Physics, Inc., and a postdoctoral fellow at Arizona State University until 2009. He has been a professor at East China Normal University since 2012, including a period as a visiting scholar at Washington University in St. Louis. His research interests include ultrafast optical imaging, computational imaging, and nonlinear optical microscopy.

Zhenrong Sun received his PhD from East China Normal University in 1998. He has been a professor at East China Normal University since 2001. His research interests include ultrafast optical imaging, femtosecond quantum control, femtosecond electron diffraction, and ultrafast laser spectroscopy.

Lihong V. Wang is the Bren Professor of Medical and Electrical Engineering at Caltech. He has published 530 journal articles (h-index: 134, citations: 74,000) and delivered 535 keynote/plenary/invited talks. He published the first functional photoacoustic CT and 3-D photoacoustic microscopy. He has received the Goodman Book Award, NIH Director’s Pioneer Award, OSA Mees Medal and Feld Award, IEEE Technical Achievement and Biomedical Engineering Awards, SPIE Chance Award, IPPA Senior Prize, and an honorary doctorate from Lund University, Sweden. He is a member of the National Academy of Engineering.

Biographies of the other authors are not available.

© The Authors. Published by SPIE and CLP under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Dalong Qi, Shian Zhang, Chengshuai Yang, Yilin He, Fengyan Cao, Jiali Yao, Pengpeng Ding, Liang Gao, Tianqing Jia, Jinyang Liang, Zhenrong Sun, and Lihong V. Wang "Single-shot compressed ultrafast photography: a review," Advanced Photonics 2(1), 014003 (28 February 2020). https://doi.org/10.1117/1.AP.2.1.014003
Received: 5 November 2019; Accepted: 12 February 2020; Published: 28 February 2020