Time-of-flight (TOF) three-dimensional (3-D) imaging is important for many applications, including robotics, security, and autonomous vehicles. Long-range light detection and ranging (LiDAR) systems typically rely on illuminating a scene with a short-pulsed laser and temporally correlating the reflected light intensity with the outgoing pulse to determine the time of flight to different points in the scene. A depth map can then be accurately determined by multiplying the time of flight by the speed of light (halved to account for the round trip), and a reflectivity map obtained from the amplitude of the reflected intensity.
The overall performance of a time-of-flight imaging system is determined by the choice of detector, the laser, the scanning hardware and strategy, the time-tagging electronics, and the image reconstruction algorithm. The most common approach for obtaining transverse spatial resolution from a LiDAR system uses a single time-resolving detector to determine the time of flight, one pixel at a time, by scanning the illumination and/or the detector across a field of view. Importantly, for long-range imaging applications where the reflected light intensity is low, such a system demands high-sensitivity single-photon counting devices, and it is often desirable to be sensitive to nonvisible wavelengths.1–3 When the detection is shot-noise limited, it is necessary to integrate the detections from many backscattered photons to improve statistical confidence. The inherent dead time of single-photon-sensitive detectors, typically on the order of tens of nanoseconds, prohibits the retrieval of short-range timing information from a single illumination pulse, leading to the use of high-repetition-rate pulsed lasers. Conventional opto-mechanical scanning technologies, for instance a pair of galvanometer mirrors, typically have scan rates in the kHz regime. When used in combination with a raster-scanning strategy, the acquisition time scales linearly with image resolution.
To acquire transverse spatial resolution, a desirable option might be to flood illuminate the scene and use an array of time-resolving detectors, such as Geiger-mode single-photon avalanche diode (SPAD) arrays,4–6 to acquire the time of flight for each pixel simultaneously. Currently, these arrays have resolutions of a few thousand pixels and can suffer limitations such as pixel cross talk, excessive dark noise, readout noise, and stringent requirements on readout clocks per pixel. This maturing technology already indicates a promising future for compact LiDAR systems, but an interesting question arises when comparing the fill factor, size, and overall quantum efficiency of these devices with a single large-area time-resolved detector used in combination with scanning hardware/strategies and image reconstruction algorithms. It is the latter which forms the motivation for this work.
In the last few years, there have been a number of interesting demonstrations for recovering 3-D images using a single-pixel detector. One scheme scans a scene, pixel by pixel, line by line, using a pulsed illumination source and measures the reflected light using an avalanche photodiode (APD), where the first detected photon is used to recover depth and reflectivity via time of flight.7 An alternative method makes use of structured pulsed illumination and a SPAD,8–10 but importantly, unlike raster-scanning techniques, is able to benefit from reduced acquisition times by employing compressed sensing principles, which take advantage of the sparsity in natural scenes.11–13
In this work, we demonstrate a 3-D imaging system, capable of recovering millimetric depth precision, using a single large-area photomultiplier, a high-speed spatial light modulator, and a short-pulsed near-infrared laser. The use of a simplified compressed sensing strategy, in conjunction with fast streaming of the time-resolved intensity histograms and an efficient image reconstruction algorithm, overcomes the time constraints of previous works and permits continuous real-time image reconstruction at up to 3 fps.
Our single-pixel 3-D imaging system, shown in Fig. 1, consists of a 100-fs pulsed laser with a repetition rate of 100 MHz (Toptica FemtoFErb) with a center wavelength of 780 nm and a high-speed digital-micromirror-device (DMD) (Texas Instruments Discovery 4100 supplied by Vialux, model V-7001) to provide time-varying structured illumination. A fast-response Geiger-mode photomultiplier tube (PicoQuant PMA192) is used in conjunction with a 50-mm diameter collection lens to detect the first backscattered photon resulting from each pattern interacting with the scene. Time-correlated single-photon counting (TCSPC) electronics (a customized Horiba DeltaHub) record the time of arrival for each detected “event” relative to the synchronization output provided by the laser. Our choice of TCSPC electronics enables continuous streaming of up to 20,000 histograms/s, with 25-ps bin widths, allowing for real-time signal processing suitable for this investigation.
For any single-pixel imaging system, which involves performing a series of measurements in time, the choice of scanning basis is an important consideration to optimize the image quality while minimizing the acquisition time. The finite modulation rate of the DMD implies a fundamental trade-off between acquisition time and image resolution. However, we note that most natural images exhibit similar characteristics, for example, sparsity in their spatial frequencies, the principle underpinning image compression techniques, which opens the possibility to compressively sense at the acquisition stage, such that fewer measurements than the number of image pixels are required.
A variety of approaches exist for performing sub-Nyquist sampling of a scene using a single-pixel camera. The most common approach in other work involves acquiring a series of measurements using a basis which is spatially incoherent with the scene properties, and utilizing a nonlinear optimization algorithm to recover an image whose spatial properties satisfy the prior knowledge of the scene characteristics.11,12 In general, the computational overhead associated with the recovery for such problems exceeds the acquisition time, which can prohibit real-time imaging applications. Nonetheless, a variety of post-processing techniques have been proposed and demonstrated to recover video-rate performance on 2-D13,14 and 3-D imaging systems.8–10
In a recent time-of-flight imaging demonstration15 using an integrating photodetector for detection, a subset of the naturally constructed Hadamard matrices16 was used to provide structured pulsed illumination. The corresponding backscattered intensities measured by the APD are subsequently used within an iterative reconstruction algorithm, having negligible computational overhead compared with the total acquisition time. This work was led by an understanding that, for an iterative reconstruction algorithm, the patterns associated with the largest signals have the most influence on the final reconstructed image quality.17 It was noted that practical demonstrations of the so-called “evolutionary” compressed sensing strategy have limitations for scenes exhibiting dynamic behavior, when the spatial properties change significantly from frame to frame and within each frame.
A subsequent investigation18 using a similar system, but employing an integrating APD for detection, made use of a visible camera to obtain 2-D images of the scene, which were used to determine the optimal subset of the basis for sampling 3-D information. Here, the 2-D image stream provided by the visible camera at a rate of 30 Hz is used to continuously “simulate” the expected intensity signals for the entire basis, which are used to order the basis according to their magnitudes, from which an arbitrary subset of the basis is chosen for display on the DMD. This is an example of a stimulated or adaptive compressed sensing strategy.
In this work, the performance of the 3-D computational LiDAR system, described in Sec. 2, was evaluated by scanning the spatial properties of the scene using either the complete Hadamard basis or a subset of the Hadamard basis. The performance of the system using alternative compressive sensing strategies forms the basis of a follow-up publication.
The Hadamard basis has several important properties. Hadamard matrices, $H_n$, form an orthogonal basis set: they are $n \times n$ square matrices with entries of $\pm 1$ which satisfy $H_n H_n^{\mathrm{T}} = n I_n$, where $I_n$ is the $n \times n$ identity matrix.
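The orthogonality relation can be checked numerically. The sketch below builds a naturally constructed (Sylvester) Hadamard matrix with NumPy; the recursive doubling used here is one standard construction and is an illustration rather than the paper's own code:

```python
import numpy as np

def sylvester_hadamard(n):
    """Naturally constructed (Sylvester) Hadamard matrix; n must be a power of 2."""
    H = np.array([[1]])
    while H.shape[0] < n:
        # Each doubling step tiles [[H, H], [H, -H]]
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(16)
assert set(np.unique(H)) == {-1, 1}           # entries are exactly +/-1
# Orthogonality: H @ H.T equals n times the identity
assert np.array_equal(H @ H.T, 16 * np.eye(16, dtype=int))
```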
An important feature of naturally constructed Hadamard matrices is that each row samples exactly 50% of the “pixels” in the scene. We note that the natural Hadamard patterns, once reshaped in two dimensions for display on the DMD, exhibit spatial properties ranging from coarse to fine resolution. Ordering and displaying the basis on the DMD according to the scale of the pattern resolution can be advantageous if the scene exhibits predominantly low spatial frequencies, since the number of patterns required to enable image reconstruction can be significantly reduced.19 In this work, the Hadamard basis is ordered in this way, such that the spatial frequencies of the scene are effectively measured from lowest to highest, allowing the number of measurements acquired in any frame to be chosen arbitrarily by the camera operator.
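The coarse-to-fine ordering can be sketched as follows. The paper does not state its ordering metric; counting sign changes in the reshaped 2-D pattern is used here as one plausible proxy for spatial frequency:

```python
import numpy as np
from scipy.linalg import hadamard

def order_by_detail(H, side):
    """Order Hadamard rows coarse-to-fine by counting sign changes of each
    row once reshaped to a side x side 2-D pattern (a proxy for spatial
    frequency; the paper's exact ordering metric is not specified)."""
    def changes(row):
        p = row.reshape(side, side)
        return np.abs(np.diff(p, axis=0)).sum() + np.abs(np.diff(p, axis=1)).sum()
    return H[np.argsort([changes(r) for r in H])]

H = order_by_detail(hadamard(16), side=4)
# The flat (all +1) pattern has zero sign changes, so it is measured first
assert np.array_equal(H[0], np.ones(16, dtype=int))
```

Truncating this ordered sequence after an arbitrary number of rows then realizes the operator-selectable subset described above.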
It is worth noting that since the DMD is a binary modulator, and only one detector is used in this demonstration, we are able to measure a signal corresponding to the image intensities overlapping with the +1 values in the Hadamard patterns, and would need to estimate the intensity associated with the −1 values. Instead, we choose to display each pattern, consisting of the +1 values, followed immediately by a pattern containing the −1 values (its negative). From these two measurements we obtain a differential measurement by subtracting one from the other, similar to performing heterodyne detection, which has the benefit of removing external noise arising from fluctuations in the source brightness or ambient intensity changes.
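The pattern/negative-pattern scheme can be illustrated numerically. This is a sketch with a hypothetical 16-pixel scene; the two binary (mirror on/off) measurements combine into the full ±1 projection:

```python
import numpy as np
from scipy.linalg import hadamard

scene = np.random.default_rng(1).random(16)   # stand-in scene reflectivity
p = hadamard(16)[5]                           # one +/-1 Hadamard pattern

pos = (p == 1).astype(float)                  # micromirrors "on" for the +1 entries
neg = (p == -1).astype(float)                 # its negative: "on" for the -1 entries
s_diff = pos @ scene - neg @ scene            # differential measurement

# The pair of binary measurements recovers the full +/-1 projection;
# any common-mode offset (e.g., ambient light) cancels in the subtraction.
assert np.allclose(s_diff, p @ scene)
```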
Figure 2 shows an illustration of sample Hadamard pattern pairs displayed on the DMD for a period of approximately 0.5 ms (during which the 780-nm laser pulses approximately 50,000 times) and the corresponding histograms obtained from the TCSPC electronics. For each pair of patterns, we obtain a differential histogram, which is subsequently used to perform image reconstruction. Each histogram accumulates the photon detections recorded while its pattern is displayed.
Real-Time Image Reconstruction
For any single-pixel imaging system, the time-varying intensity signal $S_i(t)$ associated with each projected pattern $\mathbf{P}_i$ (of length $N$, the number of pixels) is directly proportional to the overlap between the pattern and the scene reflectivity $\mathbf{R}(t)$ (also of length $N$), given by $S_i(t) \propto \sum_{j=1}^{N} P_{ij} R_j(t)$.
Choosing to sample a scene using the complete or an incomplete Hadamard basis allows a simple iterative reconstruction algorithm to be employed, having negligible computational time on a standard computer processing unit. Thus, after each pattern and corresponding histogram, an estimate of the 3-D image cube, $\mathbf{X}(t)$, can be reconstructed according to $\mathbf{X}(t) = \sum_{i} S_i(t)\, \mathbf{P}_i$, where the sum runs over the patterns measured so far.
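The measurement and iterative-sum reconstruction steps can be sketched together. With the complete orthogonal basis the accumulated sum recovers the scene exactly, up to a factor of $n$; the scene, sizes, and time binning below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import hadamard

n, t_bins = 16, 8
H = hadamard(n).astype(float)                    # measurement patterns, one per row
R = np.random.default_rng(2).random((n, t_bins)) # scene: reflectivity per pixel per time bin

# Measurement: each differential histogram is the overlap of a pattern with the scene
S = H @ R                                        # shape (n, t_bins)

# Iterative reconstruction: accumulate each pattern weighted by its histogram
X = np.zeros((n, t_bins))
for i in range(n):
    X += np.outer(H[i], S[i])

# With the complete orthogonal basis, X recovers the scene up to a factor n
assert np.allclose(X, n * R)
```

Truncating the loop to a subset of the ordered basis gives the compressive (incomplete-basis) estimate used in the experiments.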
Operating the system in real time typically demands short integration times, thus the detected photon flux is often low. We can, however, apply “spatiotemporal” smoothing to the reconstructed image cube to help overcome the effects of Poissonian noise. In this work, we convolve the image cube with a normalized 3-D smoothing kernel having dimensions (3, 3, 5).
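The smoothing step can be sketched with `scipy.ndimage`. Note that the paper gives only the kernel dimensions, not its values, so a normalized box (uniform) kernel is an assumption here:

```python
import numpy as np
from scipy.ndimage import convolve

# ASSUMPTION: uniform kernel values; the paper states only the (3, 3, 5) shape
cube = np.random.default_rng(3).random((32, 32, 100))  # (x, y, time-bin) image cube
kernel = np.ones((3, 3, 5)) / 45.0                     # 3*3*5 = 45 -> kernel sums to 1

smoothed = convolve(cube, kernel, mode="nearest")      # spatiotemporal smoothing

assert smoothed.shape == cube.shape
assert np.all(np.isfinite(smoothed))
```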
It has been shown that the ranging precision can be enhanced beyond the limits of the system hardware by performing a variety of techniques on the measured intensity signals, such as parametric deconvolution,20 curve fitting or interpolation.15 However, as a result of technical difficulties experienced in this investigation, we are unable to report on such improvements to the ranging at this time.
The signal processing and image reconstruction algorithm that were used to obtain the results presented here were designed and implemented entirely in the LabVIEW software development environment. An outline of the structure of the LabVIEW software used, summarizing the main functions, is provided in Appendix A.
In one experiment, a scene consisting of a mannequin head was imaged using an 85-mm focal length lens, and signal processing was performed in real time at a rate of 3 fps. To achieve this, the first 333 of the 1024 Hadamard patterns from the ordered sequence were displayed one after another continuously on the DMD. A histogramming range of 100 bins on the TCSPC electronics was chosen for subsequent image processing as described in Sec. 4. The modulation rate of the DMD was 2052 Hz, that is, each pattern was displayed for approximately 0.5 ms, in which time the laser pulses approximately 50,000 times, the resulting detected “events” being added to each histogram over a temporal range of 2.5 ns. As the repetition rate of the laser was 100 MHz, the maximum imaging range equates to 1.5 m. A sample of the reconstructed intensity and depth maps is shown in Fig. 3. Throughout the acquisition, the scene is dynamically changing to show a polystyrene mannequin head, a waving hand, a head wearing safety glasses, and a hand giving a “thumbs-up” gesture. The video can be found in the online supplementary materials.
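The range and pulse-count figures above follow directly from the repetition and modulation rates, as this short sketch shows:

```python
# Maximum unambiguous range and per-pattern pulse count follow directly
# from the laser repetition rate and the DMD modulation rate.
c = 3.0e8  # speed of light (m/s), rounded

def max_unambiguous_range(rep_rate_hz):
    """One pulse period of round-trip travel, halved for the return path."""
    return c / (2.0 * rep_rate_hz)

assert max_unambiguous_range(100e6) == 1.5   # 100-MHz laser -> 1.5 m
assert max_unambiguous_range(60e6) == 2.5    # 60-MHz laser -> 2.5 m

# At a 2052-Hz DMD modulation rate, each pattern collects roughly 50,000 pulses
pulses_per_pattern = 100e6 / 2052
assert 48000 < pulses_per_pattern < 50000
```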
In a separate experiment, the source was replaced with a 6-ps pulsed supercontinuum laser with a repetition rate of 60 MHz (Fianium FemtoPower), which was spectrally filtered, and a 50-mm focal length lens was used. The repetition rate of this source provides a maximum imaging range of 2.5 m. A scene consisting of a telescope, a suspended polystyrene ball, a polystyrene mannequin head, and an angled wall was arranged at a range of 8 m from the camera. All 4096 Hadamard patterns were used to reconstruct images at 64 × 64 pixel resolution. A histogramming range of 500 bins on the TCSPC electronics was chosen for this experiment, and the modulation rate of the DMD was reduced to 50 Hz, i.e., each pattern was displayed for 20 ms, in which time the laser pulses 1.2 million times. For this scene, the detected “events” for each pattern were added to the corresponding histogram over a temporal range of 12.5 ns. The reconstructed intensity and depth maps are shown in Fig. 4, where the number of patterns used for reconstruction corresponds to compression ratios of 10:1, 3:1, 1.3:1, and 1:1 (the complete set of 4096), respectively.
We observe that for this scene the intensity map, and in particular the depth map, are well reconstructed at a compression ratio of 3:1, which enables the system to operate faster than when sampling with the complete basis.
We note that this method of reconstructing the 3-D image cube provides access to the pulsed illumination as it propagates throughout the scene, equivalent to having a high-speed camera capturing at 40 billion frames per second. A sample of frames from this high-speed video is shown in Fig. 5, depicting the sheet of light as it reflects from the scene at different times.
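The equivalent frame rate follows from the 25-ps histogram bin width, and the light-in-flight "frames" are simply temporal slices of the cube. A minimal sketch (the cube here is a placeholder array):

```python
import numpy as np

# Each 25-ps histogram bin is one temporal slice of the reconstructed cube,
# so stepping through the bins plays back the illumination in flight.
bin_width_s = 25e-12
equivalent_fps = 1.0 / bin_width_s
assert abs(equivalent_fps - 40e9) < 1.0      # 40 billion frames per second

cube = np.zeros((32, 32, 100))               # placeholder (x, y, time-bin) cube
frame = cube[:, :, 17]                       # illumination 17 bins after the trigger
assert frame.shape == (32, 32)
```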
Conclusions and Future Work
We have demonstrated a photon-counting computational LiDAR system, utilizing short-pulsed structured illumination and a fast-response photomultiplier tube, for reconstructing 3-D scenes at up to 3 fps. We demonstrated results obtained when applying a very simple compressive sensing strategy using a subset of the Hadamard basis ordered according to spatial frequencies. Importantly, this method allows for intensity and depth image reconstruction in less time than the acquisition, which enables continuous real-time operation. A variety of alternative sampling strategies can also be employed, which may yield improved performance, for instance, the use of microscanning,21 spatially varying sampling strategies,22 or deep learning,23 all of which are the subjects of ongoing work and follow-up publications. It is worth pointing out that this work has been carried out in a laboratory with control of the ambient lighting. Performing similar demonstrations in other scenarios, such as outdoors, would require consideration of the sensitivity of the photon-counting detector technology used (e.g., PMT). For instance, a narrowband spectral filter matching the output of the illumination source would be necessary on the detection channel to reduce the background count rate. Moreover, when operating at 780 nm, the background count rate from solar activity may add excessive noise and significantly reduce the overall performance and image quality of this system. Importantly, the operational spectrum of the DMD (400 to 2500 nm) makes these devices good candidates for extending these techniques to longer wavelengths, such as the short-wave infrared region, where there are several operational advantages, such as higher-power eye-safe lasers, enhanced visibility at long range due to reduced atmospheric scattering, and significantly reduced solar background.
Outline of Software Implementation
The LabVIEW program developed for this experimental demonstration can be summarized by the following key operations:
1. Initializing and setup
• Construct and order the Hadamard patterns for displaying on DMD.
• Initialize the DMD and mode of operation (in this demonstration master mode).
• Initialize the Horiba DeltaHub histogramming electronics and mode of operation (in this demonstration slave mode).
• Upload complete basis to the available RAM onboard the DMD controller.
2. Pattern display on DMD
• An independent while loop running continuously pending a user input to stop.
• User defined control of the pattern subset size and/or pattern order for subsequent display on the DMD (in this demonstration, the size was 333 patterns and the order was unchanged).
• For each frame, the associated series of patterns were displayed at 2052 Hz, uninterrupted by computer communication.
3. Histogram data acquisition
4. Signal processing and image reconstruction
• An independent while loop running continuously pending a user input to stop.
• Preprocessing of histogram data to first ensure no data have been lost or corrupted.
• Recover differential intensity signals from pairs of patterns displayed on DMD (corresponding to positive/negative values in the Hadamard matrix).
• Calibrate the differential intensity signals to account for attenuation and the speed of light propagation, thereby converting time of flight to depth.
• Reconstruct 3-D image cube via iterative sum of known patterns weighted by histogram bin intensity.
• Perform spatiotemporal smoothing on the 3-D image cube and apply thresholding to reduce noise.
• Recover the intensity image by integrating each pixel (x, y) in the image cube along the temporal (depth) dimension.
• Recover the depth image by finding the bin where the intensity is maximum.
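The final two recovery steps above reduce to a sum and an argmax over the temporal axis of the cube. A minimal sketch, with a random placeholder cube and the system's 25-ps bin width:

```python
import numpy as np

rng = np.random.default_rng(4)
cube = rng.random((32, 32, 100))          # smoothed (x, y, time-bin) image cube

intensity = cube.sum(axis=2)              # integrate along the temporal (depth) axis
peak_bin = cube.argmax(axis=2)            # histogram bin of maximum intensity

bin_width_s, c = 25e-12, 3.0e8
depth = peak_bin * bin_width_s * c / 2.0  # one 25-ps bin corresponds to 3.75 mm

assert intensity.shape == (32, 32) and depth.shape == (32, 32)
assert depth.max() <= 99 * bin_width_s * c / 2.0   # peak bin is at most index 99
```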
M.P.E. would like to thank Robert Lamb and David Humphreys at Leonardo UK for valuable discussions. M.P.E. acknowledges financial support from UK Quantum Technology Hub in Quantum Enhanced Imaging (Grant No. EP/M01326X/1) and the European Research Council (TWISTS, Grant No. 192382). M.J.P. acknowledges support from the Wolfson Foundation and the Royal Society. The authors declare that there are no conflicts of interest to disclose. The data for this article can be found in an open-access repository at http://dx.doi.org/10.5525/gla.researchdata.565.
Matthew Edgar is a research associate in the Optics Group at the University of Glasgow. He received his BSc and PhD degrees in physics and astronomy in 2007 and 2011, respectively. He started his research career in the Institute for Gravitational Research at Glasgow, developing advanced interferometric techniques to enhance the sensitivity of long-baseline gravitational wave detectors. Since joining the Optics Group at Glasgow, he has been investigating the use of camera technology to perform fundamental tests of quantum mechanics and more recently has been developing low-cost computational imaging systems for applications in methane imaging and 3-D imaging.
Steven Johnson is a research associate at the University of Glasgow working in the Optics Group. He received a MSci in theoretical physics and a PhD in ultracold atoms from the University of Birmingham in 2012. He has previously worked in industry specialising in CMOS imaging sensors. His research interests involve applications of temporally resolved computational imaging for measuring very fast phenomena.
David Phillips is a Royal Academy of Engineering research fellow in the Physics Department, University of Exeter. Since graduating with a physics degree in 2004, he has spent a few years working as a systems engineer in industry, a few months working in science policy within UK Parliament, and the rest of the time studying nanophysics and optics in academia. His current research interests are focused on computational imaging in scattering environments.
Miles Padgett holds the Kelvin Chair of Natural Philosophy at the University of Glasgow. He is fascinated by light, both classical and quantum, specifically light's momentum. In 2001, he was elected to fellowship of the Royal Society of Edinburgh and, in 2014, the Royal Society, the UK's National Academy. In 2009, with Les Allen, he won the IoP Young Medal; in 2014, the RSE Kelvin Medal; in 2015, the Science of Light Prize from the EPS; and in 2017, the Max Born Award of the OSA.