We present a prototype light detection and ranging (lidar) system that compressively samples the scene using a sampling basis and reconstruction algorithm optimised by deep learning. This approach improves scene reconstruction quality compared to an orthogonal sampling method, with gains in reflectivity and depth accuracy at acquisition rates of one frame per second. This method may pave the way for improved scan-free lidar systems for driverless cars and for pipelines fully optimised from sampling through to decision-making.

The need for 3D imaging is a challenge across a range of sectors, including the gaming, robotics, healthcare and automotive industries. Mature technologies such as radar and ultrasound sensing are effective at long and short ranges, respectively. Lidar, capable of millimetric depth precision with good spatial resolution at ranges of around 100 m, has become a key technology in this area, with depth information typically gained through time-of-flight photon-counting measurements of a scanned laser spot. Single-pixel imaging (SPI) is an alternative imaging modality for recovering spatial information. Unlike spot-scanning systems, SPI offers the freedom to choose the sampling basis, which provides the opportunity to use compressed sensing techniques, where a high-quality image can be reconstructed from fewer measurements than the number of pixels in the image. Compressed sensing has been demonstrated using an optimised imaging basis and reconstruction algorithm derived from a trained convolutional neural network. This deep learning approach achieves a 4% compression ratio, enabling lidar imaging with 25 times fewer measurements, so that faster acquisition times can be used.
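The compressive sampling described above can be sketched as follows. This is a minimal illustration, not the paper's method: the sampling patterns here are random ±1 masks standing in for the learned basis, the scene is synthetic, and the pseudo-inverse stands in for the trained convolutional decoder. The 4% compression ratio (about 25 times fewer measurements than pixels) is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32 * 32          # pixels in the reconstructed image
M = int(0.04 * N)    # 4% compression ratio -> ~25x fewer measurements

# Stand-in sampling basis: random +/-1 patterns. In the described system
# the basis is learned jointly with the decoder; here it is fixed.
A = rng.choice([-1.0, 1.0], size=(M, N))

scene = rng.random(N)   # hypothetical reflectivity map, flattened
y = A @ scene           # M compressive single-pixel measurements

# Minimal linear reconstruction via the pseudo-inverse; a trained
# convolutional network would replace this step in practice.
recon = np.linalg.pinv(A) @ y
print(A.shape, y.shape, recon.shape)
```

With only 40 measurements for 1024 pixels the linear inverse is heavily underdetermined, which is exactly why a learned basis and a learned (nonlinear) reconstruction outperform generic orthogonal sampling at these ratios.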
Gathering information about objects hidden from the field of view is a highly relevant problem in many areas of science and technology. Some state-of-the-art techniques can detect and image an object behind an obstacle, but at the cost of long computation and processing times. Other methods can track the object in real time without providing information about its shape. Here we use a non-scanning ultrashort pulsed light source, a single-photon avalanche diode (SPAD), and artificial neural networks (ANNs) to demonstrate a system that can detect, identify, and track objects hidden from view. SPAD technology, with a temporal resolution of 100 ps, provides time traces of the light back-scattered by the environment (including the hidden object). By placing different known objects at different known positions, we generate a library of time traces that is used to train the ANN. Applying the trained ANN in an experimental scenario allows us to identify unknown objects hidden from view in real time with centimetre resolution. These results open new routes for novel machine learning applications with high impact in the fields of machine vision, self-driving cars, and defence.
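The library-and-classify pipeline described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the SPAD time traces are synthesised as noisy Gaussian returns, the object names and positions are hypothetical, and a nearest-neighbour lookup stands in for the trained ANN classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

BINS = 64  # temporal histogram bins (the paper cites ~100 ps resolution)

def time_trace(peak_bin, width):
    """Synthetic SPAD histogram: a Gaussian return plus Poisson noise."""
    t = np.arange(BINS)
    signal = 100.0 * np.exp(-0.5 * ((t - peak_bin) / width) ** 2)
    return rng.poisson(signal + 2.0).astype(float)

# Hypothetical objects distinguished here by the width of their return.
widths = {"person": 2.5, "mannequin": 5.0}

# Library of traces for known objects at known positions, as in training.
library = {(obj, pos): time_trace(pos, widths[obj])
           for obj in widths
           for pos in (15, 25, 40)}

def identify(trace):
    """Nearest-neighbour match against the library; a stand-in for the ANN."""
    return min(library, key=lambda k: np.sum((library[k] - trace) ** 2))

# Classify a fresh, independently noisy trace of an unknown object.
label = identify(time_trace(25, widths["person"]))
print(label)
```

The real system replaces the lookup with a trained network, which generalises between library positions and runs fast enough for real-time tracking.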