In an optical Line-of-Sight (LOS) scenario, such as one involving a LIDAR system, the goal is to recover an image of a target in the direct path of the transmitter and receiver. In Non-Line-of-Sight (NLOS) scenarios the target is hidden from both the transmitter and the receiver by an occluder, e.g., a wall. Recent advances in technology, computer vision, and inverse light transport theory have shown that it is possible to recover an image of a hidden target by exploiting the temporal information encoded in multiply scattered photons. The core idea is to acquire data with an optical system composed of an ultra-fast laser that emits short pulses (on the order of femtoseconds) and a camera capable of recovering photon time-of-flight information (with a typical resolution on the order of picoseconds). We reconstruct 3D images from these data using the backprojection algorithm, a method typically found in computed tomography, which is parallelizable and memory efficient, although it only provides an approximate solution. Here we present improved backprojection algorithms for applications to large-scale scenes with a large number of scatterers and diameters of meters to hundreds of meters. We apply these methods to the NLOS imaging of rooms and lunar caves.
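The backprojection idea can be sketched as follows: each transient sample constrains the hidden scatterer to an ellipsoid whose foci are the illuminated and observed points on the relay wall, and votes are accumulated over a voxel grid. This is a minimal sketch, not the authors' implementation; the array layout (`transients`, `laser_pts`, `sensor_pts`, `voxels`) and bin width `dt` are illustrative assumptions.

```python
import numpy as np

def backproject(transients, laser_pts, sensor_pts, voxels, c=3e8, dt=4e-12):
    """Naive NLOS backprojection sketch.

    transients[i, t]  photon counts for wall point pair i at time bin t
    laser_pts[i], sensor_pts[i]  3D relay-wall points (hypothetical layout)
    voxels[v]  3D coordinates of the reconstruction voxels
    """
    volume = np.zeros(len(voxels))
    for i in range(transients.shape[0]):
        # Round-trip path length: laser wall point -> voxel -> sensor wall point
        d = (np.linalg.norm(voxels - laser_pts[i], axis=1)
             + np.linalg.norm(voxels - sensor_pts[i], axis=1))
        bins = np.round(d / (c * dt)).astype(int)
        valid = bins < transients.shape[1]
        # Each voxel collects the transient value at its matching time bin
        volume[valid] += transients[i, bins[valid]]
    return volume
```

The loop body is embarrassingly parallel over wall points and needs only the voxel grid in memory, which is what makes the method attractive for large scenes.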
Light scattering is a primary obstacle to optical imaging in a variety of environments and across many size and time scales. On large scales, scattering complicates imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including the scattering and absorption lengths and the phase function.
We present a study of scattering, and of methods for imaging through scattering, across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid laboratory research, and to transfer knowledge and methodology between different fields.
Light scattering is a primary obstacle to imaging in many environments. On small scales, in biomedical microscopy and diffuse tomography scenarios, scattering is caused by tissue. On larger scales, scattering from dust and fog poses challenges to vision systems for self-driving cars and to naval remote imaging systems. We are developing scale models for scattering environments and investigating methods for improved imaging, particularly using time-of-flight (transient) information.
With the emergence of Single Photon Avalanche Diode (SPAD) detectors and fast semiconductor lasers, illumination and capture on picosecond timescales are becoming possible in inexpensive, compact, and robust devices. This opens up opportunities for new computational imaging techniques that make use of photon time of flight.
Time-of-flight or range information is used in remote imaging scenarios in gated viewing, and in biomedical imaging in time-resolved diffuse tomography. In addition, spatial filtering is popular in biomedical scenarios with structured illumination and confocal microscopy. We present a combination of analytical, computational, and experimental models that allows us to develop and test imaging methods across scattering scenarios and scales. This framework will be used for proof-of-concept experiments to evaluate new computational imaging methods.
The application of non-line-of-sight (NLoS) vision and seeing around a corner has been demonstrated in the recent past at the laboratory level with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor's direct field of view. Detailed knowledge of the scattering surfaces is necessary for the analysis. The authors evaluate the realization of dual-mode concepts with the aim of collecting all necessary information to enable both the direct three-dimensional imaging of a scene and the indirect sensing of hidden objects. Two different sensing approaches, laser gated viewing (LGV) and time-correlated single-photon counting, are investigated, operating at laser wavelengths of 532 and 1545 nm, respectively. While LGV sensors have high spatial resolution, their application for NLoS sensing suffers from a low temporal resolution, i.e., a minimal gate width of 2 ns. On the other hand, Geiger-mode single-photon counting devices have high temporal resolution (250 ps), but the array size is limited to some thousand sensor elements. The authors present detailed theoretical and experimental evaluations of both sensing approaches.
The application of non-line-of-sight vision and seeing around a corner has been demonstrated in the recent past at the laboratory level with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor's direct field of view. Recent demonstrator systems were driven at laser wavelengths (800 nm and 532 nm) that are far from the eye-safe shortwave infrared (SWIR) wavelength band, i.e., between 1.4 μm and 2 μm. Therefore, application in public or inhabited areas is difficult with respect to international laser safety conventions. In the present work, the authors evaluate the application of recent eye-safe laser sources and sensor devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a dual-mode concept is studied, enabling both the direct view of a scene and the indirect view of a hidden scene. While recent laser gated viewing sensors have high spatial resolution, their application in non-line-of-sight imaging suffers from a too low temporal resolution due to a minimal sensor gate width of around 150 ns. On the other hand, Geiger-mode single-photon counting devices have high temporal resolution, but their spatial resolution is (until now) limited to array sizes of some thousand sensor elements. In this publication the authors present detailed theoretical and experimental evaluations.
The application of non-line-of-sight vision has been demonstrated in the recent past at the laboratory level with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor's direct field of view. In the present work, the authors evaluate the application of recent single-photon counting devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a concept is studied enabling the indirect view of a hidden scene. Different approaches based on ICCD and GM-APD or SPAD sensor technologies are reviewed. Recent laser gated viewing sensors have a minimal temporal resolution of around 2 ns due to sensor gate widths. Single-photon counting devices have higher sensitivity and higher temporal resolution.
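The quoted gate widths translate directly into range (depth) resolution via the standard round-trip relation Δr = c·Δt/2. A minimal sketch of that conversion (the function name is illustrative, not from the papers):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(gate_width_s: float) -> float:
    """One-way range resolution (m) for a given round-trip timing gate.

    A photon travels to the target and back, so a gate of duration dt
    resolves depth slices of thickness c * dt / 2.
    """
    return C * gate_width_s / 2.0
```

For example, a 2 ns gate corresponds to roughly 30 cm of depth, while a 250 ps single-photon timing resolution corresponds to under 4 cm, which is why single-photon counting devices are preferred for time-resolved NLOS sensing.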
We discuss new approaches to analyze laser gated viewing data for non-line-of-sight vision with a frame-to-frame back-projection as well as feature selection algorithms. While earlier back-projection approaches use time transients for each pixel, our method calculates the projection of imaging data onto the voxel space for each frame. Further, different data analysis algorithms and their sequential application were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.
In the present paper, we discuss new approaches to analyze laser gated viewing data for non-line-of-sight vision with a novel frame-to-frame back projection as well as feature selection algorithms. While earlier back projection approaches use time transients for each pixel, our new method calculates the projection of imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.
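The "selection of local maximum values" step can be illustrated with a simple peak picker over a back-projected signal: a sample is kept when it exceeds both neighbors and a noise threshold, so that several targets along one axis yield several selected positions. This is only a generic sketch; the four filters studied in the paper are not reproduced here.

```python
import numpy as np

def local_maxima_1d(signal, threshold=0.0):
    """Return indices of strict local maxima above a noise threshold.

    Keeping every local peak (rather than only the global maximum)
    is what enables multiple-target detection in the reconstruction.
    """
    s = np.asarray(signal, dtype=float)
    idx = np.arange(1, len(s) - 1)
    keep = (s[idx] > s[idx - 1]) & (s[idx] > s[idx + 1]) & (s[idx] > threshold)
    return idx[keep]
```

In practice the threshold and neighborhood size trade off selectivity (detecting closely spaced targets) against false peaks from noise, mirroring the filter-choice effect the abstract describes.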
Endoscope cameras play an important and growing role as a diagnostic and surgical tool. The endoscope camera is usually used to provide the operator with a view of the scene straight ahead of the instrument. As is common in many remotely operated systems, the limited field of view and the inability to pan the camera make it challenging to gain situational awareness comparable to that of an operator with direct access to the scene. We present a spectral multiplexing technique for endoscopes that allows the existing forward view to be overlaid with additional views at different angles to increase the effective field of view of the device. Our goal is to provide peripheral vision while minimally affecting the design and forward image quality of existing systems.
Laser gated viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied for vision through fog, smoke, and other degraded environmental conditions, as well as for vision through sea water in submarine operation. Direct imaging of non-scattered (or ballistic) photons is limited in range and performance by the free optical path length, i.e., the length over which a photon can propagate without interaction with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations, making non-line-of-sight imaging possible. The spatial and temporal distributions of scattered photons can be analyzed by means of computational optics, and their information about the scene can be recovered. In particular, the information outside the line of sight or outside the visibility range is of high interest. We demonstrate non-line-of-sight imaging with a laser gated viewing system and different illumination concepts (point and surface scattering sources).
Laser Gated Viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied to vision through fog, smoke, and other degraded environmental conditions, as well as to vision through sea water in submarine operation. Direct imaging of non-scattered (or ballistic) photons is limited in range and performance by the free optical path length, i.e., the length over which a photon can propagate without interaction with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations, making non-line-of-sight imaging possible. The spatial and temporal distribution of scattered photons can be analyzed by means of computational optics, and their information about the scene can be recovered. In the case of Lambertian scattering sources, the scattered photons carry information about the complete environment. Especially the information outside the line of sight or outside the visibility range is of high interest. Here, we discuss approaches for non-line-of-sight active imaging with different indirect and direct illumination concepts (point, surface, and volume scattering sources).