We present two new closed-form methods for mixed pixel/multipath interference separation in AMCW lidar
systems. The mixed pixel/multipath interference problem arises from the violation of a standard range-imaging
assumption that each pixel integrates over only a single, discrete backscattering source. While a numerical
inversion method has previously been proposed, no closed-form inverses have been posited. The first
new method models reflectivity as a Cauchy distribution over range and uses four measurements at different
modulation frequencies to determine the amplitude, phase and reflectivity distribution of up to two component
returns within each pixel. The second new method uses attenuation ratios to determine the amplitude and phase
of up to two component returns within each pixel. The methods are tested on both simulated and real data and
shown to significantly reduce overall error. While this paper focuses on the AMCW mixed
pixel/multipath interference problem, the algorithms contained herein have applicability to the reconstruction of
a sparse one-dimensional signal from an extremely limited number of discrete samples of its Fourier transform.
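The four-frequency closed-form inversion can be illustrated with a Prony-style recovery, a minimal sketch under simplifying assumptions (the function name and pole parameterisation are ours, not the paper's exact derivation): each component return appears in the harmonic measurements as a complex pole z_k = exp(-g_k + i*phi_k), where phi_k encodes range and the decay rate g_k plays the role of the Cauchy reflectivity-distribution width, so four measurements suffice to solve for two poles and their amplitudes in closed form.

```python
import numpy as np

def two_return_prony(m):
    """Recover two complex returns from four harmonic measurements m[0..3]
    (e.g. measurements at modulation frequencies f, 2f, 3f, 4f).

    Assumed model (illustrative): m[n] = a1*z1**(n+1) + a2*z2**(n+1), with
    z_k = exp(-g_k + 1j*phi_k); phi_k encodes range, g_k the spread of the
    reflectivity distribution over range."""
    m = np.asarray(m, dtype=complex)
    # Prony recurrence: m[n+2] = c1*m[n+1] + c0*m[n] for n = 0, 1
    A = np.array([[m[1], m[0]],
                  [m[2], m[1]]])
    b = np.array([m[2], m[3]])
    c1, c0 = np.linalg.solve(A, b)
    # z1, z2 are the roots of z**2 - c1*z - c0 = 0
    z = np.roots([1.0, -c1, -c0])
    # Linear solve for the complex amplitudes a1, a2
    V = np.array([[z[0], z[1]],
                  [z[0] ** 2, z[1] ** 2]])
    a = np.linalg.solve(V, m[:2])
    return a, z
```

With noiseless two-return data this recovers amplitude, phase, and decay exactly; with noise or more than two returns the problem becomes the sparse-recovery task the abstract alludes to.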
Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single
viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path
problem, which causes range distortions when stray light interferes with the range measurement in a given pixel.
Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but
enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating
the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only
been demonstrated on custom-built range cameras, and has not been suitable for general metrology applications. In this
paper we demonstrate an algorithm applied to two unmodified, off-the-shelf range cameras: the Mesa Imaging SR-4000
and the Canesta Inc. XZ-422 Demonstrator. Additional raw images are acquired and processed using an optimization
approach, rather than relying on the processing provided by the manufacturer, to determine the individual component
returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
We present a new two-stage method for parametric spatially variant blind deconvolution of full-field Amplitude Modulated Continuous Wave lidar image pairs taken at different aperture settings subject to limited depth of field. A Maximum Likelihood-based focal parameter determination algorithm uses range information to reblur the image taken with the smaller aperture to match the large-aperture image. This allows estimation of focal parameters without prior calibration of the optical setup and produces blur estimates that have better spatial resolution and less noise than previous depth from defocus (DFD) blur measurement algorithms. We compare blur estimates from the focal parameter determination method to those from Pentland's DFD method, Subbarao's S-Transform method, and estimates from range data and the sampled point spread function. In a second stage the estimated focal parameters are applied to deconvolution of total integrated intensity lidar images, improving depth of field. We give an example of application to complex-domain lidar images and discuss the trade-off between recovered amplitude texture and sharp range estimates.
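The reblurring idea at the heart of the focal parameter determination stage can be sketched as follows, assuming a Gaussian point spread function as a stand-in for the true defocus PSF and a simple grid search in place of the paper's full Maximum Likelihood machinery (under i.i.d. Gaussian noise, minimising the squared reblurring error is the ML estimate):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_relative_blur(sharp, blurred, sigmas):
    """Estimate the relative defocus blur between a small-aperture (sharper)
    image and a large-aperture (more blurred) image of the same scene.

    Reblurs the sharp image with a Gaussian PSF of width sigma and picks the
    sigma minimising the squared error against the blurred image -- the ML
    estimate under Gaussian noise. Illustrative sketch: the real method is
    spatially variant and driven by per-pixel range information."""
    errs = [np.mean((gaussian_filter(sharp, s) - blurred) ** 2) for s in sigmas]
    return sigmas[int(np.argmin(errs))]
```

Because both images come from the same scene, no calibration of the optics is needed: the blur difference is estimated directly from the pair, which is the property the abstract highlights.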
We present two novel Poisson-noise Maximum Likelihood methods for identifying the individual returns
within mixed pixels for Amplitude Modulated Continuous Wave rangers. These methods use the convolutional
relationship between signal returns and the recorded data to determine the number, range and intensity of returns
within a pixel. One method relies on a continuous piecewise truncated-triangle model for the beat waveform
and the other on linear interpolation between translated versions of a sampled waveform. In the single return
case both methods provide an improvement in ranging precision over standard Fourier transform based methods
and a decrease in overall error in almost every case. We find that it is possible to discriminate between two
light sources within a pixel, but local minima and scattered light have a significant impact on ranging precision.
Discrimination of two returns requires the ability to take samples at phase shifts of less than 90°.
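The Poisson Maximum Likelihood idea can be sketched for the simplest case of a single return and an idealised triangle beat waveform (the waveform model, the grid search, and the function names are illustrative assumptions; the paper's methods handle multiple returns, truncated-triangle and sampled waveform models, and proper optimisation):

```python
import numpy as np

def triangle_wave(theta):
    """Idealised AMCW beat waveform: a triangle wave of period 2*pi in [0, 1]."""
    t = np.mod(theta, 2 * np.pi) / (2 * np.pi)
    return 1.0 - 2.0 * np.abs(t - 0.5)

def poisson_nll(counts, phases, amp, offset, phi):
    """Negative Poisson log-likelihood of photon counts sampled at the given
    phase steps, for a single return of amplitude amp and phase phi."""
    lam = offset + amp * triangle_wave(phases - phi)  # expected counts > 0
    return np.sum(lam - counts * np.log(lam))

def ml_phase(counts, phases, amp, offset, grid):
    """Grid-search ML phase estimate (illustrative; a real solver would refine)."""
    nll = [poisson_nll(counts, phases, amp, offset, p) for p in grid]
    return grid[int(np.argmin(nll))]
```

Fitting the waveform model directly to the raw samples is what lets these methods exploit the Poisson photon statistics, rather than discarding that information in a Fourier-transform phase estimate.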