The ability to form images of scenes hidden from direct view would be advantageous in many applications – from improved motion planning and collision avoidance in autonomous navigation to enhanced danger anticipation for first-responders in search-and-rescue missions. Recent techniques for imaging around corners have mostly relied on time-of-flight measurements of light propagation, necessitating the use of expensive, specialized optical systems. In this work, we demonstrate how to form images of hidden scenes from intensity-only measurements of the light reaching a visible surface from the hidden scene. Our approach exploits the penumbra cast by an opaque occluding object onto a visible surface. Specifically, we present a physical model that relates the measured photograph to the radiosity of the hidden scene and the visibility function due to the opaque occluder. For a given scene–occluder setup, we characterize the parts of the hidden region for which the physical model is well-conditioned for inversion – i.e., the computational field of view (CFOV) of the imaging system. This concept of CFOV is further verified through the Cramér–Rao bound of the hidden-scene estimation problem. Finally, we present a two-step computational method for recovering the occluder and the scene behind it. We demonstrate the effectiveness of the proposed method using both synthetic and experimentally measured data.
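The role of the occluder in conditioning the inverse problem can be illustrated with a toy 1-D forward model. This is a hypothetical discretization for illustration only (the paper's geometry and model differ): hidden sources illuminate wall pixels through an inverse-square falloff, multiplied by a 0/1 visibility function induced by an opaque occluder at an assumed halfway plane.

```python
import numpy as np

def forward_matrix(src, wall, occluder=None):
    """Toy light-transport matrix: wall pixels x hidden sources."""
    A = np.zeros((len(wall), len(src)))
    for i, w in enumerate(wall):
        for j, s in enumerate(src):
            r2 = (w - s) ** 2 + 1.0          # squared distance at unit depth
            vis = 1.0
            if occluder is not None:
                lo, hi = occluder
                # the ray from source s to wall point w crosses the occluder
                # plane halfway; block it if the crossing lies in [lo, hi]
                mid = 0.5 * (w + s)
                vis = 0.0 if lo <= mid <= hi else 1.0
            A[i, j] = vis / r2
    return A

src = np.linspace(-1, 1, 20)
wall = np.linspace(-1, 1, 40)
A_open = forward_matrix(src, wall)                   # no occluder
A_occ = forward_matrix(src, wall, occluder=(-0.2, 0.2))
# the penumbra pattern sharpens the measurement operator:
print(np.linalg.cond(A_open), np.linalg.cond(A_occ))
```

In this toy setup the occluded operator is far better conditioned than the open one, which is the intuition behind the computational field of view.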
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.
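The greedy sparse-pursuit idea can be sketched in one dimension. This is a toy stand-in (not the paper's algorithm): the scene is a sparse spike train blurred by a known forward kernel, and an OMP-style loop greedily selects spike locations and least-squares fits their amplitudes.

```python
import numpy as np

def greedy_deconv(y, A, k):
    """Greedily pick k columns of A (OMP-style) and least-squares fit."""
    resid, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))   # best-matching location
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef          # peel off explained signal
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n = 64
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)  # Gaussian blur kernel
I = np.eye(n)
A = np.column_stack([np.convolve(I[:, j], kernel, mode="same")
                     for j in range(n)])
x_true = np.zeros(n)
x_true[[12, 40]] = [5.0, 3.0]                     # two well-separated spikes
y = A @ x_true                                    # noiseless blurred data
x_hat = greedy_deconv(y, A, k=2)
```

With well-separated spikes and no noise, the greedy loop recovers the exact support and amplitudes; the paper's setting adds Poisson photon-count statistics and a TV-regularized convex alternative.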
Conventional depth imagers using time-of-flight methods collect hundreds to thousands of detected photons per pixel to form high-quality depth images of a scene. Through spatio-temporal regularization achieved with maximum a posteriori probability estimation under a scene prior and an inhomogeneous Poisson process likelihood function, we form depth images with dramatically higher photon efficiency, using as few as one detected photon per pixel. Simulations demonstrate that our method combines high accuracy with high photon efficiency, compared to the traditional maximum likelihood estimate of the depth image and other popular denoising algorithms.
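The benefit of spatial regularization at one photon per pixel can be seen in a toy simulation. This is a crude stand-in for the paper's MAP estimator (a sliding-window median replaces the actual prior-based optimization): each pixel records one arrival time that is usually signal with small timing jitter but occasionally a uniform background detection.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
depth_true = np.where(np.arange(n) < n // 2, 10.0, 20.0)  # piecewise-constant scene

# one detected photon per pixel: signal with timing jitter, or background
jitter = rng.normal(0.0, 0.3, n)
background = rng.uniform(0.0, 30.0, n)
is_signal = rng.random(n) < 0.85
arrival = np.where(is_signal, depth_true + jitter, background)

ml = arrival                          # per-pixel ML: trust the single photon
# crude stand-in for spatial regularization: a width-7 sliding median
reg = np.array([np.median(arrival[max(0, i - 3):i + 4]) for i in range(n)])

rmse = lambda d: np.sqrt(np.mean((d - depth_true) ** 2))
print(rmse(ml), rmse(reg))
```

The regularized estimate suppresses the background-photon outliers that dominate the per-pixel ML error, which is the qualitative effect the paper's MAP formulation exploits.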
To enable further acceleration of magnetic resonance (MR) imaging, compressed sensing (CS) is combined with GRAPPA, a parallel imaging method, to reconstruct images from highly undersampled data with significantly improved RMSE compared to reconstructions using GRAPPA alone. This novel combination of GRAPPA and CS regularizes the GRAPPA kernel computation step using a simultaneous sparsity penalty function of the coil images. This approach can be implemented by formulating the problem as the joint optimization of the least-squares fit of the kernel to the ACS lines and the sparsity of the images generated using GRAPPA with the
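The joint objective structure can be sketched with toy linear operators. This is schematic only (the matrices below are random stand-ins, not GRAPPA calibration or image-generation operators): a data-fit term tying kernel weights w to ACS calibration data, plus an ℓ1 sparsity penalty on a transform of the images generated from w.

```python
import numpy as np

# minimize ||A w - b||^2 + lam * || Phi (G w) ||_1
# A w ~ b   : least-squares kernel fit to the ACS lines (toy stand-in)
# G w       : linearized "kernel -> coil image" map (toy stand-in)
# Phi       : sparsifying transform (finite differences here)
rng = np.random.default_rng(5)
n_acs, n_w, n_img = 40, 8, 64
A = rng.normal(size=(n_acs, n_w))
b = rng.normal(size=n_acs)
G = rng.normal(size=(n_img, n_w))
Phi = np.diff(np.eye(n_img), axis=0)
lam = 0.1

def objective(w):
    fit = A @ w - b
    return fit @ fit + lam * np.abs(Phi @ (G @ w)).sum()

# simple subgradient descent on the joint objective
w = np.zeros(n_w)
for _ in range(500):
    grad = 2 * A.T @ (A @ w - b) + lam * G.T @ (Phi.T @ np.sign(Phi @ (G @ w)))
    w -= 1e-3 * grad
```

The point is only the shape of the trade-off: the kernel is no longer chosen by calibration fit alone, but also by how sparse the images it produces are.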
Conventional imaging uses steady-state illumination and light sensing with focusing optics; variations of the light field with time are not exploited. We develop a signal processing framework for estimating the reflectance f of a Lambertian planar surface in a known position using omnidirectional, time-varying illumination and unfocused, time-resolved sensing in place of traditional optical elements such as lenses and mirrors. Our model associates time sampling of the intensity of light incident at each sensor with a linear functional of f. The discrete-time samples are processed to obtain ℓ2-regularized estimates of f. Using non-impulsive, bandlimited light sources instead of impulsive illumination significantly improves signal-to-noise ratio (SNR) and reconstruction quality.
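The estimation step reduces to Tikhonov-regularized least squares. Below is a toy sketch under assumed dimensions and a random stand-in for the light-transport rows (the paper's H comes from the actual illumination and geometry): each time sample is a linear functional of the reflectance f, and f is recovered from the normal equations with an ℓ2 penalty.

```python
import numpy as np

rng = np.random.default_rng(1)
n_f, n_t = 30, 120                             # reflectance samples, time samples
H = rng.normal(size=(n_t, n_f)) ** 2           # nonnegative transport rows (toy)
f_true = rng.random(n_f)
y = H @ f_true + rng.normal(0.0, 0.05, n_t)    # noisy time samples

lam = 1e-2                                     # ell-2 regularization weight
f_hat = np.linalg.solve(H.T @ H + lam * np.eye(n_f), H.T @ y)
```

With many more time samples than unknowns, the regularized solve recovers f to within the noise level.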
This paper considers a simple on-off random multiple access channel, where n users communicate simultaneously to a single receiver over m degrees of freedom. Each user transmits with probability λ, where typically λn < m ≪ n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d. Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly better performance than single-user detection. These methods do achieve some near-far resistance but, at high signal-to-noise ratios (SNRs), may achieve capacities far below that of optimal maximum likelihood detection. We then present a new algorithm, called sequential OMP, which illustrates that iterative detection combined with power ordering or power shaping can significantly improve the high-SNR performance. Sequential OMP is analogous to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight into the roles of power control and multiuser detection in random-access signaling.
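The compressed-sensing view of this detection problem can be made concrete with a toy instance (noiseless, equal-power users; the paper's analysis covers noise and power shaping): an i.i.d. Gaussian codebook A with m degrees of freedom and n users, a sparse activity vector x whose nonzero entries mark the transmitting users, and OMP as the multiuser detector.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 80, 40, 3                      # users, degrees of freedom, active users
A = rng.normal(size=(m, n)) / np.sqrt(m) # i.i.d. Gaussian codebook
active = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[active] = 1.0                          # equal-power on-off activity
y = A @ x                                # received signal (noiseless toy case)

resid, support = y.copy(), []
for _ in range(k):                       # OMP: peel off one user per step
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef
detected = sorted(support)
```

Each OMP step plays the role of detecting the strongest remaining user and cancelling it, which is why power ordering helps in the sequential variant.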
If a signal x is known to have a sparse representation with respect to a frame, the signal can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. The ability to remove noise in this manner depends on the frame being designed to efficiently represent the signal while it inefficiently represents the noise. This paper analyzes the mean squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal. Analyses are for dictionaries generated randomly according to a spherically-symmetric distribution. Easily-computed approximations for the probability of selecting the correct dictionary element and the MSE are given. In the limit of large dimension, these approximations have simple forms. The asymptotic expressions reveal a critical input signal-to-noise ratio (SNR) for signal recovery.
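The denoising scheme in the high-SNR regime can be sketched for the one-term case (an illustrative instance, not the paper's analysis setup): a random unit-norm dictionary, a signal equal to a multiple of one atom, and an estimate formed by projecting the noisy observation onto the best-correlated atom.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_atoms = 64, 256
D = rng.normal(size=(dim, n_atoms))      # spherically-symmetric random atoms
D /= np.linalg.norm(D, axis=0)           # normalize to unit norm

true_idx = 17
x = 5.0 * D[:, true_idx]                 # signal: one atom, high input SNR
y = x + rng.normal(0.0, 0.2, dim)        # noise-corrupted observation

sel = int(np.argmax(np.abs(D.T @ y)))    # dictionary-element selection
x_hat = (D[:, sel] @ y) * D[:, sel]      # project y onto the selected atom
```

When the input SNR is above the critical threshold, the correct atom is selected with high probability and the projection discards most of the noise energy; below it, selection errors dominate the MSE.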
Wavelet thresholding is a powerful tool for denoising images and other signals with sharp discontinuities. Using different wavelet bases gives different results, and since the wavelet transform is not time-invariant, thresholding various shifts of the signal is one way to use different wavelet bases. This paper describes several denoising methods that apply wavelet thresholding or variations on wavelet thresholding recursively. (We previously termed one of these methods "recursive cycle spinning.") These methods are compared experimentally for denoising piecewise polynomial signals. Though similar, the methods differ in computational complexity, convergence speed, and sensitivity to
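Basic (non-recursive) cycle spinning can be sketched with a one-level Haar transform (a minimal illustration; the paper's methods apply thresholding recursively and with richer wavelets): threshold every circular shift of the signal and average the unshifted results.

```python
import numpy as np

def haar_threshold(sig, thr):
    """One-level Haar transform with hard-thresholded detail coefficients."""
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.where(np.abs(d) > thr, d, 0.0)      # hard threshold the details
    out = np.empty_like(sig)
    out[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def cycle_spin(sig, thr):
    """Average the Haar denoiser over all circular shifts (cycle spinning)."""
    n = len(sig)
    acc = np.zeros_like(sig)
    for s in range(n):
        acc += np.roll(haar_threshold(np.roll(sig, s), thr), -s)
    return acc / n

rng = np.random.default_rng(4)
clean = np.where(np.arange(128) < 64, 1.0, -1.0)   # piecewise-constant signal
noisy = clean + rng.normal(0.0, 0.3, 128)
den_fixed = haar_threshold(noisy, 0.6)             # one fixed shift
den_spin = cycle_spin(noisy, 0.6)                  # shift-averaged
```

Because the wavelet transform is not shift-invariant, each shift gives a different estimate; averaging them removes the dependence on the arbitrary alignment of the discontinuity.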
Matching pursuit, introduced by Mallat and Zhang, is an algorithm for decomposing a signal into a linear combination of functions chosen from a possibly redundant dictionary of functions. A variant which we call quantized matching pursuit has been proposed for various lossy compression problems. Here a simple dependent coding scheme is introduced to code the coefficients and indices in a quantized matching pursuit representation. The improvement in rate-distortion performance is shown through simulations on synthetic sources. The resulting system is used to code still images and motion-compensated video residual images. Since a DCT-basis dictionary is used, the multiplicative computational complexity is equal to that of traditional transform coding. The image coding results are ambiguous, with a very slight increase in PSNR but no discernible subjective improvement. The video coding results are more promising, with bit rate reductions of up to 20 percent when compared at constant SNR. The competitive performance and design flexibility indicate that the method warrants further investigation.
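The core loop of quantized matching pursuit is short: at each step, pick the best-matching atom, scalar-quantize its coefficient, and subtract. The sketch below uses an orthonormal DCT-II dictionary as a simple illustration; the paper's dictionary, quantizer design, and dependent coding of coefficients and indices differ.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are the dictionary atoms."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def quantized_mp(x, D, n_terms, step):
    """Matching pursuit with uniform scalar quantization of coefficients."""
    resid, terms = x.copy(), []
    for _ in range(n_terms):
        idx = int(np.argmax(np.abs(D @ resid)))  # best-matching atom
        c = D[idx] @ resid
        cq = step * np.round(c / step)           # quantize before subtracting
        terms.append((idx, cq))
        resid = resid - cq * D[idx]              # update with quantized term
    return terms, resid

n = 32
D = dct_matrix(n)
x = 3.0 * D[5] + 1.5 * D[12]                     # toy two-component signal
terms, resid = quantized_mp(x, D, n_terms=4, step=0.25)
x_hat = sum(c * D[i] for i, c in terms)          # decoder-side reconstruction
```

Subtracting the quantized (rather than exact) coefficient lets later pursuit iterations partially compensate for earlier quantization error, which is what distinguishes quantized matching pursuit from quantizing a fixed transform.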
SC203: Wavelets and Applications: State-of-the-Art
In this course, a comprehensive presentation of discrete and continuous wavelets, filter banks and subband coding, and multiresolution signal processing is given. Techniques recently developed in different fields have converged to a unified theory. Wavelets provide an interesting alternative to Fourier transform methods. This course explains the successive approximation, or multiresolution, idea essential to wavelets and subband coding: a signal can be seen as a coarse version plus added details.
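The "coarse version plus added details" idea can be shown in one Haar step (a minimal sketch, not course material): local averages give the coarse approximation, local differences give the details, and together they reconstruct the signal exactly.

```python
import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0])
coarse = (x[0::2] + x[1::2]) / 2.0   # pairwise averages: coarse version
detail = (x[0::2] - x[1::2]) / 2.0   # what the averages miss: the details

# perfect reconstruction by interleaving coarse+detail and coarse-detail
x_rec = np.empty_like(x)
x_rec[0::2] = coarse + detail
x_rec[1::2] = coarse - detail
```

Iterating the same split on the coarse sequence yields the multiresolution pyramid underlying wavelets and subband coding.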