We present a study that uses machine learning to solve the forward and inverse scattering problems for synthetic aperture radar (SAR). Using a training set of known reflectivities as inputs and the resulting SAR measurements as outputs, the machine learning method produces an approximation of the sensing matrix of the forward scattering problem. Conversely, employing that same training set but with the SAR measurements used as inputs and the reflectivities as outputs, the machine learning method produces an approximate inverse of the sensing matrix. This learned approximate inverse mapping allows us to solve the inverse scattering problem, as it maps SAR measurements to an estimate of the reflectivity. To interpret these results, we restrict our attention to a neural network arranged as a single fully-connected layer. By doing so, we are able to interpret and evaluate the mappings produced by machine learning in addition to the results of those mappings. Employing a training set made up of 50,000 images from the CIFAR-10 dataset as the reflectivities, we simulate SAR measurements using a physical model for the sensing matrix. With this training set of reflectivities and corresponding SAR measurements, we find that machine learning accurately approximates the sensing matrix and provides a better answer to the inverse scattering problem than the standard SAR inversion formula. We also test the performance of the proposed methodology on a dataset of high-resolution images while training on a lower-resolution dataset. The results are very promising, again showing superior performance for the learned approximate inverse mapping.
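Because the network is a single fully-connected linear layer, the learned forward and inverse maps reduce to least-squares fits of a matrix to the training pairs. The sketch below illustrates that idea only; the toy Gaussian sensing matrix and random "reflectivities" are assumptions for illustration, not the paper's SAR model or the CIFAR-10 data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 64, 80, 1000          # reflectivity dim, measurement dim, training pairs

A = rng.standard_normal((m, n))                      # "true" sensing matrix (hidden from the learner)
X = rng.standard_normal((N, n))                      # training reflectivities
Y = X @ A.T + 0.01 * rng.standard_normal((N, m))     # simulated noisy measurements

# Forward problem: a single linear layer mapping reflectivities -> measurements.
# Training it with squared loss is exactly a least-squares fit.
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Inverse problem: a single linear layer mapping measurements -> reflectivities,
# i.e. a learned approximate inverse of the sensing matrix.
B_hat = np.linalg.lstsq(Y, X, rcond=None)[0].T
```

With enough training pairs, `A_hat` closely matches the hidden sensing matrix, and applying `B_hat` to a new measurement recovers the underlying reflectivity.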
Parameter tuning is an important but often overlooked step in signal recovery problems. For instance, the regularization parameter in compressed sensing dictates the sparsity of the approximate signal reconstruction. More recently, there has been evidence that non-convex ℓp quasi-norm minimization, where 0 < p < 1, leads to an improvement in reconstruction over existing models that use convex regularization. However, these methods rely on good choices not only of p (the choice of quasi-norm) but also of the regularization penalty parameter. This paper describes a method for choosing suitable parameters. The method involves creating a score to determine the effectiveness of the choice of parameters by partially reconstructing the signal. We then efficiently search through different combinations of parameters using a pattern search approach that exploits parallelism and asynchronicity to find the pair with the optimal score. We demonstrate the efficiency and accuracy of the proposed method through numerical experiments.
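The search described above can be sketched as a compass-style pattern search over the pair (p, λ). Everything below is illustrative: the IRLS-based score is a stand-in for the paper's partial-reconstruction score (and, unlike the real method, cheats by scoring against the known signal), and the serial loop omits the parallelism and asynchronicity the paper exploits.

```python
import numpy as np

def score(p, lam, A, y, x_true):
    """Hypothetical score for a (p, lam) pair: error of a cheap partial
    reconstruction, here a few iteratively-reweighted-least-squares steps
    for the l_p quasi-norm penalty."""
    x = A.T @ y
    for _ in range(10):                        # a few cheap iterations only
        w = (np.abs(x) + 1e-6) ** (p - 2)      # IRLS weights for the l_p term
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return np.linalg.norm(x - x_true)          # illustration only: real scores
                                               # cannot use the unknown signal

def pattern_search(f, p0, lam0, step=0.25, tol=1e-3):
    """Derivative-free compass search over (p, log10(lambda))."""
    best, best_val = (p0, lam0), f(p0, lam0)
    while step > tol:
        improved = False
        for dp, dl in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = (float(np.clip(best[0] + dp, 0.05, 1.0)),
                    float(np.clip(best[1] * 10 ** dl, 1e-6, 1e2)))
            val = f(*cand)
            if val < best_val:
                best, best_val, improved = cand, val, True
        if not improved:
            step /= 2                           # shrink the stencil and retry
    return best

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 60))
x_true = np.zeros(60); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(40)
p_best, lam_best = pattern_search(lambda p, lam: score(p, lam, A, y, x_true),
                                  0.5, 1e-2)
```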
In many signal recovery applications, measurement data consists of multiple signals observed concurrently. For instance, in multiplexed imaging, several scene subimages are sensed simultaneously using a single detector. This technique allows for a wider field-of-view without requiring a larger focal plane array. However, the resulting measurement is a superposition of multiple images that must be separated into distinct components. In this paper, we explore deep neural network architectures for this image disambiguation process. In particular, we investigate how existing training data can be leveraged to improve performance. We demonstrate the effectiveness of the proposed methods in numerical experiments using the MNIST dataset.
Reconstructing high-dimensional sparse signals from low-dimensional low-count photon observations is a challenging nonlinear optimization problem. In this paper, we build upon previous work on minimizing the Poisson log-likelihood and incorporate recent work on the generalized nonconvex Shannon entropy function for promoting sparsity in solutions. We explore the effectiveness of the proposed approach using numerical experiments.
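For reference, the optimization problem sketched above typically takes the following form. The notation here is assumed rather than taken from the paper, including the specific entropy penalty, which follows one common generalized nonconvex Shannon-entropy construction from the sparsity literature; β > 0 is an optional offset keeping the logarithm finite.

```latex
% Negative Poisson log-likelihood plus sparsity penalty (notation assumed):
\min_{f \ge 0}\; F(f) + \tau\,\phi(f),
\qquad
F(f) \;=\; \sum_{i=1}^{m}\Big[(Af)_i \;-\; y_i \log\big((Af)_i + \beta\big)\Big].

% One generalized (nonconvex) Shannon-entropy penalty promoting sparsity:
\phi(f) \;=\; -\sum_{j} \frac{|f_j|^p}{\|f\|_p^p}\,
               \log\frac{|f_j|^p}{\|f\|_p^p},
\qquad 0 < p \le 1.
```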
In this paper, we solve the ℓ2-ℓ1 sparse recovery problem by transforming its objective function into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that the proposed approach eliminates spurious solutions more effectively while reducing computation time.
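One standard way to obtain an unconstrained differentiable surrogate is to smooth the ℓ1 term, after which any smooth quasi-Newton solver applies. The sketch below is an assumption-laden illustration of that idea: it uses SciPy's L-BFGS-B rather than the paper's limited-memory trust-region method, and a simple square-root smoothing that may differ from the paper's transformation.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x, A, b, lam, mu=1e-6):
    """0.5||Ax - b||^2 + lam * sum sqrt(x_i^2 + mu): a smooth surrogate
    for the l2-l1 objective (mu > 0 rounds off the l1 kink)."""
    r = A @ x - b
    return 0.5 * r @ r + lam * np.sum(np.sqrt(x**2 + mu))

def gradient(x, A, b, lam, mu=1e-6):
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + mu)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))               # underdetermined system
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true

# A quasi-Newton method (here L-BFGS) reuses gradients from past iterations
# to build a curvature model, as opposed to a single-gradient projection step.
res = minimize(objective, np.zeros(100), jac=gradient,
               args=(A, b, 0.1), method="L-BFGS-B")
```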
Proc. SPIE. 8165, Unconventional Imaging, Wavefront Sensing, and Adaptive Coded Aperture Imaging and Non-Imaging Sensor Systems
KEYWORDS: Imaging systems, Video, Fourier transforms, Video compression, Optical flow, Reconstruction algorithms, Coded apertures, Motion models, Motion measurement, Simulation of CCA and DLA aggregates
This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e., salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.
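Step (a) above, predicting a future frame from a motion estimate, can be sketched as warping the previous reconstruction along an optical-flow field. This is a minimal nearest-neighbor warp for illustration; practical systems use subpixel interpolation and handle the sparse deviations (salient motion) separately.

```python
import numpy as np

def predict_frame(prev, flow):
    """Predict the next frame by warping the previous one along optical flow.

    prev: (H, W) image; flow: (H, W, 2) per-pixel (dy, dx) displacements.
    Each output pixel is pulled from the source location it flowed from."""
    H, W = prev.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip((ys - flow[..., 0]).round().astype(int), 0, H - 1)
    src_x = np.clip((xs - flow[..., 1]).round().astype(int), 0, W - 1)
    return prev[src_y, src_x]

# Example: a uniform rightward flow of one pixel shifts the frame right.
prev = np.arange(64, dtype=float).reshape(8, 8)
flow = np.zeros((8, 8, 2)); flow[..., 1] = 1.0
pred = predict_frame(prev, flow)
```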
The emerging field of compressed sensing has potentially powerful implications for the design of optical imaging devices. In particular, compressed sensing theory suggests that one can recover a scene at a higher resolution than is dictated by the pitch of the focal plane array. This rather remarkable result comes with some important caveats, however, especially when practical issues associated with physical implementation are taken into account. This tutorial discusses compressed sensing in the context of optical imaging devices, emphasizing the practical hurdles related to building such devices and offering suggestions for overcoming these hurdles. Examples and analysis specifically related to infrared imaging highlight the challenges associated with large-format focal plane arrays and how these challenges can be mitigated using compressed sensing ideas.
Traditionally, optical sensors have been designed to collect the most directly interpretable and intuitive measurements possible. However, recent advances in the fields of image reconstruction, inverse problems, and compressed sensing indicate that substantial performance gains may be possible in many contexts via computational methods. In particular, by designing optical sensors to deliberately collect "incoherent" measurements of a scene, we can use sophisticated computational methods to infer more information about critical scene structure and content. In this paper, we explore the potential of physically realizable systems for acquiring such measurements. Specifically, we describe how, given a fixed-size focal plane array, compressive measurements using coded apertures combined with sophisticated optimization algorithms can significantly increase image quality and resolution.
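A toy version of the forward model implied above: the scene is convolved with a coded-aperture mask, and the focal plane array integrates the result over detector pixels. The circular convolution and d×d binning are simplifying assumptions; the actual optical system differs in detail.

```python
import numpy as np

def coded_aperture_measure(scene, mask, d):
    """Sketch of a coded-aperture forward model: circularly convolve the
    scene with the mask (via FFT), then sum over d x d detector pixels to
    model the limited resolution of the focal plane array."""
    H, W = scene.shape
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                   np.fft.fft2(mask, (H, W))))
    return blurred.reshape(H // d, d, W // d, d).sum(axis=(1, 3))

rng = np.random.default_rng(0)
scene = rng.random((32, 32))                       # high-resolution scene
mask = rng.integers(0, 2, size=(8, 8)).astype(float)  # binary coded aperture
y = coded_aperture_measure(scene, mask, 4)         # 8x8 compressive snapshot
```

Reconstruction then amounts to inverting this many-to-few map with a sparsity-promoting optimization algorithm.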
The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be accomplished by minimizing a conventional ℓ2-ℓ1 objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f* admits a sparse representation. The optimization formulation considered in this paper uses a negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). This paper describes computational methods for solving the constrained sparse Poisson inverse problem. In particular, the proposed approach combines quadratic separable approximations to the objective function at each iteration with computationally efficient partition-based multiscale estimation methods.
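For reference, a quadratic separable approximation step of the kind mentioned above typically has the following form (a standard construction; the notation is assumed here, not quoted from the paper). With F denoting the negative Poisson log-likelihood and pen the sparsity penalty, each iteration solves

```latex
f^{k+1} \;=\; \operatorname*{arg\,min}_{f \ge 0}\;
  F(f^k) + \nabla F(f^k)^{\top}(f - f^k)
  + \frac{\alpha_k}{2}\,\|f - f^k\|_2^2
  + \tau\,\mathrm{pen}(f).

% Completing the square reduces this to a gradient step plus a denoising step:
f^{k+1} \;=\; \operatorname*{arg\,min}_{f \ge 0}\;
  \frac{1}{2}\,\|f - s^k\|_2^2 + \frac{\tau}{\alpha_k}\,\mathrm{pen}(f),
\qquad
s^k \;=\; f^k - \frac{1}{\alpha_k}\,\nabla F(f^k).
```

The second subproblem is exactly a nonnegativity-constrained denoising problem, which partition-based multiscale estimators can solve efficiently.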
KEYWORDS: Staring arrays, Imaging systems, Cameras, Video, Fourier transforms, Video compression, Coded apertures, Simulation of CCA and DLA aggregates, Distributed interactive simulations, Compressed sensing
Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction. However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical constraints typically make it infeasible to actually measure many of the random projections described in the literature, and therefore innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines coded aperture imaging, to enhance pixel resolution, with the superposition of subframes of a scene onto a single focal plane array, to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by reconstructing the constituent images of a video sequence at high resolution.
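As a toy illustration of why coded superposition can be undone at all, consider two subframes combined under two complementary codes, which makes the per-pixel system directly invertible. This is an assumption-laden simplification: the paper's harder setting recovers the subframes from a single coded superposition, relying on wavelet-based sparsity algorithms rather than a second shot.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x1, x2 = rng.random(n), rng.random(n)   # two (flattened) scene subframes

# Two coded shots: in the second, subframe 2 is modulated by -1 before the
# optics superimpose both subframes on the shared focal plane array.
y1 = x1 + x2
y2 = x1 - x2

# Each pixel now carries an invertible 2x2 system; solve it in closed form.
x1_hat = (y1 + y2) / 2
x2_hat = (y1 - y2) / 2
```

With a single coded shot there is no such closed-form inverse, which is where the sparsity prior must do the disambiguation.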