Most compressed sensing (CS) theory to date focuses on incoherent sensing, that is, sensing matrices whose columns are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection, and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly; instead, small errors in the support and signal are tolerable. Despite the abundance of work utilizing incoherent sensing matrices, we suggest that for this type of tolerant recovery, coherence is actually beneficial. We promote the use of coherent sampling when tolerant support recovery is acceptable, and demonstrate its advantages empirically. In addition, we provide a first step towards theoretical analysis by considering a specific reconstruction method for selected signal classes.
We study matrix completion with non-uniform, deterministic sampling patterns. We introduce a computable parameter, which is a function of the sampling pattern, and show that if this parameter is small, then we may recover the missing entries of the matrix, with appropriate weights. We theoretically analyze a simple and well-known recovery method, which projects the (zero-padded) subsampled matrix onto the set of low-rank matrices. We show that under non-uniform deterministic sampling, this method yields a biased solution, and we propose an algorithm to de-bias it. Numerical simulations demonstrate that de-biasing significantly improves the estimate. However, when the observations are noisy, the error of this method can be sub-optimal when the sampling is highly non-uniform. To remedy this, we suggest an alternative based on projection onto the max-norm ball, whose robustness to noise tolerates arbitrarily non-uniform sampling. Finally, we analyze convex optimization in this framework.
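The projection step described in this abstract can be sketched with a truncated SVD. This is a minimal illustration, not the authors' exact method; the `weights` argument is a hypothetical entrywise correction standing in for the "appropriate weights" the abstract mentions:

```python
import numpy as np

def low_rank_project(M_obs, mask, r, weights=None):
    """Project the (zero-padded) subsampled matrix onto the set of
    rank-r matrices by keeping the top r singular values.
    `weights` (hypothetical) applies an entrywise correction before
    projecting, e.g. to compensate for non-uniform sampling."""
    X = M_obs * mask  # zero-pad the unobserved entries
    if weights is not None:
        X = X * weights
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0  # truncate to rank r
    return (U * s) @ Vt
```

With full sampling of an exactly rank-r matrix, the projection returns the matrix itself; under partial, non-uniform sampling it is biased, which is what motivates the de-biasing step discussed above.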
Low-rank matrix recovery addresses the problem of recovering an unknown low-rank matrix from few linear
measurements. Nuclear-norm minimization is a tractable approach with a recent surge of strong theoretical
backing. Analogous to the theory of compressed sensing, these results have required random measurements.
For example, m ≥ Cnr Gaussian measurements are sufficient to recover any rank-r n × n matrix with high
probability. In this paper we address the theoretical question of how many measurements are needed via any
method whatsoever - tractable or not. We show that for a family of random measurement ensembles, m ≥ 4nr − 4r² measurements are sufficient to guarantee that no rank-2r matrix lies in the null space of the measurement
operator with probability one. This is a necessary and sufficient condition to ensure uniform recovery of all rank-r
matrices by rank minimization. Furthermore, this value of m precisely matches the dimension of the manifold
of all rank-2r matrices. We also prove that for a fixed rank-r matrix, m ≥ 2nr − r² + 1 random measurements
are enough to guarantee recovery using rank minimization. These results give a benchmark to which we may
compare the efficacy of nuclear-norm minimization.
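The dimension-counting claim above is simple arithmetic to check: the manifold of rank-k n × n matrices has dimension 2nk − k², and setting k = 2r gives exactly the stated measurement count 4nr − 4r². A small sketch (helper names are my own, not from the paper):

```python
def rank_manifold_dim(n, k):
    # Dimension of the manifold of rank-k n x n matrices: 2nk - k^2.
    return 2 * n * k - k * k

def sufficient_m_uniform(n, r):
    # Measurement count from the abstract for uniform recovery
    # of all rank-r matrices: m >= 4nr - 4r^2.
    return 4 * n * r - 4 * r * r

# The sufficient m coincides with the dimension of the rank-2r manifold:
# 2n(2r) - (2r)^2 = 4nr - 4r^2.
print(sufficient_m_uniform(100, 5), rank_manifold_dim(100, 2 * 5))
```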
This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set
of linear measurements - L1-minimization methods and iterative methods (Matching Pursuits). We find a simple
regularized version of the Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the
speed and transparency of OMP and the strong uniform guarantees of the L1-minimization. Our algorithm
ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is
exact provided the linear measurements satisfy the Uniform Uncertainty Principle. In the case of inaccurate
measurements and approximately sparse signals, the noise level of the recovery is proportional to √(log n) ‖e‖₂,
where e is the error vector.
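For context, the plain OMP template that ROMP builds on can be sketched in a few lines: greedily select the column most correlated with the residual, then re-solve least squares on the accumulated support. This is ordinary OMP only; ROMP's distinguishing regularization step (selecting a whole set of coordinates with comparable correlations per iteration) is omitted here:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit sketch: `sparsity` greedy iterations,
    each adding the column of A most correlated with the current residual,
    followed by a least-squares fit on the selected support."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x
```

The number of iterations is linear in the sparsity, which is the speed advantage the abstract attributes to Matching Pursuit methods.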