We consider the problem of estimating a group sparse vector x ∈ Rn under a generalized linear measurement model. Group sparsity of x means that the activity of different components of the vector occurs in groups, a feature common in estimation problems in image processing, simultaneous sparse approximation, and feature selection with grouped variables. Unfortunately, many current group sparse estimation methods require that the groups be non-overlapping. This work considers problems with what we call generalized group sparsity, where the activity of the different components of x is modeled as a function of a small number of boolean latent variables. We show that this model can incorporate a large class of overlapping group sparse problems, including problems in sparse multivariable polynomial regression and gene expression analysis. To estimate vectors with such group sparse structures, the paper proposes to use a recently developed hybrid generalized approximate message passing (HyGAMP) method. Approximate message passing (AMP) refers to a class of algorithms based on Gaussian and quadratic approximations of loopy belief propagation for estimation of random vectors under linear measurements. The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models, of which generalized group sparsity is a special case. We show that the HyGAMP algorithm is computationally efficient and general, and offers superior performance in certain synthetic data test cases.
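The latent-variable view of generalized group sparsity can be illustrated with a small sketch. The sizes, the random overlapping group membership, and the OR activation rule are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 12, 3                            # vector length, number of boolean latent variables
# Overlapping membership: each component may depend on several latent variables.
membership = rng.random((n, k)) < 0.4   # boolean incidence matrix (illustrative)
latent = rng.random(k) < 0.5            # boolean latent group activities

# Activity of component i is a function of its latents; here, logical OR.
active = (membership & latent).any(axis=1)
x = np.where(active, rng.standard_normal(n), 0.0)   # group sparse vector
```

Because groups may overlap, the resulting support of x is not a union of disjoint blocks, which is exactly the regime where non-overlapping group sparse methods break down.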
This paper considers a simple on-off random multiple access channel, where n users communicate simultaneously
to a single receiver over m degrees of freedom. Each user transmits with probability λ, where typically λn < m ≪ n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d.
Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection
problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds
on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and
orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly
better performance than single-user detection. These methods achieve some near-far resistance but, at high
signal-to-noise ratios (SNRs), may achieve capacities far below optimal maximum likelihood detection. We then
present a new algorithm, called sequential OMP, that illustrates that iterative detection combined with power
ordering or power shaping can significantly improve the high SNR performance. Sequential OMP is analogous
to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight
into the roles of power control and multiuser detection on random-access signaling.
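The single-pass, power-ordered detect-and-cancel idea behind sequential OMP can be sketched as follows. The codebook model matches the abstract; the threshold rule and cancellation step are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 30                      # users, degrees of freedom
lam = 0.2                          # transmit probability

A = rng.standard_normal((m, n)) / np.sqrt(m)     # i.i.d. Gaussian codebook
power = np.sort(rng.uniform(1.0, 4.0, n))[::-1]  # power shaping: strongest user first
sent = rng.random(n) < lam                        # which users actually transmit
y = A @ (np.sqrt(power) * sent) + 0.05 * rng.standard_normal(m)

detected = np.zeros(n, dtype=bool)
r = y.copy()
thresh = 0.5                       # illustrative detection threshold
for j in range(n):                 # single pass, in decreasing-power order
    corr = A[:, j] @ r
    if abs(corr) > thresh * np.sqrt(power[j]):
        detected[j] = True
        # Cancel the detected user's contribution from the residual,
        # analogous to successive interference cancellation.
        r -= corr * A[:, j] / (A[:, j] @ A[:, j])
```

Detecting strong users first and cancelling them makes weaker users easier to detect, which is why power ordering helps at high SNR.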
If a signal x is known to have a sparse representation with respect to a frame, the signal can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. The ability to remove noise in this manner depends on the frame being designed to efficiently represent the signal while it inefficiently represents the noise. This paper analyzes the mean squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal. Analyses are for dictionaries generated randomly according to a spherically-symmetric distribution. Easily-computed approximations for the probability of selecting the correct dictionary element and the MSE are given. In the limit of large dimension, these approximations have simple forms. The asymptotic expressions reveal a critical input signal-to-noise ratio (SNR) for signal recovery.
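A minimal sketch of the denoising scheme analyzed above, assuming for simplicity a 1-sparse signal and an i.i.d. Gaussian (hence spherically symmetric) dictionary; dimensions and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 16, 64                      # ambient dimension, dictionary size

# Spherically-symmetric random dictionary: i.i.d. Gaussian columns, normalized.
D = rng.standard_normal((d, N))
D /= np.linalg.norm(D, axis=0)

x = D[:, 7]                        # true signal: a single dictionary element
sigma = 0.1
y = x + sigma * rng.standard_normal(d)

# Best 1-sparse approximation: pick the atom most correlated with y,
# then project y onto it.
best = int(np.argmax(np.abs(D.T @ y)))
x_hat = (D[:, best] @ y) * D[:, best]
mse = np.mean((x_hat - x) ** 2)
```

Whether `best` recovers the true atom (index 7 here) depends on the SNR, which is the sparsity-pattern recovery event whose probability the paper approximates.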
A subspace-based method for denoising with a frame works as follows:
If a signal is known to have a sparse representation with respect to
the frame, the signal can be estimated from a noise-corrupted observation of the signal by finding the best sparse approximation to the observation. The ability to remove noise in this manner depends on the frame being designed to efficiently represent the signal while it inefficiently represents the noise. This paper gives bounds to show how inefficiently white Gaussian noise is represented by sparse linear combinations of frame vectors. The bounds hold for any frame so they are generally loose for frames designed to represent structured signals. Nevertheless, the bounds can be combined with knowledge of the approximation efficiency of a given family of frames for a given signal class to study the merit of frame redundancy for denoising.
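How inefficiently a random frame represents white Gaussian noise can be probed with a small Monte Carlo sketch. The frame size, dimension, and restriction to 1-sparse approximations are illustrative assumptions, not the paper's bounds:

```python
import numpy as np

rng = np.random.default_rng(3)
d, N, trials = 16, 64, 200

# Random frame with unit-norm vectors.
F = rng.standard_normal((d, N))
F /= np.linalg.norm(F, axis=0)

captured = []
for _ in range(trials):
    w = rng.standard_normal(d)                 # white Gaussian noise
    c = F.T @ w
    # Fraction of the noise energy captured by the best single frame vector.
    captured.append(np.max(c ** 2) / (w @ w))
frac = float(np.mean(captured))
```

A small value of `frac` means the frame represents noise inefficiently, which is the property that makes thresholded sparse approximation effective for denoising.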
Wavelet thresholding is a powerful tool for denoising images and
other signals with sharp discontinuities. Using different wavelet
bases gives different results, and since the wavelet transform is
not time-invariant, thresholding various shifts of the signal is
one way to use different wavelet bases. This paper describes
several denoising methods that apply wavelet thresholding or
variations on wavelet thresholding recursively. (We previously
termed one of these methods "recursive cycle spinning.") These methods are compared experimentally for denoising piecewise
polynomial signals. Though similar, the methods differ in
computational complexity, convergence speed, and sensitivity to the choice of threshold.
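The combination of wavelet thresholding with shifts can be sketched with the standard (non-recursive) form of cycle spinning, using a one-level Haar transform in place of general wavelet bases; the signal, threshold, and shift count are illustrative:

```python
import numpy as np

def haar_denoise(y, t):
    """One-level Haar soft-thresholding (length of y assumed even)."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)       # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)       # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)  # soft threshold details
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def cycle_spin(y, t, shifts):
    """Average thresholded estimates over circular shifts (cycle spinning).
    The Haar transform is not shift-invariant, so each shift acts like a
    different basis; averaging combines them."""
    est = np.zeros_like(y)
    for s in range(shifts):
        est += np.roll(haar_denoise(np.roll(y, s), t), -s)
    return est / shifts

rng = np.random.default_rng(4)
x = np.repeat([0.0, 1.0, -0.5, 2.0], 16)       # piecewise-constant test signal
y = x + 0.1 * rng.standard_normal(x.size)
x_hat = cycle_spin(y, t=0.2, shifts=8)
```

The recursive variants compared in the paper reapply thresholding to the running estimate rather than averaging once, trading computation for convergence behavior.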