We introduce a new algorithm for multichannel blind deconvolution. Given the outputs of K linear time-invariant channels driven by a common source, we wish to recover their impulse responses without knowledge of the source signal. Abstractly, this problem amounts to finding a solution to an overdetermined system of quadratic equations. We show how to recast the problem as an underdetermined system of linear equations with a rank constraint. Recent results in low-rank recovery show that problems of this type admit convex relaxations that are both effective and computationally scalable, allowing us to recover hundreds of channel responses after a moderate observation time. We illustrate the effectiveness of our methodology with a numerical simulation of a "passive noise imaging" experiment.
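A minimal sketch of the lifting idea behind this recasting, under illustrative assumptions (a single pair of factors and generic Gaussian measurement vectors, which are not the paper's actual multichannel setup): each bilinear measurement y_i = <b_i, h><a_i, m> is linear in the rank-one matrix X = h m^T, so X can be recovered by nuclear-norm minimization and the factors read off from its leading singular vectors.

    import numpy as np
    import cvxpy as cp

    # Illustrative lifting sketch: recover a rank-one matrix X = h m^T from
    # bilinear measurements y_i = (b_i' h)(a_i' m) via nuclear-norm minimization.
    # Dimensions and Gaussian ensembles are assumptions for the example.
    rng = np.random.default_rng(0)
    k, n, L = 8, 10, 120                      # factor lengths, number of measurements
    h_true = rng.standard_normal(k)
    m_true = rng.standard_normal(n)
    B = rng.standard_normal((L, k))
    A = rng.standard_normal((L, n))
    y = (B @ h_true) * (A @ m_true)           # bilinear (quadratic) measurements

    X = cp.Variable((k, n))
    constraints = [cp.diag(B @ X @ A.T) == y] # the same measurements, now linear in X
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

    U, s, Vt = np.linalg.svd(X.value)
    h_hat = np.sqrt(s[0]) * U[:, 0]           # factors recovered up to a shared scale/sign
    m_hat = np.sqrt(s[0]) * Vt[0, :]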
Undersea localization requires a computationally expensive partial differential equation simulation to test each candidate hypothesis location via matched filter. We propose a batch testing method whose outputs are random combinations of location-specific matched-filter correlations, so that the computational run time scales with the number of tests rather than with the number of locations. We show that by finding the most likely location that could have accounted for these batch test outputs, we can perform almost as well as if we had computed each location's matched filter. In particular, we show that we can reliably resolve the target's location up to the resolution of incoherence using only logarithmically many measurements when the number of candidate locations is less than the dimension of the matched filter. In this way, our random mask pattern not only performs essentially as well as cleverly designed deterministic masks in classical batch testing scenarios, but also extends naturally to scenarios in which the design of such deterministic masks is less obvious.
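The sketch below conveys the flavor of the random batch tests under simplifying assumptions (a single strong correlation peak, random +/-1 masks, and a back-projection decoder); it is not the estimator analyzed in the paper.

    import numpy as np

    # Toy batch-testing sketch: observe a few random +/-1 combinations of the
    # location-specific matched-filter scores, then estimate the location by
    # back-projecting the batched outputs and taking the peak.
    rng = np.random.default_rng(1)
    n_locations, n_tests = 2000, 128
    true_loc = 731

    scores = 0.02 * rng.standard_normal(n_locations)   # weak clutter at wrong locations
    scores[true_loc] += 1.0                            # strong response at the true location

    R = rng.choice([-1.0, 1.0], size=(n_tests, n_locations))  # random batch masks
    z = R @ scores                                     # only n_tests outputs, not n_locations

    estimate = np.argmax(np.abs(R.T @ z))              # simple decoder: back-project, take the peak
    print(estimate == true_loc)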
We consider the problem of estimating the channel response between multiple source receiver pairs all sensing the
same medium. A different pulse is sent from each source, and the response is measured at each receiver. If each
source sends its pulse while the others are silent, estimating the channel is a classical deconvolution problem.
If the sources transmit simultaneously, estimating the channel requires "inverting" an underdetermined system
of equations. In this paper, we show how this second scenario relates to the theory of compressed sensing. In
particular, if the pulses are long and random, then the channel matrix will be a restricted isometry, and we
can apply the tools of compressed sensing to simultaneously recover the channels from each source to a single
receiver.
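As a concrete, purely illustrative instance of this setup, the sketch below superimposes two long random pulses convolved with sparse channels at a single receiver and recovers both channels at once by ℓ1 minimization; the pulse length, sparsity level, and solver are assumptions for the example.

    import numpy as np
    import cvxpy as cp

    # Two sources emit long random pulses simultaneously; one receiver observes the
    # sum of the pulses circularly convolved with sparse channels. Stacking the two
    # circulant systems gives an underdetermined system, solved by l1 minimization.
    rng = np.random.default_rng(2)
    n, s = 256, 5                                   # pulse length, channel sparsity

    def circulant(p):
        # Circular-convolution matrix: column k is the pulse shifted by k samples.
        return np.column_stack([np.roll(p, k) for k in range(len(p))])

    pulses = [rng.standard_normal(n) for _ in range(2)]
    channels = []
    for _ in range(2):
        h = np.zeros(n)
        h[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        channels.append(h)

    A = np.hstack([circulant(p) for p in pulses])   # n x 2n: underdetermined
    y = A @ np.concatenate(channels)                # received superposition

    x = cp.Variable(2 * n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
    h1_hat, h2_hat = x.value[:n], x.value[n:]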
The Dantzig selector is a near-ideal estimator for recovery of sparse signals from linear measurements in the presence of noise. It is a convex optimization problem which, for real data, can be recast as a linear program (LP) and solved with a standard LP solver. In this paper we present an alternative approach to solving the Dantzig selector which we call "Primal Dual pursuit" or "PD pursuit". It is a homotopy continuation based algorithm which iteratively computes the solution of the Dantzig selector for a series of relaxed problems. At each step, the previous solution is updated using the optimality conditions defined by the Dantzig selector. We also discuss an extension of PD pursuit which can quickly update the solution of the Dantzig selector when new measurements are added to the system. We present the derivation and working details of these algorithms.
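For reference, the convex program that PD pursuit traces can be written down directly and handed to a generic solver; the sketch below does exactly that (it is not an implementation of PD pursuit, and the noise level and threshold are illustrative choices).

    import numpy as np
    import cvxpy as cp

    # The Dantzig selector as a convex program:
    #   minimize ||x||_1  subject to  ||A'(y - Ax)||_inf <= eps
    rng = np.random.default_rng(3)
    m, n, s, sigma = 80, 256, 8, 0.05

    A = rng.standard_normal((m, n)) / np.sqrt(m)    # roughly unit-norm columns
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    y = A @ x0 + sigma * rng.standard_normal(m)

    eps = sigma * np.sqrt(2 * np.log(n))            # a standard residual-correlation bound
    x = cp.Variable(n)
    constraints = [cp.norm_inf(A.T @ (y - A @ x)) <= eps]
    cp.Problem(cp.Minimize(cp.norm1(x)), constraints).solve()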
The theory of compressive sensing enables accurate and robust signal reconstruction from a number of measurements
dictated by the signal's structure rather than its Fourier bandwidth. A key element of the theory is
the role played by randomization. In particular, signals that are compressible in the time or space domain can
be recovered from just a few randomly chosen Fourier coefficients. However, in some scenarios we can only observe
the magnitude of the Fourier coefficients and not their phase. In this paper, we study the magnitude-only
compressive sensing problem and, in parallel with the existing theory, derive sufficient conditions for accurate
recovery. We also propose a new iterative recovery algorithm and study its performance. In the process, we
develop a new algorithm for the phase retrieval problem that exploits a signal's compressibility rather than its
support to recover it from Fourier transform magnitude measurements.
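A generic sketch of combining a Fourier-magnitude constraint with a compressibility prior is given below: alternate between imposing the measured magnitudes and soft-thresholding in the signal domain. This Fienup-style loop is an illustration only, not the recovery algorithm proposed in the paper.

    import numpy as np

    # Alternating-projection sketch for sparse phase retrieval: enforce measured
    # Fourier magnitudes, then soft-threshold in the signal domain.
    rng = np.random.default_rng(4)
    n, s, lam, n_iters = 128, 5, 0.1, 500

    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    mags = np.abs(np.fft.fft(x0))                   # magnitude-only measurements

    x = rng.standard_normal(n)                      # random initialization
    for _ in range(n_iters):
        X = np.fft.fft(x)
        X = mags * np.exp(1j * np.angle(X))         # impose measured magnitudes, keep current phase
        x = np.real(np.fft.ifft(X))
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # soft-threshold: promote sparsity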
We consider two natural extensions of the standard ℓ1 minimization framework for compressive sampling. The
two methods, one based on penalizing second-order derivatives and one based on the redundant wavelet transform,
can also be viewed as variational methods which extend the basic total-variation recovery program. A numerical
example illustrates that these methods tend to produce smoother images than the standard recovery programs.
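A one-dimensional sketch of the second-order-derivative variant, under illustrative assumptions about the signal and measurement ensemble: penalize second-order finite differences subject to consistency with random measurements.

    import numpy as np
    import cvxpy as cp

    # Recover a piecewise-linear signal from random measurements by minimizing the
    # l1 norm of its second-order differences.
    rng = np.random.default_rng(5)
    n, m = 200, 80

    t = np.linspace(0, 1, n)
    f0 = np.piecewise(t, [t < 0.4, t >= 0.4],
                      [lambda u: u, lambda u: 0.8 - u])   # continuous, piecewise-linear
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ f0

    D2 = np.diff(np.eye(n), n=2, axis=0)            # (n-2) x n second-difference operator
    f = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(D2 @ f)), [A @ f == y]).solve()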
A widespread problem in the applied sciences is to recover an object of interest from a limited number of measurements. Recently, a series of exciting results have shown that it is possible to recover sparse (or approximately sparse) signals with high accuracy from a surprisingly small number of such measurements. The recovery procedure consists of solving a tractable convex program. Moreover, the procedure is robust to measurement error; adding a perturbation of size ε to the measurements will not induce a recovery error of more than a small constant times ε. In this paper, we will briefly overview these results, describe how stable recovery via convex optimization can be implemented in an efficient manner, and present some numerical results illustrating the practicality of the procedure.
Can we recover a signal f ∈ R^N from a small number of linear measurements? A series of recent papers developed a collection of results showing that it is surprisingly possible to reconstruct certain types of signals accurately from limited measurements. In a nutshell, suppose that f is compressible in the sense that it is well-approximated by a linear combination of M vectors taken from a known basis Ψ. Then, not knowing anything in advance about the signal, f can (very nearly) be recovered from only about M log N generic nonadaptive measurements. The recovery procedure is concrete and consists of solving a simple convex optimization program.
In this paper, we show that these ideas are of practical significance. Inspired by theoretical developments, we propose a series of practical recovery procedures and test them on a series of signals and images which are known to be well approximated in wavelet bases. We demonstrate that it is empirically possible to recover an object from about 3M-5M projections onto generically chosen vectors with an accuracy which is as good as that obtained by the ideal M-term wavelet approximation. We briefly discuss possible implications in the areas of data compression and medical imaging.
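The basic recipe can be sketched in a few lines, here for a length-N signal that is exactly M-sparse in an orthonormal Haar basis and measured with generic Gaussian projections (an idealization of the wavelet-compressible images used in the experiments): take roughly 3M-5M projections and solve an ℓ1 program over the wavelet coefficients.

    import numpy as np
    import cvxpy as cp

    # An M-term Haar-sparse signal of length N, recovered from about 5M generic
    # Gaussian projections by l1 minimization over its wavelet coefficients.
    def haar_matrix(n):
        # Orthonormal Haar analysis matrix for n a power of two.
        if n == 1:
            return np.array([[1.0]])
        h = haar_matrix(n // 2)
        top = np.kron(h, [1.0, 1.0])
        bot = np.kron(np.eye(n // 2), [1.0, -1.0])
        return np.vstack([top, bot]) / np.sqrt(2.0)

    rng = np.random.default_rng(6)
    N, M = 256, 10
    H = haar_matrix(N)                           # alpha = H f, so f = H.T alpha
    alpha0 = np.zeros(N)
    alpha0[rng.choice(N, M, replace=False)] = rng.standard_normal(M)
    f0 = H.T @ alpha0                            # exactly M-term in the Haar basis

    m = 5 * M                                    # "3M-5M" generic projections
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)
    y = Phi @ f0

    alpha = cp.Variable(N)
    B = Phi @ H.T                                # measurement operator in the wavelet domain
    cp.Problem(cp.Minimize(cp.norm1(alpha)), [B @ alpha == y]).solve()
    f_hat = H.T @ alpha.value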
Natural images can be viewed as combinations of smooth regions,
textures, and geometry. Wavelet-based image coders, such as the
space-frequency quantization (SFQ) algorithm, provide reasonably
efficient representations for smooth regions (using zerotrees, for
example) and textures (using scalar quantization) but do not properly
exploit the geometric regularity imposed on wavelet coefficients by
features such as edges. In this paper, we develop a representation for
wavelet coefficients in geometric regions based on the wedgelet
dictionary, a collection of geometric atoms that construct
piecewise-linear approximations to contours. Our wedgeprint
representation implicitly models the coherency among geometric
wavelet coefficients. We demonstrate that a simple compression
algorithm combining wedgeprints with zerotrees and scalar quantization
can achieve near-optimal rate-distortion performance D(R) ∼ (log R)²/R² for the class of piecewise-smooth images containing smooth C² regions separated by smooth C² discontinuities. Finally, we extend this simple algorithm and propose a complete compression framework for natural images using a rate-distortion criterion to balance the three representations. Our Wedgelet-SFQ (WSFQ) coder outperforms SFQ in terms of visual quality and mean-square error.
In the last few years, it has become apparent that traditional wavelet-based image processing algorithms and models have significant shortcomings in their treatment of edge contours. The standard modeling paradigm exploits the fact that wavelet coefficients
representing smooth regions in images tend to have small magnitude, and that the multiscale nature of the wavelet transform implies that these small coefficients will persist across scale (the canonical
example is the venerable zero-tree coder). The edge contours in the image, however, give rise to more and more large-magnitude wavelet coefficients as we move down through scale to finer resolutions. But if the contours are smooth, they become simple as we zoom in on them, and are well approximated by straight lines at fine scales. Standard wavelet models exploit the grayscale regularity of the smooth regions of the image, but not the geometric regularity of the contours.
In this paper, we build a model that accounts for this geometric regularity by capturing the dependencies between complex wavelet coefficients along a contour. The Geometric Hidden Markov Tree (GHMT) assigns each wavelet coefficient (or spatial cluster of wavelet
coefficients) a hidden state corresponding to a linear approximation of the local contour structure. The shift- and rotation-invariance properties of the complex wavelet transform allow the GHMT to model the behavior of each coefficient given the presence of a linear edge at a specified orientation, that is, the behavior of the wavelet coefficient given the state. By connecting the states together in a quadtree, the GHMT ties together wavelet coefficients along a contour, and also models how the contour itself behaves across scale.
We demonstrate the effectiveness of the model by applying it to feature extraction.
Since their introduction a little more than 10 years ago, wavelets
have revolutionized image processing. Wavelet-based
algorithms define the state-of-the-art for applications
including image coding (JPEG2000), restoration, and segmentation.
Despite their success, wavelets have significant shortcomings in their
treatment of edges. Wavelets do not parsimoniously capture even the
simplest geometrical structure in images, and wavelet-based processing
algorithms often produce images with ringing around the edges.
As a first step towards accounting for this structure, we will show
how to explicitly capture the geometric regularity of contours in
cartoon images using the wedgelet representation and a multiscale geometry model. The wedgelet representation builds up an image out of simple piecewise constant functions with linear discontinuities. We will show how the geometry model, by putting a joint distribution on the orientations of the linear discontinuities, allows us to weigh several factors when choosing the wedgelet representation: the error between the representation and the original image, the parsimony of the representation, and whether the wedgelets in the representation form "natural" geometrical structures. Finally, we will analyze a simple wedgelet coder based on these principles, and show that it has optimal asymptotic performance for simple cartoon images.
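To make the representation concrete, the sketch below fits a single wedgelet (two constant grayscale values separated by a linear discontinuity) to a small image block by searching a small dictionary of edge angles and offsets; the dictionary granularity is an illustrative choice, and the multiscale geometry model itself is not implemented here.

    import numpy as np

    # Fit one wedgelet to a dyadic block: brute-force search over a small dictionary
    # of lines (angle, offset), filling each side of the line with its mean value.
    def fit_wedgelet(block, n_angles=16, n_offsets=16):
        n = block.shape[0]
        yy, xx = np.mgrid[0:n, 0:n]
        cy = cx = (n - 1) / 2.0
        best_err, best_approx = np.inf, None
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            for r in np.linspace(-n / 2, n / 2, n_offsets):
                # pixels on one side of the line x cos(theta) + y sin(theta) = r
                mask = (xx - cx) * np.cos(theta) + (yy - cy) * np.sin(theta) >= r
                if mask.all() or (~mask).all():
                    continue                      # line misses the block: not a wedgelet
                approx = np.empty_like(block, dtype=float)
                approx[mask] = block[mask].mean()
                approx[~mask] = block[~mask].mean()
                err = np.sum((block - approx) ** 2)
                if err < best_err:
                    best_err, best_approx = err, approx
        return best_approx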
Beginning with an observed document image and a model of how the image has been degraded, Document Image Decoding recognizes printed text by attempting to find a most probable path through a hypothesized Markov source. The incorporation of linguistic constraints, which are expressed by a sequential predictive probabilistic language model, can improve recognition accuracy significantly in the case of moderately to severely corrupted documents. Two methods of incorporating linguistic constraints in the best-path search are described, analyzed and compared. The first, called the iterated complete path (ICP) algorithm, involves iteratively rescoring complete paths using conditional language model probability distributions of increasing order, expanding state only as necessary with each iteration. A property of this approach is that it results in a solution that is exactly optimal with respect to the specified source, degradation, and language models; no approximation is necessary. The second approach considered is the Stack algorithm, which is often used in speech recognition and in the decoding of convolutional codes. Experimental results are presented in which text line images that have been corrupted in a known way are recognized using both the ICP and Stack algorithms. This controlled experimental setting preserves many of the essential features and challenges of real text line decoding, while highlighting the important algorithmic issues.
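A bare-bones sketch of the Stack (best-first) search idea follows: keep a priority queue of partial transcriptions scored by a degradation-model match term plus a language-model term, and always extend the best-scoring one. The two scoring functions are caller-supplied stand-ins, and practical implementations bound the stack size and normalize scores per symbol; this is an illustration, not either of the paper's decoders.

    import heapq

    # Best-first ("stack") decoding over partial transcriptions.
    # match_logp(prefix): log-score of the prefix against the observed image
    #                     under the degradation model (stand-in).
    # lm_logp(prefix):    language-model log-probability of the prefix (stand-in).
    def stack_decode(alphabet, length, match_logp, lm_logp):
        heap = [(0.0, "")]                       # max-heap simulated with negated scores
        while heap:
            neg_score, prefix = heapq.heappop(heap)
            if len(prefix) == length:
                return prefix                    # first complete path popped is the best
            for ch in alphabet:
                ext = prefix + ch
                score = match_logp(ext) + lm_logp(ext)
                heapq.heappush(heap, (-score, ext))
        return None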
We develop a general framework to simultaneously exploit texture and shape characterization in multiscale image segmentation. By posing multiscale segmentation as a model selection problem, we invoke the powerful framework offered by minimum description length (MDL). This framework dictates that multiscale segmentation comprises multiscale texture characterization and multiscale shape coding. Analysis of current multiscale maximum a posteriori segmentation algorithms reveals that these algorithms implicitly use a shape coder aimed at estimating the optimal MDL solution, but find only an approximate solution.
Multiscale processing, in particular using the wavelet transform, has emerged as an incredibly effective paradigm for signal processing and analysis. In this paper, we discuss a close relative of the Haar wavelet transform, the multiscale multiplicative decomposition. While the Haar transform captures the differences between signal approximations at different scales, the multiplicative decomposition captures their ratio. The multiplicative decomposition has many of the properties that have made wavelets so successful. Most notably, the multipliers form a sparse representation of smooth signals, they have a dependency structure similar to wavelet coefficients, and they can be calculated efficiently. The multiplicative decomposition is also a more natural signal representation than the wavelet transform for some problems. For example, it is extremely easy to incorporate positivity constraints into multiplier domain processing. In addition, there is a close relationship between the multiplicative decomposition and the Poisson process, a fact that has been exploited in the field of photon-limited imaging. In this paper, we will show that the multiplicative decomposition is also closely tied with the Kullback-Leibler distance between two signals. This allows us to derive an n-term KL approximation scheme using the multiplicative decomposition.
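A sketch of the decomposition for a positive one-dimensional signal follows, under the simplest convention (coarse values are pairwise sums, and each multiplier is the fraction of a parent's mass routed to its left child); for smooth signals the multipliers cluster near 1/2, which is the sense in which they are sparse. The conventions here are illustrative choices, not necessarily those used in the paper.

    import numpy as np

    # Multiplicative analogue of the Haar transform for a positive signal
    # (e.g., Poisson counts): coarse scales are pairwise sums, multipliers are
    # the fractions of each parent's mass assigned to its left child.
    def multiplicative_decompose(x):
        # x: positive signal, power-of-two length. Returns (coarsest sum, multipliers per scale).
        multipliers = []
        while len(x) > 1:
            left, right = x[0::2], x[1::2]
            parent = left + right
            multipliers.append(left / parent)    # fraction routed to the left child
            x = parent
        return x[0], multipliers

    def multiplicative_reconstruct(total, multipliers):
        x = np.array([total])
        for m in reversed(multipliers):
            child = np.empty(2 * len(x))
            child[0::2] = m * x                  # left child gets its fraction
            child[1::2] = (1.0 - m) * x          # right child gets the rest
            x = child
        return x

    # +1 keeps every parent strictly positive so the ratios are well defined.
    counts = np.random.default_rng(7).poisson(lam=20.0, size=16).astype(float) + 1.0
    total, mults = multiplicative_decompose(counts)
    assert np.allclose(multiplicative_reconstruct(total, mults), counts)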
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training. In this paper, we propose two reduced-parameter HMT models that capture the general structure of a broad class of real-world images. In the image HMT (iHMT) model we use the fact that for a large class of images the structure of the HMT is self-similar across scale. This allows us to reduce the complexity of the iHMT to just nine easily trained parameters. In the universal HMT (uHMT) we take a Bayesian approach and fix these nine parameters. The uHMT requires no training of any kind. While simple, we show using a series of image estimation/denoising experiments that these two new models retain nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms all other wavelet-based estimators in the current literature, in both mean-square error and visual metrics.