This paper considers a random component-wise variant of the unnormalized power method, which is similar to the regular power iteration except that only a random subset of indices is updated in each iteration. For normal matrices, it was previously shown that random component-wise updates converge in the mean-squared sense to an eigenvector of eigenvalue 1 of the underlying matrix, even when the matrix has spectral radius larger than unity. In addition to the enlarged convergence region, this study shows that, unlike in the regular power method, the eigenvalue gap does not directly affect the convergence rate of the randomized updates. In particular, it is shown that the rate of convergence is affected by the phase of the eigenvalues, and that the randomized updates favor negative eigenvalues over positive ones. As an application, this study considers a reformulation of the component-wise updates that reveals a randomized algorithm proven to converge to the dominant left and right singular vectors of a normalized data matrix. The algorithm is also extended to handle large-scale distributed data when computing an arbitrary-rank approximation of an arbitrary data matrix. Numerical simulations verify the convergence of the proposed algorithms under different parameter settings.
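As an illustration of the update rule, the following minimal sketch updates one randomly chosen entry per iteration. The 3×3 symmetric test matrix (with eigenvalues 1, 0.5 and -0.8) is an assumed toy example, not a matrix from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed demo matrix: symmetric (hence normal) with eigenvalues
# 1, 0.5, -0.8, so the fixed points are eigenvectors of eigenvalue 1.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([1.0, 0.5, -0.8]) @ Q.T

x = rng.standard_normal(3)
for _ in range(20000):
    i = rng.integers(3)      # pick one random coordinate
    x[i] = A[i] @ x          # component-wise update: x_i <- (A x)_i

residual = np.linalg.norm(A @ x - x)   # x should satisfy A x = x
```

Note that, unlike the regular power method, no normalization is applied between iterations; the iterate settles on a (randomly scaled) eigenvector of eigenvalue 1.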
Two-photon calcium imaging can be used to monitor the activity of thousands of neurons across multiple brain areas at single-cell resolution. To harness the power of this imaging technology, neuroscientists require algorithms to detect from the imaging data the time points at which each neuron was active. We present an algorithm based on Finite Rate of Innovation (FRI) theory to detect neuronal spiking activity from this data. By exploiting the parametric structure of the signal, the activity detection problem can be reduced to the classic FRI problem of reconstructing a stream of Diracs.
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
The notion of a graph wavelet enables more advanced processing of data on graphs: graph wavelets operate in a localized manner, across newly arising data-dependency structures, with respect to the graph signal and the underlying graph structure, thereby taking the inherent geometry of the data into consideration. In this work, we tackle the problem of creating graph wavelet filterbanks on circulant graphs for a sparse representation of certain classes of graph signals. The underlying graph can be either data-driven or fixed, with applications ranging from image processing to social network theory, where clusters can be modelled as circulant graphs. We present a set of novel graph wavelet filterbank constructions which annihilate higher-order polynomial graph signals (up to a border effect) defined on the vertices of undirected circulant graphs, and which are localised in the vertex domain. We give preliminary results on their performance for non-linear graph signal approximation and denoising. Furthermore, we extend our previously developed segmentation-inspired graph wavelet framework for non-linear image approximation by incorporating notions of smoothness and vanishing moments, which further improves performance compared to traditional methods.
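As a toy illustration of the annihilation property (an assumed example, not the filterbank construction of the paper): on a cycle graph, which is the simplest circulant graph, the circulant second-difference filter annihilates a linear graph signal at every vertex except where the cycle wraps around, i.e. up to a border effect:

```python
import numpy as np

N = 16
x = 3.0 * np.arange(N) + 2.0      # linear graph signal on the vertices

# Circulant filtering = circular convolution with the high-pass taps
# h = [-1, 2, -1] (the second difference on the cycle graph).
y = 2 * x - np.roll(x, 1) - np.roll(x, -1)

interior = y[1:-1]                 # vertices away from the wrap-around
print(np.allclose(interior, 0.0))  # the polynomial signal is annihilated
```

Only the two vertices adjacent to the wrap-around produce non-zero outputs, which is the border effect mentioned above.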
In the last few years, several new methods have been developed for the sampling and the exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI). This is achieved by using adequate sampling kernels and reconstruction schemes. An important class of such kernels is the one made of functions able to reproduce exponentials.
In this paper we review a new strategy for sampling these signals which is universal in that it works with any kernel. We do so by noting that meeting the exact exponential reproduction condition is too stringent a constraint; we therefore allow for a controlled error in the reproduction formula in order to use the exponential reproduction idea with any kernel, and we develop a reconstruction method which is more robust to noise.
We also present a novel method that is able to reconstruct infinite streams of Diracs, even in high-noise scenarios. We sequentially process the discrete samples and output the locations and amplitudes of the Diracs in real time. In this context we also show that we can achieve a high reconstruction accuracy for 1000 Diracs at SNRs as low as 5 dB.
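The core FRI step of recovering a stream of Diracs can be sketched with the classic annihilating-filter (Prony) method. The power-moment setup below is an illustrative assumption rather than the exact pipeline of the paper:

```python
import numpy as np

# Minimal annihilating-filter (Prony) sketch for K Diracs.
# Assumed setup: we observe the power moments s_m = sum_k a_k * t_k**m,
# m = 0..2K-1, of the Dirac stream.
K = 2
t_true = np.array([0.3, 0.7])      # Dirac locations
a_true = np.array([1.0, 2.0])      # Dirac amplitudes
s = np.array([np.sum(a_true * t_true**m) for m in range(2 * K)])

# Annihilating filter h (with h_0 = 1): its roots are the locations t_k.
# Solve sum_{l=0}^{K} h_l s_{m-l} = 0 for m = K..2K-1.
T = np.array([[s[m - l] for l in range(1, K + 1)] for m in range(K, 2 * K)])
h = np.concatenate(([1.0], np.linalg.solve(T, -s[K:2 * K])))

t_hat = np.sort(np.roots(h).real)
# Amplitudes from a Vandermonde system on the first K moments.
V = np.vander(t_hat, K, increasing=True).T
a_hat = np.linalg.solve(V, s[:K])
```

With 2K noiseless moments the K locations and amplitudes are recovered exactly; in the noisy, infinite-stream setting described above, the same algebraic step is applied sequentially to windows of samples.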
In this paper, we propose two multiview image compression methods. The basic concept of both schemes is
the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers
each related to a constant depth in the scene. The first algorithm is a centralized scheme where each layer is
de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial
dimensions. The transform is modified to efficiently deal with occlusions and disparity variations for different
depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission
of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an order that is unknown a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles, which reduce the inter-view redundancy and facilitate random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform
H.264/MVC and JPEG 2000, respectively.
The standard separable two-dimensional (2-D) wavelet transform (WT) has recently achieved a great success
in image processing because it provides a sparse representation of smooth images. However, it fails to capture
efficiently one-dimensional (1-D) discontinuities, like edges or contours. These features, being elongated and
characterized by geometrical regularity along different directions, intersect and generate many large magnitude
wavelet coefficients. Since contours are very important elements in the visual perception of images, it is fundamental for the visual quality of compressed images to preserve a good reconstruction of these directional
features. We propose a construction of <i>critically sampled perfect reconstruction</i> transforms with directional
<i>vanishing moments</i> (DVMs) imposed in the corresponding basis functions along different directions, called <i>directionlets</i>.
We also demonstrate the superior non-linear approximation (NLA) results achieved by our transforms, and we show how to design and implement a novel efficient space-frequency quantization (SFQ) compression algorithm using directionlets. Our new compression method outperforms the standard SFQ in terms of both mean-square error (MSE) and visual quality, especially in the low-rate compression regime. We also show that our compression method does not increase the order of computational complexity as compared to the standard SFQ algorithm.
This paper proposes a new approach to distributed video coding. Distributed video coding is a new paradigm in video coding, which is based on the concept of decoding with side information at the decoder. Such a coding scheme employs a low-complexity encoder, making it well suited for low-power devices such as mobile video cameras.
The uniqueness of our work lies in the combined use of the discrete wavelet transform (DWT) and the concept of sampling signals with finite rate of innovation (FRI). This enables the decoder to retrieve the motion parameters and reconstruct the video sequence from the low-resolution version of each transmitted frame. Unlike currently existing practical coders, we do not employ traditional channel coding techniques. For a simple video sequence with a fixed background, our preliminary results show that the proposed coding scheme can achieve a better PSNR than JPEG2000 intra-frame coding at low bit rates.
Recently, it was shown that it is possible to sample classes of signals with finite rate of innovation. These sampling schemes, however, use kernels with infinite support, and this leads to complex and unstable reconstruction algorithms. In this paper, we show that many signals with finite rate of innovation can be sampled and perfectly reconstructed using kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes any function satisfying the Strang-Fix conditions, exponential splines, and functions with rational Fourier transforms. Our sampling schemes can be used for either 1-D or 2-D signals with finite rate of innovation.
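A minimal sketch of the compact-support idea, under assumed toy parameters: a single Dirac is sampled with the hat function (B-spline of degree 1), which satisfies the Strang-Fix conditions of orders 0 and 1, so simple linear combinations of the samples yield the signal moments and hence the Dirac's location and amplitude:

```python
import numpy as np

# Hat kernel (B-spline of degree 1), compact support [-1, 1].
# Strang-Fix: sum_n phi(t - n) = 1 and sum_n n * phi(t - n) = t.
def phi(t):
    return np.maximum(0.0, 1.0 - np.abs(t))

a, t0 = 2.0, 3.4                  # unknown Dirac to recover (assumed values)
n = np.arange(10)
y = a * phi(t0 - n)               # samples y_n = <x, phi(t - n)>

# Linear combinations of the samples give the signal moments:
s0 = np.sum(1.0 * y)              # zeroth moment = a
s1 = np.sum(n * y)                # first moment  = a * t0

t0_hat = s1 / s0                  # recovers the location t0
```

Because the kernel has compact support, only the two samples nearest the Dirac are non-zero, so the reconstruction is purely local.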
In this paper, we consider classes of non-bandlimited signals, namely streams of Diracs and piecewise polynomial signals, and show that these signals can be sampled and perfectly reconstructed using wavelets as the sampling kernel. Due to the multiresolution structure of the wavelet transform, these new sampling theorems naturally lead to the development of a new resolution enhancement algorithm based on wavelet footprints. Preliminary results show that this algorithm is also very resilient to noise.
The application of the wavelet transform in image processing is most frequently based on a separable construction: lines and columns in an image are treated independently, and the basis functions are simply products of the corresponding one-dimensional functions. Such a method is simple to design and compute, but it cannot properly capture all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed, with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted, which leads to a multi-directional frame. This transform can be applied in many areas, such as denoising, non-linear approximation and compression. The results on non-linear approximation and denoising show interesting gains compared to the standard two-dimensional analysis.
In recent years wavelets have had an important impact on signal processing theory and practice. The effectiveness of wavelets is mainly due to their capability of representing piecewise smooth signals with few non-zero coefficients. Away from discontinuities, the inner product between a wavelet and a smooth function will be either zero or very small. At singular points, a finite number of wavelets concentrated around the discontinuity lead to non-zero inner products. This ability of the wavelet transform to pack the main signal information into a few large coefficients is behind the success of wavelet-based denoising algorithms. Indeed, traditional approaches simply consist of thresholding the noisy wavelet coefficients, so the few large coefficients carrying the essential information are usually kept while the small coefficients, mainly containing noise, are canceled. However, wavelet denoising suffers from two main drawbacks: it is not shift-invariant and it exhibits a pseudo-Gibbs phenomenon around discontinuities.
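The thresholding approach described above can be sketched with a one-level orthonormal Haar transform; the test signal, noise level, and threshold below are illustrative choices, not values from any of the papers:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return s, d

def ihaar(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

# Piecewise-constant test signal plus Gaussian noise.
clean = np.concatenate([np.zeros(128), 4 * np.ones(128)])
noisy = clean + 0.5 * rng.standard_normal(256)

# Hard-threshold the detail coefficients: the few large coefficients
# carrying the essential information are kept, the rest are canceled.
s, d = haar(noisy)
d[np.abs(d) < 1.5] = 0.0
denoised = ihaar(s, d)

mse = lambda x: np.mean((x - clean) ** 2)
print(mse(denoised) < mse(noisy))       # thresholding reduces the error
```

Shifting the signal by one sample changes which coefficients survive the threshold, which is exactly the lack of shift-invariance noted above.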