Can compressed sensing beat the Nyquist sampling rate?
Abstract
The data-saving capability of “compressed sensing (sampling)” in signal discretization is disputed and found to be far below the theoretical upper bound defined by the signal sparsity. It is demonstrated, on a simple and intuitive example, that in a realistic scenario for signals that are believed to be sparse, one can achieve a substantially larger saving than compressed sensing can. It is also shown that frequent assertions in the literature that “compressed sensing” can beat the Nyquist sampling approach are misleading substitutions of terms rooted in a misinterpretation of the sampling theory.

This letter is motivated by recent OPN publications1,2 that advertise the wide use in optical sensing of “compressed sensing” (CS), a new method of digital image formation that has received considerable attention since its publication.3–7 This attention is driven by assertions in numerous publications such as “CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use”6 and “beating Nyquist with light.”1,8,9 For those who are familiar with sampling theory and know that the Nyquist rate cannot be beaten, these assertions sound questionable. Is it true that “compressed sensing guarantees on reconstruction quality even when sampling is very far from Nyquist sampling,” and, if yes, how much can one gain in terms of reducing the sampling rate? In what follows we attempt to answer these questions.

Compressed sensing assumes approximation of signals by their “sparse” copies, i.e., signals whose spectrum in a selected transform has only a few nonzero components. According to this approach, one should specify the total number N of signal samples required for the signal's discrete representation and the number M < N of certain measurements to be made in order to obtain, by means of minimization of the signal's L1 norm in a selected transform domain, a sparse discrete representation of the signal with N samples and a certain number K < M of nonzero spectral components. The ratio CSDRF = N/M (CS dimensionality reduction factor) is the degree of the achieved signal dimensionality reduction. The ratio SPRS = K/N of the number K of nonzero transform coefficients to the total number N of signal samples is called the signal sparsity.
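
For readers who want to experiment with this scheme, below is a minimal numerical sketch (not code from any of the cited publications): a signal that is K-sparse in the DCT domain is measured M times with a pseudo-random matrix, and its spectrum is recovered by L1-norm minimization posed as a linear program. All names and parameter values are illustrative; NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                  # signal length, measurements, sparsity

# Signal with a K-sparse DCT spectrum
spectrum = np.zeros(N)
spectrum[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
signal = idct(spectrum, norm='ortho')

# M pseudo-random (Gaussian) measurements: y = Phi @ signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ signal

# Basis pursuit: min ||x||_1 s.t. (Phi @ IDCT) x = y, as an LP with x = u - v
D = idct(np.eye(N), norm='ortho', axis=0)   # IDCT basis: signal = D @ spectrum
A = Phi @ D
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print('CSDRF = N/M =', N / M, '  SPRS = K/N =', K / N)
print('max spectrum recovery error:', np.abs(x_hat - spectrum).max())
```

With these sizes (CSDRF = 4 at SPRS ≈ 0.03), recovery is typically exact up to numerical precision; shrinking M toward K makes it fail, which is precisely the gap between CS and the sparsity-defined bound discussed below.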

The theoretical upper bound of the signal dimensionality reduction that can be achieved by means of signal “sparse” approximation can be evaluated using the discrete sampling theorem, according to which, if a signal of N samples has only K nonzero components of its spectrum in a certain transform, the minimal number of its samples sufficient for its perfect restoration is K.10 For such transforms as the discrete Fourier transform (DFT) and the discrete cosine transform (DCT), the positions of these K samples can be arbitrary.
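
A sketch of what the theorem states, under the additional assumption that the positions of the K nonzero DCT components are known: the signal is then restored exactly from K samples taken at randomly chosen positions by solving a K×K linear system (for DCT and DFT spectra such a submatrix is, for random positions, almost surely nonsingular). Names are illustrative.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)
N, K = 256, 8
support = rng.choice(N, K, replace=False)    # known positions of nonzero DCT components
spectrum = np.zeros(N)
spectrum[support] = rng.standard_normal(K)
signal = idct(spectrum, norm='ortho')

D = idct(np.eye(N), norm='ortho', axis=0)    # IDCT basis: signal = D @ spectrum
sample_pos = rng.choice(N, K, replace=False) # K samples at arbitrary (here random) positions

# K equations, K unknowns: signal[sample_pos] = D[sample_pos][:, support] @ coeffs
coeffs = np.linalg.solve(D[np.ix_(sample_pos, support)], signal[sample_pos])
restored = D[:, support] @ coeffs
print('max restoration error:', np.abs(restored - signal).max())
```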

Therefore, the inverse of the signal sparsity, N/K, is the theoretical upper bound of signal dimensionality reduction achievable by means of signal sparse approximation. This relationship is plotted in Fig. 1 (solid line).

Fig. 1

Signal dimensionality reduction factor versus spectrum sparsity: theoretical upper bound (solid line), theoretical (dash-dot) and experimental (dot) upper bounds for compressed sensing, and the estimate for random sampling and band-limited reconstruction (dash).


The upper bound of the signal dimensionality reduction CSDRF achievable by CS for signals with sparsity SPRS can be evaluated from the relationship 1/CSDRF > C · 2 SPRS · log CSDRF provided in Ref. 7, where C > 1 and K < M ≪ N. This relationship is used to plot two curves in Fig. 1, SPRS < 1/(2 CSDRF log CSDRF + 1) (dash-dot line) and SPRS < 1/(3.5 CSDRF log CSDRF + 1) (dot line), which asymptotically, for K < M ≪ N, tend toward it for the values 1 and 1.75 of the multiplier C, respectively, and, from the other side, satisfy the natural requirement that for K = N there must be M = N. The first curve can be considered a theoretical upper bound on the signal dimensionality reduction capability of CS. The second curve better fits the experimental values of CS dimensionality reduction factors found in the literature, shown by diamonds in Fig. 1 and listed, along with the corresponding sources, in Table 1. One can regard it as an experimental upper bound.
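
The letter does not state the base of the logarithm in these expressions; assuming the natural logarithm (as in the asymptotic bounds of Ref. 7), the two curves and the resulting CS sampling redundancy plotted in Fig. 2 can be tabulated as follows (a sketch, with illustrative CSDRF values):

```python
import numpy as np

def sprs_bound(csdrf, c):
    """Sparsity bound SPRS < 1 / (2*C*CSDRF*log(CSDRF) + 1) for multiplier C."""
    return 1.0 / (2.0 * c * csdrf * np.log(csdrf) + 1.0)

for csdrf in (2.0, 5.0, 10.0, 33.0):
    theo = sprs_bound(csdrf, 1.0)      # dash-dot curve, C = 1
    expe = sprs_bound(csdrf, 1.75)     # dot curve, C = 1.75
    # Redundancy (Fig. 2): a signal of sparsity `theo` admits reduction by
    # 1/theo in principle, while CS achieves only `csdrf`.
    print(f'CSDRF={csdrf:5.1f}  SPRS_theor={theo:.5f}  SPRS_exp={expe:.5f}  '
          f'CS redundancy={(1.0 / theo) / csdrf:.2f}')
```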

Table 1

Experimental data on signal dimensionality reduction achieved by using compressed sensing methods.

Source            Spectrum sparsity    Dimensionality reduction factor
Ref. 6, Fig. 3    0.125                2
Ref. 6, Fig. 1    0.0238               10.92
Ref. 1, p. 48     0.00045              33
Ref. 11           0.0238               5.67
Ref. 11           0.0238               8.56
Ref. 12           0.0992               2.52
Ref. 13, p. 1     0.08                 3.33
Ref. 13, p. 2     0.146                1.73

As one can see, CS requires a substantially redundant number of measurements with respect to the theoretical bound defined by the inverse of the signal sparsity. One can numerically evaluate the degree of this redundancy from the plots in Fig. 2 of the ratios of this bound to the CS theoretical (dash-dot line) and experimental (dot line) bounds versus the signal spectrum sparsity K/N.

Fig. 2

Sampling redundancy of compressed sensing and of random sampling and band-limited reconstruction versus image sparsity.


This sampling redundancy of CS is the price one must pay for the uncertainty regarding the indices of the signal's nonzero spectral components: the CS approach assumes the belief that signals can be approximated by their “sparse” copies, but does not assume any knowledge of the positions of the signal's nonzero spectral components.

However, such total uncertainty is an overly pessimistic scenario. If one believes that an image can be satisfactorily approximated by its sparse copy in a certain transform, one knows, as a rule, at least roughly, from the energy compaction properties of the transform, where the image's nonzero transform coefficients are expected to be concentrated. Even such vague knowledge can help greatly.

For instance, for the overwhelming majority of real images, appropriate transforms such as the DCT compact the image energy into the low-frequency part of the spectral components. It is this property that underlies the zonal quantization tables of transform coefficients in transform image coding such as JPEG. Therefore, in addition to specifying the number N of desired image samples and the number M of samples to be taken, as required by the CS approach, one can make the natural assumption that the image spectral components important for image reconstruction are concentrated within, say, a circular shape that encompasses the M spectral components with the lowest indices. With this assumption, one can either sample the image in a regular way with a sampling rate defined by the dimensions of a square that circumscribes this shape or, more efficiently in terms of reducing the required number of samples, reconstruct an image sparse approximation from a set of M samples taken, in the case of sparsity of DCT or DFT spectra, at randomly chosen positions. For the reconstruction, an iterative Gerchberg–Papoulis-type algorithm can be employed.10 This option [let us call it random sampling and band-limited reconstruction (RSBLR)] is illustrated in Fig. 3 on the example of a test image “Ango” from the set of 11 test images listed in Table 2.
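
The following is a minimal sketch of RSBLR with a Gerchberg–Papoulis-type iteration, alternating between re-imposing the known samples in the image domain and the assumed low-frequency band in the DCT domain. The circular band, parameter names, and the synthetic test image are illustrative, not the letter's actual experimental setup.

```python
import numpy as np
from scipy.fft import dctn, idctn

def rsblr(samples, mask, band, n_iter=200):
    """samples: image values at sampled positions (0 elsewhere);
    mask: True where a sample was taken;
    band: True for the retained low-frequency DCT components."""
    estimate = samples.copy()
    for _ in range(n_iter):
        spec = dctn(estimate, norm='ortho')
        spec[~band] = 0.0                      # band-limiting constraint
        estimate = idctn(spec, norm='ortho')
        estimate[mask] = samples[mask]         # re-impose the known samples
    # final band-limiting pass returns a strictly band-limited image
    spec = dctn(estimate, norm='ortho')
    spec[~band] = 0.0
    return idctn(spec, norm='ortho')

# Toy usage on a synthetic band-limited image
rng = np.random.default_rng(2)
n = 64
u, v = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
band = u**2 + v**2 < (n // 4)**2               # circular low-pass band
image = idctn(np.where(band, rng.standard_normal((n, n)), 0.0), norm='ortho')

mask = rng.random((n, n)) < 0.35               # ~35% samples at random positions
samples = np.where(mask, image, 0.0)
rec = rsblr(samples, mask, band)
print('RMS reconstruction error:', np.sqrt(np.mean((rec - image)**2)))
```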

Fig. 3

(a) Test image of 256×256 pixels; (b) its 6141 samples taken at randomly selected positions; (c) a map (white dots) of the 3277 most intensive components of its DCT spectrum, which reconstruct the image with the same RMS error of 1.25 gray levels as its JPEG reconstruction, and the border (white line) of the low-pass filter that encompasses the 6029 DCT spectral components with the lowest indices; (d) the result of the band-limited reconstruction from the sparse random samples, with an RMS error of 1.46 gray levels.


Table 2

Experimental data on image dimensionality reduction achieved by using the random sampling and band-limited reconstruction (RSBLR) method for a set of test images.

Test image      Spectrum sparsity K/N    Dimensionality reduction factor N/M    RMS error, RSBLR    RMS error, JPEG
Mamm            0.044                    11                                     1.58                1.48
Ango            0.05                     10.5                                   1.36                1.25
Test4CS         0.089                    5.55                                   2.15                1.62
Moon            0.105                    5                                      2.55                2.5
Lena512         0.19                     3                                      3.3                 3.9
Aerial photo    0.2                      3                                      4.76                4.5
Man             0.227                    2.38                                   4.12                4
ManSprse6       0.024                    16.7                                   5.6                 -
Multiphoton     0.238                    2.26                                   4.73                4.89
Barbara512      0.265                    1.61                                   4.15                3.91
Westconcord     0.301                    1.92                                   7.48                7.21

The values of the image dimensionality reduction N/M found in experiments with these images are plotted in Fig. 1 (bold circles) along with the curve (0.8 + SPRS)/(1.8 · SPRS), which fits them sufficiently well (dash line). The solid line in Fig. 2 represents an estimate of the sampling redundancy of RSBLR, obtained as the ratio of the theoretical upper bound on sparse-approximation dimensionality reduction (solid line in Fig. 1) to the fitting curve (0.8 + SPRS)/(1.8 · SPRS). As the sparsity of images can be specified only for a certain quality of image sparse approximation, the quality of standard JPEG image compression was adopted as an acceptable quality of image approximation; i.e., the sparsity of the test-image spectra (second column of Table 2) was evaluated in these experiments as the fraction of the most intensive image DCT coefficients sufficient for image reconstruction with the same RMS error as that of the standard JPEG compression of the corresponding test image (fifth column).
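
The letter does not spell out the procedure of this sparsity evaluation, but one straightforward way to implement it is sketched below (an assumption, with hypothetical helper names): by Parseval's relation for the orthonormal DCT, the RMS error of discarding a set of coefficients can be computed directly in the transform domain, so the smallest sufficient fraction K/N is found by sorting the coefficient energies.

```python
import numpy as np
from scipy.fft import dctn

def sparsity_for_rms(image, target_rms):
    """Smallest fraction K/N of the most intensive DCT coefficients whose
    retention reconstructs `image` with RMS error <= target_rms."""
    spec = dctn(np.asarray(image, float), norm='ortho')
    energies = np.sort(np.abs(spec).ravel() ** 2)        # ascending
    # rms[i] = RMS error if the (i + 1) weakest coefficients are discarded
    rms = np.sqrt(np.cumsum(energies) / spec.size)
    n_dropped = np.searchsorted(rms, target_rms, side='right')
    return (spec.size - n_dropped) / spec.size

def rsblr_fit(sparsity):
    """Dash-line fit of the measured RSBLR reduction factors in Fig. 1."""
    return (0.8 + sparsity) / (1.8 * sparsity)

# usage (hypothetical): sparsity_for_rms(img, jpeg_rms); rsblr_fit(0.05) -> ~9.4
```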

As one can see from Fig. 2, random sampling and band-limited reconstruction substantially outperforms CS in signal dimensionality reduction efficiency over practically the entire range of possible image sparsities.

Note that in the less common cases when one can believe that the image's nonzero spectral coefficients are concentrated not only at low frequencies but in a few disjoint areas of the spectral domain, this method can be employed as well. Again, even vague knowledge regarding the positions of the image's nonzero spectral components, associated with this belief, can yield substantial savings in the number of required image samples compared to the totally “blind” restoration of the CS approach.

Consider now the assertions that CS can “beat the Nyquist sampling approach.”1 The CS technique can, in principle, restore signals with a few spectral components within the baseband defined by the component of the highest frequency from samples taken at a rate lower than twice this frequency. This, however, certainly does not mean that it beats Nyquist sampling. The reason is very simple: twice the highest component frequency is not the minimal sampling rate for such signals. According to the sampling theory, the minimal sampling rate is defined by the total area occupied by the signal's spectral components in the transform domain. The optimal sampling of signals that contain few spectral components is sub-band sampling; it requires sinusoidal modulation-demodulation of the signal in order to shift high-frequency sub-bands to a low-frequency band before sampling and then to shift them back for signal reconstruction. This fundamental result of the sampling theory dates back to the 1950s and 1960s and is addressed in many publications and textbooks, among the most recent of which are Refs. 14 and 15.
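
A toy numerical illustration of this argument in the discrete setting (all parameter values are illustrative): a signal whose DFT occupies only W high-frequency bins is recovered exactly from W uniformly spaced samples of its demodulated copy, i.e., at a rate set by the band's width rather than by its highest frequency.

```python
import numpy as np

rng = np.random.default_rng(3)
N, W, b = 1024, 16, 400               # length, band width, band start bin
spec = np.zeros(N, complex)
spec[b:b + W] = rng.standard_normal(W) + 1j * rng.standard_normal(W)
x = np.fft.ifft(spec)                 # narrow-band high-frequency signal
                                      # (complex-valued for simplicity)
n = np.arange(N)
demod = x * np.exp(-2j * np.pi * b * n / N)   # shift the band to bins [0, W)
samples = demod[::N // W]                     # only W samples out of N

S = (N / W) * np.fft.fft(samples)             # recover the W band coefficients
spec_rec = np.zeros(N, complex)
spec_rec[b:b + W] = S
x_rec = np.fft.ifft(spec_rec)                 # modulate back / reconstruct
print('max reconstruction error:', np.abs(x_rec - x).max())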

CS replaces the signal's sinusoidal modulation-demodulation with blind modulation-demodulation using pseudo-random masks, but pays quite a high price in terms of substantial redundancy in the required number of samples. For instance, in the experiment of sampling and reconstruction of a high-frequency sinusoidal signal presented in Ref. 1, the required redundancy (the ratio M/K in the notation of this letter) is 1/0.015 ≅ 67 times. Note that no analysis of the accuracy of restoration of the signal's high-frequency sinusoidal components, or of possible aliasing artifacts, is provided in that publication or in similar ones.
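
Assuming this experiment is the one summarized in the Ref. 1 row of Table 1 (spectrum sparsity K/N = 0.00045 at dimensionality reduction N/M = 33), the quoted figure is easy to verify:

\[
\frac{M}{K} \;=\; \left(\frac{K}{N}\cdot\frac{N}{M}\right)^{-1} \;\approx\; \left(0.00045 \times 33\right)^{-1} \;\approx\; \frac{1}{0.015} \;\approx\; 67.
\]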

To conclude, the theoretical estimates and experimental data presented above show that assertions that CS methods enable a large reduction in the sampling costs and surpass the traditional limits of sampling theory are quite exaggerated, misleading, and grounded in misinterpretation of the sampling theory.

References

1. F. Bucholtz and J. M. Nichols, “Compressed sensing demystified,” Opt. Photonics News 25(10), 44–49 (2014). http://dx.doi.org/10.1364/OPN.25.10.000044
2. “Harnessing the potential of compressed sensing,” Opt. Photonics News, 21–23 (2014).
3. E. Candès, “Compressive sampling,” in Proc. Int. Congress of Mathematicians (2006).
4. D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). http://dx.doi.org/10.1109/TIT.2006.871582
5. E. Candès, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59(8), 1207 (2006). http://dx.doi.org/10.1002/cpa.20124
6. E. Candès and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). http://dx.doi.org/10.1109/MSP.2007.914731
7. D. L. Donoho and J. Tanner, “Exponential bounds implying construction of compressed sensing matrices, error-correcting codes, and neighborly polytopes by random sampling,” IEEE Trans. Inf. Theory 56, 2002 (2010). http://dx.doi.org/10.1109/TIT.2010.2040892
8. R. Willett, R. Marcia, and J. Nichols, “Compressed sensing for practical optical imaging systems: a tutorial,” Opt. Eng. 50(7), 072601 (2011). http://dx.doi.org/10.1117/1.3596602
9. J. M. Nichols and F. Bucholtz, “Beating Nyquist with light: a compressedly sampled photonic link,” Opt. Express 19, 7339 (2011). http://dx.doi.org/10.1364/OE.19.007339
10. L. P. Yaroslavsky et al., “Nonuniform sampling, image recovery from sparse data and the discrete sampling theorem,” J. Opt. Soc. Am. A 26, 566–575 (2009). http://dx.doi.org/10.1364/JOSAA.26.000566
11. Y. Rivenson and A. Stern, “Compressed imaging with a separable sensing operator,” IEEE Signal Process. Lett. 16, 449–452 (2009). http://dx.doi.org/10.1109/LSP.2009.2017817
12. D. Baron et al., “Distributed compressed sensing,” http://www.ecs.umass.edu/~mduarte/images/DCS_Asilomar_Nov2.pdf
13. D. L. Donoho and J. Tanner, “Precise undersampling theorems,” Proc. IEEE 98(6), 913–924 (2010). http://dx.doi.org/10.1109/JPROC.2010.2045630
14. Y. M. Lu and M. N. Do, “A theory for sampling signals from a union of subspaces,” IEEE Trans. Signal Process. 56(6), 2334–2345 (2008). http://dx.doi.org/10.1109/TSP.2007.914346
15. L. Yaroslavsky, Theoretical Foundations of Digital Imaging, CRC Press, Boca Raton (2013).
© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
Leonid P. Yaroslavsky "Can compressed sensing beat the Nyquist sampling rate?," Optical Engineering 54(7), 079701 (29 July 2015). https://doi.org/10.1117/1.OE.54.7.079701
Published: 29 July 2015