Resolution limit of label-free far-field microscopy

1 November 2019

The Abbe diffraction limit, which relates the maximum optical resolution to the numerical aperture of the lenses involved and the optical wavelength, is generally considered a practical limit that cannot be overcome with conventional imaging systems. However, it does not represent a fundamental limit to optical resolution, as demonstrated by several new imaging techniques that prove the possibility of recovering subwavelength information from the far field of an optical image. These include super-resolution fluorescence microscopy, imaging systems that use new data-processing algorithms to obtain dramatically improved resolution, and the use of super-oscillating metamaterial lenses. This raises the key question of whether there is in fact a fundamental limit to optical resolution, as opposed to practical limitations due to noise and imperfections, and if so, what it is. We derive the fundamental limit to the resolution of optical imaging and demonstrate that, while a limit of a fundamental nature does exist, contrary to the conventional wisdom it is neither exactly equal to nor necessarily close to Abbe’s estimate. Furthermore, our approach to imaging resolution, which combines the tools of the physics of wave phenomena with the methods of information theory, is general and can be extended beyond optical microscopy, e.g., to geophysical and ultrasound imaging.



High-resolution optical imaging holds the key to the understanding of fundamental microscopic processes both in nature and in artificial systems—from the charge carrier dynamics in electronic nanocircuits1 to the biological activity in cellular structures.2 However, optical diffraction prevents the “squeezing” of light into dimensions much smaller than its wavelength,3 leading to the celebrated Abbe diffraction limit.4–7 This does not allow a straightforward extension of conventional optical microscopy to the direct imaging of such subwavelength structures as cell membranes, individual viruses, or large protein molecules. As a result, recent decades have seen an increasing interest in developing “super-resolution” optical methods that overcome this diffraction barrier—e.g., near-field optical microscopy,8 structured illumination imaging,9 metamaterials-based super-resolution,10 two-photon luminescence and stimulated emission-depletion microscopy,11 stochastic optical reconstruction imaging,12 and photoactivated localization microscopy.13

In particular, there is an increasing demand for an approach to optical imaging that is inherently label-free and does not rely on fluorescence, operates on a sample that is in the far field of all elements of the imaging system, and offers resolution comparable to that of fluorescent microscopy. Although seemingly a tall order, this task has recently found two possible solutions that approach the problem from the “hardware” and “algorithmic” sides, respectively. The former approach relies on the phenomenon of “super-oscillations”—where a band-limited function can, and when properly designed does, oscillate faster than its fastest Fourier component. The super-oscillatory lenses that implement this behavior have been designed and fabricated,14,15 and optical resolution exceeding the conventional Abbe limit has been demonstrated in experiment.14 The second approach relies on methods of processing the “diffraction-limited” data, taking full advantage of the fact that actual targets (and especially biological samples) are often inherently sparse.3 The resulting resolution improvement beyond the Abbe limit, due to this improved data processing, has been demonstrated both in numerical simulations and in experiment.16–18

Far-field optical resolution beyond the Abbe limit in a scattering rather than fluorescence-based approach, observed in Refs. 14 and 15, clearly demonstrates that Abbe’s bound of half-wavelength (and its quarter-wavelength counterpart for structured illumination) is not a fundamental limit for optical imaging. This raises the key question of whether there is in fact a fundamental bound to the optical resolution—as opposed to practical limitations due to detector noise, imaging system imperfections, data processing time limits in the case when image reconstruction corresponds to an NP-complete problem, etc. Furthermore, the knowledge of the corresponding fundamental limit, if one exists, and of the physical mechanism behind it would help find the way to a system that offers the optimal performance—just as a deeper understanding of thermodynamics and Carnot’s limit helped the design of practical heat engines.

In this work, we show that there is in fact a fundamental limit on the resolution of far-field optical imaging, which, however, is much less stringent than Abbe’s criterion. The presence of any finite amount of noise in the system, no matter how small its intensity, leads to a fundamental limit on the optical resolution, which can be expressed in the form of an effective uncertainty relation. This limit has an essentially information-theoretical nature and can be connected to Shannon’s theory of information transmission in linear systems.20


Definition of the Resolution Limit

We define the diffraction limit Δ as the shortest spatial scale of the object whose geometry can still be reconstructed, error-free, from far-field optical measurements in the presence of noise. (Although the concept of error-free information recovery in the presence of noise may sound surprising, it lies at the heart of modern computer networks, where terabytes of data are transferred error-free over noisy transmission lines.) Without loss of generality, one can then assume that the object is composed of an arbitrary number of point scatterers of arbitrary amplitudes located at the nodes of a grid with the period Δ, as any additional structure in the sources (or scatterers) or variations in position will add to the information that needs to be recovered from far-field measurements for the successful reconstruction of the geometry of the object. (For a given illumination field, each point scatterer can be treated as an effective point source.)

Furthermore, the essential “lower bound” nature of Δ further allows us to reduce the problem to that of an effectively one-dimensional target (formed by line, rather than point, sources)—since, as was already known to André21 and Rayleigh,22 line sources are more easily resolvable than point sources.

To calculate the fundamental resolution limit, it is therefore sufficient to consider the model system of an array of line “sources” of arbitrary (including zero) amplitudes, located at the node points of the grid with the period Δ [see Fig. 1(a)]. Note that in terms of the information that is detected in the far field and the information that is necessary and sufficient for the target reconstruction, this problem is identical to that of a step mask where thickness and/or permittivity changes at the nodes of the same grid by the amounts proportional to the amplitudes of the corresponding line sources (as the point source distribution corresponds to the spatial derivative of the mask “profile”) [see Fig. 1(b)].

Fig. 1

The schematic representation of the imaging setup, for the object formed by (a) an array of small particles/lines and (b) a (binary) mask. D labels the position of a (coherent) detector, L is the size of the object (and equivalently the imaging aperture), and R is the distance from the object to the detector; in the far field, R ≫ L.


Note that the reduction of the original problem to that of an effectively one-dimensional profile is not a simplification for the sake of convenience or reduction of the mathematical complexity. It is exactly this “digitized” one-dimensional profile that corresponds to the smallest “resolvable” spatial scale among all objects with a low bound on their spatial variations and therefore defines the fundamental resolution limit. Furthermore, in many cases, the actual object is formed by two (or more) materials that form sharp interfaces. In this case, the step mask that is equivalent to our point source model offers an adequate representation of the actual target.

However, even within the original framework of “resolving” two point sources,22 the result clearly depends on the difference of their amplitudes—with increasing disparity between the two leading to progressively worse “resolution.” The “ultimate” resolution limit Δ, therefore, corresponds to the case of identical point sources (or subwavelength scatterers), which are present only in an (unknown) fraction of the grid nodes. Note that such a digital mask corresponds to the common case of a pattern formed by a single material (e.g., the surrounding air) [see Fig. 1(b)].

When the distance to the detector R is much larger than the aperture L, R ≫ L (see Fig. 1), for the far-field signal detected in the given polarization and in the direction defined by the wavevector k, we have (see Fig. 1 and Sec. 7):

Eq. (1)

s(k) = ∑_i α_i E0(ρ_i) exp(−i k·ρ_i) + n(k),

where E0 is the incident field “illuminating” the target, i is the (integer) index that labels the (point) scatterers with the corresponding polarizabilities αi, ρi ≡ (xi, yi), k ≡ (kx, ky) is the wavevector with the magnitude |k| = ω/c ≡ k0, ω is the light frequency, and c is the speed of light (in the medium surrounding the target). Here n(k) corresponds to the effective noise, which includes the contributions from all origins (detector dark currents, illumination field fluctuations, etc.). Using data for imaging with different electromagnetic field polarizations, the effective noise can be correspondingly reduced.
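As a concrete illustration of this measurement model, the following sketch simulates the far-field signal of a one-dimensional array of identical scatterers on a subwavelength grid under uniform illumination (E0 = 1); all parameters here are illustrative choices of this example, not values taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): wavelength-normalized units.
wavelength = 1.0
k0 = 2 * np.pi / wavelength        # |k| = omega/c = k0
L = 10 * wavelength                # object size / imaging aperture
delta = wavelength / 4             # period of the grid of candidate scatterers
x = np.arange(0, L, delta)         # 1-D grid nodes (line sources)

# Identical scatterers present on an unknown fraction of the grid nodes.
alpha = rng.integers(0, 2, size=x.size).astype(float)

# Propagating far-field directions: |k_x| <= k0 for a one-dimensional target.
kx = np.linspace(-k0, k0, 64)

# First-order Born forward model with uniform illumination E0 = 1:
#   s(k) = sum_i alpha_i * exp(-1j * k * x_i) + n(k)
s_clean = np.exp(-1j * np.outer(kx, x)) @ alpha

# Additive complex Gaussian detector noise at a chosen effective SNR.
snr = 1e4
sigma = np.sqrt(np.mean(np.abs(s_clean) ** 2) / (2 * snr))
s = s_clean + sigma * (rng.normal(size=kx.size) + 1j * rng.normal(size=kx.size))
```

The band limit enters only through the restriction |k_x| ≤ k0: no evanescent components of the target's spectrum reach the detector.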

Equivalently, for the case of the object in the form of a (dielectric) mask [see Fig. 1(b)], we obtain

Eq. (2)

where Δϵ is the difference between the dielectric permittivities of the object and the background.

Note that Eqs. (1) and (2) are linear in Δϵ and α (see Sec. 7), which physically corresponds to the limit when multiple scattering is weak. Although this is generally the case in optical imaging of low-contrast media, secondary waves due to multiple light scattering can be intentionally induced by an a priori known high-contrast grating placed in the near field of the object.23–25 Such grating-assisted microscopy offers a substantial improvement of imaging resolution well beyond what is expected for conventional far-field imaging.23–25

The model of Eq. (1) or its equivalent Eq. (2) assumes coherent detection of the electromagnetic field in the far zone. This is essential for the definition of the fundamental resolution limit: the phase information is in fact available in the far field and can be measured even with an intensity-only detector using the optical heterodyne approach,26 so any failure to obtain the corresponding information in a given experimental setup cannot be attributed to the fundamental resolution limit of optical imaging.
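To illustrate why intensity-only detection is not a fundamental obstacle, the sketch below recovers a complex field from four intensity measurements via phase-shifting interferometry—a standard holographic scheme used here purely for illustration, not necessarily the specific implementation of Ref. 26:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown complex signal field at 8 detector pixels, and a known reference wave.
E_s = rng.normal(size=8) + 1j * rng.normal(size=8)
E_r = 2.0                                  # known real reference amplitude

def intensity(phase):
    """Intensity-only measurement of signal + phase-shifted reference."""
    return np.abs(E_s + E_r * np.exp(1j * phase)) ** 2

# Four-step phase shifting: I(phi) = |E_s|^2 + E_r^2 + 2 E_r Re(E_s e^{-i phi}),
# so the quadratures of E_s drop out of simple differences.
I0, I1, I2, I3 = (intensity(p) for p in (0, np.pi / 2, np.pi, 3 * np.pi / 2))
E_recovered = ((I0 - I2) + 1j * (I1 - I3)) / (4 * E_r)
```

The differences I0 − I2 and I1 − I3 isolate the real and imaginary parts of E_s, respectively, so the full complex far-field amplitude is recovered from intensities alone.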

Finally, for the calculation of the fundamental resolution limit Δ, we must assume the large-aperture limit k0L ≫ 1. Although the case of a small aperture k0L ≲ 1 can be easily implemented in an actual experimental setup (albeit at the cost of a dramatic reduction in the field of view), an aperture in close proximity to the object represents an example of a near-field probe, and this setup cannot be treated as true far-field imaging.


Information-Theoretical Framework

To derive the fundamental limit on the resolution of optical imaging, we calculate the total amount of information about the object that can be recovered in the far field. As Eq. (1) can be interpreted as the input (E0)–output (s) relation of a linear information channel, the amount of actual information carried from the object to the far-field detector can be calculated using the standard methods of information theory.20 The resolution limit then follows from the requirement that the recovered information T be sufficient to reconstruct the target:

Eq. (3)

T ≥ L/Δ.


When the object is composed of M different materials (or is formed by an array of point sources with M different levels of amplitude), additional information is needed for its reconstruction, which leads to a more stringent bound on the spatial resolution:

Eq. (4)

T ≥ (L/ΔM) log2 M.


The actual transmitted information T can be obtained from the mutual information functional20

Eq. (5)

T = H[{s}] − H[{s}|E0].


Here the entropy H[{s}] is the measure of the information received at the detector array:

Eq. (6)

H[{s}] = −∫ Ds(k) P[s(k)] log2 P[s(k)],

where P[s(k)] is the distribution function of the output signal s(k), and the functional integral Ds(k) is defined in the standard way [see Eq. (22) in Sec. 9].

However, as the system is noisy, for any output signal, there is some uncertainty of what was the originating field scattered by the mask. The conditional entropy H[{s}|E0] at the detector array for a given E0 represents this uncertainty:

Eq. (7)

H[{s}|E0] = −∫ Ds(k) P[s(k)|E0] log2 P[s(k)|E0].


Substituting the resulting analytical expressions for H[{s}] and H[{s}|E0] (see Sec. 9) into the mutual information T in Eq. (5), for the resolution limit in the case of uniform illumination (see Sec. 10 for resolution limit in the regime of structured illumination), we obtain

Eq. (8)

which in the appropriate limits is consistent with the results of the earlier information-theoretical studies of Refs. 27–30. Here SNR is the effective signal-to-noise ratio measured at the detector array:

Eq. (9)


Eq. (10)

represents the relative contribution of the absorption in the target; for a transparent object [Im(α)=0], we have η=0. The correction O(1/k0L) accounts for the finite size of the imaging aperture and can be neglected for k0L ≫ 1.
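The information-theoretic bookkeeping behind the derivation can be checked on the simplest possible example. For a single real Gaussian channel s = x + n, the mutual information T = H[s] − H[s|x] evaluates to (1/2) log2(1 + SNR); the sketch below verifies Shannon's classic single-mode result numerically (an illustration only, not the paper's full multimode calculation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Single real Gaussian channel s = x + n with known powers.
signal_power, noise_power = 4.0, 1.0
x = rng.normal(0.0, np.sqrt(signal_power), 200_000)
n = rng.normal(0.0, np.sqrt(noise_power), 200_000)
s = x + n

def gaussian_entropy_bits(samples):
    """Differential entropy (in bits) of a Gaussian with the sample variance."""
    return 0.5 * np.log2(2 * np.pi * np.e * samples.var())

# Mutual information as an entropy difference, estimated from the samples,
# against the closed form T = 0.5 * log2(1 + SNR).
T_est = gaussian_entropy_bits(s) - gaussian_entropy_bits(n)
T_exact = 0.5 * np.log2(1 + signal_power / noise_power)
```

In the imaging problem each detected far-field mode plays the role of one such channel, and the total recoverable information is the sum over the significant modes.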



Although Eq. (8) allows for unlimited resolution in a noise-free environment, even a relatively low noise dramatically alters this picture. Owing to the weak logarithmic dependence of the resolution limit on the SNR, reducing the resolution limit by a factor of ten requires increasing the SNR by nearly five orders of magnitude.
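The arithmetic behind this statement can be made explicit. Assuming, for illustration, that the limit scales as Δ ∝ 1/log2(1 + SNR) (the logarithmic dependence discussed above; the prefactor and the starting SNR value below are assumptions of this example), a tenfold improvement in Δ requires log2(1 + SNR) to grow tenfold:

```python
import math

# Hypothetical scaling for illustration: Delta ∝ 1 / log2(1 + SNR).
# A tenfold resolution improvement then requires
#   log2(1 + snr_new) = 10 * log2(1 + snr_old),
# i.e. snr_new = (1 + snr_old)**10 - 1.
snr_old = 2.0                               # assumed modest starting point
snr_new = (1.0 + snr_old) ** 10 - 1.0       # 3**10 - 1 = 59048
orders = math.log10(snr_new / snr_old)      # ~4.5 orders of magnitude
```

With a modest starting SNR the required increase is indeed between four and five orders of magnitude, consistent with the estimate in the text.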

At the same time, the spatial resolution limit ΔM depends on the effective “uncertainty” in the range of permittivity variations in the object that is being imaged—the simpler the structure of the target, the easier the task of finding its geometry. The ultimate value Δ is then achieved in the case of a binary mask (i.e., an object formed by only two materials) and represents the fundamental bound to the resolution. For a higher complexity in the composition of the target, the actual resolution limit ΔM is well above Δ. When the number of materials (with their corresponding permittivities) that the object is composed of, M, is known a priori, the corresponding resolution limit ΔM is defined by Eq. (4). However, when no a priori information whatsoever is available, the limit to the resolution can be expressed as an effective uncertainty relation, which offers the lower bound on the product of the scaled spatial resolution δx and the amplitude resolution δϵ. In the case when the object is composed of transparent materials [Im(ϵ)=0], we obtain

Eq. (11)

where F(t) ≡ 1/log2(1/t), the scaled spatial resolution δx is defined as the ratio of Δ to the Abbe limit, and the scaled amplitude resolution corresponds to the uncertainty in the permittivity δϵ normalized to the difference between the smallest (ϵmin) and largest (ϵmax) permittivities in the object, δϵ̄ ≡ δϵ/(ϵmax − ϵmin). For a binary mask, δϵ = (ϵmax − ϵmin)/2, so that the scaled amplitude resolution δϵ̄ = 1/2 and F(δϵ̄) = 1, which reduces the uncertainty relation, Eq. (11), to the fundamental limit Δ of Eq. (8).

For imaging with no a priori information, with the optimal data reconstruction algorithm, Eq. (11) represents a trade-off between the uncertainties in the position and the amplitude of the recovered image. Note that, as follows from Eq. (11), spatial resolution at the Abbe limit corresponds to a relative amplitude uncertainty of at least 1/(1 + 2·SNR).

In the case of imaging a binary mask or a pattern of identical subwavelength particles, the actual resolution can reach the value of Δ, which for a high SNR can be substantially below the Abbe limit. For example, in the structured illumination setup with SNR ≃ 10^6, we find Δ ≃ λ/100. Although reaching all the way to this limit with the data obtained in the standard imaging setup may be highly nontrivial, a straightforward algorithm described below that implements the amplitude constraint offers spatial resolution well below the Abbe limit (see Fig. 2).

Fig. 2

Super-resolution object reconstruction for a binary mask, from its coherently detected diffraction pattern in the far field. The inset shows the schematics of the object profile. The main panel plots the error probability in the recovered profile as a function of the effective SNR. The data shown were obtained for 10,000 different realizations. The boundary separating the light-red and light-green backgrounds corresponds to the SNR at which Δ is sufficient to resolve the λ/16 spacing (see the inset). The red arrow indicates the minimum value of the SNR at which the numerical reconstruction produces no errors.


In the algorithm whose performance is shown in Fig. 2, the subwavelength binary mask (see the inset in Fig. 2) is recovered from its (band-limited) Fourier spectrum measured in the far field, together with the constraint that limits its profile to only two values. Although a finite amount of noise in the far-field measurements inevitably leads to errors, with the increase of the effective SNR, the corresponding error probability Perr rapidly goes to zero. In particular, for the resolution of λ/16 in the example of Fig. 2, for the SNR beyond the value indicated by the red arrow, the numerical calculation with an ensemble of 10,000 different realizations showed no errors.

The light-red and light-green color backgrounds in Fig. 2 correspond to the parameter ranges that, respectively, violate and satisfy the fundamental resolution limit of Eq. (8). Note that the boundary separating these regimes corresponds to an SNR substantially less than the smallest value (shown by the red arrow in Fig. 2) for error-free performance in the data recovery—indicating that the reconstruction algorithm is far from optimal. Still, even with this performance, the example of Fig. 2 shows that a straightforward implementation of an a priori constraint on the object geometry (a binary mask rather than an arbitrary profile) offers object reconstruction from diffraction-limited data with deep subwavelength resolution (four times below the Abbe limit in the example of Fig. 2).
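A minimal sketch of this type of constrained reconstruction is given below. It is a toy stand-in for the algorithm of Fig. 2, with hypothetical parameters and a brute-force search over all binary masks in place of whatever optimizer a full-scale problem would require; the point is that the amplitude constraint alone selects the correct deep-subwavelength profile from band-limited coherent data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Toy parameters (this example's choices): a binary profile on a lambda/16 grid.
wavelength = 1.0
k0 = 2 * np.pi / wavelength
delta = wavelength / 16                  # grid period, well below the Abbe limit
n_cells = 8
x = np.arange(n_cells) * delta

truth = rng.integers(0, 2, n_cells).astype(float)

kx = np.linspace(-k0, k0, 32)            # propagating (band-limited) spectrum only
A = np.exp(-1j * np.outer(kx, x))        # Born-approximation forward matrix

# Coherently detected far-field data with additive complex Gaussian noise.
snr = 1e6
s_clean = A @ truth
sigma = np.sqrt(np.mean(np.abs(s_clean) ** 2) / (2 * snr))
s = s_clean + sigma * (rng.normal(size=kx.size) + 1j * rng.normal(size=kx.size))

# The amplitude constraint in its purest form: search all 2**n_cells binary masks
# and keep the one whose predicted spectrum best matches the measured data.
candidates = (np.array(b) for b in itertools.product((0.0, 1.0), repeat=n_cells))
recovered = min(candidates, key=lambda b: np.linalg.norm(A @ b - s))
```

Without the binary constraint the problem is hopelessly ill-conditioned (the band-limited forward matrix has only a couple of significant singular values at this aperture), yet restricting the search to two-valued profiles makes exact recovery possible at high SNR.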

Additional a priori information about the object further reduces the resolution limit of optical imaging. Among the different kinds of a priori information about the target, the case of sparse objects is particularly important, as sparsity is widespread in both natural and artificial systems.3 If the target is a priori known to be sparse, with the effective sparsity parameter β (which can be defined as the fraction of empty “slots” in the grid superimposed on the target), we find

Eq. (12)


For the numerical example studied in Ref. 16, with β ≃ 0.03 and SNR ≃ 10^2, the resolution limit Δ(β) ≃ 0.025λ. Accurate numerical reconstruction of features on the scale of λ/10 demonstrated in Ref. 16 is, therefore, fully consistent with the fundamental limit Δ(β).


Imaging with a Small Aperture

The explicit expression for the resolution limit in Eq. (8) is presented in the large-aperture limit k0L ≫ 1. Although this corresponds to the most common regime of actual optical microscopy, using a small aperture that is comparable to the free-space wavelength can offer its own advantages. The resulting effect on the resolution limit is accounted for by the (positive definite) term O(1/k0L) in Eq. (8), which further reduces Δ.

Note that this was precisely the regime where super-oscillation-based imaging was demonstrated in experiment, as the use of a small aperture was essential to block the (exponentially) strong side lobes. Although the resulting improvement of the resolution is consistent with the fundamental limit established in this work, our expressions Eqs. (4), (8), and (12) do not explicitly indicate any advantage of the super-oscillation approach. This should be contrasted with the case of sparsity-based imaging, whose key parameter β explicitly enters the resolution limit in Eq. (12).

Indeed, while super-oscillation imaging does offer subwavelength resolution, this improvement is a general feature of all structured illumination methods optimized for a small aperture (or, equivalently, for imaging small isolated objects) and is not limited to the super-oscillation approach. This behavior is illustrated in Fig. 3, where a subwavelength target [red pattern in the center of Fig. 3(a)] is illuminated by a Bessel beam propagating in the direction normal to the plane of the picture. The beam axis is “focused” at the center of the target [see Fig. 3(a)] so that the illuminating field within the aperture not only shows no super-oscillations but in fact does not oscillate at all—see the field profiles for different orders m of the illuminating Bessel beams in Fig. 3(b). Nevertheless, a standard data recovery algorithm clearly shows deep subwavelength resolution of λ/10 [see Fig. 3(c)], despite having no a priori information about the structure of the target.

Fig. 3

Super-resolution imaging of a subwavelength object, based on structured illumination with Bessel beams. (a) The “incident” Bessel beam of the order m=12 (shown in gray scale) focused at the center of the subwavelength object (red). (b) The Bessel beam profiles in the object plane, for different orders m=0 (red), 1 (orange), 2 (magenta), 3 (blue), 4 (cyan), and 5 (green). For a small distance from the center, the Bessel function of order m behaves as x^m, so illumination with Bessel beams of different orders effectively “projects” the target on the set {x^m} for different values of m. As the latter form a complete basis set, this procedure allows high-resolution reconstruction of the original object profile, without any use of super-oscillations or subwavelength focusing. (c) The subwavelength object profile and its reconstruction with Bessel beam illumination. The object corresponds to the red line in (c). The reconstructed profiles are shown for effective SNRs of 10^6 [blue line in (c)] and 10^4 [green line in (c)].
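The small-argument behavior invoked in the caption is easy to verify numerically. The sketch below evaluates the Bessel function from its power series (so no special-function library is assumed) and checks that, close to the beam axis, J_m(x) follows its leading term (x/2)^m/m!—i.e., the pure power law x^m:

```python
import math
import numpy as np

# Power-series Bessel function of integer order:
#   J_m(x) = sum_k (-1)^k (x/2)^(m+2k) / (k! (m+k)!)
def bessel_j(m, x, terms=12):
    return sum((-1) ** k * (x / 2) ** (m + 2 * k)
               / (math.factorial(k) * math.factorial(m + k)) for k in range(terms))

# Deep-subwavelength distances from the axis (illustrative range).
x = np.linspace(1e-3, 0.1, 50)

# Relative deviation of J_m(x) from its leading term (x/2)^m / m! for m = 0..4;
# near the axis each beam order m thus probes the "moment" x**m of the target.
deviation = [np.max(np.abs(bessel_j(m, x) - (x / 2) ** m / math.factorial(m))
                    / ((x / 2) ** m / math.factorial(m)))
             for m in range(5)]
```

The next series term is smaller by a factor x²/[4(m+1)], so over this range the deviation from the power law stays well below one percent, consistent with the "projection onto {x^m}" picture in the caption.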


It should, however, be noted that the super-oscillation-based approach, when implemented to form a subwavelength focus spot that is used to scan the object,14,15 is naturally suitable for optical imaging limited to incoherent detection, which offers substantial practical advantages in the actual implementation of the system.



In conclusion, we have derived the fundamental resolution limit for far-field optical imaging and demonstrated that it is generally well below the standard half-the-wavelength estimate. Our results also apply to other methods that rely on wave propagation and scattering, e.g., geophysical and ultrasound imaging.


Appendix A: Imaging Model

In its most general setting, the problem of (optical) imaging is essentially the reconstruction of the object profile from scattering data. The formation of the desired image of the target can be achieved using “analog” or “digital” tools, with lenses and projection screens in the former case and computational reconstruction of the object pattern on a computer screen in the latter. If the structure of the object is represented by its dielectric permittivity profile ϵ(r), the scattered electric field at the given frequency ω is defined by the vectorial Lippmann–Schwinger equation:

Eq. (13)

E(r) = E0(r) + (ω/c)^2 ∫ d^3r′ G0(k0|r − r′|) Δϵ(r′) E(r′),

where G0(k0|r − r′|) is the (dyadic) Green function for the medium surrounding the object, Δϵ(r) ≡ ϵ(r) − ϵ0 is the difference between the permittivities of the object and of the surrounding medium, and k0 ≡ √ϵ0 ω/c.

Alternatively, the object may be represented as a collection of small (subwavelength) particles, with the individual (tensor) polarizabilities αi, leading to

Eq. (14)

E(r) = E0(r) + (ω/c)^2 ∑_i G0(k0|r − ri|) αi E(ri).


Note that these two formulations are essentially equivalent, as an arbitrary dielectric permittivity profile can be expressed in terms of the electromagnetic response of a large group of small particles.31

Although Eqs. (13) and (14) are linear in the electric field, when treated as inverse problems for the reconstruction of the unknown profile Δϵ(r) and the distribution αi from the given illumination field E0(r) and the scattering data for E(r), they are essentially nonlinear in Δϵ and α.32 Physically, this nonlinearity originates from multiple scattering effects within the object,33 when the actual field acting on a given part of the object, E(r), in addition to the incident field E0, also includes the contributions from the “secondary” waves scattered by the other parts of the object. Although these multiple scattering corrections can be substantial in acoustic and microwave scattering,33 for optical imaging of low-contrast media they are generally small.34 Note, however, that when substantial, these “secondary” waves due to multiple light scattering can have a profound effect on the imaging resolution33—as the subwavelength structure of the object then functions as a high-spatial-frequency grating forming an effective structured illumination pattern.

In the language of scattering theory, conventional optical imaging and microscopy correspond to the limit of weakly scattering semitransparent objects, which neglects multiple scattering contributions. The resulting first-order Born approximation34 reduces the acting field in the integral of Eq. (13) and the sum of Eq. (14) to the (a priori known) illumination field E0, thus leading to a linear inverse problem.

The resulting expressions can be further simplified when the detectors are placed in the far field (radiation zone) of the object, k0|r − ri| ≫ 1, thus reducing Eq. (14) to

Eq. (15)


When the distance to the detector r is much larger than the aperture L, r ≫ L, for the far-field signal detected in the given polarization and the wavevector k (see Fig. 1), we find

Eq. (16)

s(k) = ∑_i α_i E0(ρ_i) exp(−i k·ρ_i) + n(k),

where ρi ≡ (xi, yi), k ≡ (kx, ky) with the magnitude |k| = k0, and n is the noise in the corresponding detector (see Fig. 1).

Similarly, if the target is represented with the 2-D permittivity mask ϵ(x,y) (corresponding to the Motti projection35 of the actual 3-D permittivity of the object), we obtain

Eq. (17)



Appendix B: Information Entropy

The entropy H[s] offers a measure of the information received by the detector that returns the value of s and is a functional of the statistical distribution of s:

Eq. (18)

H[s] = −∫ ds p(s) log2 p(s).


When s represents the scattered field detected in the imaging system, it is defined by the object structure and the illumination field profile. However, even in the absence of any stray light in the system, all detectors are inherently noisy. As a result, for a given detected signal, there will always be some uncertainty. This uncertainty is represented by the conditional information entropy H[s|o] of the detected signal for a given object, in terms of the conditional distribution p(s|o):

Eq. (19)

H[s|o] = −∫ do p(o) ∫ ds p(s|o) log2 p(s|o).


According to Shannon’s fundamental result,20 the resulting information about the object is then given by the mutual information

Eq. (20)

T = H[s] − H[s|o].


When the imaging system measures the continuous spectrum s(k), the relevant entropies are defined by the functional integral:

Eq. (21)

H = −∫ Ds(k) p log2 p,

where p ≡ P[s(k)] for the entropy H[s], and p ≡ P[s(k)|o] for the entropy H[s|o], and the functional integral is defined in the standard way:

Eq. (22)

∫ Ds(k) (⋯) ≡ lim_{M→∞} cM ∫ ds(k1) ⋯ ∫ ds(kM) (⋯),

where cM is the normalization constant.


Appendix C: Mutual Information

The mutual information T is defined20 as the difference between the information entropy at the “output” s(k) for the unconstrained “input,” H[{s}], and the information entropy H[{s}|E0α] of the output for the fixed input E0(ρ)α(ρ) [see Eq. (16)]. For additive noise, the latter is simply equal to the noise entropy:

Eq. (23)

H[{s}|E0α] = −∫ Dn(k) Pn[n(k)] log2 Pn[n(k)],

where Pn[n(k)] is the noise distribution function and reduces to

Eq. (24)

for uncorrelated Gaussian noise.

The unconditional output distribution P[s(k)] is defined by both the noise and the target profile distribution P[{α}]. Although the latter does not necessarily reduce to a simple functional form, every single output component s(k) corresponds to a sum of many such random variables [see Eq. (16)]. The central limit theorem then implies that the “output” statistics of s(k) are described by a correlated multivariate normal distribution. Changing the path integral variables in Eq. (23) using the orthogonal transformation that diagonalizes the corresponding covariance matrix, we obtain

Eq. (25)

where the λ’s are the eigenvalues of the discrete prolate spheroidal Slepian matrix36 S_{k1k2} = S(k1 − k2), with |k| ≤ k0, where for a one-dimensional target, S1(q) = sin(qL/2)/sin(qΔ/2), and for a rectangular (square) aperture, S2(q) = S1(qx) S1(qy). The eigenvalue spectrum of the Slepian matrix has a characteristic step shape, with ≃ k0L significant eigenvalues (λ ≈ L/Δ) and the remaining insignificant eigenvalues (λ ≈ 0) separated by a narrow transition band.37,38 The eigenvalue sum in Eq. (25) can therefore be calculated analytically, which together with Eqs. (5) and (24) yields Eq. (8).
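The step-shaped Slepian spectrum described above is easy to reproduce numerically. In the sketch below, the units, grid, and sampling density are illustrative choices of this example (λ = 1, L = 8λ, Δ = λ/8), and the number of plateau eigenvalues is expected to track the space-bandwidth product 2k0L/2π = 2L/λ:

```python
import numpy as np

# Illustrative units (this example's choices, not the paper's): lambda = 1.
wavelength = 1.0
k0 = 2 * np.pi / wavelength
L, delta = 8.0, 1.0 / 8.0
x = np.arange(0, L, delta)            # grid nodes, N = L/delta = 64

k = np.linspace(-k0, k0, 256)         # sampled far-field wavevectors, |k| <= k0
F = np.exp(1j * np.outer(k, x))       # F[k, j] = exp(i k x_j)
S = F @ F.conj().T                    # S[k1, k2] = sum_j exp(i (k1 - k2) x_j)

# Hermitian eigenvalue spectrum, sorted in decreasing order.
evals = np.sort(np.linalg.eigvalsh(S))[::-1]

# Count the plateau: expect roughly 2*L/wavelength = 16 large eigenvalues,
# then a sharp drop across a narrow transition band (the Slepian "step").
n_significant = int((evals > 0.5 * evals[0]).sum())
```

The plateau count, not the total number of detector samples, sets the number of independent far-field modes and hence the information budget entering the resolution limit.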


Appendix D: Resolution Limit for Structured Illumination

In the case of structured illumination, for the resolution limit, we obtain

Eq. (26)


Eq. (27)



This work was partially supported by the Gordon and Betty Moore Foundation.



E. Sakat et al., “Near-field imaging of free carriers in ZnO nanowires with a scanning probe tip made of heavily doped Germanium,” Phys. Rev. Appl., 8 054042 (2017). PRAHB2 2331-7019 Google Scholar


B. Herman and K. Jacobson, Optical Microscopy for Biology, 1st ed.Wiley, New York (1990). Google Scholar


J. W. Goodman, Introduction to Fourier Optics, 3rd ed.Roberts & Co., Eaglewood (2004). Google Scholar


J.-L. Lagrange, Sur une Loi generale d’Optique, Memoires de l’Academie, Berlin (1803). Google Scholar


H. von Helmholtz, “On the limits of the optical capacity of the microscope,” Proc. Bristol Nat. Soc., 1 435 (1874). Google Scholar


E. K. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Arch. Mikrosk. Anat., 9 (1), 413 –468 (1873). Google Scholar


E. Abbe, “A contribution to the theory of the microscope, and the nature of microscopic vision,” Proc. Bristol Nat. Soc., 1 200 –261 (1874). Google Scholar


B. Hecht et al., “Scanning near-field optical microscopy with aperture probes: fundamentals and applications,” J. Chem. Phys., 112 (18), 7761 –7774 (2000). JCPSA6 0021-9606 Google Scholar


M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc., 198 (2), 82 –87 (2000). JMICAR 0022-2720 Google Scholar


X. Zhang and Z. Liu, “Superlenses to overcome the diffraction limit,” Nat. Mater., 7 435 –441 (2008). NMAACR 1476-1122 Google Scholar


F. Balzarotti et al., “Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes,” Science, 355 (6325), 606 –612 (2017). SCIEAS 0036-8075 Google Scholar


M. J. Rust, M. Bates and X. Zhuang, “Sub diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods, 3 (20), 793 –796 (2006). 1548-7091 Google Scholar


E. Betzig et al., “Breaking the diffraction barrier: optical microscopy on a nanometric scale,” Science, 251 (5000), 1468 –1470 (1991). SCIEAS 0036-8075 Google Scholar


E. T. F. Rogers et al., “A super-oscillatory lens optical microscope for subwavelength imaging,” Nat. Mater., 11 432 –435 (2012). NMAACR 1476-1122 Google Scholar


G. H. Yuan, E. T. F. Rogers and N. I. Zheludev, “Achromatic super-oscillatory lenses with sub-wavelength focusing,” Light Sci. Appl., 6 e17036 (2017). Google Scholar


S. Gazit et al., “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express, 17 (16), 23920 –23946 (2009). OPEXFF 1094-4087 Google Scholar


A. Szameit et al., “Sparsity-based single-shot subwavelength coherent diffractive imaging,” Nat. Mater., 11 455 –459 (2012). NMAACR 1476-1122 Google Scholar


P. Sidorenko et al., “Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects,” Nat. Commun., 6 8209 (2015). NCAOBW 2041-1723 Google Scholar


F. M. Huang and N. I. Zheludev, “Super-resolution without evanescent waves,” Nano Lett., 9 1249 –1254 (2009). NALEFD 1530-6984 Google Scholar


C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J., 27 379 –423 (1948). BSTJAN 0005-8580 Google Scholar


M. C. André, “Étude de la Diffraction dans les Instruments d’Optique; son Influence sur les Observations Astronomiques,” Ann. de l’École Norm. Sup., 5 275 –354 (1876). Google Scholar


L. Rayleigh, “Investigations in optics, with special reference to the spectroscope,” Philos. Mag. J. Sci., 8 261 –274 (1879). Google Scholar


A. Sentenac, P. C. Chaumet and K. Belkebir, “Beyond the Rayleigh criterion: grating assisted far-field optical diffraction tomography,” Phys. Rev. Lett., 97 243901 (2006). PRLTAO 0031-9007 Google Scholar


S. Inampudi, N. Kuhta and V. A. Podolskiy, “Interscale mixing microscopy: numerically stable imaging of wavelength-scale objects with sub-wavelength resolution and far field measurements,” Opt. Express, 23 (3), 2753 –2763 (2015). OPEXFF 1094-4087 Google Scholar


C. M. Roberts et al., “Interscale mixing microscopy: far-field imaging beyond the diffraction limit,” Optica, 3 (8), 803 –808 (2016). Google Scholar


F. Le Clerc, L. Collot and M. Gross, “Numerical heterodyne holography with two-dimensional photodetector arrays,” Opt. Lett., 25 (10), 716 –718 (2000). OPLEDP 0146-9592 Google Scholar


G. T. di Francia, “Resolving power and information,” J. Opt. Soc. Am., 45 (7), 497 –501 (1955). JOSAAH 0030-3941 Google Scholar


P. B. Fellgett and E. H. Linfoot, “On the assessment of optical images,” Philos. Trans. R. Soc. Ser. A, 247 369 –407 (1955). PTRMAD 1364-503X Google Scholar


N. J. Bershad, “Resolution, optical-channel capacity and information theory,” J. Opt. Soc. Am., 59 157 –163 (1969). JOSAAH 0030-3941 Google Scholar


E. L. Kosarev, “Shannon’s superresolution limit for signal recovery,” Inverse Prob., 6 55 –76 (1990). INPEEY 0266-5611 Google Scholar


B. T. Draine and P. J. Flatau, “Discrete dipole approximation for scattering calculations,” J. Opt. Soc. Am. A, 11 (4), 1491 –1499 (1994). JOAOD6 0740-3232 Google Scholar


W. C. Chew, Waves and Fields in Inhomogeneous Media, 2nd ed., IEEE Press, New York (1995). Google Scholar


T. J. Cui et al., “Study of resolution and super resolution in electromagnetic imaging for half-space problems,” IEEE Trans. Antennas Propag., 52 (6), 1398 –1411 (2004). IETPAK 0018-926X Google Scholar


M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed., Cambridge University Press, Cambridge (1999). Google Scholar


E. Narimanov, “Hyperstructured illumination,” ACS Photonics, 3 (6), 1090 –1094 (2016). Google Scholar


D. Slepian, “Prolate spheroidal wave functions, Fourier analysis and uncertainty. V: the discrete case,” Bell Syst. Tech. J., 57 (5), 1371 –1430 (1978). BSTJAN 0005-8580 Google Scholar


H. J. Landau, “On the eigenvalue behavior of certain convolution equations,” Trans. Am. Math. Soc., 115 242 –256 (1965). Google Scholar


D. Slepian and E. Sonnenblick, “Eigenvalues associated with prolate spheroidal wave functions of zero order,” Bell Syst. Tech. J., 44 (8), 1745 –1759 (1965). BSTJAN 0005-8580 Google Scholar


Evgenii Narimanov is a professor of electrical and computer engineering at Purdue University. He is a fellow of OSA and IEEE.

© The Author. Published by SPIE and CLP under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Evgenii Narimanov "Resolution limit of label-free far-field microscopy," Advanced Photonics 1(5), 056003 (1 November 2019).
Received: 17 September 2019; Accepted: 15 October 2019; Published: 1 November 2019

