Resolution Limit of Label-free Far-field Microscopy

We derive the fundamental limit to the resolution of far-field optical imaging, and demonstrate that, while a bound to the resolution of a fundamental nature does exit, contrary to the conventional wisdom it is neither exactly equal to nor necessarily close to Abbe's estimate. Our approach to imaging resolution that combines the tools from the physics of wave phenomena and the methods of information theory, is general, and can be extended beyond optical microscopy, to e.g. geophysical and ultrasound imaging.

High resolution optical imaging holds the key to the understanding of fundamental microscopic processes both in nature and in artificial systems -from the charge carrier dynamics in electronic nano-circuits [13] to the biological activity in cellular structures. [14] However, optical diffraction that prevents the "squeezing" of light into the dimensions much smaller than its wavelength, does not allow a straightforward extension of the conventional optical microscopy to the direct imaging of such subwavelength structures as cell membranes, individual viruses or large protein molecules. As a result, recent decades have seen increasing interest in developing "super-resolution" optical methods that allow to overcome this diffraction barrier -from near-field optical microscopy [15] to structured illumination imaging [16] to metamaterials-based super-resolution [17] to twophoton luminescence and stimulated emission depletion microscopy [6] to stochastic optical reconstruction imaging [18] and photoactivated localization microscopy. [5] In particular, there is an increasing demand for the approach to optical imaging that is inherently label-free and does not rely on fluorescence, operates on the sample that is in the far-field from all elements of the imaging system, and offers the resolution comparable to that of the fluorescent microscopy. While seemingly a tall order, this task has recently found two possible solutions -that approach the problem from the "hardware" and "algorithmic" sides respectively. The former approach relies on the phenomenon of "super-oscillations" -where the band-limited function can, and -when properly designed -does oscillates faster that its fastest Fourier component. The actual super-oscillatory lenses that implement this behavior, have been designed and fabricated, [11,12] and optical resolution exceeding the conventional Abbe's limit, has been demonstrated in experiment. [11] The second approach relies on new methods of processing the "diffraction-limited" data, taking full advantage of the fact that actual targets (and especially biological samples) are often inherently sparse. [14] The resulting resolution improvement beyond the Abbe's limit due to this improved data processing has been demonstrated both in numerical simulations and in experiment. [7][8][9] Far-field optical resolution beyond the Abbe's limit in a scattering, rather than fluorescence -based approach, observed in Refs. [7][8][9][10][11][12], clearly demonstrates that Abbe's bound of half-wavelength (and its quarter-wavelength counterpart for structured illumination) is not a fundamental limit for optical imaging. This raises the key question of whether there's in fact a fundamental bound to the optical resolution -as opposed to "practical" limitations due to detector noise, imaging system imperfections, data processing time limits in the case when image reconstruction corresponds to an NP-complete problem, etc. Furthermore, the knowledge of the corresponding fundamental limit, if such exists, and the physical mechanism behind it, would help finding the way to the system that offers the optimal performance -just as deeper understanding of thermodynamics and Carnot's limit helped the design of practical heat engines.
In the present work, we show that there is in fact a fundamental limit on the resolution of far-field optical imaging, which is however much less stringent than Abbe's criterion. The presence of any finite amount of noise in the system, regardless of how small is its intensity, leads to a fundamental limit on the optical resolution, that can be expressed in the form of an effective uncertainly re-lation. This limit has essential information-theoretical nature, and can be connected to the Shannon's theory of information transmission in linear systems. [19] Definition of the resolution limit We define the diffraction limit ∆ as the shortest spatial scale of the object whose geometry can still be reconstructed, error-free, from the far-field optical measurements in the presence of noise. [20] Without loss of generality, one can then assume that the object is composed of arbitrary number of point scatterers of arbitrary amplitudes located at the nodes of the grid with the period ∆, as any additional structure in the sources (or scatterers [21]) or variations in position, will add to the information that needs to be recovered from far-field measurement for the successful reconstruction of the geometry of the object.
Furthermore, the essential "lower bound" nature of ∆ further allows to reduce the problem to that of an effectively one-dimensional target (formed by line, rather than point, sources) -since, as it was already known to M. André [22] and L. Raleigh, [23] line sources are "more easily resolvable" than point sources.
To calculate the fundamental resolution limit, is is therefore sufficient to consider the model system of an array of line "sources" of arbitrary (including zero) amplitudes, located at the node points of the grid with the period ∆ -see Fig. 1(a). Note that, in term of the information that is detected in the far field and the information that is necessary and sufficient for the target reconstruction, this problem is identical to that of a step mask where thickness and/or permittivity changes at the nodes of the same grid by the amounts proportional to the amplitudes of the corresponding line sources (as the point source distribution corresponds to the spatial derivative of the mask "profile") -see Fig. 1 Note that the reduction of the original problem to that of an effectively one-dimensional profile is not a simplification for the sake of convenience or reduction of the mathematical complexity of the problem. It is exactly this "digitized" one-dimensional profile that corresponds to the smallest "resolvable" spatial scale among all objects with a low bound on their spatial variations, and therefore defines the fundamental resolution limit. Furthermore, in many cases the actual object is formed by two (or more) materials that form sharp interfaces. In this case, the step mask that is equivalent to our point source model, offers an adequate representation of the actual target.
However, even within the original framework of "resolving" two point sources, [23] the result clearly depends on the difference of their amplitudes -with increasing disparity between the two leading to progressively worse "resolution". The "ultimate" resolution limit ∆ therefore corresponds to the case of identical point sources (or subwavelength scatterers), which are present only in an (unknown) fraction of the grid nodes. Note that such digital mask corresponds to the common case of a pattern formed by a single material (and e.g. the surrounding air) -see Fig. 1 The schematic representation of the imaging set-up, for the object formed by an array of small particles / lines (a) and a (binary) mask (b). D labels the position of a (coherent) detector, L is the size of the object (and equivalently the imaging aperture), and R is the distance from the object to the detector; in the far field R L.
When the distance to the detector R is much larger than the aperture L, R L (see Fig. 1), for each polarization we find (see Appendix A) where E 0 is the incident field "illuminating" the target, i is the (integer) index that labels the (point) scatterers with the corresponding polarizabilities α i , ρ i ≡ (x i , y i ), k ≡ (k x , k y ) is the wavevector with the magnitude |k| < ω/c ≡ k 0 , ω is the light frequency, and c is the speed of light (in the medium surrounding the target).
Equivalently, for the case of the object in the form of a (dielectric) mask (see Fig. 1(b)), we obtain where ∆ is the difference between the dielectric permittivities of the object and the background.
Here n (k) corresponds to the effective noise which includes the contributions from all origins (detector dark currents, illumination field fluctuations, etc.). Using data for imaging with different electromagnetic field polarizations, the effective noise can be correspondingly reduced.
Note that the model of Eqn. (1) or its equivalent (2) assumes coherent detection of the electromagnetic field in the far zone. This is essential for the definition of the fundamental resolution limit, as the phase information is in fact available in the far field, and can be measured even with an intensity only sensitive detector using e.g. optical heterodyne approach, [24] so that any failure to obtain the corresponding information in a given experimental setup cannot be attributed to the fundamental resolution limit of optical imaging.
Finally, for the calculation of the fundamental resolution limit ∆ we must assume the large aperture limit k 0 L 1. While the case of a small aperture k 0 L ≤ 1 can be easily implemented in the actual experimental setup (albeit at the cost of dramatic reduction in the field of view), the aperture in a close proximity to the object represents an example of a near-field probe, and this setup cannot be treated as a true far-field imaging.

Information-theoretical framework
To derive the fundamental limit on the resolution of optical imaging, we calculate the total amount of information about the object that can be recovered in the far field. As our Eqn. (1) can be interpreted as the input (E 0 ) -output ( {s} ) relation of a linear information channel, the amount of the actual information carried from the object to the far field detector, can be calculated using the standard methods of the information theory. [19] The resolution limit then follows from the requirement of the recovered information being sufficient to reconstruct the target: When the object is composed of M different materials (or is formed by an array of point sources with M different levels of amplitude), additional information is needed for its reconstruction, which leads to a more stringent bound on the spatial resolution, The actual transmitted information T can be obtained from the mutual information functional [19] Here, the entropy H [{s}] is the measure of the information received at the detector array.
However, as the system is noisy, for any output signal, there is some uncertainty of what was the originating field at the mask. The conditional entropy H [{s} | E 0 ] at the detector array for a given E 0 represents this uncertainty: Substituting the resulting analytical expressions for H . (8) Here SNR is the effective signal-to-noise ratio measured at the detector array and Super-resolution object reconstruction for a binary mask. The inset shows the schematics of the object profile. The main panel plots the error probability in the recovered profile, as a function of the effective signal-to-noise ratio, SNR. The data shown was obtained for 10000 different realizations. The boundary separating the light-red and light-green background, corresponds to the value of the signalto-noise ratio corresponding to ∆ sufficient to resolve the λ/16 spacing (see panel (a)). The red arrow indicates the minimum value of SNR when the numerical reconstruction produces no errors.

The Discussion
While Eqn. (8) allows for an unlimited resolution in a noise-free environment, even a relatively low noise dramatically alters this picture. With the weak logarithmic dependence of the resolution limit on the SNR, to reduce the resolution limit by e.g. a factor of ten, the signal to noise ratio needs to be increased by six orders of magnitude.
At the same time, the spatial resolution limit ∆ M depends on the effective "uncertainty" in the range of permittivity variations in the object that is being imagedthe simpler is the structure of the target, the easier is the task of finding its geometry. The ultimate value ∆ is then achieved in the case of a binary mask (i.e. the object that is formed by only two materials), and represents the fundamental bound to the resolution. In the case of a higher complexity in the composition of the target, the actual resolution limit ∆ M is well above ∆. When the number of materials (with their corresponding permittivities) that the object is composed of, M , is known a priori, the corresponding resolution limit ∆ M is defined by Eqn. (4). However, when no a priori information whatsoever is available, the limit to the resolution can be expressed as the effective uncertainty relation, that offers the lower bound on the product of the scaled spatial resolution δ x and the amplitude resolution δ . In the case when the object is composed of transparent materials (Im [ ] = 0) we obtain where F (t) ≡ 1/ log 2 (1/t) , the scaled spatial resolution δ x is defined as the ratio of ∆ to the Abbe's limit, and the scaled amplitude resolution corresponds to the uncertainty in the permittivity δ that is normalized to the difference between the smallest ( min ) and largest ( max ) permittivities in the object, δ ≡ δ /( max − min ). For a binary mask, δ = ( max − min ) /2, so that the scaled amplitude resolution δ = 1/2, and F (δ ) = 1, which reduces the uncertainly relation (11) to the fundamental limit ∆ of Eqn. (8).
For imaging with no a priori information, with the optimal data reconstruction algorithm Eqn. (11) represents a trade-off between the uncertainties in position and the amplitude of the recovered image. Note that, as follows from Eqn. (11), spatial resolution at the Abbe's limit corresponds to the relative amplitude uncertainty of at least 1/ √ 1 + 2 SNR. In the case of imaging a binary mask or a pattern of identical subwavelength particles, the actual resolution can reach the value of ∆ -which for a high signal-tonoise ratio can be substantially below the Abbe's limit. For example, in the structured illumination setup with SNR ∼ 10 −6 , we find ∆ ∼ λ/100. While reaching all the way to this limit with the data obtained in the standard imaging setup may be highly nontrivial, a straightforward algorithm described below that implements the amplitude constraint, offers spatial resolution well below the Abbe limit -see Fig. 2.
In the algorithm whose performance is shown in Fig.  2, the subwavelength binary mask (see the inset to Fig.  2) is recovered from its (band-limited) Fourier spectrum measured in the far-field, together with the constraint that limits its profile to only two values. While a finite amount of noise in the far-field measurements inevitably leads to errors, with the increase of the effective signalto-noise ratio SNR the corresponding error probability P err rapidly goes to zero. In particular, for the resolution of λ/16 in the example of Fig. 2, for the SNR beyond the value indicated by the red arrow, the numerical calculation with an ensemble of 10000 different realizations, showed no errors.
The light-red and light-green color backgrounds in Fig.  2 corresponds to the parameter range that respectively violates and satisfies the fundamental resolution limit of Eqn. (8). Note that the boundary separating these regimes, corresponds to the signal-to-noise ratio that is substantially less than the smallest value (shown by red arrow in Fig. 2) for the error-free performance in the data recovery -indicating that the reconstruction algorithm is far from optimal. Still, even with this performance, the example of Fig. 2 indicates that even a straightforward implementation of an a priori constraint on the object geometry (binary mask rather than an arbitrary profile), offers object reconstruction from diffraction-limited data with deeply subwavelength resolution (four times below Abbe's limit in the example of Fig. 2).
Additional a priori information about the object further reduces the resolution limit of optical imaging. For different cases of a priori available information about the target, particularly important is the case of sparse objects, as this property is wide spread in both natural and artificial systems. [14] If the target is a priori known to be sparse, with the effective sparsity parameter β (which can be defined as the fraction of empty "slots" in the grid superimposed on the target), we find For the numerical example studied in Ref. [7], with β 0.03 and SNR ∼ 10 2 , the resolution limit ∆ (β) 0.025λ. Accurate numerical reconstruction of the features on the scale of ∼ λ/10 demonstrated in Ref. [7], is therefore fully consistent with the fundamental limit ∆ (β) .
Imaging with a small aperture The explicit expression for the resolution limit in Eqn. (8) is presented in the large amplitude limit k 0 L 1. While this corresponds to the most common regime of actual optical microscopy, using a small aperture that's comparable to the free space wavelength can offer its own advantages. The resulting effect on the resolution limit is accounted for by the (positive definite) term O (1/k 0 L) in Eqn. (8), which further reduces ∆.
Note that this was precisely the regime where superoscillation based imaging was demonstrated in experiment, as the use of small aperture was essential to block the (exponentially) strong power side-lobes. While the resulting improvement of the resolution is consistent with the fundamental limit established in the present work, our expressions (4), (8), (12) do not explicitly indicate the advantage of super-oscillations approach. This should be contrasted to the case of sparsity-based imaging where its key parameter β explicitly enters the resolution limit in (12).
Indeed, while the super-oscillations imaging does offer subwavelength resolution -this improvement is the general feature of all structured illumination methods optimized for small aperture (or equivalently for imaging small isolated objects), and is not limited to the super-oscillation approach. This behavior is illustrated in Figs. 3, where a subwavelength target (red pattern in the center of Fig. 3(a)) is illuminated by the Bessel beam propagating in the direction normal to the plane of the picture. The beam axis is "focused" to the center of the target (see Fig. 3(a)), so that the illuminating field within the aperture not only shows no super-oscillations but in fact does not oscillate at all -see the field profiles for different orders m of the illuminating Bessel beams in Fig. 3(b)). Nevertheless, the standard data recovery algorithm clearly shows deep subwavelength resolution of ∼ λ/10 -see Fig. 3(c), despite having no a priori information about the structure of the target.
It should however be noted, that the super-oscillations based approach, when implemented to form a subwavelength focus spot that's used to scan the object, [11,12] is naturally suitable for optical imaging limited to incoherent detection, which offers substantial practical advantages in the actual implementation of the system.

Conclusions
In conclusion, we have derived the fundamental resolution limit for far-field optical imaging, and demon-strated that it is generally well below the standard halfthe-wavelength estimate. Our results also apply to other methods that rely on wave propagation and scatteringsuch as e.g. geophysical and ultrasound imaging. vidual (tensor) polarizabilities α i , leading to Note that these two formulations are essentially equivalent, as arbitrary dielectric permittivity profile can be expressed in terms of the electromagnetic response of a large group of small particles. [31] While Eqns. (A1) and (A2) are linear in the electrical field, when treated as inverse problems for the reconstruction of the unknown profile ∆ (r) and the distribution α i from the given illumination field E 0 (r) and the scattering data for E (r), they are essentially nonlinear in ∆ and α. [29] Physically, this nonlinearity originates from the multiple scattering effects within the object, [28] when the actual field acting on the given object, E (r), in addition to the incident field E 0 , also includes the contributions from the "secondary" waves scattered by the other parts of the object. While these multiple scattering corrections can be substantial in acoustic and microwave scattering, [28] for optical imaging of low-contrast media these are generally small. [30] Note however that, when substantially present, these "secondary" waves due to multiple light scattering, can have a profound effect on the imaging resolution [28] -as the subwavelength structure of the object now functions as a high spatial frequency grating forming an effective structured illumination pattern.
In the language of scattering theory, the conventional optical imaging and microscopy corresponds to the limit of weakly scattering semi-transparent objects, that neglects multiple scattering contributions. The resulting first order Born approximation [30] reduces the acting field in the integral of Eqn. (A1) and the sum of Eqn. (A2) to the (a priori known) illumination field E 0 , thus leading to a linear inverse problem.
The resulting expressions can be further simplified in the radiation zone, when the detectors are placed in the far-field of the object, k 0 |r − r i | 1, thus reducing e.g. Eqn. (A2) to When the distance to the detector r is much larger than the aperture L, r L, for the far-field signal detected in the given polarization and the wavevector k (see Fig.  1) we find where ρ i ≡ (x i , y i ), k ≡ (k x , k y ) with the magnitude |k| < k 0 , and n is the noise in the detector positioned in the corresponding detector (see Fig. 1). Similarly, if the target is represented with the 2D permittivity mask (x, y) (corresponding to the Motti projection [32] of the actual 3D permittivity of the object), we obtain s (k) = d 2 ρ ∆ (ρ) E 0 (ρ i ) exp (ik · ρ i ) + n (k) . (A5)