Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and
tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited size of sensors) and
instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy,
and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur the images.
Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution
(SR). The stability of these methods depends on having multiple images of the same scene. Differences
between the images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art
SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between
images, but they lack any means of estimating the blurs. In this paper, after reviewing current
SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a
variational method that minimizes a regularized energy function with respect to the high-resolution image and
the blurs, thereby establishing a unified framework for estimating both simultaneously. Estimating the blurs
automatically yields the subpixel shifts, which is essential for good
SR performance. Second, we describe an innovative learning-based SR algorithm built on a neural architecture.
Comparative experiments on real data demonstrate the robustness and utility of both methods.
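The variational approach described above can be sketched in a generic form. This is not the authors' exact formulation, only a common structure for joint blind-deconvolution/SR energies: here $u$ denotes the unknown high-resolution image, $h_k$ the unknown blur of the $k$-th observation, $g_k$ the $k$-th acquired low-resolution image, $D$ a decimation (downsampling) operator, and $Q$ and $R$ hypothetical regularization terms with weights $\lambda$ and $\gamma$:

```latex
E(u, \{h_k\}) \;=\; \sum_{k} \bigl\| D(h_k * u) - g_k \bigr\|^2
  \;+\; \lambda\, Q(u) \;+\; \gamma \sum_{k} R(h_k)
```

The first term enforces consistency with the observed data; such an energy is typically minimized by alternating between updates of the image $u$ and the blurs $\{h_k\}$. Because the $h_k$ absorb translational components of the degradation, minimizing over them implicitly recovers the subpixel shifts.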