Proc. SPIE. 10033, Eighth International Conference on Digital Image Processing (ICDIP 2016)
KEYWORDS: Super resolution, Magnetic resonance imaging, Image processing, Image restoration, Image resolution, Medical imaging, Associative arrays, 3D image processing, 3D magnetic resonance imaging
Clinical practice requires multiple scans with different modalities for diagnostic tasks, but the scans do not all produce images of the same resolution. This discrepancy can affect subsequent analyses such as registration or multimodal segmentation, so super-resolution (SR) of clinical images is needed. In this paper, we present a unified SR framework that takes advantage of the two primary SR approaches: self-learning SR and learning-based SR. In the self-learning SR stage, we obtain a second-order approximation of the mapping functions between low-resolution (LR) and high-resolution (HR) image patches by leveraging a local regression model and multi-scale self-similarity. In the learning-based SR stage, these patch relations are further refined using information from a reference HR image. Extensive experiments on open-access MRI images validate the effectiveness of the proposed method. Compared with other advanced SR approaches, the proposed method produces more realistic HR images with sharp edges.
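The self-learning idea above can be illustrated with a minimal sketch: generate LR-HR example pairs from the input image itself by downscaling, then fit a regression that maps coarse patches back to their co-located originals. This is only a first-order (linear ridge) stand-in for the paper's second-order local regression, and the helper names (`downscale`, `patches`, `learn_lr_hr_mapping`), the average-pool degradation model, and the nearest-neighbor re-upscaling are all illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def downscale(img, factor=2):
    """Average-pool downscaling (simple stand-in for blur + decimation)."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    return img[:h2, :w2].reshape(h2 // factor, factor,
                                 w2 // factor, factor).mean(axis=(1, 3))

def patches(img, size=5):
    """All overlapping size x size patches, flattened to rows."""
    h, w = img.shape
    rows = [img[i:i + size, j:j + size].ravel()
            for i in range(h - size + 1) for j in range(w - size + 1)]
    return np.asarray(rows)

def learn_lr_hr_mapping(hr_img, factor=2, size=5, lam=1e-3):
    """Fit a ridge-regression map from interpolated-LR patches to HR
    patches, using only self-examples drawn from the input image."""
    lr = downscale(hr_img, factor)
    lr_up = np.kron(lr, np.ones((factor, factor)))  # crude return to HR grid
    h2, w2 = lr_up.shape
    X = patches(lr_up, size)             # features: coarse-image patches
    Y = patches(hr_img[:h2, :w2], size)  # targets: co-located HR patches
    # Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W
```

At test time, the learned map would be applied to patches of an interpolated version of the LR input, with overlapping predictions averaged to form the HR estimate.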
The challenge of learning-based super-resolution (SR) is to predict the relationships between low-resolution (LR) patches and their corresponding high-resolution (HR) patches. Because they learn these relationships from external training images, existing learning-based SR approaches are sensitive to how relevant the training data are to the LR input image. We therefore propose a single-image SR method that learns the LR-HR relations from the given LR image itself rather than from any external images. The method exploits both a local regression model and nonlocal patch redundancy: the local regression model derives the mapping functions between self-generated LR-HR example patches, while the nonlocal self-similarity yields a high-order derivative estimate of the derived mapping function. Moreover, to fully exploit the multiscale similarities within the LR input image, we accumulate the previous reconstruction results and their corresponding LR versions as additional example patches for the subsequent estimation, and adopt a gradual magnification scheme that reaches the desired zoom factor step by step. Extensive experiments on benchmark images validate the effectiveness of the proposed method. Compared with other state-of-the-art SR approaches, it produces photorealistic HR images with sharp edges.
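The gradual magnification scheme can be sketched as a loop of small zooms that compound to the target scale, where in the full method each intermediate result would be refined by the learned patch regression and fed back as an additional self-example. The sketch below uses plain bilinear interpolation as a stand-in for one SR refinement step; the function names and the equal-step schedule are illustrative assumptions, not the paper's exact procedure.

```python
import math
import numpy as np

def upscale_bilinear(img, scale):
    """Pure-NumPy bilinear interpolation to a rounded target size."""
    h, w = img.shape
    nh, nw = int(round(h * scale)), int(round(w * scale))
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def gradual_sr(lr_image, total_scale=2.0, step=1.25):
    """Reach the target zoom through several small magnifications
    instead of one large jump."""
    n_steps = max(1, math.ceil(math.log(total_scale) / math.log(step)))
    per_step = total_scale ** (1.0 / n_steps)  # equal sub-steps
    img = np.asarray(lr_image, dtype=np.float64)
    for _ in range(n_steps):
        # Stand-in for one refinement pass; the full method would also
        # harvest (img, downscaled img) pairs here as new self-examples.
        img = upscale_bilinear(img, per_step)
    return img
```

Small steps keep each interpolation close to the identity, which is what makes the self-example patch statistics of one stage usable for the next.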