Image restoration removes blur, caused for example by object motion or lens aberrations, from images by non-blind or blind deconvolution. The metrics commonly used to quantify the restoration are the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), often evaluated on only a small set of test images (such as Lena or Cameraman). Optical design research, by contrast, rarely uses PSNR or SSIM; instead, image quality metrics based on linear system theory, such as the modulation transfer function (MTF), quantify optical errors like spherical or chromatic aberration. In this article we investigate how different image restoration algorithms can be assessed with such image quality metrics. We start with synthetic image data of the kind used on camera test stands (e.g. the Siemens star), apply two different spatially variant degradation algorithms, and restore the original image both by a direct method (Wiener filtering within sub-images) and by an iterative method (the alternating direction method of multipliers, ADMM). We then compare the quality metrics (such as MTF curves) of the original, the degraded, and the restored image. As a first result we show that restoration algorithms sometimes fail on non-natural scenes, e.g. slanted-edge targets. Furthermore, these first results indicate a correlation between degradation and restoration: the restoration algorithms are not capable of removing the optically relevant errors introduced by the degradation, a fact that is neither visible in nor recoverable from the PSNR values. We discuss the relevance of these findings for the automotive industry, where image restoration may yield distinct advantages for camera-based applications, but where testing methods depend on the image quality metrics employed.
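The two building blocks mentioned above, PSNR as a quality metric and Wiener filtering as a direct restoration method, can be illustrated with a minimal sketch. This is not the paper's actual pipeline (which uses spatially variant degradation and sub-image Wiener filtering); it is a simplified, spatially invariant toy example using NumPy only, with an assumed Gaussian point-spread function, a checkerboard standing in for a synthetic test chart, and an assumed regularization constant `k`:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def gaussian_psf(n, sigma):
    """Centered n x n Gaussian point-spread function, normalized to unit sum."""
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(degraded, psf, k=1e-3):
    """Frequency-domain Wiener filter: F_hat = H* / (|H|^2 + k) * G,
    where k is an assumed (hand-tuned) noise-to-signal constant."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF spectrum, origin at (0, 0)
    G = np.fft.fft2(degraded)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

# Synthetic test target: a checkerboard as a crude stand-in for a Siemens star.
rng = np.random.default_rng(0)
n = 128
scene = 255.0 * (np.indices((n, n)).sum(axis=0) // 8 % 2)

# Degrade: spatially invariant Gaussian blur (via FFT) plus mild sensor noise.
psf = gaussian_psf(n, sigma=1.5)
H = np.fft.fft2(np.fft.ifftshift(psf))
degraded = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
degraded += rng.normal(0.0, 1.0, scene.shape)

# Restore and compare PSNR before and after.
restored = wiener_deconvolve(degraded, psf, k=1e-3)
print("PSNR degraded: %.1f dB, restored: %.1f dB"
      % (psnr(scene, degraded), psnr(scene, restored)))
```

In this toy setting the restored PSNR exceeds the degraded one, which is exactly the kind of improvement the article argues can be misleading: a higher PSNR does not by itself show that optically relevant errors, as measured by MTF-style metrics, have been removed.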