Images and videos are subject to a wide variety of distortions during acquisition, digitizing, processing, restoration,
compression, storage, transmission and reproduction, any of which may result in degradation in visual quality.
That is why image quality assessment plays a major role in many image processing applications.
Image and video quality metrics can be classified according to several criteria, such as the application domain, the type of distortion addressed (noise, blur, etc.), and the information needed to assess quality (original image, distorted image, etc.).
In the literature, the most reliable way of assessing the quality of an image or a video is subjective evaluation, because human beings are the ultimate receivers in most applications. Subjective quality scores, obtained from a panel of human observers, have long been regarded as the most reliable form of quality measurement. However, this approach is too cumbersome, slow, and expensive for most applications.
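Such observer ratings are commonly summarized as a mean opinion score (MOS): the raw scores collected for each image are simply averaged. A minimal sketch (the 1-5 rating scale and the data are illustrative assumptions, not taken from this paper):

```python
def mean_opinion_score(ratings):
    """Average the raw observer ratings (e.g., on a 1-5 scale) for one image."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings from five observers for one distorted image
ratings = [4, 5, 3, 4, 4]
print(mean_opinion_score(ratings))
```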
Consequently, in recent years great effort has been devoted to developing quantitative measures. Objective quality evaluation is automated, runs in real time, and requires no user interaction. Ideally, such a quality assessment system would perceive and measure image or video impairments just as a human observer would.
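A classic example of such a quantitative measure is the peak signal-to-noise ratio (PSNR), which scores a distorted image against its original through the mean squared error between them. A minimal pure-Python sketch, assuming 8-bit grayscale images represented as nested lists (an illustration of the general idea, not a metric proposed in this paper):

```python
import math

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-sized 8-bit images."""
    n = 0
    squared_error = 0.0
    for row_o, row_d in zip(original, distorted):
        for o, d in zip(row_o, row_d):
            squared_error += (o - d) ** 2
            n += 1
    mse = squared_error / n
    if mse == 0:
        return float("inf")  # identical images: no distortion at all
    return 10.0 * math.log10(peak ** 2 / mse)

# Tiny hypothetical 2x2 images differing in two pixels
orig = [[100, 100], [100, 100]]
dist = [[101, 99], [100, 100]]
print(psnr(orig, dist))  # a high value in dB, since the distortion is slight
```

Higher PSNR means the distorted image is closer to the original; the metric is fully automatic but, as discussed below, does not always agree with human judgment.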
Quality assessment remains an active and evolving research topic because it is a central issue in the design, implementation, and performance testing of all such systems [4, 5].
The relevant literature and related work usually survey only metrics limited to a specific application domain. The main goal of this paper is to present a broader state of the art of the most widely used metrics across several application domains, such as compression, restoration, etc.
In this paper, we review the basic concepts and methods of subjective and objective image/video quality assessment and discuss their performance and drawbacks in each application domain. We show that while some domains have received considerable attention and several metrics have been developed for them, others remain underexplored and still require dedicated metrics.