From Event: SPIE Optical Engineering + Applications, 2019
Subjective quality feedback from actual human viewers is crucial for reliable evaluation of video processing and encoding solutions and their configurations. However, collecting it is generally a time-consuming and expensive process. Therefore, video quality is often evaluated using objective measures, which may correlate poorly with actual subjective results. To address this issue, we have developed VISTA (VIsual Subjective Testing Application), an easy-to-use application that plays pairs of video sequences synchronously side by side for visual comparison, with a user interface for indicating the relative subjective quality of the two sequences. In addition, we developed Auto-VISTA, a system that automates the evaluation process: it receives guidelines for the required testing session as input, prepares the content to be compared, launches the application on a crowdsourcing Internet marketplace (such as Amazon Mechanical Turk), and collects and analyzes the results. Large-scale subjective feedback thus becomes cheap and accessible, which in turn enables fast and reliable evaluation cycles for different video encoding and processing solutions, or for tuning the configuration and settings of a given solution.
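The abstract does not specify how the collected pairwise votes are analyzed; as a minimal illustrative sketch (not the authors' method), the final analysis step of such a pipeline could aggregate side-by-side preference votes into a preference rate with a simple normal-approximation confidence interval. The function name and vote encoding below are assumptions for illustration only.

```python
from math import sqrt

def preference_summary(votes):
    """Aggregate pairwise A-vs-B votes into a preference rate for sequence A.

    Hypothetical vote encoding (not from the paper):
      +1 = viewer preferred A, -1 = viewer preferred B, 0 = no preference.
    Ties are split evenly between A and B. Returns (rate, (lo, hi)) where
    (lo, hi) is a 95% normal-approximation confidence interval.
    """
    n = len(votes)
    if n == 0:
        raise ValueError("no votes collected")
    # Count ties as half a vote for each side.
    a_score = sum(1.0 if v > 0 else 0.5 if v == 0 else 0.0 for v in votes)
    p = a_score / n
    half_width = 1.96 * sqrt(p * (1.0 - p) / n)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))
```

With enough crowdsourced votes per pair, the confidence interval narrows, which is what makes the large-scale, overnight collection described above useful for distinguishing encoder configurations whose quality differs only slightly.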
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Tamar Shoham, Dror Gill, Sharon Carmel, and Dan Julius, "Overnight large-scale subjective video quality assessment using automatic test generation and crowdsourcing Internet marketplaces," Proc. SPIE 11137, Applications of Digital Image Processing XLII, 111370R (Presented at SPIE Optical Engineering + Applications: August 13, 2019; Published: 6 September 2019); https://doi.org/10.1117/12.2528012.