Comparing humans to automation in rating photographic aesthetics
6 March 2015
Abstract
Computer vision researchers have recently developed automated methods for rating the aesthetic appeal of a photograph. Machine learning techniques, applied to large databases of photos, reproduce the mean ratings of online viewers with reasonably good accuracy. However, owing to the many factors underlying aesthetics, such techniques are unlikely to generalize well beyond the data on which they are trained. This paper reviews recent attempts to compare human ratings, obtained in a controlled setting, to ratings provided by machine learning techniques. We review methods for obtaining meaningful ratings both from selected groups of judges and from crowdsourcing. We find that state-of-the-art techniques for automatic aesthetic evaluation are only weakly correlated with human ratings. This finding underscores the importance of collecting the data used to train automated systems under carefully controlled conditions.
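The comparison described above is typically quantified with a rank-correlation coefficient between mean human ratings and automated scores. The following sketch (not the authors' code; all ratings are made-up illustrative values) computes Spearman's rho from scratch, which is the Pearson correlation of the two rank vectors:

```python
# Illustrative sketch: measuring agreement between mean human ratings and
# automated aesthetic scores with Spearman rank correlation.
# The data below are hypothetical, for demonstration only.

def ranks(values):
    """Assign 1-based average ranks (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-photo scores: mean human rating vs. automated score.
human = [3.2, 4.5, 2.1, 4.9, 3.8, 2.7]
machine = [0.41, 0.52, 0.60, 0.77, 0.35, 0.44]
rho = spearman(human, machine)
print(rho)  # a rho near 0 indicates the weak agreement the paper reports
```

A rho close to 1 would mean the automated system orders photos the same way human judges do; values near 0, as the paper reports for state-of-the-art methods, indicate only weak agreement.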
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ramakrishna Kakarala, Abhishek Agrawal, Sandino Morales, "Comparing humans to automation in rating photographic aesthetics", Proc. SPIE 9408, Imaging and Multimedia Analytics in a Web and Mobile World 2015, 94080C (6 March 2015); https://doi.org/10.1117/12.2084991