Paper
Comparing humans to automation in rating photographic aesthetics
6 March 2015
Proceedings Volume 9408, Imaging and Multimedia Analytics in a Web and Mobile World 2015; 94080C (2015) https://doi.org/10.1117/12.2084991
Event: SPIE/IS&T Electronic Imaging, 2015, San Francisco, California, United States
Abstract
Computer vision researchers have recently developed automated methods for rating the aesthetic appeal of a photograph. Machine learning techniques, applied to large databases of photos, mimic with reasonably good accuracy the mean ratings of online viewers. However, owing to the many factors underlying aesthetics, it is likely that such techniques for rating photos do not generalize well beyond the data on which they are trained. This paper reviews recent attempts to compare human ratings, obtained in a controlled setting, to ratings provided by machine learning techniques. We review methods to obtain meaningful ratings both from selected groups of judges and also from crowd sourcing. We find that state-of-the-art techniques for automatic aesthetic evaluation are only weakly correlated with human ratings. This shows the importance of obtaining data used for training automated systems under carefully controlled conditions.
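As an illustration of the kind of comparison the abstract describes, the sketch below computes rank and linear correlation between human mean ratings and automated aesthetic scores for the same photographs. This is not the paper's procedure; the data, variable names, and the choice of Spearman's rho and Pearson's r are assumptions made purely for illustration.

```python
# Illustrative sketch only (not from the paper): comparing human mean ratings
# with automated aesthetic scores via correlation. All values are hypothetical.
from scipy.stats import spearmanr, pearsonr

# Hypothetical mean ratings for the same set of photographs
human_mean_ratings = [3.2, 4.1, 2.5, 4.8, 3.9, 2.1]       # controlled-setting judges
machine_scores     = [0.61, 0.55, 0.40, 0.72, 0.50, 0.47]  # automated aesthetic model

rho, p_rank = spearmanr(human_mean_ratings, machine_scores)
r, p_linear = pearsonr(human_mean_ratings, machine_scores)

print(f"Spearman rho = {rho:.2f} (p = {p_rank:.3f})")
print(f"Pearson r    = {r:.2f} (p = {p_linear:.3f})")
# Correlations near zero would correspond to the weak agreement the abstract reports.
```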
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ramakrishna Kakarala, Abhishek Agrawal, and Sandino Morales "Comparing humans to automation in rating photographic aesthetics", Proc. SPIE 9408, Imaging and Multimedia Analytics in a Web and Mobile World 2015, 94080C (6 March 2015); https://doi.org/10.1117/12.2084991
KEYWORDS: Photography, Databases, Binary data, Machine learning, RGB color model, Cameras, Zoom lenses