Comparing humans to automation in rating photographic aesthetics
Published: 6 March 2015
Abstract
Computer vision researchers have recently developed automated methods for rating the aesthetic appeal of a photograph. Machine learning techniques, applied to large databases of photos, reproduce the mean ratings of online viewers with reasonably good accuracy. However, owing to the many factors underlying aesthetics, it is likely that such techniques for rating photos do not generalize well beyond the data on which they are trained. This paper reviews recent attempts to compare human ratings, obtained in a controlled setting, to ratings provided by machine learning techniques. We review methods for obtaining meaningful ratings both from selected groups of judges and from crowdsourcing. We find that state-of-the-art techniques for automatic aesthetic evaluation are only weakly correlated with human ratings. This finding underscores the importance of collecting the data used to train automated systems under carefully controlled conditions.
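As an illustration of the kind of comparison the abstract describes (this is not the authors' code, and the rating values below are made-up example data), the agreement between human and automated aesthetic scores can be measured with a Spearman rank correlation:

```python
# Illustrative sketch: Spearman rank correlation between mean human
# ratings and automated aesthetic scores for five hypothetical photos.
def rank(xs):
    """Return 1-based ranks of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks

def spearman(x, y):
    """Spearman's rho via the no-ties formula 1 - 6*sum(d^2)/(n*(n^2-1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

human_means = [3.1, 4.5, 2.0, 4.8, 3.9]   # made-up mean viewer ratings
machine_scores = [4.0, 2.9, 3.5, 4.6, 2.5]  # made-up automated scores

rho = spearman(human_means, machine_scores)  # ≈ 0.2, i.e. weak correlation
```

A rho near 1 would indicate that the automated system ranks photos much as human judges do; values near 0, as in this made-up case, indicate weak agreement.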
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ramakrishna Kakarala, Abhishek Agrawal, Sandino Morales, "Comparing humans to automation in rating photographic aesthetics", Proc. SPIE 9408, Imaging and Multimedia Analytics in a Web and Mobile World 2015, 94080C (6 March 2015); doi: 10.1117/12.2084991; https://doi.org/10.1117/12.2084991
Proceedings paper, 10 pages

