Unsupervised generation of artistic representations
Published: 31 January 2020
Proceedings Volume 11433, Twelfth International Conference on Machine Vision (ICMV 2019); 114332U (2020)
Event: Twelfth International Conference on Machine Vision, 2019, Amsterdam, Netherlands
While deep neural networks excel at a variety of visual tasks, obtaining large quantities of labeled data remains exorbitantly expensive or time-consuming, especially for pairs of photos and their artistic representations. To overcome the annotation burden, various solutions that exploit unlabeled data have been proposed recently. In this paper, we present a novel approach to the unsupervised domain adaptation problem that allows us to generate avatars from photos. Assembling a system of several neural networks, including a Generative Adversarial Network (GAN), trained entirely on unlabeled data, we studied the influence of various factors on the GAN training process and ultimately built a system superior to current analogues. In contrast to existing unsupervised domain adaptation approaches, the proposed solution is highly flexible: individual elements of the system can be tuned to achieve different visual results. In a user study, the proposed method achieved results approaching human-level quality.
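The adversarial training the abstract refers to can be illustrated in miniature. The sketch below is not the authors' multi-network avatar system; it is a generic, minimal GAN on toy 1-D data, with a one-parameter affine generator and a logistic-regression discriminator (all names, learning rates, and distributions are illustrative assumptions), trained with the standard non-saturating generator objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Illustrative toy setup: "real" data is N(3, 0.5); the generator maps
# standard-normal noise z to g(z) = a*z + b; the discriminator is
# d(x) = sigmoid(w*x + c). Both are updated with simultaneous gradient steps.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(3.0, 0.5, size=64)
    z = rng.normal(size=64)
    fake = a * z + b

    # Discriminator: gradient ascent on E[log d(real)] + E[log(1 - d(fake))].
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator: gradient ascent on E[log d(fake)] (non-saturating loss).
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# After training, the generator's output mean b should have drifted from 0
# toward the real data mean of 3.
print(a, b)
```

The same two-player dynamic underlies photo-to-avatar generation, just with convolutional networks in place of the affine map and logistic regression.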
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Roman Steinberg and Sergey Kastryulin "Unsupervised generation of artistic representations", Proc. SPIE 11433, Twelfth International Conference on Machine Vision (ICMV 2019), 114332U (31 January 2020);
