Paper
Depth-based fusion network for human attention prediction in 3D light field
24 November 2021
Ziling Wen, Xinzhu Sang, Binbin Yan, Peng Wang, and Duo Chen
Proceedings Volume 12066, AOPC 2021: Micro-optics and MOEMS; 1206612 (2021) https://doi.org/10.1117/12.2606165
Event: Applied Optics and Photonics China 2021, 2021, Beijing, China
Abstract
The human visual attention mechanism enables humans to extract the most important cues from large amounts of information. However, most existing methods for modeling human visual attention target 2D displays, and little is known about how people allocate visual attention on 3D displays. This paper first presents a saliency dataset consisting of human eye-fixation data recorded while observers briefly glanced at different 3D scenes, characterizing the distribution of human visual attention. Based on an analysis of this dataset, a convolutional neural network approach is proposed for predicting human visual attention on a 3D light field display. The network consists of three parts: two-way feature extraction, feature fusion, and prediction output. Compared with saliency prediction models designed for 2D displays, the proposed model predicts the distribution of human visual attention in a 3D light field more accurately. This research facilitates further investigation of 3D applications such as 3D device evaluation and 3D content production.
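As a minimal sketch of the three-part structure described above, assuming the two extraction branches take an RGB view and a depth map as inputs and using illustrative layer sizes (the abstract does not specify the modalities or layer configurations), a PyTorch version might look like the following:

import torch
import torch.nn as nn

class TwoWayFusionNet(nn.Module):
    """Illustrative two-way extraction / fusion / prediction network.
    All module names and layer sizes are assumptions, not the paper's
    actual architecture."""
    def __init__(self):
        super().__init__()
        # Two-way feature extraction: one branch per modality (assumed RGB + depth).
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Feature fusion: concatenate branch features along the channel axis,
        # then mix them with a convolution.
        self.fusion = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
        )
        # Prediction output: a single-channel saliency map in [0, 1].
        self.head = nn.Sequential(
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(self.fusion(fused))

# Usage: predict a saliency map for one 256x256 RGB view plus its depth map.
# saliency = TwoWayFusionNet()(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))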
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ziling Wen, Xinzhu Sang, Binbin Yan, Peng Wang, and Duo Chen "Depth-based fusion network for human attention prediction in 3D light field", Proc. SPIE 12066, AOPC 2021: Micro-optics and MOEMS, 1206612 (24 November 2021); https://doi.org/10.1117/12.2606165
KEYWORDS: RGB color model, 3D displays, Visualization, 3D modeling, Eye models, Image processing, 3D visualizations