Proceedings Article | 19 February 2020
Proc. SPIE. 11310, Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR)
KEYWORDS: Human-machine interfaces, Mixed reality, Visualization, Augmented reality, Human-computer interaction, Head-mounted displays, Cognition, Virtual reality
Eye-tracking hardware and software are being rapidly integrated into mixed reality (MR) technology. Cognitive science and human-computer interaction (HCI) research demonstrate several ways eye-tracking can be used to gauge user characteristics, intent, and status, as well as to provide active and passive input control to MR interfaces. In this paper, we argue that eye-tracking can ground MR technology in the cognitive capacities and intentions of users, and that such human-centered MR is important for MR designers and engineers to consider. We detail relevant and timely research in eye-tracking and MR and offer suggestions and recommendations to accelerate the development of eye-tracking-enabled human-centered MR, with a focus on recent research findings. We identify several promises that eye-tracking holds for improving MR experiences. In the near term, these include user authentication, gross interface interactions, monitoring of visual attention across real and virtual scene elements, and adaptive graphical rendering enabled by relatively coarse eye-tracking metrics. In the far term, hardware and software advances will enable gaze-depth-aware foveated MR displays and attentive MR user interfaces that track user intent and status using fine and dynamic aspects of gaze. Challenges such as current technological limitations, difficulties in translating lab-based eye-tracking metrics to MR, and heterogeneous MR use cases are considered alongside cutting-edge research working to address them. With a focused research effort grounded in an understanding of these promises and challenges for eye-tracking, human-centered MR can be realized, improving both the efficacy and the user experience of MR.