Head pose estimation using deep multitask learning
Abstract
Head pose estimation (HPE) plays a vital role in human–computer interaction and remains a challenging task due to individual differences. To mitigate this issue, we propose a simple yet effective deep multitask learning framework for joint HPE and face verification (FV), where FV acts as an auxiliary task to boost HPE performance. The framework comprises a backbone net, a data separate module, and two branches for HPE and FV, respectively. Because regions beyond the face provide useful information for HPE, whereas FV should focus only on face regions, the two kinds of regions, head and face, which share common feature representations in the backbone net, are separated by the data separate module and then fed into the appropriate branches. A Kullback–Leibler divergence loss and an L2-constrained softmax loss are attached to the ends of the HPE branch and the FV branch, respectively, to optimize the architecture. The proposed method is validated on three publicly available datasets: Pointing04, CAS-PEAL-R1, and CMU multi-PIE. The experimental results demonstrate that our method surpasses the state of the art, with up to a 16.38% improvement on the well-known Pointing04 benchmark. The best accuracies we report on these datasets are 89.53%, 99.74%, and 99.72%, respectively.
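To make the two branch losses concrete, here is a minimal NumPy sketch of how they are typically formulated. This is an illustration only, not the authors' implementation: the number of pose bins, the soft-label construction, and the scale constant `alpha` are assumptions (L2-constrained softmax conventionally rescales each feature vector to a fixed L2 norm `alpha` before the softmax classifier).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence_loss(target_dist, logits, eps=1e-12):
    """KL(target || prediction) for the HPE branch.

    target_dist: a soft label distribution over discrete pose bins
    (e.g., a Gaussian centered on the ground-truth angle -- an
    assumed construction, not specified here).
    """
    pred = softmax(logits)
    return float(np.sum(target_dist * (np.log(target_dist + eps)
                                       - np.log(pred + eps))))

def l2_constrained_features(features, alpha=16.0):
    """L2-constrained softmax front end for the FV branch:
    rescale each feature vector to a fixed norm alpha before
    feeding it to the softmax classifier. alpha=16 is an
    illustrative choice."""
    norms = np.linalg.norm(features, axis=-1, keepdims=True)
    return alpha * features / norms

# Illustrative usage with 3 pose bins and a 2-D identity feature:
logits = np.array([1.0, 2.0, 3.0])
soft_label = softmax(logits)            # perfect prediction -> KL ~ 0
print(kl_divergence_loss(soft_label, logits))
print(np.linalg.norm(l2_constrained_features(np.array([[3.0, 4.0]]))))
```

When the predicted distribution matches the soft label, the KL loss vanishes, and the constrained features always carry norm `alpha` regardless of the input scale, which is the point of the L2 constraint.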
© 2019 SPIE and IS&T | 1017-9909/2019/$25.00
Luhui Xu, Jingying Chen, and Yanling Gan "Head pose estimation using deep multitask learning," Journal of Electronic Imaging 28(1), 013029 (7 February 2019). https://doi.org/10.1117/1.JEI.28.1.013029
Received: 24 September 2018; Accepted: 17 January 2019; Published: 7 February 2019
JOURNAL ARTICLE
12 PAGES

