Recognition of 3D facial expression from posed data
Published: 17 May 2013
Although recognition of facial expression in 3D facial images has been an active research area, most prior work is limited to full frontal facial images. These techniques typically project the 3D facial image onto 2D and manually select landmarks in the 2D projection to extract relevant features. Face recognition in 2D images can be challenging under unconstrained conditions such as head pose, occlusion, and the resulting loss of data. Similarly, pose variation in 3D facial imaging can also cause loss of data. In most current 3D facial recognition work, projecting posed 3D face data onto 2D introduces additional data loss, which can make 2D facial expression recognition even more challenging. In contrast, this work proposes a novel feature extracted directly from posed 3D facial images, without manual landmark selection or projection into 2D space. The feature is computed as the angle between consecutive 3D normal vectors at vertex points aligned either horizontally or vertically across the 3D facial image. Our facial expression recognition results show that the feature obtained from vertically aligned vertices yields the best classification accuracy, with an average area under the ROC curve of 87.8%. The results further suggest that this vertical feature outperforms its horizontal counterpart in recognizing facial expressions under pose variation between 35° and 50°, with average accuracies of 80% and 60%, respectively.
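The core feature described in the abstract is the angle between consecutive surface normals along one scan line (a row or column) of facial vertices. The paper does not give an implementation, but the geometric computation can be sketched as follows; the function name and input layout are illustrative assumptions, not the authors' code:

```python
import numpy as np

def consecutive_normal_angles(normals):
    """Sketch of the angle feature described in the abstract.

    `normals` is an (N, 3) sequence of 3D normal vectors at vertex
    points aligned horizontally or vertically across the face.
    Returns the N-1 angles (in radians) between each consecutive
    pair of normals.
    """
    n = np.asarray(normals, dtype=float)
    # Normalize each vector so the dot product gives the cosine of the angle.
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    # Dot product of each normal with its successor, clipped for numerical safety.
    cosines = np.clip(np.sum(n[:-1] * n[1:], axis=1), -1.0, 1.0)
    return np.arccos(cosines)

# Illustrative usage: three normals tilting progressively by 45 degrees.
angles = consecutive_normal_angles([[0, 0, 1], [0, 1, 1], [0, 1, 0]])
```

Concatenating such angle sequences over all vertical (or horizontal) scan lines would yield a pose-tolerant feature vector for a downstream classifier, which is the role the abstract assigns to this feature.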
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Manar D. Samad and Khan M. Iftekharuddin, "Recognition of 3D facial expression from posed data", Proc. SPIE 8738, Three-Dimensional Imaging, Visualization, and Display 2013, 87380X (17 May 2013); doi: 10.1117/12.2015603; https://doi.org/10.1117/12.2015603
