3D BSM for face segmentation and landmarks detection
4 February 2010
Abstract
An extension of Bayesian Shape Models (BSM) to 3D space is presented. The extension is based on the inclusion of shape information into the fitting functions. This shape information consists of curvature-derived 3D shape descriptors, selected according to their relevance. We also introduce the use of functions to define the separation of face regions. To extract the features, the 3D BSM is deformed iteratively, searching for the vertices that best match the shape using a point distribution model obtained from a training dataset. As a result of the fitting process, a 3D face model oriented in a frontal position and segmented into 48 regions is obtained, from which 15 landmarks are extracted. The 3D BSM was trained with 150 3D face models from two different databases and evaluated using a leave-one-out scheme. The model segmentation and landmark locations were compared against ground-truth segmentations and point locations. This comparison shows that the proposed system achieves an accuracy of approximately one millimeter, while orienting the models in a frontal position with an average error of about 1.5 degrees.
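
The abstract describes an iterative fitting loop: the model is deformed toward the scan by matching vertices, then constrained by a point distribution model learned from the training set. The Python sketch below illustrates that kind of loop under loose assumptions; the descriptor distance, the unweighted matching cost, and the 3-sigma clamp on shape parameters are illustrative choices, and the names (train_pdm, fit_pdm) are hypothetical rather than the authors' implementation.

import numpy as np


def train_pdm(training_shapes, variance_kept=0.98):
    # Learn a linear point distribution model from aligned training shapes.
    # training_shapes: (n_samples, n_vertices, 3) array of aligned 3D faces.
    n, v, _ = training_shapes.shape
    X = training_shapes.reshape(n, v * 3)
    mean = X.mean(axis=0)
    # PCA via SVD; keep enough modes to explain the requested variance.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / (n - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept) + 1)
    return mean, Vt[:k], np.sqrt(var[:k])


def fit_pdm(mean, modes, stdev, target_vertices, target_descriptors,
            model_descriptors, n_iter=10, beta=3.0):
    # Iteratively deform the PDM toward a target 3D face scan.
    # target_vertices:    (m, 3) scan points.
    # target_descriptors: (m, d) shape descriptors (e.g. curvature-based) per point.
    # model_descriptors:  (v, d) descriptors expected at each model vertex.
    shape = mean.copy()
    v = mean.size // 3
    b = np.zeros(modes.shape[0])
    for _ in range(n_iter):
        pts = shape.reshape(v, 3)
        # For every model vertex, pick the scan point with the lowest combined
        # geometric + descriptor cost (naive unweighted sum, for illustration only).
        geo = np.linalg.norm(target_vertices[None, :, :] - pts[:, None, :], axis=2)
        desc = np.linalg.norm(target_descriptors[None, :, :]
                              - model_descriptors[:, None, :], axis=2)
        match = np.argmin(geo + desc, axis=1)
        candidate = target_vertices[match].reshape(-1)
        # Project the candidate shape onto the learned modes and clamp the
        # coefficients to +/- beta standard deviations per mode.
        b = np.clip(modes @ (candidate - mean), -beta * stdev, beta * stdev)
        shape = mean + modes.T @ b
    return shape.reshape(v, 3), b

Given a set of aligned training meshes and per-vertex curvature descriptors, calling train_pdm followed by fit_pdm would yield a constrained deformation of the mean face toward the scan; the real system would additionally handle rigid alignment and region labeling, which this sketch omits.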
© 2010 Society of Photo-Optical Instrumentation Engineers (SPIE).
Augusto E. Salazar and Flavio A. Prieto, "3D BSM for face segmentation and landmarks detection", Proc. SPIE 7526, Three-Dimensional Image Processing (3DIP) and Applications, 752608 (4 February 2010); https://doi.org/10.1117/12.837655