Lip animation based on observed 3D speech dynamics
22 December 2000
Gregor A. Kalberer, Luc J. Van Gool
Abstract
We are all experts in the perception and interpretation of faces and their dynamics. This makes facial animation a particularly demanding area of graphics. Increasingly, computer vision is brought to bear, and 3D models and their motions are learned from observations. This paper follows that strand for the 3D modeling of human speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from faces with a few markers. A 3D reconstruction of a speaking face is produced for each video frame. A topological mask of the lower half of the face is fitted to the motion. The 3D shape statistics are extracted, and principal component analysis reduces the dimension of the mask space. The final speech tracker can work without markers, as it is only allowed to roam this constrained space of masks. Once the different visemes are represented in this space, speech or text can be used as input for animation.
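To make the dimension-reduction step concrete, the following is a minimal sketch (not from the paper) of how a PCA-constrained "mask space" could be built and used. It assumes each training sample is the flattened 3D vertex positions of the lower-face mask in one video frame; all function and variable names are hypothetical.

```python
import numpy as np

def build_mask_space(masks, n_components=10):
    """Learn the mean mask and principal modes from training shapes.

    masks: (n_frames, n_vertices * 3) array of reconstructed mask shapes.
    Returns the mean shape and the top principal components.
    """
    mean = masks.mean(axis=0)
    centered = masks - mean
    # SVD of the centered data yields the principal components (rows of vt).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(mask, mean, components):
    """Express one mask as a low-dimensional coefficient vector."""
    return components @ (mask - mean)

def reconstruct(coeffs, mean, components):
    """Map coefficients back to 3D vertex positions; any result stays
    inside the constrained space of masks spanned by the components."""
    return mean + components.T @ coeffs
```

A marker-free tracker would then search only over the low-dimensional coefficient vectors, and visemes would be stored as points (or short trajectories) in the same coefficient space.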
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Gregor A. Kalberer, Luc J. Van Gool, "Lip animation based on observed 3D speech dynamics", Proc. SPIE 4309, Videometrics and Optical Methods for 3D Shape Measurement, (22 December 2000); doi: 10.1117/12.410873; https://doi.org/10.1117/12.410873

