Video analysis using spatiotemporal descriptor and kernel extreme learning machine for lip reading
6 October 2015
Abstract
Lip-reading techniques have shown bright prospects for speech recognition in noisy environments and for hearing-impaired listeners. We aim to solve two important issues in lip reading: (1) how to extract discriminative lip motion features and (2) how to build a classifier that provides high recognition accuracy. For the first issue, a projection local spatiotemporal descriptor, which captures lip appearance and motion information simultaneously, is used to provide an efficient representation of a video sequence. For the second issue, a kernel extreme learning machine (KELM) based on the single-hidden-layer feedforward neural network is presented to distinguish among the utterance classes; this method offers fast learning and strong robustness on nonlinear data. Furthermore, quantum-behaved particle swarm optimization with binary encoding is introduced to select the appropriate feature subset and parameters for KELM training. Experiments conducted on the AVLetters and OuluVS databases show that the proposed lip-reading method achieves superior recognition accuracy compared with two previous methods.
© 2015 SPIE and IS&T
Longbin Lu, Xinman Zhang, Xuebin Xu, Dongpeng Shang, "Video analysis using spatiotemporal descriptor and kernel extreme learning machine for lip reading," Journal of Electronic Imaging 24(5), 053023 (6 October 2015). https://doi.org/10.1117/1.JEI.24.5.053023
Journal article, 11 pages.