People with hearing loss make use of visual speech cues to supplement the impoverished speech signal. This process, known as speechreading (or lipreading), can be very effective because of the complementary nature of auditory and visual speech cues. Despite the importance of visual speech cues for both normal-hearing and hearing-impaired people, research on the visual characteristics of speech has lagged behind research on its acoustic characteristics. The field of acoustic phonetics benefited substantially from the availability of powerful techniques for acoustic signal analysis. Recent, substantial advances in optical signal processing have opened up new vistas for visual speech analysis, analogous to the way technological innovation revolutionized the field of acoustic phonetics. This paper describes several experiments in the emerging field of optic phonetics.
Harry Levitt,
"Visual signal processing, speechreading, and related issues", Proc. SPIE 5007, Human Vision and Electronic Imaging VIII (17 June 2003); https://doi.org/10.1117/12.501207