Automatic speech recognition for launch control center communication using recurrent neural networks with data augmentation and custom language model
Abstract
Transcribing voice communications in NASA’s launch control center is important for information utilization. However, automatic speech recognition in this environment is particularly challenging due to the lack of training data, unfamiliar words and acronyms, many different speakers and accents, and the conversational nature of the speech. We used bidirectional deep recurrent neural networks to train and test speech recognition performance. We showed that data augmentation and custom language models can improve speech recognition accuracy. Transcribing communications from the launch control center will help machines analyze the information and accelerate knowledge generation.
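The abstract does not include implementation details, so the following is a minimal, illustrative sketch of the kind of pipeline it describes: a stacked bidirectional recurrent acoustic model trained with CTC loss, plus a simple feature-level data augmentation step. The framework (PyTorch), layer sizes, feature dimensionality, token inventory, and the noise-based augmentation are all placeholder assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BiRNNAcousticModel(nn.Module):
    """Stacked bidirectional LSTM acoustic model (illustrative sizes)."""
    def __init__(self, n_features=40, hidden=256, layers=3, n_tokens=29):
        super().__init__()
        # Bidirectional LSTM over per-frame acoustic features.
        self.rnn = nn.LSTM(n_features, hidden, num_layers=layers,
                           bidirectional=True, batch_first=True)
        # Project to per-frame token logits (characters + CTC blank).
        self.fc = nn.Linear(2 * hidden, n_tokens)

    def forward(self, x):            # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.fc(out)          # (batch, time, n_tokens)

def augment(features, noise_std=0.05):
    """Illustrative augmentation: add Gaussian noise to feature frames."""
    return features + noise_std * torch.randn_like(features)

if __name__ == "__main__":
    model = BiRNNAcousticModel()
    ctc = nn.CTCLoss(blank=0)
    feats = torch.randn(4, 200, 40)           # 4 dummy utterances, 200 frames
    feats = augment(feats)                     # augmented training copy
    log_probs = model(feats).log_softmax(-1)   # (batch, time, tokens)
    targets = torch.randint(1, 29, (4, 30))    # dummy character labels
    in_lens = torch.full((4,), 200, dtype=torch.long)
    tgt_lens = torch.full((4,), 30, dtype=torch.long)
    loss = ctc(log_probs.transpose(0, 1), targets, in_lens, tgt_lens)
    loss.backward()
    print("CTC loss:", loss.item())
```

In a full system, the per-frame token probabilities would be combined with a custom language model (for example, one trained on launch control transcripts and acronyms) during beam-search decoding; that decoding step is omitted from this sketch.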
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kyongsik Yun, Joseph Osborne, Madison Lee, Thomas Lu, and Edward Chow, "Automatic speech recognition for launch control center communication using recurrent neural networks with data augmentation and custom language model", Proc. SPIE 10652, Disruptive Technologies in Information Sciences, 1065202 (9 May 2018); https://doi.org/10.1117/12.2304569
