Presentation + Paper
9 May 2018
Automatic speech recognition for launch control center communication using recurrent neural networks with data augmentation and custom language model
Kyongsik Yun, Joseph Osborne, Madison Lee, Thomas Lu, Edward Chow
Abstract
Transcribing voice communications in NASA's launch control center is important for downstream information use. However, automatic speech recognition in this environment is particularly challenging due to the scarcity of training data, unfamiliar words and acronyms, many different speakers and accents, and the conversational nature of the speech. We used bidirectional deep recurrent neural networks to train and test speech recognition performance. We showed that data augmentation and a custom language model can improve speech recognition accuracy. Transcribing launch control center communications will help machines analyze the information and accelerate knowledge generation.
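The abstract's core technique, a bidirectional recurrent network, reads each input sequence in both directions so that every time step's representation carries both past and future context. The following is a minimal, illustrative sketch in pure Python (not the authors' implementation; the scalar weights, tanh activation, and single hidden unit are simplifying assumptions for clarity):

```python
import math

def rnn_pass(inputs, w_in, w_rec, reverse=False):
    """One-directional simple RNN pass over scalar features.

    Toy hypothetical weights; tanh activation, scalar hidden state.
    """
    seq = list(reversed(inputs)) if reverse else inputs
    h, states = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)  # recurrent update
        states.append(h)
    # re-reverse so states align with the original time order
    return list(reversed(states)) if reverse else states

def bidirectional_rnn(inputs, w_in=0.5, w_rec=0.3):
    """Pair the forward and backward hidden state at each time step."""
    fwd = rnn_pass(inputs, w_in, w_rec)
    bwd = rnn_pass(inputs, w_in, w_rec, reverse=True)
    return list(zip(fwd, bwd))

# e.g. three frames of a (hypothetical) per-frame acoustic feature
features = [0.1, 0.9, -0.4]
states = bidirectional_rnn(features)
```

In a real acoustic model the scalar states would be vectors (e.g. LSTM cells), the output layer would predict phoneme or character probabilities, and the custom language model mentioned in the abstract would rescore the resulting hypotheses.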
Conference Presentation
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kyongsik Yun, Joseph Osborne, Madison Lee, Thomas Lu, and Edward Chow "Automatic speech recognition for launch control center communication using recurrent neural networks with data augmentation and custom language model", Proc. SPIE 10652, Disruptive Technologies in Information Sciences, 1065202 (9 May 2018); https://doi.org/10.1117/12.2304569
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Speech recognition
Data modeling
Neural networks
Data centers
Data communications
Detection and tracking algorithms
Machine vision