Improving video captioning for deaf and hearing-impaired people based on eye movement and attention overload
12 February 2007
Abstract
Deaf and hearing-impaired people capture information in video through both visual content and captions. These two activities require different visual attention strategies, and until now little has been known about how caption readers balance these competing attention demands. Understanding these strategies could suggest more efficient ways of producing captions. Eye tracking and attention overload detection are used to study these strategies. Eye tracking is monitored using a pupil-center corneal-reflection apparatus. Gaze fixations are then analyzed for each region of interest, such as the caption area, high-motion areas, and face locations. The same data is also used to identify scanpaths. The collected data is used to establish specifications for a caption adaptation approach based on the location of visual action and the presence of character faces. This approach is implemented in a computer-assisted captioning software tool, which uses a face detector and a motion detection algorithm based on the Lucas-Kanade optical flow algorithm. The different scanpaths observed among the subjects provide alternatives for resolving conflicting caption positions. The implementation is now undergoing a user evaluation with hearing-impaired participants to validate the efficiency of our approach.
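The motion detection step mentioned above rests on the Lucas-Kanade method, which estimates local image motion by solving a small least-squares system over a pixel window. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the synthetic frames, window size, and test point are illustrative assumptions:

```python
import numpy as np

def lucas_kanade_point(prev, curr, x, y, win=9):
    """Estimate optical flow (u, v) at pixel (x, y) by solving the
    Lucas-Kanade least-squares system over a win x win window."""
    half = win // 2
    prev = prev.astype(float)
    curr = curr.astype(float)
    # Spatial gradients of the previous frame; temporal gradient between frames.
    Iy, Ix = np.gradient(prev)
    It = curr - prev
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    # Stack the brightness-constancy equations Ix*u + Iy*v = -It for the window.
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check (hypothetical data): a Gaussian blob shifted right by 1 px.
yy, xx = np.mgrid[0:40, 0:40].astype(float)
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 25.0))
prev_frame = blob(20, 20)
curr_frame = blob(21, 20)  # same blob moved +1 px in x
u, v = lucas_kanade_point(prev_frame, curr_frame, 24, 24)
```

In a captioning tool, flow magnitudes aggregated over a region would flag high-motion areas to avoid when positioning captions.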
© (2007) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
C. Chapdelaine, V. Gouaillier, M. Beaulieu, L. Gagnon, "Improving video captioning for deaf and hearing-impaired people based on eye movement and attention overload," Proc. SPIE 6492, Human Vision and Electronic Imaging XII, 64921K (12 February 2007); https://doi.org/10.1117/12.703344
Proceedings paper, 11 pages

