28 January 2008 Audio scene segmentation for video with generic content
Proceedings Volume 6820, Multimedia Content Access: Algorithms and Systems II; 68200S (2008); doi: 10.1117/12.760267
Event: Electronic Imaging, 2008, San Jose, California, United States
In this paper, we present a content-adaptive, audio-texture-based method for segmenting video into audio scenes. An audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis": first, we train GMM models for basic audio classes such as speech and music; we then define the semantic audio texture in terms of those classes. We identify two types of scene changes: those corresponding to an overall change in audio texture, and those marked by a special "transition marker" inserted by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work that relies on genre-specific heuristics, such as some methods for detecting commercials, we adaptively determine, without any prior knowledge of the content, whether such transition markers are in use and, if so, which of the base classes serve as markers. Our experimental results show that the proposed audio scene segmentation works well across a wide variety of broadcast content genres.
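The pipeline described above can be sketched in miniature. The snippet below is an illustrative simplification, not the paper's implementation: it stands in a single diagonal-covariance Gaussian per class for the paper's GMMs, uses synthetic 4-dimensional feature vectors in place of real audio features, and detects a texture change wherever the dominant class label of consecutive windows differs. All function names and parameters (`fit_gaussian`, `scene_boundaries`, the window size of 5) are this sketch's own choices.

```python
import numpy as np

def fit_gaussian(frames):
    """Fit a single diagonal-covariance Gaussian to feature frames
    (a one-component stand-in for the per-class GMMs in the paper)."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6  # variance floor avoids division by zero
    return mu, var

def log_likelihood(frames, mu, var):
    """Per-frame log-likelihood under a diagonal Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var) + (frames - mu) ** 2 / var).sum(axis=1)

def classify(frames, models):
    """Label each frame with its most likely audio class."""
    scores = np.stack([log_likelihood(frames, mu, var) for mu, var in models.values()])
    names = list(models)
    return [names[i] for i in scores.argmax(axis=0)]

def scene_boundaries(labels, window=5):
    """Flag a boundary wherever the dominant ('texture') label of
    consecutive non-overlapping windows differs -- a crude texture-change detector."""
    dominant = [max(set(labels[i:i + window]), key=labels[i:i + window].count)
                for i in range(0, len(labels) - window + 1, window)]
    return [i * window for i in range(1, len(dominant)) if dominant[i] != dominant[i - 1]]

# Synthetic demo: two well-separated "audio classes" and a stream that
# switches from one texture to the other at frame 50.
rng = np.random.default_rng(0)
speech_train = rng.normal(0.0, 1.0, (200, 4))
music_train = rng.normal(5.0, 1.0, (200, 4))
models = {"speech": fit_gaussian(speech_train), "music": fit_gaussian(music_train)}

stream = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(5.0, 1.0, (50, 4))])
labels = classify(stream, models)
boundaries = scene_boundaries(labels)
print(boundaries)
```

With the classes this well separated, the detector places the single boundary at the true texture change; on real audio, the paper's full GMMs and adaptive marker detection replace this toy thresholding.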
© (2008) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Feng Niu, Naveen Goela, Ajay Divakaran, Mohamed Abdel-Mottaleb, "Audio scene segmentation for video with generic content", Proc. SPIE 6820, Multimedia Content Access: Algorithms and Systems II, 68200S (28 January 2008); https://doi.org/10.1117/12.760267
Keywords: Video surveillance, Analytical research, Data modeling, Information visualization, Image segmentation