Paper
Video semantics discovery from video captions and comments
29 November 2007
Abstract
To improve the retrieval accuracy of content-based video retrieval systems, researchers must overcome a hard challenge: reducing the 'semantic gap' between the low-level features such systems extract and the richness of human semantics. This paper presents a novel video retrieval system that bridges this gap. First, the video captions are segmented from the video and transformed into text. To extract semantic information from the video stream, a text mining process with a clustering algorithm as its kernel is applied to the caption text. In addition, users of the system are asked to comment on each video they download once they have watched it, and these comments are then associated with that video in the system. The same text mining process is applied to the comment texts. By combining a video's captions with the comments on it, the system extracts the video's semantic information more accurately. Finally, experiments on a set of videos that exploit both the comments and the captions yield promising results.
© (2007) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Rongteng Wu, Jizhou Sun, Jinyan Chen, and Huabei Wu "Video semantics discovery from video captions and comments", Proc. SPIE 6833, Electronic Imaging and Multimedia Technology V, 68332F (29 November 2007); https://doi.org/10.1117/12.755871
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Video, Semantic video, Neurons, Feature extraction, Mining, Databases, Vector spaces