Motion feature extraction for content-based video sequence retrieval
Abstract
In this paper we present a region-based approach for short-term motion analysis and retrieval of video sequences. Our feature extraction scheme converts the motion information of a video frame pair into a combination of different symbols. First, the system analyzes the global and local motion to obtain a dense optical flow field for every frame pair. The local optical flow field is segmented using an affine-model-based region growing method. The affine model parameters of the segmented regions, together with the region size, form a 7-dimensional space, which is partitioned by a vector quantizer. Each region is then mapped to a codebook symbol of the quantizer. With a group of symbols representing each frame pair, we borrow the Vector Space Model and TF*IDF scoring from text document retrieval to index and retrieve their motion information. Preliminary experimental results are shown in the paper. Our approach is able to retrieve complex combinations of different motions in a video, and can easily be scaled up to form a shot-level descriptor as well as integrated with other video features.
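The TF*IDF scoring borrowed from text retrieval can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each frame pair is treated as a "document" whose terms are the vector-quantizer codebook symbols of its motion regions, and the example symbol names are hypothetical.

```python
import math
from collections import Counter

def tfidf_vectors(documents):
    """Compute TF*IDF weight vectors for documents of codebook symbols.

    Each document is a list of symbols (here: the quantized motion
    regions of one frame pair). Returns one sparse weight dict
    (symbol -> weight) per document.
    """
    n_docs = len(documents)
    # Document frequency: in how many documents each symbol appears.
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)  # term frequency within this frame pair
        vectors.append({
            sym: count * math.log(n_docs / df[sym])
            for sym, count in tf.items()
        })
    return vectors

def cosine_similarity(a, b):
    """Cosine similarity between two sparse TF*IDF weight vectors."""
    dot = sum(w * b.get(sym, 0.0) for sym, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical symbols: each frame pair yields a bag of region codes.
frame_pairs = [
    ["pan_left", "zoom_in", "pan_left"],
    ["pan_left", "static"],
    ["zoom_in", "static"],
]
vecs = tfidf_vectors(frame_pairs)
score = cosine_similarity(vecs[0], vecs[1])
```

A query sequence would be quantized into the same symbol vocabulary, turned into a TF*IDF vector, and ranked against the indexed frame pairs by cosine similarity, exactly as in text retrieval.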
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Tianli Yu and Yujin Zhang, "Motion feature extraction for content-based video sequence retrieval", Proc. SPIE 4311, Internet Imaging II (27 December 2000); https://doi.org/10.1117/12.411912
Proceedings paper, 11 pages.