Videospace: classification of video through shooting context information (18 January 2010)
Abstract
We present the Videospace framework for classifying videos by user group, device type, or device class. Photospace has proven effective for classifying large numbers of still images via simple technical parameters. We use measures of subject-camera distance, scene lighting, and object motion to classify individual videos, and finally represent all videos of the chosen group in a three-dimensional space. An expert-rated video sample was collected to estimate the parameters for a chosen group of videos, and sub-groups of videos were identified using the Videospace measures. The framework can be used to obtain information about the technical requirements of general device use and the typical shooting conditions of end users. Measurement efficiency and precision could be improved in the future by using computer-based algorithms or device-based measurement techniques to obtain better-sampled Videospace parameters. Videospace information could be used to identify the most meaningful benchmarking contexts or to characterize shooting in general with chosen devices or device groups. Using information about the typical parameters of a chosen video group, algorithm and device development can be focused on typical shooting situations when processing power and device size are otherwise constrained.
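The abstract describes reducing each video to three expert-rated shooting-context measures and locating it as a point in a three-dimensional space, within which sub-groups can be found. The following is a minimal sketch of that idea, not the authors' implementation: the normalisation ranges, sample values, and the use of plain k-means clustering are all assumptions made for illustration.

```python
# Hypothetical sketch of the Videospace idea: each video is reduced to three
# shooting-context measures -- subject-camera distance, scene lighting, and
# object motion -- and placed as a point in a 3-D space. Sub-groups are then
# found with a clustering method; a minimal k-means is used here purely as an
# illustration. All parameter names, ranges, and sample values are invented.

import math


def videospace_point(distance_m, lighting_lux, motion_score,
                     max_distance=50.0, max_lux=10000.0, max_motion=10.0):
    """Normalise the three measures to [0, 1] to form a Videospace point."""
    return (min(distance_m / max_distance, 1.0),
            min(lighting_lux / max_lux, 1.0),
            min(motion_score / max_motion, 1.0))


def kmeans(points, k=2, iters=20):
    """Minimal k-means in 3-D; returns one cluster label per point.

    Centroids are initialised naively from evenly spaced input points,
    which keeps the sketch deterministic.
    """
    centroids = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members)
                                     for d in zip(*members))
    return labels


# Invented sample: two indoor close-ups in low light, two bright outdoor
# wide shots with more motion.
videos = [videospace_point(1.5, 200, 2),
          videospace_point(2.0, 150, 3),
          videospace_point(30.0, 9000, 7),
          videospace_point(25.0, 8000, 8)]
labels = kmeans(videos, k=2)  # the two shooting contexts separate cleanly
```

In practice, the expert ratings described in the paper would replace the invented raw measures, and the resulting point cloud would reveal the typical shooting conditions of a device or user group.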
© (2010) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
T. Säämänen, T. Virtanen, and G. Nyman "Videospace: classification of video through shooting context information", Proc. SPIE 7529, Image Quality and System Performance VII, 752906 (18 January 2010); https://doi.org/10.1117/12.839414
Proceedings, 7 pages