Virtual studios, which composite video-camera images with CG objects representing studio sets and props, are commonly used in professional program production. Since actors cannot see the CG objects directly in the studio, they must estimate the objects' positions by watching the synthesized video on a monitor placed within their field of view. However, it is difficult for actors to watch a monitor while acting, because their performance becomes unnatural when their gaze drifts toward it. Vibration feedback offers a way of indicating the positions of CG objects without requiring the actor to look at the synthesized video and without interfering with shooting or sound pickup. In this method, a small vibration device is attached to the actor's body, and the distance to a CG object is communicated by varying the intensity and pattern of vibration. A system was developed to verify this method. In an experiment simulating interactions with a studio set and props, subjects rated on a four-level scale, for four cases, whether the surface of a CG object could be recognized and whether the distance from that surface could be estimated. The experimenter also observed whether the subject's hand penetrated the CG object in the synthesized video. The results show that vibration feedback is very useful for interactions with studio sets, except for close interactions such as handling props.
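The abstract describes communicating distance by varying vibration intensity and pattern, but gives no concrete mapping. A minimal sketch of one plausible scheme is shown below; the linear intensity ramp, the 1 m feedback range, and the pulse-interval values are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: map the distance from a CG object's surface to a
# vibration intensity (0.0-1.0) and a pulse interval, so that vibration
# grows stronger and pulses faster as the actor's hand approaches.
# The specific range and timing constants are invented for illustration.

def vibration_for_distance(distance_m: float,
                           max_range_m: float = 1.0) -> tuple[float, float]:
    """Return (intensity, pulse_interval_s) for a given distance.

    distance_m <= 0: the hand has penetrated the CG surface, so vibrate
    continuously at full strength. Beyond max_range_m: no vibration.
    In between: intensity ramps up and pulses quicken near the surface.
    """
    if distance_m <= 0.0:
        return 1.0, 0.0                    # continuous, full-strength
    if distance_m >= max_range_m:
        return 0.0, float("inf")           # out of feedback range
    closeness = 1.0 - distance_m / max_range_m
    intensity = closeness                              # linear ramp, 0..1
    pulse_interval = 0.1 + 0.4 * (1.0 - closeness)     # 0.5 s far -> 0.1 s near
    return intensity, pulse_interval
```

A real system would feed this output to the vibration device's driver at the display frame rate, using the distance from a tracked hand position to the CG geometry.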
The spread of broadband networks has led to countless videos being available on the internet. Advances in video analysis technology make it possible to extract more accurate metadata, making it easier to find the videos we want. However, existing search services are not suited to ambiguous, affective video searches for which no specific query can be given, such as wanting something to watch while relaxing. To address this problem, this paper considers a video search method that can handle such ambiguous requests by building on existing search services, and proposes replacing an ambiguous request with a query consisting of a group of concrete metadata items, such as the names and situations of objects. Classification experiments were performed on three ambiguous requests using metadata automatically attached to the videos by services such as Google Cloud Video Intelligence, confirming that automatic classification by machine learning performs close to manual classification. This suggests that the proposed method is feasible.
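The core idea of replacing an ambiguous request with a group of concrete metadata terms can be sketched as follows. The term groups, the labels, and the overlap threshold are all invented for illustration; the paper's actual method uses machine-learned classifiers over automatically attached metadata rather than this simple set intersection.

```python
# Hypothetical sketch of the query-replacement idea: an ambiguous affective
# request (e.g. "something to watch while relaxing") is mapped to a group of
# concrete metadata terms, and a video is matched against that group using
# labels of the kind produced by automatic annotation services such as
# Google Cloud Video Intelligence. All terms here are illustrative.

REQUEST_TO_METADATA = {
    "relaxing": {"nature", "ocean", "ambient music", "sunset", "forest"},
    "exciting": {"sports", "race", "crowd", "stadium", "goal"},
}

def matches_request(video_labels: set[str], request: str,
                    min_overlap: int = 2) -> bool:
    """Classify a video as matching the ambiguous request when enough of
    its automatically attached labels overlap the concrete term group."""
    terms = REQUEST_TO_METADATA[request]
    return len(video_labels & terms) >= min_overlap
```

In the paper's setting, the threshold rule would be replaced by a trained classifier, which is what the reported experiments compare against manual classification.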