Real-time video-based rendering for augmented spatial communication (28 December 1998)
Abstract
In the field of 3-D image communication and virtual reality, it is very important to establish a method of displaying arbitrary views of a 3-D scene. Three-dimensional geometric models of scene objects are certainly useful for this purpose, since computer graphics techniques can synthesize arbitrary views of the models. It is, however, difficult to obtain such models for objects in the physical world. To avoid this problem, a new technique, called image-based rendering, has been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. To date, most work on this technique has concentrated on static scenes or objects. To cope with 3-D scenes in motion, we must establish ways of processing multiple video sequences in real time and of constructing an accurate camera array system. In this paper, the authors propose a real-time method for rendering arbitrary views of 3-D scenes in motion. The proposed method comprises a sixteen-camera array system with software-assisted adjustment and a video-based rendering system. According to the observer's viewpoint, appropriate views of the 3-D scene are synthesized in real time. Experimental results show the potential applicability of the proposed method to augmented spatial communication systems.
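
The abstract does not give the authors' exact rendering algorithm, but the core idea of synthesizing a view according to the observer's viewpoint from a camera array can be illustrated with a minimal sketch. The sketch below assumes a linear array of sixteen cameras and blends the two camera frames nearest to the observer's position along the baseline; the camera positions, array layout, frame format, and the simple linear blending rule are all illustrative assumptions, not the method described in the paper.

    # Minimal sketch (not the authors' implementation): synthesize a view
    # from a linear array of 16 cameras by blending the two frames whose
    # cameras lie nearest to the observer's position along the baseline.
    import numpy as np

    NUM_CAMERAS = 16
    # Assumed camera positions, normalized to [0, 1] along the baseline.
    CAMERA_X = np.linspace(0.0, 1.0, NUM_CAMERAS)

    def synthesize_view(frames: np.ndarray, observer_x: float) -> np.ndarray:
        """Blend the two camera frames closest to observer_x.

        frames: array of shape (16, H, W, 3), the current frame of each camera.
        observer_x: observer position normalized to [0, 1] along the baseline.
        """
        observer_x = float(np.clip(observer_x, 0.0, 1.0))
        # Index of the first camera to the right of the observer.
        right = int(np.searchsorted(CAMERA_X, observer_x))
        right = min(max(right, 1), NUM_CAMERAS - 1)
        left = right - 1
        # Linear weight between the two neighboring cameras.
        w = (observer_x - CAMERA_X[left]) / (CAMERA_X[right] - CAMERA_X[left])
        return (1.0 - w) * frames[left] + w * frames[right]

    # Usage (hypothetical grab_frame): stack the live frames, then render.
    # frames = np.stack([grab_frame(i) for i in range(NUM_CAMERAS)])
    # view = synthesize_view(frames, observer_x=0.37)

In a real-time system of the kind the paper describes, the frame capture and blending would have to keep pace with the video rate, and the paper's approach additionally relies on software-assisted adjustment of the camera array rather than the idealized positions assumed here.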
© 1998 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Takeshi Naemura and Hiroshi Harashima, "Real-time video-based rendering for augmented spatial communication", Proc. SPIE 3653, Visual Communications and Image Processing '99, (28 December 1998); https://doi.org/10.1117/12.334713
Proceedings paper, 12 pages

