This paper proposes a system that captures a dynamic 3D scene and synthesizes arbitrary views of it in real time. Our system consists of four components: a Fresnel lens, a micro-lens array, an IEEE1394 digital camera, and a PC for rendering. The micro-lens array forms an image consisting of a set of elemental images, i.e., multiple-viewpoint images of the scene. The Fresnel lens controls the depth of field by demagnifying the 3D scene. However, the scene demagnified by the Fresnel lens is compressed along its optical axis, so we propose a method for recovering the original scene from the compressed one. The IEEE1394 digital camera captures the multiple-viewpoint images at 15 frames per second and transfers them to the PC. The PC synthesizes any perspective of the captured scene from the multiple-viewpoint images using image-based rendering techniques. The proposed system synthesizes one perspective of the captured scene within 1/15 second, which means that a user can interactively move his/her viewpoint and observe even a moving object from various directions.
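The axial compression described above can be illustrated with the ideal thin-lens equation: a lens of focal length f maps an object point at axial distance z to an image point at distance b = fz/(z - f), and the mapping is invertible, so the original depth can be recovered from the compressed one. The following sketch assumes an ideal thin lens and illustrative function names; it is not the paper's actual recovery procedure.

```python
# Minimal sketch of depth recovery for a scene demagnified by a thin
# (Fresnel) lens. All distances are measured along the optical axis.
# Assumption: ideal thin lens, object distance z > f.

def image_point(x, y, z, f):
    """Map an object point (x, y, z) through a thin lens of focal length f
    using the thin-lens equation 1/z + 1/b = 1/f."""
    b = f * z / (z - f)   # image distance: where the compressed point lands
    m = -b / z            # lateral magnification (negative: image is inverted)
    return (m * x, m * y, b)

def recover_point(xi, yi, b, f):
    """Invert image_point: recover the original object point from its
    axially compressed image coordinates."""
    z = f * b / (b - f)   # solve 1/z + 1/b = 1/f for the object distance z
    m = -b / z
    return (xi / m, yi / m, z)
```

A round trip through both functions returns the original point, which is the sense in which the compressed scene determines the original one.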
In the field of 3-D imaging technology, Integral Photography (IP) is one of the promising approaches, and a combination of an HDTV camera and an optical fiber array has been investigated for displaying 3-D live video sequences. The authors have applied this system to a computer graphics method for synthesizing arbitrary views from IP images: a method of interactively displaying free-viewpoint images without a physical lens array. This paper proposes a real-time method for estimating the depth data corresponding to each elemental image on an IP image. Experimental results show that the proposed method is effective in improving the quality of free-viewpoint image synthesis.
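One generic way to estimate depth from neighboring elemental images is block matching: the horizontal disparity between corresponding patches in two adjacent elemental images is inversely proportional to depth. The sketch below is a plain sum-of-absolute-differences (SAD) stereo baseline under an assumed rectified horizontal displacement; the paper's real-time method is not specified in this abstract, so this is only a hedged illustration.

```python
import numpy as np

def sad_disparity(left, right, max_disp, block=3):
    """Estimate per-pixel integer disparity between two neighboring
    elemental images by minimizing the sum of absolute differences (SAD)
    over a square block. Assumes purely horizontal displacement."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            # search candidate disparities that stay inside the right image
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = int(np.abs(patch - cand).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Depth then follows from disparity via z = (baseline x focal length) / disparity, with the baseline given by the micro-lens pitch.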