We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display
conditions, each under both binocular and monocular viewing conditions. The equipment we used was an
optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images that
comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468
dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal
length of 3 mm and diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned
60 cm from an observer under the IP and binocular stereoscopic display conditions. Under these two conditions, the
target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the panel; on the
panel; and 5, 10, 15, and 30 cm behind it. Under the real object
display condition, the target was displayed on the 3D display panel, and the 3D display itself was placed at each of the eight positions.
The results suggest that the IP image induced more natural accommodation responses than the binocular
stereoscopic image did. The accommodation responses to the IP image were weaker than those to a real object; however,
they showed a tendency similar to those to the real object under both viewing conditions. Therefore, IP can induce
accommodation to the depth positions of 3D images.
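As a rough aid to interpreting such results, accommodation demand is commonly expressed in diopters, the reciprocal of the viewing distance in meters. A minimal sketch computing the demand for the target positions in the setup above (the helper name is ours, not the paper's):

```python
def accommodation_demand(distance_m: float) -> float:
    """Accommodation demand in diopters: reciprocal of viewing distance in meters."""
    return 1.0 / distance_m

DISPLAY_DISTANCE_M = 0.60  # display placed 60 cm from the observer

# Target depth offsets relative to the display panel, in meters
# (negative = in front of the panel, i.e., closer to the observer).
offsets_m = [-0.15, -0.10, -0.05, 0.0, 0.05, 0.10, 0.15, 0.30]

for off in offsets_m:
    d = DISPLAY_DISTANCE_M + off
    print(f"target offset {off * 100:+.0f} cm -> distance {d * 100:.0f} cm, "
          f"demand {accommodation_demand(d):.2f} D")
```

The nearest target (45 cm) demands about 2.22 D and the farthest (90 cm) about 1.11 D, so a display that drives accommodation correctly must cover roughly a one-diopter range in this experiment.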
We have been researching three-dimensional (3D) reconstruction from images captured by multiple cameras. Currently,
we are investigating how to convert 3D models into stereoscopic images. Among the many stereoscopic display systems,
we are interested in integral photography (IP), because an IP display system can, in theory, reconstruct complete 3D
auto-stereoscopic images. The system consists of a high-resolution liquid-crystal panel and a lens array. It enables users to obtain
a perspective view of 3D auto-stereoscopic images from any direction. We developed a method for converting 3D
models into IP images using the OpenGL API. This method can be applied to normal CG objects because the 3D model
is described in a CG format. In this paper, we outline our 3D modeling method and the performance of an IP display
system. Then we discuss the method for converting 3D models into IP images and report experimental results.
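The geometry of the lens array determines where each elemental image must be placed on the panel. As a sketch only, assuming a hexagonally packed (honeycomb) array like the one described above, with a 1 mm pitch and our own function and constant names, the lens centers can be laid out as:

```python
import math

LENS_PITCH_MM = 1.0   # lens diameter/pitch from the display specification
COLS, ROWS = 106, 69  # lenses per row and number of rows

def lens_centers(cols=COLS, rows=ROWS, pitch=LENS_PITCH_MM):
    """Centers of a honeycomb (hexagonally packed) lens array, in mm.

    Odd rows are shifted by half a pitch; row spacing is pitch * sqrt(3) / 2.
    """
    row_spacing = pitch * math.sqrt(3) / 2
    centers = []
    for r in range(rows):
        x_offset = pitch / 2 if r % 2 else 0.0
        for c in range(cols):
            centers.append((c * pitch + x_offset, r * row_spacing))
    return centers

centers = lens_centers()
print(len(centers))  # 106 * 69 = 7314 lenses
```

Rendering an IP image then amounts to generating, for each such center, the small elemental image seen through that lens; the conversion method discussed below performs this with the OpenGL API.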
In this paper, we present a method for generating high-resolution dynamic 3D objects from multi-viewpoint images. A dynamic 3D object can display fine images of a moving human body from arbitrary viewpoints, and consists of the subject's 3D model generated for each video frame. To create a high-resolution dynamic 3D object, we propose a 3D-model-generation method based on multi-viewpoint images. The method uses stereo matching to refine an approximate 3D model obtained by the volume intersection method. Furthermore, to reproduce high-resolution textures, we have developed a new technique that determines the visibility of the vertices and polygons of the 3D models. A modeling experiment performed with 19 FireWire cameras confirmed that the proposed method effectively generates high-resolution dynamic 3D objects.
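Volume intersection (shape-from-silhouette) keeps only those voxels whose projections fall inside the silhouette in every camera view; stereo matching then refines this visual hull. A minimal sketch of the intersection step, assuming idealized 3x4 projection matrices and precomputed silhouette masks (all names here are ours, not the paper's):

```python
import numpy as np

def volume_intersection(voxels, cameras, silhouettes):
    """Keep voxels that project inside the silhouette of every view.

    voxels:      (N, 3) array of voxel centers in world coordinates
    cameras:     list of 3x4 projection matrices
    silhouettes: list of 2D boolean masks, one per camera
    """
    keep = np.ones(len(voxels), dtype=bool)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, mask in zip(cameras, silhouettes):
        uvw = homo @ P.T                                   # project: (N, 3)
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                                        # carve voxels outside any silhouette
    return voxels[keep]
```

The visual hull produced this way is an overestimate of the true shape (concavities are not carved), which is why a refinement step such as the stereo matching described above is needed.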
In this paper, we propose a system for generating arbitrary viewpoint images. The system is based on image measurement and consists of three steps: HDTV image recording, modeling from images, and displaying arbitrary viewpoint images. The model data are converted to VRML models. To estimate 3D shapes, we developed a new modeling algorithm that uses the block matching method as well as the volume intersection method. The proposed algorithm achieves fast and precise modeling. We confirmed that the derived human model with motion can be played smoothly in a VRML browser on a PC and that the observer's viewing position can be changed with the mouse.
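Block matching estimates correspondences by sliding a small block from one image over a search range in the other and picking the offset with the lowest matching cost; the resulting disparities yield depth for refining the model. A toy sketch using the sum of absolute differences on rectified images (function and parameter names are our assumptions):

```python
import numpy as np

def block_match(left, right, y, x, block=5, max_disp=16):
    """Disparity of the block centered at (y, x) in `left`, searched in `right`.

    Uses sum of absolute differences (SAD) over a `block` x `block` window,
    scanning disparities 0..max_disp along the same scanline (rectified images).
    """
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - h < 0:          # candidate block would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

In practice this per-block search is combined with the volume intersection result, which restricts the disparity range and keeps the matching both fast and precise, as claimed above.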