Volume intersection (VI) is a successful technique for reconstructing 3-D shapes from 2-D images (silhouettes) of multiple views. It consists of intersecting the cones formed by back-projecting each silhouette. The 3-D shape reconstructed by VI is called the visual hull (VH). In this paper we propose a fast method for obtaining the VH. The method reduces the computational cost by using a run-based representation of 3-D objects called the SPXY table, which we previously proposed. It forms cones by back-projecting the 2-D silhouettes into 3-D space through the lens centers and intersects them while keeping the run representation. To intersect the cones of multiple views in this representation, the directions of the runs representing the cones must be aligned. To align them we use our previously proposed method for swapping two axes of a run-represented object at a time cost of O(n), where n is the number of runs. Experiments using VRML objects such as human bodies show that the proposed method can reconstruct a 3-D object in less than 0.17 s at a resolution of 220 × 220 × 220 voxels from a set of silhouettes of 8 viewpoints on a single CPU.
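The SPXY table and the O(n) axis-swapping algorithm are specific to the paper; as an illustration of the underlying idea, here is a minimal Python sketch that intersects cones stored as run lists along a common axis. The toy orthographic two-view setup, and all function and variable names, are assumptions for illustration, not the paper's implementation (which uses perspective back-projection through the lens centers).

```python
def intersect_runs(a, b):
    # Intersect two sorted lists of half-open runs [start, end) in O(n + m),
    # where n and m are the numbers of runs in each list.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))
        # Advance whichever run ends first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def visual_hull(sil_xy, z_runs_per_y, n):
    # Toy orthographic setup: view 1 looks along z, so its cone at (x, y) is
    # the full z-run wherever sil_xy[x][y] is set; view 2 looks along x, so
    # its cone at any x is the run list z_runs_per_y[y].  Both cones use the
    # same run direction (z), so their columns intersect directly.
    hull = {}
    for x in range(n):
        for y in range(n):
            if sil_xy[x][y]:
                runs = intersect_runs([(0, n)], z_runs_per_y[y])
                if runs:
                    hull[(x, y)] = runs
    return hull

n = 8
# Silhouette 1: a 4 x 4 square in the x-y image plane.
sil_xy = [[2 <= x < 6 and 2 <= y < 6 for y in range(n)] for x in range(n)]
# Silhouette 2: a z-band [1, 5) for the same y range, as z-runs per y.
z_runs_per_y = [[(1, 5)] if 2 <= y < 6 else [] for y in range(n)]
hull = visual_hull(sil_xy, z_runs_per_y, n)
voxels = sum(hi - lo for runs in hull.values() for lo, hi in runs)
print(voxels)  # → 64 (a 4 x 4 x 4 block)
```

The hull stays in run form throughout, which is the point of the paper's approach: memory and intersection cost scale with the number of runs rather than the number of voxels.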
Volume intersection is one of the simplest techniques for reconstructing 3D shapes from 2D silhouettes. A 3D shape can be reconstructed from multiple view images by back-projecting them from the corresponding viewpoints and intersecting the resulting solid cones. Reconstruction requires the camera position and orientation (extrinsic camera parameters) of each viewpoint with respect to the object. However, even a small error in the camera parameters makes the reconstructed 3D shape smaller than the one obtained with the exact parameters. The camera-parameter optimization problem dealt with in this paper is to determine good approximations of the parameters from multiple silhouette images and imprecise initial camera parameters. This paper attempts to optimize the camera parameters by reconstructing a 3D shape via volume intersection and reprojecting it onto the image planes; the camera parameters are determined by finding the projected silhouettes that minimize the loss of area compared with the original silhouette images. For relatively large displacements of the camera parameters, we propose repeating the optimization with dilated silhouettes that gradually shrink to the original ones. Experimental results show the effectiveness of this approach.
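As a toy illustration of the coarse-to-fine idea, the Python sketch below registers a silhouette under an assumed 2-D translation standing in for the camera parameters, minimizing a symmetric-difference area loss by hill climbing with a shrinking dilation radius. The loss choice, the translation-only model, and all names are assumptions for illustration, not the paper's method (which optimizes full extrinsic parameters through reconstruction and reprojection).

```python
def dilate(sil, r):
    # Morphological dilation with a (2r+1) x (2r+1) square structuring element.
    n = len(sil)
    return [[any(sil[i][j]
                 for i in range(max(0, y - r), min(n, y + r + 1))
                 for j in range(max(0, x - r), min(n, x + r + 1)))
             for x in range(n)] for y in range(n)]

def shift(sil, dy, dx):
    # Toy "camera parameter": a 2-D translation of the reprojected silhouette.
    n = len(sil)
    return [[0 <= y - dy < n and 0 <= x - dx < n and bool(sil[y - dy][x - dx])
             for x in range(n)] for y in range(n)]

def area_loss(a, b):
    # Symmetric-difference area between two binary silhouettes (an assumed
    # loss; the paper compares projected and original silhouette areas).
    return sum(p != q for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def refine(model, observed, est, radii=(2, 1, 0)):
    # Coarse-to-fine: dilate both silhouettes so a misaligned estimate still
    # overlaps the target, hill-climb the translation, then shrink the
    # dilation radius toward the original silhouettes.
    for r in radii:
        ref = dilate(observed, r)
        while True:
            cur = area_loss(ref, dilate(shift(model, *est), r))
            moves = [(est[0] + dy, est[1] + dx)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            best = min(moves,
                       key=lambda e: area_loss(ref, dilate(shift(model, *e), r)))
            if area_loss(ref, dilate(shift(model, *best), r)) >= cur:
                break  # no strictly better neighbor at this radius
            est = best
    return est

n = 12
model = [[3 <= y < 7 and 3 <= x < 7 for x in range(n)] for y in range(n)]
observed = shift(model, 2, -1)          # ground-truth displacement (2, -1)
print(refine(model, observed, (0, 0)))  # → (2, -1)
```

Starting each stage from dilated silhouettes widens the basin of attraction of the loss, which is the role the gradually shrinking dilated silhouettes play for large parameter displacements in the paper.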