An integral imaging system using a polygon model of a real object is proposed. After the depth and color data of the real object are acquired by a depth camera, a polygon mesh is generated from the initially reconstructed point-cloud model. The elemental image array is then generated from the polygon model and directly reconstructed. Because the polygon model eliminates the gaps between the points of a point-cloud model, the quality of the reconstructed 3-D image is significantly improved. The theory is verified experimentally, and higher-quality images are obtained.
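A minimal sketch of one way an organized depth map can be converted into a triangle mesh, connecting the grid of depth samples into faces so that the empty space between isolated points disappears. The grid-meshing rule and the `max_jump` discontinuity threshold here are illustrative assumptions, not the paper's exact conversion method:

```python
import numpy as np

def depth_grid_to_triangles(depth, max_jump=0.05):
    """Connect adjacent samples of an organized depth map (H x W grid)
    into triangle faces, skipping quads that span a depth discontinuity.
    Meshing the grid fills the gaps a bare point cloud leaves between
    neighboring samples."""
    h, w = depth.shape
    idx = np.arange(h * w).reshape(h, w)   # vertex index per grid sample
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            quad = depth[y:y + 2, x:x + 2]
            # mesh the quad only when its four depths are close together
            if np.ptp(quad) < max_jump:
                a, b = idx[y, x], idx[y, x + 1]
                c, d = idx[y + 1, x], idx[y + 1, x + 1]
                faces.append((a, b, c))    # upper-left triangle
                faces.append((b, d, c))    # lower-right triangle
    return np.asarray(faces)
```

For an H×W depth grid with no discontinuities this yields 2(H−1)(W−1) triangles; dropping quads that cross a large depth jump keeps object silhouettes from being bridged to the background.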
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, however, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. To acquire the depth information all around the object, multiple depth cameras are utilized. The 3D point cloud representations of the real object are then reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model according to the angular step of the given anamorphic optic system. The theory is verified experimentally, and the results show that the proposed 360-degree integral-floating display is an effective way to display a real object in a 360-degree viewing zone.
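The merging step above can be sketched as follows, assuming the rigid camera-to-world transforms are already known (e.g., from calibration or a prior registration step). This is a generic rigid merge, not the paper's special registration method, and the 4×4 extrinsic matrices are an assumed input:

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Transform each depth camera's point cloud into a common world
    frame and concatenate them into one synthetic cloud.
    clouds[i]     : N_i x 3 array of points in camera i's frame
    extrinsics[i] : 4 x 4 camera-to-world matrix (assumed known)"""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # N x 4 homogeneous
        merged.append((homo @ T.T)[:, :3])               # apply transform
    return np.vstack(merged)
```

In practice the extrinsics would come from whatever registration procedure aligns the overlapping views; once all clouds share one frame, elemental image arrays can be rendered from the combined model at each angular step.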
A depth camera is used to capture the depth and color data of real-world objects. Because integral imaging display systems are broadly used, an elemental image array for the captured data must be generated and displayed on a liquid crystal display. We propose a real-time integral imaging display system that uses image processing to simplify the optical arrangement and graphics-processing-unit (GPU) parallel processing to reduce the computation time. The proposed system generates elemental images at a rate of more than 30 fps with a resolution of 1204×1204 pixels, where the size of each display panel pixel is 0.1245 mm, and an array of 30×30 lenses, where each lens is 5×5 mm.
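The per-point work that the GPU parallelizes can be illustrated with a serial NumPy sketch of one common elemental-image mapping: projecting a 3D point through every lens center onto the display plane behind the array. The pinhole-lens geometry and the example parameters are illustrative assumptions, not the authors' exact kernel:

```python
import numpy as np

def point_to_elemental_pixels(point, lens_pitch, n_lens, gap, px):
    """Project one 3D point (x, y, z; z = distance in front of the lens
    array) through each lens center onto the display plane at distance
    `gap` behind the array, returning integer pixel coordinates per
    elemental image (display pixel pitch `px`, all lengths in mm).
    A toy serial version of the per-point kernel a GPU would run in
    parallel over all points and lenses."""
    x, y, z = point
    # lens centers on a regular n_lens x n_lens grid, centered on axis
    c = (np.arange(n_lens) - (n_lens - 1) / 2) * lens_pitch
    cx, cy = np.meshgrid(c, c)
    # similar triangles: ray from the point through each lens center
    u = cx + (cx - x) * gap / z
    v = cy + (cy - y) * gap / z
    return np.round(u / px).astype(int), np.round(v / px).astype(int)
```

Because every (point, lens) pair is independent, this mapping is embarrassingly parallel, which is what lets a GPU implementation sustain the reported rate of more than 30 fps.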
We propose a full-parallax integral imaging display with a 360-degree horizontal viewing angle. Two-dimensional (2D) elemental images are projected by a high-speed DMD projector and integrated into a three-dimensional (3D) image by a lens array. An anamorphic optic system tailors the horizontal and vertical viewing angles of the integrated 3D images so as to obtain a high angular ray density in the horizontal direction and a large viewing angle in the vertical direction. Finally, a mirror screen rotating in synchronization with the DMD projector directs each integrated 3D image to its intended viewing direction. The proposed method thus achieves full-parallax 3D images with a 360-degree horizontal viewing angle and both monocular and binocular depth cues.
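The synchronization between the projector and the rotating mirror fixes the angular step between successive viewing directions. A small sketch of that relation, with purely illustrative numbers (the abstract does not state frame rates or rotation speeds):

```python
def angular_step_deg(dmd_fps, mirror_rps):
    """Angular step between successive 3D-image directions when the
    mirror screen spins at `mirror_rps` revolutions per second and the
    DMD projector outputs `dmd_fps` elemental-image frames per second.
    Assumes one image direction per projected frame."""
    frames_per_rev = dmd_fps / mirror_rps
    return 360.0 / frames_per_rev
```

For example, a DMD running at 3600 fps with the mirror at 10 revolutions per second would present 360 directions per revolution, i.e. a 1-degree angular step; a finer step requires a faster projector or a slower mirror.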
We propose a new synthesis method for the hologram of 3D objects using multiple orthographic view images captured by a lens array. The 3D objects are captured through a lens array under ordinary incoherent illumination, and their multiple orthographic view images are generated from the captured image. Each orthographic view image is numerically multiplied by the plane wave propagating in the direction of the corresponding projection angle and integrated into a single complex value, which constitutes one pixel of the synthesized hologram. By repeating this process for all orthographic view images, we can generate the Fourier hologram of the 3D objects. Since the proposed method generates the hologram not from interference with a reference beam but from the multiple view images, a coherent system is not required. The 3D information of the objects can also be easily manipulated in the proposed method: by manipulating the coordinate information of each orthographic view image according to the corresponding view angle, the depth order of the reconstructed 3D object can be controlled.
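The plane-wave multiplication and integration described above can be sketched directly in NumPy. The sample pitch, wavelength, and the assumption of a square grid of views are illustrative; the sketch only shows the per-view reduction to one complex hologram pixel:

```python
import numpy as np

def fourier_hologram(views, angles, pitch, wavelength):
    """Synthesize a Fourier hologram from orthographic view images.
    views[k]  : H x W orthographic view image (real-valued)
    angles[k] : (theta_x, theta_y) projection angles in radians
    pitch     : sample spacing in the view image (same units as wavelength)
    Each view is multiplied by the plane wave propagating along its
    projection direction and summed to a single complex value, which
    becomes one pixel of the hologram (views assumed to form a square
    grid)."""
    n = len(views)
    side = int(round(np.sqrt(n)))
    holo = np.empty(n, dtype=complex)
    h, w = views[0].shape
    ys, xs = np.mgrid[0:h, 0:w] * pitch
    for k, (img, (tx, ty)) in enumerate(zip(views, angles)):
        # tilted plane wave for this projection direction
        carrier = np.exp(1j * 2 * np.pi *
                         (np.sin(tx) * xs + np.sin(ty) * ys) / wavelength)
        holo[k] = np.sum(img * carrier)   # integrate to one complex pixel
    return holo.reshape(side, side)
```

Since the views are captured under incoherent light and the carrier waves are applied numerically, no physical reference beam or coherent optics enter the capture stage; shifting each view's coordinates before this step is what reorders the reconstructed depth.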