In evaluating three-dimensional images, the depth resolution that an image can provide is an important parameter defining image quality. The depth resolution obtainable with three-dimensional images is determined mainly by the parameters of the cameras used for multiview image acquisition and by the resolution of the viewer's eye.
The depth resolution and the achievable image depth range of the parallel, toed-in, and sliding-aperture camera configurations used for multiview image acquisition in 3-dimensional imaging systems are calculated by assuming that the focusing beam is diffraction limited and that the pixel pitch of the imaging sensor limits the image spot size. The calculation reveals that these parameters are essentially the same for all configurations considered. A comparison with a hologram whose size corresponds to the aperture of the camera objective shows that the hologram provides better depth-resolved images than the multiview systems. The depth resolution of images in 3-D imaging systems is further reduced by the resolution of the viewer's eye; the amount of the reduction is proportional to the number of picture elements within the eye's resolution spot.
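For the parallel configuration, the scale of the depth resolution can be illustrated with the standard stereo triangulation relation, where disparity is quantized to one pixel pitch. This is a hedged sketch; the function name and the numerical values below are illustrative assumptions, not values from the paper.

```python
def depth_resolution(z, focal_length, baseline, pixel_pitch):
    """Smallest resolvable depth step at distance z for a parallel
    stereo pair, assuming disparity is quantized to one pixel pitch.

    From the standard relation z = f*b/d, a disparity error of one
    pixel pitch p maps to a depth error of roughly z^2 * p / (f * b).
    All lengths are in metres.
    """
    return z**2 * pixel_pitch / (focal_length * baseline)

# Illustrative numbers: f = 25 mm, baseline 65 mm, 5 um pixel pitch,
# object at 1 m gives a depth resolution of about 3 mm.
dz = depth_resolution(1.0, 0.025, 0.065, 5e-6)
```

Note the quadratic growth with distance: doubling the object distance quadruples the smallest resolvable depth step, which is why depth resolution degrades quickly for far scene content.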
We are developing a next-generation 3D display which offers natural 3D images by projecting a number of directional images. We have already demonstrated a prototype 3D display which generates 64 directional images simultaneously by using 64 LCD panels. In this study we developed a 3D interactive processor which can process 64 directional images simultaneously by using a PC cluster. The PC cluster consisted of 64 small PCs. An image synchronization mechanism was implemented to update the 64 directional images at the same time. We defined commands to control the system and an API for programming it. The processor can display moving 3D images at 30 fps, and the 3D images can be updated interactively with a 3D mouse. We also developed interpreter programs for DXF and VRML data so that the 3D interactive processor can handle widely used 3D data in the DXF and VRML formats.
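The idea behind the synchronization mechanism can be illustrated with a minimal in-process sketch: every renderer prepares its next directional image, then all of them wait at a barrier before swapping, so no image is updated ahead of the others. This is an assumption-laden stand-in (the actual system synchronizes 64 separate PCs over a network, not threads in one process, and the names below are invented for illustration).

```python
import threading

NUM_NODES = 4          # the prototype used 64 rendering PCs; 4 here for brevity
frames_shown = []
lock = threading.Lock()

# All nodes wait at the barrier; the last arrival releases everyone,
# so every directional image is swapped in the same frame interval.
barrier = threading.Barrier(NUM_NODES)

def render_node(node_id, num_frames):
    for frame in range(num_frames):
        # ... render directional image `frame` for this node here ...
        barrier.wait()                 # synchronize the buffer swap
        with lock:
            frames_shown.append((frame, node_id))

threads = [threading.Thread(target=render_node, args=(i, 3))
           for i in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Because each node re-enters the barrier before the next frame, no
# node can display frame k+1 until all nodes have finished frame k.
```

In a networked version, the barrier would be replaced by ready/swap messages between the rendering PCs and a master, but the ordering guarantee is the same.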
A large-screen 2D display used at stadiums and theaters consists of a number of pixel modules, each usually containing 8x8 or 16x16 LED pixels. In this study we develop a 3D pixel module in order to construct a large-screen 3D display that is glasses-free and provides motion parallax. This configuration dramatically reduces the complexity of wiring 3D pixels. The 3D pixel module consists of several LCD panels, several cylindrical lenses, and one small PC. The LCD panels are slanted to differentiate the distances from same-color pixels to the axis of the cylindrical lens, so that rays from same-color pixels are refracted into different horizontal directions by the lens. We constructed a prototype 3D pixel module consisting of 8x4 3D pixels. The prototype module is designed to display 300 different patterns into different horizontal directions with a horizontal display-angle pitch of 0.099 degrees. The LCD panels are controlled by a small PC, and the 3D image data is transmitted over Gigabit Ethernet.
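The ray-steering geometry can be sketched with a thin-lens model: a pixel displaced from the cylindrical lens axis emits light that leaves the lens at an angle set by the offset over the focal length. The focal length and offset pitch below are illustrative assumptions chosen to land near the quoted angular pitch; they are not taken from the paper.

```python
import math

def ray_angle_deg(offset, focal_length):
    """Horizontal ray direction produced by a cylindrical lens for a
    pixel displaced `offset` from the lens axis (thin-lens model).
    Slanting the LCD panels varies this offset from pixel to pixel,
    so each pixel's light leaves in a slightly different direction.
    """
    return math.degrees(math.atan2(offset, focal_length))

# Assumed numbers: with f = 29 mm, an offset pitch of ~50 um gives an
# angular pitch close to the 0.099 degrees quoted for the prototype.
pitch = ray_angle_deg(50e-6, 0.029)
```

For small angles the relation is nearly linear (theta ≈ offset / f), so a uniform offset pitch along the slanted panel yields a uniform angular pitch between directional views.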
We propose a software-based minimum-time vergence control scheme (MTVCS) for a parallel-axis stereoscopic camera (PASC). First, a global horizontal disparity is estimated using modified binocular energy models and stereoscopic images transformed via the Radon transform with a specified angle parameter. Second, from the estimated global disparity, the actual disparity command is derived through a nonlinear function such that the resulting horizontal disparity matches the command exactly in the minimum control time. Experimental results show that the proposed MTVCS achieves better tracking and regulation performance than the previous scheme.
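The global-disparity stage can be illustrated with a simplified stand-in: a vertical projection (the Radon transform evaluated at a 90-degree angle reduces to column sums) collapses each view to a 1-D profile, and the horizontal shift that best aligns the two profiles is taken as the global disparity. The function below and its correlation search are illustrative assumptions, not the paper's binocular-energy estimator.

```python
import numpy as np

def global_disparity(left, right, max_d=16):
    """Illustrative global horizontal disparity estimate.

    Column sums act as a vertical Radon projection, collapsing each
    image to a 1-D profile; the disparity is the circular shift of the
    right profile that maximizes correlation with the left profile.
    """
    pl = left.sum(axis=0).astype(float)
    pr = right.sum(axis=0).astype(float)
    pl -= pl.mean()
    pr -= pr.mean()
    best, best_score = 0, -np.inf
    for d in range(-max_d, max_d + 1):
        score = float(np.dot(pl, np.roll(pr, d)))
        if score > best_score:
            best, best_score = d, score
    return best

# Synthetic check: the right view is the left view shifted by 5 pixels,
# so aligning it back requires a shift of -5.
rng = np.random.default_rng(0)
L = rng.random((64, 128))
R = np.roll(L, 5, axis=1)
d = global_disparity(L, R)
```

Working on 1-D projections instead of full images is what makes this kind of estimator cheap enough for per-frame vergence control.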
Since the development of high-brightness blue and green LEDs, the use of outdoor commercial LED displays has been increasing. Because of their high brightness, good visibility, and long-term durability in outdoor weather, LED displays are a preferred technology for outdoor installations such as stadiums, street advertising, and billboards. This paper deals with a large stereoscopic full-color LED display that uses a parallax barrier. We discuss optimization of the viewing area, which depends on the LED arrangement. An enlarged viewing area has been demonstrated using a 3-in-1 chip LED panel, which has wider black regions than ordinary LED lamp-cluster panels. We have developed a real-time system for measuring a viewer's position and used it to evaluate the performance of different stereoscopic LED display designs, including conventional designs that provide multiple perspective images and designs that eliminate pseudoscopic viewing areas. To show real-world images, it is necessary to capture stereo images, process them, and display them in real time. We have developed an active binocular camera and demonstrated real-time display of stereoscopic movies with real-time convergence control.
This paper presents a novel method for an all-around display system that shows three-dimensional stereo images without any special goggles. The system needs only a directional-reflection screen, mirrors, and a standard projector. The basic concept is to exploit the afterimage phenomenon that occurs when the screen is spinning. The key to this approach is to make a directional-reflection screen with a limited viewing angle and project images onto it. The projected image is composed of 24 images of an object taken from 24 different angles. By reconstructing this image, a three-dimensional object can be displayed on the screen. The system can present computer graphics, photographs, full-length movies, and so on. Several display examples demonstrate that the system will be useful in applications such as guide displays in public places and facilities.
This paper describes recent advances in a number of R&D areas that are believed to provide key technologies for the further development of a novel digital broadcast 3D-TV system. The results presented are part of the outcome of the European IST project ATTEST (Advanced Three-Dimensional Television System Technologies), a two-year research initiative that concluded in March 2004. The paper covers essential parts of the envisaged 3D signal-processing chain, such as the real-time generation of "virtual" stereoscopic views from monoscopic color video and associated per-pixel depth information, as well as the efficient compression and backwards-compatible transmission of this advanced data representation using state-of-the-art video coding standards: MPEG-2 for the color data and MPEG-4 Visual or Advanced Video Coding for the depth data. Furthermore, the paper describes the development of a new single-user autostereoscopic 3D-TV display (Free2C). This novel, high-quality 3D device uses a lenticular lens raster to separate two individual perspective views, which are presented simultaneously on an underlying LC panel. To provide the user with satisfying 3D reproduction within a sufficiently large viewing area - a major problem for many state-of-the-art autostereoscopic 3D displays - the lenticular is continuously readjusted according to the viewer's actual head position, which is measured by a highly accurate, video-based tracking system. This approach allows the viewing distance to vary within a range of 400 mm to 1100 mm and permits horizontal head movements within a range of about ±30°. The feasibility of the new 3D-TV concept is demonstrated through extensive human-factors evaluations of the described algorithms and components.
Conventional integral imaging systems utilize lenslet arrays with fixed focal lengths and aperture sizes. In this paper, we propose a time-multiplexed integral imaging method that enhances both the depth of focus and the resolution of a three-dimensional image by displaying it with an array of lenslets having different focal lengths and aperture sizes. The non-uniform lenslet parameters (focal lengths and aperture sizes) required by the method are calculated. Our theoretical analysis indicates that significant improvements in both depth of focus and resolution can be obtained with the proposed technique. To the best of our knowledge, this is the first report on integral imaging systems that use lenslets with non-uniform focal lengths and aperture sizes.
An afocal lens array is proposed for forming three-dimensional (3D) images. The array, composed of many afocal optical units, forms an image whose depth position depends on the angular magnification of the units. The location of an image formed by the whole array differs from that of an image formed by a single afocal unit, except when the angular magnification is 1.0. In particular, when the angular magnification is negative, the optical image has a negative longitudinal magnification, i.e., it is a 3D image with inverted depth. When used for integral imaging, the array can control the depth position and avoid pseudoscopic images with reversed depth.
Recently, because the high-speed network system known as broadband internet has become widespread, it is now possible to transmit various forms of multimedia data over it. Several studies have reported the transmission of visual images by network streaming; however, there have been few reports on the transmission of holographic 3D movies. We present a process for transmitting holographic 3D movies that adopts network-streaming techniques, using holograms in which 3D objects are recorded as fringe patterns. Applying this method, we achieved excellent transmission of holographic 3D movies and reconstructed good holographic images from the transmitted streaming data. These results suggest that new processes for transmitting 3D moving-image data can be developed from well-known conventional techniques.
In-line digital holography using a quarter-wave plate and an averaging technique is described in this paper. The in-line digital holographic scheme inherently suffers from DC and conjugate-image noise terms. This problem can be overcome by the phase-shifting technique, which usually requires four digital holograms; however, capturing four successive holograms makes the method less robust in relatively poor environments than the one-shot off-axis scheme. This study concerns a two-exposure method that employs only a quarter-wave plate, which produces a 90-degree phase difference between two successive holograms. The two-exposure method provides conjugate-free reconstruction, and the DC terms can be minimized by applying an averaging technique to the object DC term. A detailed explanation is given of how to reduce the DC-term noise and how the intensity levels of the reference and object waves affect the reconstructed image quality. Although the reconstructed image has slightly more undesired background noise than the conventional four-frame method, the method has the benefit of requiring only two holograms to eliminate the DC and conjugate terms. Numerically reconstructed results show the feasibility of the two-exposure method, which may provide a more robust approach for the on-axis scheme.
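The two-exposure combination can be sketched as follows. With a 90-degree reference phase shift, H0 = |R|^2 + |O|^2 + R*O + RO* and H90 = |R|^2 + |O|^2 - iR*O + iRO*, so (H0 - DC) + i(H90 - DC) = 2R*O and the conjugate (twin-image) term cancels; the DC term is approximated by the spatial mean of each exposure. This is an illustrative model with a unit plane reference wave and a synthetic object wave, not the authors' code.

```python
import numpy as np

def complex_field(h0, h90):
    """Combine two in-line holograms taken with a 90-degree reference
    phase shift into a conjugate-free complex field. The DC term of
    each exposure is approximated by its spatial mean (the averaging
    technique), so the result approximates 2 * conj(R) * O.
    """
    return (h0 - h0.mean()) + 1j * (h90 - h90.mean())

# Synthetic check with a unit plane reference R = 1 and a weak object
# wave O of constant amplitude 0.1 and random phase.
rng = np.random.default_rng(1)
O = 0.1 * np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))
R = np.ones((32, 32))
H0 = np.abs(R + O) ** 2          # first exposure
H90 = np.abs(1j * R + O) ** 2    # second exposure, reference shifted 90 deg
U = complex_field(H0, H90)       # should be close to 2 * O here
```

The residual error of the mean-based DC removal shrinks as the hologram gets larger, since the spatial mean of a random-phase object wave tends to zero.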
This paper describes a method of generating holograms by calculation from an image captured with the integral photography (IP) technique. To reduce the computational load of hologram generation, a new algorithm is proposed that shifts the optical field along the exit plane of the microlenses in a lens array. We also explain the aliasing that occurs when a hologram is generated from IP, and suggest an elemental-image size and microlens focal length at which aliasing does not occur. Finally, we use the algorithm to calculate a hologram from an IP image of a real object captured with an IP camera, confirming by optical reconstruction that a three-dimensional image can be formed from the hologram.
To develop an HMD-type holographic 3D-TV with full parallax, it is useful to study various virtual-image reconstruction techniques thoroughly. In this paper, a new full-color electro-holographic display system equipped with LEDs is presented. A virtual-image reconstruction technique is applied, and full-color, relatively high-contrast 3D images are obtained with this system. We consider the conditions for realizing an HMD-type holographic 3D-TV using this technique, because it is well suited to HMD systems.
This article describes a method for the formal engineering design synthesis of three-dimensional (3D) autostereoscopic displays. A formal description of the design synthesis is provided that converges toward completely automated design synthesis. To estimate whether a computer program and/or human designer using the described methodology could synthesize one or more 3D display designs of satisfactory quality in reasonable time, an example of design synthesis spanning several iterations is provided; it demonstrates that the merit function of the designs converges sufficiently during synthesis and that the quality of the best design solutions generated by the methodology is satisfactory. The optimization minimizes crosstalk and aberrations in the displayed image and maximizes the number of different views of the 3D image. Several 3D displays were designed as a result of this effort. For one class of 3D displays, image quality was also improved by decreasing the sample size in the 3D image without increasing crosstalk.
In this paper, we present a method for generating high-resolution dynamic 3D objects from multi-viewpoint images. Such a dynamic 3D object can display fine images of a moving human body from arbitrary viewpoints and consists of 3D models of the subject generated for each video frame. To create a high-resolution dynamic 3D object, we propose a 3D-model-generation method based on multi-viewpoint images. The method uses stereo matching to refine an approximate 3D model obtained by the volume intersection method. Furthermore, to reproduce high-resolution textures, we have developed a new technique that determines the visibility of the vertices and polygons of the 3D models. A modeling experiment with 19 FireWire cameras confirmed that the proposed method effectively generates high-resolution dynamic 3D objects.
In the technique called depth-image-based rendering, new images are generated from an original source image and its corresponding depth map such that the new images appear to have been taken from different camera viewpoints. This technique is bandwidth-efficient and ideal for multiview display systems such as autostereoscopic 3D-TV. In a previous study, we demonstrated that uniform smoothing of depth maps through Gaussian filtering helps improve the image quality of the rendered images. In the present study we investigated the potential benefits of two non-uniform smoothing methods: asymmetric smoothing, where the horizontal extent of smoothing is smaller than the vertical, and adaptive smoothing, where the level and extent of smoothing are based on the local depth magnitude. Ten viewers assessed the image quality and depth quality of four stereoscopic images in which the view presented to one eye was an image rendered with one of three smoothing methods: uniform, asymmetric, or adaptive. The experimental results showed an improvement in image-quality ratings for all three methods as the level of smoothing increased. The results also indicated a slight advantage in image quality for asymmetric smoothing over the other two methods. Ratings of overall depth quality were significantly higher than for the corresponding non-stereoscopic references for all three methods, although the ratings decreased at the highest level of smoothing used in the present study. In general, depth-quality ratings tended to be marginally lower for the asymmetric method.
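The uniform and asymmetric smoothing variants can be sketched with a separable Gaussian whose horizontal and vertical strengths are set independently. This is a plain-NumPy illustration with assumed parameter values, not the study's processing pipeline.

```python
import numpy as np

def _gauss_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def _blur1d(a, k):
    """1-D convolution with edge-replicating padding (output length == input)."""
    r = len(k) // 2
    return np.convolve(np.pad(a, r, mode='edge'), k, mode='valid')

def smooth_depth(depth, sigma_y, sigma_x):
    """Separable Gaussian smoothing of a depth map with independent
    vertical (sigma_y) and horizontal (sigma_x) strengths. Setting
    sigma_x < sigma_y gives asymmetric smoothing; equal sigmas give
    the uniform case.
    """
    out = np.apply_along_axis(_blur1d, 0, depth.astype(float), _gauss_kernel(sigma_y))
    out = np.apply_along_axis(_blur1d, 1, out, _gauss_kernel(sigma_x))
    return out

# Impulse response check: with sigma_y > sigma_x, the energy of a
# single depth spike spreads farther vertically than horizontally.
imp = np.zeros((21, 21))
imp[10, 10] = 1.0
out = smooth_depth(imp, sigma_y=3.0, sigma_x=1.0)
```

The motivation for the asymmetric variant is that rendering shifts pixels only horizontally, so strong horizontal smoothing distorts disparity more than vertical smoothing does.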
A technique to improve the image quality of stereoscopic pictures generated from depth maps (depth image based rendering or DIBR) is examined. In general, there are two fundamental problems with DIBR: a depth map could contain artifacts (e.g., noise or "blockiness") and there is no explicit information on how to render newly exposed regions ("holes") in the rendered image as a result of new virtual camera positions. We hypothesized that smoothing depth maps before rendering will not only minimize the effects of noise and distortions in the depth maps but will also reduce areas of newly exposed regions where potential artifacts can arise. A formal subjective assessment of four stereoscopic sequences of natural scenes was conducted with 23 viewers. The stereoscopic sequences consisted of source images for the left-eye view and rendered images for the right-eye view. The depth maps were smoothed with a Gaussian blur filter at different levels of strength before depth image based rendering. Results indicated that ratings of perceived image quality improved with increasing levels of smoothing of the depth maps. Even though the depth maps were smoothed, a negative effect on ratings of overall perceived depth quality was not found.
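The link between depth-map smoothing and newly exposed regions can be illustrated with a toy DIBR warp: a sharp depth step maps to one wide hole in the rendered view, while a pre-smoothed ramp covering the same depth range breaks it into scattered single-pixel holes that are far easier to conceal. This is a simplified sketch under assumed parameters (integer disparities, no occlusion-compatible ordering), not the rendering pipeline used in the study.

```python
import numpy as np

def render_view(image, depth, max_disp=8):
    """Toy depth-image-based rendering: shift each pixel horizontally
    by a disparity proportional to its normalized depth. Target pixels
    that no source pixel maps to remain marked as holes (-1).
    """
    h, w = depth.shape
    out = np.full((h, w), -1.0)
    disp = np.rint(depth / depth.max() * max_disp).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

def max_hole_width(rendered):
    """Widest horizontal run of hole pixels in the rendered view."""
    best = 0
    for row in (rendered == -1.0):
        run = 0
        for v in row:
            run = run + 1 if v else 0
            best = max(best, run)
    return best

# A sharp depth step versus a smoothed ramp over the same depth range.
h, w = 8, 64
img = np.tile(np.arange(w, dtype=float), (h, 1))
step = np.zeros((h, w))
step[:, w // 2:] = 1.0
ramp = np.tile(np.clip((np.arange(w) - 24) / 16.0, 0.0, 1.0), (h, 1))
wide = max_hole_width(render_view(img, step))    # one wide hole
narrow = max_hole_width(render_view(img, ramp))  # single-pixel holes only
```

The total disoccluded area is set by the depth range either way; smoothing changes its distribution, turning one conspicuous gap into many tiny ones.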