In this paper, a new autostereoscopic display method, named the Dual-Layer Parallax Barrier (DLPB) method, is introduced to overcome the limitation of a fixed viewing zone. Compared with conventional parallax barrier methods, the proposed DLPB method uses moving parallax barriers so that the stereoscopic view changes with the viewer's movement. In addition, it provides seamless stereoscopic views without abrupt changes in perceived 3D depth at any eye position. We implement a prototype of the DLPB system, which consists of a switchable dual-layered Twisted Nematic Liquid Crystal Display (TN-LCD) and a head tracker. The head tracker employs a video camera to capture images and is used to calculate the angle between the eye gazing direction and its projection onto the display plane. According to the head tracker's control signal, the dual-layered TN-LCD adaptively switches the direction of the viewing zone via a solid-state analog switch. The experimental results demonstrate that the proposed autostereoscopic display maintains seamless 3D views even while the viewer's head is moving. Moreover, its extension to mobile devices such as portable multimedia players (PMPs), smartphones, and cellular phones is discussed as well.
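The viewing-angle computation that drives the barrier switching can be sketched as follows. This is a minimal geometric illustration only: the coordinate convention (display-centered, with the display normal along +z), the function names, and the threshold-based switch rule are assumptions, not the paper's actual head-tracking implementation.

```python
import math

def viewing_angle(eye_pos, normal=(0.0, 0.0, 1.0)):
    """Angle (degrees) between the gaze direction from the eye toward the
    display center and that direction's projection onto the display plane.
    eye_pos is given in display-centered coordinates (illustrative setup)."""
    g = tuple(-c for c in eye_pos)  # gaze vector: eye -> display center
    dot_n = sum(gc * nc for gc, nc in zip(g, normal))
    proj = tuple(gc - dot_n * nc for gc, nc in zip(g, normal))
    p_len = math.sqrt(sum(c * c for c in proj))
    # atan2 handles the head-on case (p_len == 0 -> 90 degrees) cleanly
    return math.degrees(math.atan2(abs(dot_n), p_len))

def barrier_state(angle_deg, threshold=45.0):
    """Toy control rule standing in for the analog-switch signal:
    choose which barrier layer to activate based on the viewing angle."""
    return 1 if angle_deg < threshold else 0
```

A lateral head movement changes the angle continuously, so the switch rule can flip the active barrier layer without a visible jump in the stereoscopic view.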
In this paper, we propose a new way to overcome stereoscopic depth distortion, a common shortcoming of stereoscopy based on computer graphics (CG). The idea is to transform the object space into a pre-distorted space so that the perceived depth is correct, as if one were viewing a scaled object volume adjusted to the user's stereoscopic viewing conditions. All parameters related to the distortion, such as the focal length, the inter-camera distance, the inner angle between the camera axes, the display size, the viewing distance, and the eye separation, can be altered by the amount of inverse distortion in the transformed object space, owing to the linear relationship between the reconstructed image space and the object space. The depth distortion is thus removed after the image reconstruction process with the distorted object space. We prepared a stereo image with correctly scaled depth from -200 mm to +200 mm at 100 mm intervals relative to the display plane under a standard stereoscopic viewing condition and showed it to five subjects. All subjects recognized and indicated the
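The core viewer-side relation behind such depth correction can be illustrated with the standard stereoscopic geometry. This is a sketch of the textbook parallax/perceived-depth mapping only; the paper's full distortion model, which also involves the camera-side parameters listed above, is not reproduced here.

```python
def parallax_for_depth(Z, V, e):
    """Screen parallax p that makes a point appear at depth Z relative to
    the display plane (Z > 0 behind, Z < 0 in front), for viewing distance V
    and eye separation e. From similar triangles: Z = V*p/(e - p), hence
    p = e*Z/(V + Z). Standard stereoscopic geometry, used here to show how
    a target depth can be inverted into the required image-space offset."""
    return e * Z / (V + Z)

def perceived_depth(p, V, e):
    """Inverse mapping: perceived depth for a given screen parallax."""
    return V * p / (e - p)
```

Pre-distorting the object space so that the rendered parallax equals `parallax_for_depth(Z, V, e)` for the intended depth `Z` is what makes the perceived depth come out correctly scaled.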
We present a depth-map-based disparity estimation algorithm using a multi-view and depth camera system. When many objects are arranged in 3D space over a long depth range, the disparity search range must be large enough to find all correspondences. In this case, traditional disparity estimation algorithms that use a fixed disparity search range often produce mismatches when pixels with similar color distributions and similar textures lie along the epipolar line. To reduce the probability of mismatches and to save computation time, we propose a novel depth-map-based disparity estimation algorithm that uses a depth map captured by the depth camera to set both the mid-point and the extent of the disparity search range adaptively. The proposed algorithm first converts the depth map into disparities for the stereo image pair to be matched, using calibrated camera parameters. Next, we set the disparity search range for each pixel based on the converted disparity. Finally, we estimate a disparity for each pixel between the stereo images. Simulation results with various test data sets demonstrate that the proposed algorithm outperforms the other algorithms in terms of smoothness, global quality, and computation time.
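The two steps above, converting depth to a disparity prior and then matching within a narrowed range, can be sketched as follows. The conversion `d = f*B/Z` assumes a rectified pair with known focal length and baseline; the SAD cost, window size, and margin are illustrative stand-ins for the paper's actual matching details.

```python
import numpy as np

def depth_to_disparity(depth, focal, baseline):
    """Convert depth (e.g. mm) to disparity (px) for a rectified stereo
    pair: d = f * B / Z. Assumes calibrated focal length and baseline."""
    return focal * baseline / np.maximum(depth, 1e-6)

def match_pixel(left, right, y, x, d_center, margin, win=2):
    """SAD block matching restricted to [d_center - margin, d_center + margin],
    i.e. the search range is centered on the depth-camera prior instead of
    scanning the whole epipolar line."""
    h, w = left.shape
    best_d, best_cost = d_center, np.inf
    patch = left[max(y - win, 0):y + win + 1, max(x - win, 0):x + win + 1]
    for d in range(max(int(d_center - margin), 0), int(d_center + margin) + 1):
        xr = x - d
        if xr - win < 0 or xr + win >= w:
            continue  # candidate window would leave the image
        cand = right[max(y - win, 0):y + win + 1, xr - win:xr + win + 1]
        if cand.shape != patch.shape:
            continue
        cost = np.abs(patch.astype(float) - cand.astype(float)).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Because only `2*margin + 1` candidates are evaluated per pixel, the cost of matching drops roughly in proportion to how much the prior narrows the range, which is where the reported computation-time saving comes from.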
This paper presents a novel multi-depth-map fusion approach for 3D scene reconstruction. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusions and homogeneous areas. On the other hand, the depth map obtained from a depth camera is globally accurate but noisy, and it covers only a limited depth range. To combine the strengths of these two methods while compensating for their weaknesses, we propose a depth map fusion method that fuses the multiple depth maps obtained from stereo matching and from the depth camera. Using a 3-view camera system that includes a depth camera for the center view, we first obtain 3-view images and a depth map from the center-view depth camera. Then we calculate camera parameters by camera calibration. Using the camera parameters, we rectify the left- and right-view images with respect to the center-view image so that the well-known epipolar constraint is satisfied. Using the center-view image as a reference, we obtain two depth maps by stereo matching between the center-left image pair and the center-right image pair. After preprocessing each depth map, we pick an appropriate depth value for each pixel from the processed depth maps based on depth reliability. Simulation results obtained by the proposed method show improvements in some background regions.
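The final per-pixel selection step can be sketched with a simple rule. The abstract does not specify the reliability measure, so the rule below (trust the average of the two stereo estimates where they agree, fall back to the depth camera elsewhere) is an illustrative assumption, as is the agreement tolerance `tol`.

```python
import numpy as np

def fuse_depth(d_left, d_right, d_cam, tol=10.0):
    """Per-pixel fusion of the center-left and center-right stereo depth
    maps with the depth-camera map. Where the two stereo estimates agree
    within `tol`, they are considered reliable and averaged; elsewhere the
    (globally accurate but noisy) depth-camera value is used instead."""
    d_left, d_right, d_cam = (np.asarray(a, float) for a in (d_left, d_right, d_cam))
    agree = np.abs(d_left - d_right) < tol
    return np.where(agree, 0.5 * (d_left + d_right), d_cam)
```

Disagreement between the two stereo maps is a cheap proxy for occlusion or texture-poor regions, which is exactly where stereo matching tends to fail and the active depth sensor is more trustworthy.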
In stereoscopic television, there is a trade-off between visual comfort and 3D impact with respect to the baseline-stretch of the 3D camera. It has been reported that an optimal condition is reached when the baseline-stretch is set at about the distance between the human pupils [1]. However, such a distance cannot be achieved when the lens and CCD module are large. To overcome this limitation, we attempt to control the baseline-stretch of a stereoscopic camera by synthesizing virtual views at the desired position between the two cameras. The proposed technique is based on stereo matching and view synthesis. We first obtain a dense disparity map using hierarchical stereo matching with an edge-adaptive shifted window, and then synthesize virtual views using the disparity map. Simulation results with various stereoscopic images demonstrate the effectiveness of the proposed technique.
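The view-synthesis step can be sketched as a forward warp that shifts each pixel by a fraction of its disparity, emulating a camera at that fraction of the baseline. This is a minimal splatting sketch under that assumption; a practical implementation (and the paper's method) would also need occlusion ordering and hole filling, which are omitted here.

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp the left view by alpha * disparity (0 <= alpha <= 1)
    to emulate a camera placed at alpha * baseline toward the right view.
    Returns the warped image and a mask of pixels that received a value;
    unfilled pixels are the holes a real system would inpaint."""
    h, w = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - alpha * disparity[y, x]))
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
    return out, filled
```

Setting `alpha` continuously between 0 and 1 is what lets the effective baseline be shrunk below the physical camera separation, e.g. down to roughly the interpupillary distance.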
Current binocular stereoscopic displays cause visual discomfort when objects with large disparities are present in the scene. Improvements in visual comfort have been reported when far backgrounds and foregrounds in the scene are blurred; however, such blurring degrades overall image quality. To lessen the visual discomfort caused by large disparities while maintaining high perceived image quality, we use a novel disparity-based asymmetrical filtering technique. Asymmetrical filtering, which refers to filtering applied to the image of one eye only, has been shown to maintain the sharpness of a stereoscopic image, provided that the amount of filtering is low. Disparity-based asymmetrical filtering uses the disparity information in a stereoscopic image to control the severity of blurring. We investigated the effects of this technique on stereoscopic video by measuring visual comfort and apparent sharpness. Our results indicate that disparity-based asymmetrical filtering does not always improve visual comfort, but it does maintain image quality.
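The idea of driving the filter strength from disparity can be sketched as below. The abstract does not specify the filter kernel or the comfort threshold, so the box blur, the threshold `d_comfort`, and the radius schedule are all illustrative assumptions; only one eye's image is ever filtered, which is the defining property of the technique.

```python
import numpy as np

def asymmetric_blur(img, disparity, d_comfort=10.0, max_radius=3):
    """Blur applied to a single eye's image, with strength driven by the
    local disparity magnitude: pixels within the comfort range stay sharp,
    and the blur radius grows with the disparity excess beyond it."""
    img = np.asarray(img, float)
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            excess = abs(disparity[y, x]) - d_comfort
            if excess <= 0:
                continue  # comfortable disparity: leave the pixel sharp
            r = min(int(np.ceil(excess / d_comfort)), max_radius)
            ys, xs = max(y - r, 0), max(x - r, 0)
            out[y, x] = img[ys:y + r + 1, xs:x + r + 1].mean()
    return out
```

Because the other eye's image is left untouched, binocular fusion preserves much of the apparent sharpness even in the blurred regions, which is why the technique can maintain image quality while the blur reduces the salience of large disparities.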