We propose a calibration method for automotive augmented reality head-up displays (AR-HUDs) based on a chessboard pattern and warping maps. The HUD is modeled as a pinhole camera whose intrinsic parameters are determined with a stereo method. We select several viewpoints within the driver’s eye box and place a smartphone at each of them in sequence; its position is sensed by a head tracker. By automatically shifting 2D points on the HUD virtual image onto the 3D chessboard corners within the view of the smartphone camera, we obtain a set of 2D–3D correspondences and then compute view-dependent extrinsic parameters. Using these parameters, we reproject the chessboard corners back onto the virtual image. Comparing the results with the measured virtual points yields 2D distributions of biases, from which we reconstruct a series of warping maps that compensate for optical distortions. For any other viewpoint in the eye box, the corresponding extrinsic parameters and warping maps are obtained by interpolation. Our method outperforms existing approaches in terms of both modeling complexity and experimental workload. The reprojection errors at a distance of 7.5 m fall within a few millimeters, indicating high augmentation accuracy. In addition, we calibrate the head tracker using the acquired extrinsic parameters and the viewpoint tracking results.
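The reprojection step at the heart of this calibration can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intrinsics `K`, the synthetic 3×3 corner grid at 7.5 m, and the simulated "measured" offset are all assumptions introduced for the example.

```python
import numpy as np

# Hypothetical pinhole model of the HUD: intrinsics K (assumed values) and
# view-dependent extrinsics (R, t) map 3D chessboard corners into the HUD
# camera frame before perspective projection onto the virtual image.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, R, t):
    """Project Nx3 world points onto the virtual image plane."""
    cam = points_3d @ R.T + t          # world -> camera frame
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

# A small synthetic 3x3 grid of chessboard corners 7.5 m ahead (assumed layout).
xs, ys = np.meshgrid(np.linspace(-0.3, 0.3, 3), np.linspace(-0.2, 0.2, 3))
corners = np.stack([xs.ravel(), ys.ravel(), np.full(9, 7.5)], axis=1)

R = np.eye(3)                          # identity extrinsics for the sketch
t = np.zeros(3)
reprojected = project(corners, R, t)

# The 2D bias field between reprojected and measured corners is what feeds
# the warping maps; here the "measured" points are simulated with an offset.
measured = reprojected + 0.5
bias = measured - reprojected          # per-corner 2D distortion sample
```

In the actual method, these per-corner biases would be interpolated into dense warping maps, one per calibrated viewpoint.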
In this paper, we present a novel 3D scene reconstruction framework from a single front-mounted stereo camera on a moving vehicle. We propose image triangulations to efficiently render a 3D scene from 2D textures alone, and introduce tube meshes as an effective way to render out-of-frustum points. Furthermore, we derive a 3D extended Kalman filter to fuse stereo estimates temporally between frames and present a render pipeline that exploits OpenGL shaders to offload computational costs from the CPU to the GPU. Our approach improves stereo accuracy over competing approaches on the KITTI visual odometry dataset. We also introduce a challenging view prediction evaluation scenario on the SYNTHIA dataset, in which our approach outperforms the alternatives in terms of SSIM, 1-NCC error, and completeness.
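The temporal fusion idea can be illustrated with a simplified scalar Kalman update, rather than the full 3D extended Kalman filter of the paper: each reconstructed point keeps a depth estimate and a variance, and every new stereo measurement is blended with the prediction according to their relative confidence. The static-scene assumption and all numeric values below are illustrative.

```python
def kalman_fuse(depth, var, z, z_var, process_var=0.01):
    """One predict/update cycle of a scalar Kalman filter on point depth."""
    # Predict: static scene assumed, so only the uncertainty grows.
    var_pred = var + process_var
    # Update: Kalman gain weighs prediction against the new measurement.
    gain = var_pred / (var_pred + z_var)
    depth_new = depth + gain * (z - depth)
    var_new = (1.0 - gain) * var_pred
    return depth_new, var_new

# Fuse three successive (simulated) stereo depth measurements of one point.
d, v = 10.0, 1.0
for z in [9.8, 10.1, 10.0]:
    d, v = kalman_fuse(d, v, z, z_var=0.5)
```

After the loop, the variance has shrunk well below its initial value, which is the mechanism by which temporal fusion sharpens per-frame stereo estimates.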
In this paper, we propose a novel method for the 3D reconstruction of urban scenes using a front-facing stereo camera in a vehicle. Our point-based approach warps an active region of the reconstructed point cloud to the next frame and uses an extended information filter for the temporal fusion of clustered disparity estimates in pixel bins. We splat the information of projected pixels according to subpixel weights and discard uncertain points. This allows us to remove points that are redundant for the reconstruction while at the same time producing a significantly denser model than competing approaches, with improved disparity estimates. Our approach avoids common visual artifacts such as spurious objects in the reconstruction, resulting in higher visual fidelity than other approaches, which is important for immersive applications. We compare our proposed system to other approaches in a quantitative and qualitative evaluation on the KITTI odometry dataset.
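The information-filter fusion with subpixel splatting can be sketched in one dimension. This is an assumed simplification of the method: each pixel bin accumulates information (inverse variance) and an information-weighted disparity sum, and a measurement landing at a subpixel position is distributed over the two nearest bins with linear weights.

```python
import numpy as np

W = 4                    # number of pixel bins in this toy example
Y = np.zeros(W)          # accumulated information per bin
y = np.zeros(W)          # accumulated information-weighted disparity

def splat(u, disparity, var):
    """Distribute one disparity measurement between the two nearest bins."""
    i = int(np.floor(u))
    w_right = u - i                  # subpixel weight toward bin i+1
    for idx, w in ((i, 1.0 - w_right), (i + 1, w_right)):
        if 0 <= idx < W and w > 0:
            Y[idx] += w / var        # information, down-weighted by w
            y[idx] += w * disparity / var

# Two measurements at the same subpixel position (simulated values).
splat(1.25, disparity=32.0, var=0.5)
splat(1.25, disparity=33.0, var=0.5)

# Fused disparity per bin; bins with too little accumulated information
# would be discarded as uncertain in the full method.
estimate = np.where(Y > 1e-6, y / np.maximum(Y, 1e-6), 0.0)
```

Because the information form simply adds contributions, fusing many splatted measurements per bin stays cheap, and the accumulated information `Y` directly serves as the certainty score used to prune points.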