In this paper, we introduce a newly developed target-free calibration method for automotive augmented reality head-up displays (AR-HUDs) that can be performed fully automatically using a smartphone camera. Our method requires no calibration target to be set up in front of the vehicle. Instead, it utilizes feature points of the environment, which makes it robust against misplaced targets and allows for easy deployment, e.g., in garages. Under the pinhole model assumption, we decouple the perspective projection matrix into three parts: the intrinsic matrix, the relative pose between the vehicle’s 3D sensor and the smartphone camera, and the rotation between the camera space and the HUD field of view (HUD-FOV). Based on the epipolar constraint, we acquire the relative pose. The intrinsic and rotation matrices are likewise determined without any pre-designed calibration target. The calibration itself takes less than 5 minutes for an eye box with 9 training viewpoints. With our new approach, we achieve a competitive average reprojection error of 6.7 mm at a distance of 7.5 m, comparable to previous target-based work.
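As a rough illustration of the relative-pose step, the following minimal Python sketch recovers rotation and (scale-free) translation from matched environment feature points via the epipolar constraint, using standard OpenCV calls; the variable names (pts_cam, pts_sensor, K_cam, K_sensor) are hypothetical, and the paper's actual pipeline may differ.

```python
import cv2
import numpy as np

def relative_pose(pts_cam, pts_sensor, K_cam, K_sensor):
    # Normalize both point sets so a single essential matrix applies.
    pc = cv2.undistortPoints(pts_cam.reshape(-1, 1, 2).astype(np.float64), K_cam, None)
    ps = cv2.undistortPoints(pts_sensor.reshape(-1, 1, 2).astype(np.float64), K_sensor, None)
    # Epipolar constraint x_c^T E x_s = 0, estimated robustly with RANSAC
    # on normalized coordinates (hence the identity camera matrix).
    E, inliers = cv2.findEssentialMat(ps, pc, np.eye(3),
                                      method=cv2.RANSAC, threshold=1e-3)
    # Decompose E into rotation and unit-length translation (cheirality check).
    _, R, t, _ = cv2.recoverPose(E, ps, pc, np.eye(3), mask=inliers)
    return R, t  # t is only defined up to scale from the epipolar constraint
```

Since the essential matrix constrains translation only up to scale, a metric pose presumably also draws on the vehicle's 3D sensor; the sketch stops at the scale-free estimate.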
Automotive augmented reality head-up displays (AR-HUDs) superimpose driving-related information onto the real world in the direct sight of the driver. A key prerequisite for an immersive AR experience is a highly precise calibration. State-of-the-art methods require large targets and a lot of space in front of the vehicle, or complex special equipment, which is inconvenient in both factories and workshops. In this paper, we propose a low-complexity yet accurate calibration method using only a small sheet of patterned paper as the target, placed directly on the windscreen. The full field of view (FOV) can be calibrated, with the optical distortion corrected by extracted warping maps. Changing driver viewpoints are handled by interpolating both the projection parameters and the distortion models. The angular reprojection error falls within 0.04°, while the run time stays under 1 minute per viewpoint. The reduced target complexity and competitive reprojection errors make our method highly applicable in the automotive industry. Moreover, owing to the reduced effort and simplified equipment, our method opens a way for customers to recalibrate their AR-HUDs themselves.
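The distortion-correction idea can be sketched as follows, assuming the calibration yields per-viewpoint warping maps as float32 arrays; the names and the linear blending between viewpoints are illustrative assumptions, not the paper's exact scheme.

```python
import cv2
import numpy as np

def interpolate_maps(maps_a, maps_b, w):
    """Blend the warping maps (map_x, map_y) of two calibrated viewpoints
    for an eye position lying between them (w in [0, 1])."""
    map_x = ((1.0 - w) * maps_a[0] + w * maps_b[0]).astype(np.float32)
    map_y = ((1.0 - w) * maps_a[1] + w * maps_b[1]).astype(np.float32)
    return map_x, map_y

def predistort(hud_image, map_x, map_y):
    # Resample the HUD image so the optics' distortion cancels out on screen.
    return cv2.remap(hud_image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```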
In this paper, we present a novel 3D scene reconstruction framework based on a single front-mounted stereo camera on a moving vehicle. We propose image triangulations to efficiently render a 3D scene from 2D textures alone, and introduce tube meshes as an effective way to render out-of-frustum points. Furthermore, we derive a 3D extended Kalman filter to temporally fuse stereo estimates between frames and showcase a render pipeline that exploits OpenGL shaders to offload computational costs from the CPU to the GPU. Our approach increases stereo accuracy compared to competing approaches on the KITTI visual odometry dataset. We also introduce a challenging view prediction evaluation scenario on the SYNTHIA dataset, in which our approach performs best in terms of SSIM, 1-NCC error, and completeness.
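A minimal sketch of the temporal-fusion idea: each reconstructed 3D point keeps a 3x3 covariance and is fused with the next frame's stereo estimate. With an identity measurement model the EKF update reduces to the plain Kalman form shown here; the paper's actual filter (and its linearization) may differ.

```python
import numpy as np

def fuse_point(x_pred, P_pred, z_meas, R_meas):
    """Fuse a predicted 3D point (x_pred, P_pred) with a new stereo
    measurement (z_meas, R_meas) under an identity measurement model."""
    S = P_pred + R_meas                     # innovation covariance
    K = P_pred @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z_meas - x_pred)  # fused 3D position
    P_new = (np.eye(3) - K) @ P_pred        # reduced uncertainty
    return x_new, P_new
```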
We propose a calibration method for automotive augmented reality head-up displays (AR-HUDs) using a chessboard pattern and warping maps. The HUD is modeled as a pinhole camera whose intrinsic parameters are determined with a stereo method. We select several viewpoints within the driver’s eye box and place a smartphone at each of them in sequence; its position is sensed by a head tracker. By automatically shifting 2D points on the HUD virtual image onto 3D chessboard corners within the view of the smartphone camera, we obtain a group of 2D–3D correspondences and compute view-dependent extrinsic parameters from them. Using these parameters, we reproject the chessboard corners back onto the virtual image. Comparing the results with the measured virtual points yields 2D bias distributions, from which we reconstruct a series of warping maps that compensate the optical distortions. For any other viewpoint in the eye box, we obtain the corresponding extrinsic parameters and warping maps through interpolation. Our method outperforms existing ones in terms of modeling complexity as well as experimental workload. The reprojection errors at 7.5 m distance fall within a few millimeters, which indicates a high augmentation accuracy. In addition, we calibrate the head tracker by utilizing the acquired extrinsic parameters and the viewpoint tracking results.
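One viewpoint's calibration step could look like the following minimal sketch, assuming the HUD's pinhole intrinsics K are already known; pts_3d (Nx3 chessboard corners) and pts_2d (Nx2 virtual-image points) are hypothetical names for the collected correspondences.

```python
import cv2
import numpy as np

def calibrate_viewpoint(pts_3d, pts_2d, K):
    pts_3d = pts_3d.astype(np.float32)
    pts_2d = pts_2d.astype(np.float32)
    # View-dependent extrinsics from the 2D-3D correspondences (PnP).
    ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None)
    assert ok, "PnP failed; check the correspondences"
    # Reproject the corners and compare with the measured virtual points;
    # the residual field is what the warping maps later compensate.
    proj, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, None)
    residuals = proj.reshape(-1, 2) - pts_2d
    return rvec, tvec, residuals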
In this paper, we propose a novel method for the 3D reconstruction of urban scenes using a front-facing stereo camera in a vehicle. Our point-based approach warps an active region of the reconstructed point cloud to the next frame and uses an extended information filter for the temporal fusion of clustered disparity estimates in pixel bins. We splat the information of projected pixels according to subpixel weights and discard uncertain points. This allows us to remove redundant points from the reconstruction while producing a significantly denser model with improved disparity estimates than competing approaches. Our approach avoids common visual artifacts such as spurious objects in the reconstruction, resulting in higher visual fidelity than other approaches, which is important for immersive applications. We compare our proposed system to other approaches in a quantitative and qualitative evaluation on the KITTI odometry dataset.
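The information-form fusion can be sketched as follows: each pixel bin accumulates an information matrix Y and information vector y, and a new point estimate z with covariance R_cov is splatted into a bin scaled by its subpixel weight w. The identity measurement model and all names here are assumptions for illustration, not the paper's exact filter.

```python
import numpy as np

def splat_update(Y, y, z, R_cov, w):
    info = w * np.linalg.inv(R_cov)  # subpixel-weighted measurement information
    return Y + info, y + info @ z    # information adds up across frames

def bin_estimate(Y, y, min_info=1e-3):
    # Recover mean/covariance; bins with too little accumulated information
    # correspond to uncertain points and are discarded.
    if np.linalg.det(Y) < min_info:
        return None, None
    P = np.linalg.inv(Y)
    return P @ y, P
```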