In visual SLAM, the UAV position is estimated from stereo vision (localization), and 3D points are mapped using the estimated UAV position (mapping). Localization and mapping are performed alternately in sequence until all UAV positions are estimated and an integrated 3D map is created. Each iteration introduces estimation error, yet the next iteration uses the previous estimated position as its base position regardless of that error. Error therefore accumulates until the UAV returns to a location it has passed before. Our research aims to mitigate this problem, and we propose two new methods.
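The accumulation described above can be illustrated with a deliberately simplified one-dimensional sketch (the function name and constant bias are illustrative assumptions, not part of the proposed system): each step builds on the previous estimate, so per-step errors add up rather than average out.

```python
def integrate(steps, bias):
    """Integrate noisy relative motions; `bias` models a constant per-step
    estimation error. Returns the absolute position error after each step."""
    position = 0.0   # ground-truth position
    estimate = 0.0   # sequential estimate, reused as the base each step
    errors = []
    for true_step in steps:
        position += true_step
        estimate += true_step + bias   # previous estimate reused regardless of error
        errors.append(abs(estimate - position))
    return errors

# Ten unit steps with a 0.05 bias: the error grows linearly to ~0.5.
errors = integrate([1.0] * 10, bias=0.05)
```

Without a correction step such as loop closure or the global matching proposed below, nothing bounds this growth.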
(1) Accumulated error caused by local matching of sequential low-altitude images (i.e., close-up photos) is corrected by global matching between low- and high-altitude images. To make the global matching robust against error, we implemented a method in which the expected matching areas are narrowed down on the basis of the estimated UAV position and barometric-altimeter measurements.
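A minimal sketch of this narrowing step, assuming a simple pinhole footprint model; the function name, field-of-view parameter, ground resolution, and safety margin are all illustrative assumptions rather than the paper's actual implementation:

```python
import math

def search_window(uav_xy, alt_low, image_size, ground_res=0.5,
                  fov_deg=90.0, margin_m=10.0):
    """Pixel bounds (x0, y0, x1, y1) of the expected matching area in a
    high-altitude image, centered on the UAV's estimated ground position.

    The low-altitude footprint scales with barometric altitude `alt_low`
    (meters); `margin_m` absorbs the accumulated position error;
    `ground_res` is the high-altitude image's meters-per-pixel.
    """
    # Half-width of the low-altitude footprint on the ground, plus margin.
    half_m = alt_low * math.tan(math.radians(fov_deg) / 2) + margin_m
    half_px = round(half_m / ground_res)
    cx = int(uav_xy[0] / ground_res)
    cy = int(uav_xy[1] / ground_res)
    w, h = image_size
    return (max(0, cx - half_px), max(0, cy - half_px),
            min(w, cx + half_px), min(h, cy + half_px))
```

Restricting template matching to such a window both cuts computation and excludes distant false matches, which is what makes the global matching robust against the accumulated error.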
(2) Under the assumption that the absolute coordinates contain axis-rotation error, we propose an error-reduction method that minimizes the difference between the UAV altitudes obtained from visual SLAM and from the sensors (barometer and thermometer).
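The idea can be sketched with a single-axis model (a simplification of the general axis-rotation case): find the tilt angle that best aligns the SLAM altitudes with the sensor-derived altitudes. The function name, the grid search, and the ±0.2 rad range are illustrative assumptions.

```python
import math

def correct_tilt(traj, baro_alt):
    """traj: list of (x, z) pairs, horizontal distance and SLAM altitude.
    baro_alt: barometric altitude for each pose.
    Returns the tilt angle (rad) minimizing the squared altitude residuals."""
    best_theta, best_cost = 0.0, float("inf")
    for i in range(-200, 201):
        theta = i * 0.001          # grid search over roughly +/- 0.2 rad
        # Altitude of each pose after rotating the trajectory by theta.
        cost = sum((x * math.sin(theta) + z * math.cos(theta) - b) ** 2
                   for (x, z), b in zip(traj, baro_alt))
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta
```

Applying the recovered angle to a trajectory that was synthetically tilted by 0.05 rad returns approximately -0.05 rad, i.e., the rotation that undoes the axis error.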
The proposed methods reduce accumulated error by using high-altitude images and sensor measurements, improving the accuracy of both UAV- and object-position estimation.