In this study, a general transformation-based framework for unmanned aerial vehicle (UAV) image stitching is proposed that resists global distortion while improving local registration accuracy. In the first step, with tie points as constraints, the global transformation function of each image is obtained through joint optimization, and no reference image is needed. In the second step, to reduce data redundancy, an image selection algorithm based on information entropy is proposed: the optimal image combination covering the entire scene is selected from the original image set. In the third step, a local optimization algorithm based on mesh deformation is proposed to further improve the registration accuracy of the optimal image combination. Finally, all of the images are combined to obtain a high-resolution panorama. Experiments on challenging datasets show that the proposed algorithm not only reduces the global distortions caused by error accumulation during stitching but also reduces redundant data, which benefits postprocessing. The local mesh optimization greatly improves registration accuracy and eliminates obvious misalignment. Across a large number of challenging datasets, the proposed method substantially outperforms several state-of-the-art image-stitching methods and commercial software.
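The abstract does not detail the entropy-based selection criterion, so the following is only a minimal illustrative sketch of one plausible reading of the second step: greedily choosing images that add the most still-uncovered scene area, using information entropy as a tie-breaker. The function names (`image_entropy`, `select_images`), the grid-cell footprint representation, and the coverage threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_images(footprints, entropies, scene_cells, coverage_thresh=0.99):
    """Greedily pick an image combination that (nearly) covers the scene.

    footprints  : list[set[int]] -- scene-grid cells covered by each image footprint
    entropies   : list[float]    -- information entropy of each image
    scene_cells : set[int]       -- all grid cells spanning the scene
    Returns the indices of the selected images.
    """
    uncovered = set(scene_cells)
    remaining = set(range(len(footprints)))
    selected = []
    allowed_uncovered = (1.0 - coverage_thresh) * len(scene_cells)
    while remaining and len(uncovered) > allowed_uncovered:
        # Prefer the image that newly covers the most cells; break ties by entropy.
        best = max(remaining,
                   key=lambda i: (len(footprints[i] & uncovered), entropies[i]))
        if not footprints[best] & uncovered:
            break  # no remaining image adds new coverage
        selected.append(best)
        uncovered -= footprints[best]
        remaining.discard(best)
    return selected
```

In this hypothetical formulation, image footprints would come from the globally optimized transformations of the first step, so redundancy reduction and coverage are evaluated in a common scene coordinate frame.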