Paper
Deep learning for UAV autonomous landing based on self-built image dataset
15 March 2019
Yinbo Xu, Yongwei Zhang, Huan Liu, Xiangke Wang
Proceedings Volume 11041, Eleventh International Conference on Machine Vision (ICMV 2018); 110412I (2019) https://doi.org/10.1117/12.2522751
Event: Eleventh International Conference on Machine Vision (ICMV 2018), 2018, Munich, Germany
Abstract
An end-to-end deep learning (DL) control model is proposed to solve the autonomous landing problem of a quadrotor by way of supervised learning. Traditional methods mainly rely either on GPS signals, which are not always reliable, to obtain the quadrotor's relative position, or on position-based visual servoing (PBVS). In this paper, we construct a deep neural network based on a convolutional neural network (CNN) whose input is the raw image. A monocular camera is used as the only sensor to capture downward-looking images that contain the landing area. The deep neural network is trained on our self-built image dataset. After the training phase, the trained control model is tested and performs well. Light changes and background interference have little influence on the model's performance, which shows the robustness and adaptability of our deep learning model.
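The abstract does not specify the network's exact layer configuration or output format, so the following is a minimal sketch of one plausible end-to-end setup: a small CNN that maps a raw RGB frame from the downward-looking camera to continuous landing commands, trained with supervised regression. The 128x128 input resolution, the layer sizes, and the 4-dimensional command output (e.g. vx, vy, vz, yaw rate) are illustrative assumptions, not the authors' published architecture.

```python
# Sketch of an end-to-end CNN control model for autonomous landing.
# Assumptions: 128x128 RGB input, 4 continuous output commands,
# labels taken from recorded reference commands in the dataset.
import torch
import torch.nn as nn

class LandingControlNet(nn.Module):
    def __init__(self, num_commands: int = 4):
        super().__init__()
        # Convolutional feature extractor over the raw camera image
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Fully connected regressor from image features to control commands
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_commands),  # e.g. vx, vy, vz, yaw rate
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (batch, 3, 128, 128), values scaled to [0, 1]
        return self.regressor(self.features(rgb))

# One supervised training step against labelled control commands
model = LandingControlNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(8, 3, 128, 128)   # dummy batch of camera frames
commands = torch.rand(8, 4)           # dummy target commands from the dataset
optimizer.zero_grad()
loss = loss_fn(model(images), commands)
loss.backward()
optimizer.step()
```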
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yinbo Xu, Yongwei Zhang, Huan Liu, and Xiangke Wang "Deep learning for UAV autonomous landing based on self-built image dataset", Proc. SPIE 11041, Eleventh International Conference on Machine Vision (ICMV 2018), 110412I (15 March 2019); https://doi.org/10.1117/12.2522751
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Unmanned aerial vehicles, Neural networks, RGB color model, Convolution, Cameras, Image sensors, Light sources and illumination