From Event: SPIE Defense + Commercial Sensing, 2019
Vehicle detection in aerial imagery has become a highly challenging task due to the low resolution of aerial images. Super-resolution, a technique that recovers a high-resolution image from a single low-resolution image, can be an effective approach to overcome this shortcoming. Hence, our prime focus is to design a framework for detecting vehicles in super-resolved aerial images. Our proposed system can be represented as a combination of two deep sub-networks. The first sub-network uses a Generative Adversarial Network (GAN) to obtain super-resolved images. A GAN consists of two networks, a generator and a discriminator, and it ensures the recovery of photo-realistic images from down-sampled images. The second sub-network consists of a deep neural network (DNN)-based object detector for detecting vehicles in the super-resolved images. In our architecture, the Single Shot MultiBox Detector (SSD) is used for vehicle detection. The SSD generates a fixed-size set of bounding boxes with prediction scores for different object class instances in those boxes, and it employs a non-maximum suppression step to produce the final detections. In our algorithm, the deep SSD detector is trained on the predicted super-resolved images, and its performance is then compared with that of an SSD detector trained only on the low-resolution images. Finally, we compare the performance of our proposed pre-trained SSD detector on super-resolved images with an SSD that is trained only on the original high-resolution images.
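The non-maximum suppression step mentioned in the abstract can be illustrated with a minimal sketch; this is a generic greedy NMS in NumPy, not the authors' exact implementation, and the function name, box format ([x1, y1, x2, y2]), and IoU threshold are assumptions for illustration:

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring boxes, suppressing overlaps.

    boxes: (N, 4) array of [x1, y1, x2, y2] corners (assumed format);
    scores: (N,) class confidences. Returns indices of kept boxes,
    ordered by descending score.
    """
    order = np.argsort(scores)[::-1]  # candidates sorted by confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the kept box with all remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Discard candidates that overlap the kept box too strongly.
        order = order[1:][iou <= iou_threshold]
    return keep
```

For example, given two heavily overlapping boxes and one disjoint box, only the higher-scoring member of the overlapping pair survives, which is how SSD collapses its dense set of per-class box predictions into final detections.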
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Syeda Nyma Ferdous, Moktari Mostofa, and Nasser M. Nasrabadi, "Super resolution-assisted deep aerial vehicle detection," Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 1100617 (Presented at SPIE Defense + Commercial Sensing: April 17, 2019; Published: 10 May 2019); https://doi.org/10.1117/12.2519045.