Detection of oil palm trees provides necessary information for monitoring oil palm plantations and predicting palm oil yield. A supervised model, such as a deep neural network trained on remotely sensed images of a source domain, can achieve high accuracy within that same region. However, its performance degrades substantially when the model is applied to a different target region with unannotated images, due to differences in sensors, weather conditions, acquisition time, etc. In this paper, we propose a domain-adaptation-based approach for oil palm detection across two different high-resolution satellite images. Using manually labeled samples collected from the source domain and unlabeled samples collected from the target domain, we design a domain-adversarial neural network composed of a feature extractor, a class predictor, and a domain classifier, which learns domain-invariant representations and the classification task simultaneously during training. Detection tasks are conducted in six typical regions of the target domain. Our proposed approach improves accuracy by 25.39% in terms of F1-score in the target domain, and performs 9.04%-15.30% better than existing domain adaptation methods.
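Domain-adversarial training of the kind described above is commonly implemented with a gradient reversal layer between the feature extractor and the domain classifier. The following is a minimal NumPy sketch of that idea only; the class name and the lambda value are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class GradientReversal:
    """Sketch of a gradient reversal layer (GRL) used in
    domain-adversarial networks."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between class loss and domain loss

    def forward(self, x):
        # Identity in the forward pass: features reach the
        # domain classifier unchanged.
        return x

    def backward(self, grad_output):
        # Reversed, scaled gradient in the backward pass: the feature
        # extractor is pushed to *confuse* the domain classifier,
        # which encourages domain-invariant representations.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
features = np.array([1.0, -2.0, 3.0])
print(grl.forward(features).tolist())    # -> [1.0, -2.0, 3.0]
print(grl.backward(np.ones(3)).tolist())  # -> [-0.5, -0.5, -0.5]
```

In a full network, this reversed gradient flows from the domain classifier back into the feature extractor, while the class predictor's gradient is left untouched.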
The effective detection of urban development is the basis for understanding urban sustainability. Although various studies have concentrated on long-time-series analyses of urban development, the resolution of the images used was too low to resolve individual objects. In this paper, we provide a long-time-series analysis of built-up areas in Beijing, China, at an annual frequency from 2000 to 2015, based on automatic building extraction from high-resolution satellite images. We propose a deep-learning-based method to extract buildings, and employ an ensemble learning method to improve the localization of building boundaries. The time-series results of built-up areas are analyzed under two schemes, i.e., change detection over the fifteen-year period and evaluation of the whole region in three selected years. Our proposed method achieves an average overall accuracy (OA) of 93%. The results reveal that Beijing developed more rapidly during 2001-2008 than in other periods in terms of both the density and the number of buildings.
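One common form of the ensemble step for sharpening extracted boundaries is to average the per-pixel probability maps of several independently trained models before thresholding. The sketch below illustrates that generic idea only; the toy arrays, the number of models, and the 0.5 threshold are assumptions, not the paper's actual models or settings:

```python
import numpy as np

# Per-pixel building probabilities from two hypothetical models.
model_a = np.array([[0.9, 0.4],
                    [0.2, 0.7]])
model_b = np.array([[0.8, 0.6],
                    [0.1, 0.9]])

# Average the probability maps, then threshold to a binary building mask.
# Averaging smooths out individual models' boundary errors.
mean_prob = np.mean([model_a, model_b], axis=0)
mask = (mean_prob > 0.5).astype(np.uint8)
print(mask.tolist())  # -> [[1, 0], [0, 1]]
```

Comparing such masks across years then reduces change detection to per-pixel differencing of binary maps.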
Oil palm tree detection is of great significance for improving irrigation, estimating palm oil yield, and predicting expansion trends. Existing tree detection methods include traditional image processing, machine learning methods, and sliding-window-based deep learning methods. In this paper, we propose a deep-learning-based end-to-end method for large-scale oil palm detection. First, we built an oil palm sample dataset from 0.1-m-resolution Unmanned Aerial Vehicle (UAV) images. Second, we implemented five state-of-the-art object detection algorithms (i.e., Faster R-CNN, VGG-SSD, YOLO-v3, RetinaNet, and Mobilenet-SSD) and evaluated their performance in detecting the crown size and location of oil palms. Moreover, we designed an overlapping partition method to improve detection results on UAV images of over 40,000 × 40,000 pixels. Experimental results demonstrate that, in terms of detection accuracy, VGG-SSD achieves the best accuracy of 90.91% on the validation dataset, followed by YOLO-v3, RetinaNet, Mobilenet-SSD, and Faster R-CNN. We also compared the detection time of the five object detection algorithms: Mobilenet-SSD achieves the highest detection speed (12.81 ms per 500 × 500-pixel image), with speedup ratios of 17.5×, 10.2×, 4.51×, and 17.33× over Faster R-CNN, VGG-SSD, YOLO-v3, and RetinaNet, respectively. These results show that our proposed oil palm detection method is of great practical value for precision agriculture in the oil palm industry.
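Overlapping partition of a very large image can be sketched as cutting it into fixed-size tiles whose stride leaves an overlap, so that a tree clipped by one tile boundary appears whole in a neighboring tile. The helper below illustrates the tiling idea along one axis; the function name, tile size, and overlap are illustrative assumptions, not the paper's parameters:

```python
def tile_origins(size, tile=500, overlap=100):
    """Starting offsets of overlapping tiles along one image axis.

    Assumes size >= tile. Adjacent tiles overlap by `overlap` pixels.
    """
    stride = tile - overlap
    origins = list(range(0, max(size - tile, 0) + 1, stride))
    # Append a final tile flush with the image edge if coverage is incomplete.
    if origins[-1] + tile < size:
        origins.append(size - tile)
    return origins

# Tiling a 1200-pixel axis: tiles start at 0, 400, and 700, so every pixel
# is covered and adjacent tiles share a 100-pixel overlap.
print(tile_origins(1200))  # -> [0, 400, 700]
print(tile_origins(500))   # -> [0]
```

Detections from the overlapping tiles are then mapped back to global coordinates and duplicates in the overlap zones merged, e.g., by non-maximum suppression.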
Detecting oil palm plantations from high-resolution satellite images can provide the necessary information for palm oil production estimation and plantation layout planning. In this paper, we propose a novel semantic-segmentation-based approach for large-scale oil palm plantation detection using QuickBird images and Google Earth images (0.6-m spatial resolution) in Malaysia. We manually labeled a dataset for pixel-wise semantic segmentation into four categories: oil palm plantation, other vegetation, impervious surface/cloud, and others (e.g., water and uncertain pixels). We present an end-to-end deep convolutional neural network (DCNN) for semantic segmentation, followed by a fully connected conditional random field (CRF), and apply an ensemble learning method to improve the localization of boundaries. The overall accuracy and mean IoU of our proposed approach in the test regions are 95.27% and 88.46%, respectively, substantially better than the results of three other common semantic segmentation methods and a patch-based CNN method.
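The mean IoU reported above is the standard segmentation metric: per-class intersection-over-union, averaged across classes. A minimal sketch of how it is computed from a confusion matrix follows; the toy matrix values are made up for illustration:

```python
import numpy as np

def mean_iou(conf):
    """Mean intersection-over-union from a confusion matrix.

    conf[i, j] counts pixels of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)  # true positives per class
    # Union = predicted pixels + ground-truth pixels - intersection.
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    return float(np.mean(tp / union))

# Toy 2-class confusion matrix: per-class IoUs are 3/5 and 5/7,
# so the mean IoU is 23/35.
print(round(mean_iou([[3, 1], [1, 5]]), 4))  # -> 0.6571
```

Overall accuracy, by contrast, is simply the trace of the confusion matrix divided by its total, which is why the two numbers can differ noticeably on imbalanced classes.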