Detection of oil palm trees provides necessary information for monitoring oil palm plantations and predicting palm oil yield. A supervised model, such as a deep neural network trained on remotely sensed images from a source domain, can achieve high accuracy in that same region. However, its performance degrades substantially when the model is applied to a different, unannotated target region, owing to changes in sensors, weather conditions, acquisition time, etc. In this paper, we propose a domain adaptation based approach for oil palm detection across two different high-resolution satellite images. Using manually labeled samples from the source domain and unlabeled samples from the target domain, we design a domain-adversarial neural network composed of a feature extractor, a class predictor, and a domain classifier, which learns domain-invariant representations and the classification task simultaneously during training. Detection tasks are conducted in six typical regions of the target domain. Our proposed approach improves accuracy by 25.39% in terms of F1-score in the target domain and performs 9.04%-15.30% better than existing domain adaptation methods.
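The adversarial setup described above is commonly realized with a gradient reversal layer between the feature extractor and the domain classifier. A minimal NumPy sketch of that layer, assuming a hand-written backward pass (illustrative, not the authors' implementation):

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer used in domain-adversarial training:
    identity in the forward pass, but gradients flowing back from the
    domain classifier are multiplied by -lambda, pushing the feature
    extractor toward domain-invariant representations."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between class and domain objectives

    def forward(self, features):
        # Pass extracted features to the domain classifier unchanged.
        return features

    def backward(self, grad_from_domain_classifier):
        # Reverse (and scale) the gradient before it reaches the extractor.
        return -self.lam * grad_from_domain_classifier
```

During training, the class predictor's loss is backpropagated normally, while the domain classifier's gradient passes through this layer, so minimizing the total loss simultaneously confuses the domain classifier.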
Detecting oil palm plantations from high-resolution satellite images can provide the information necessary for palm oil production estimation, plantation layout planning, etc. In this paper, we propose a novel semantic segmentation based approach for large-scale oil palm plantation detection using QuickBird and Google Earth images (0.6-m spatial resolution) in Malaysia. We manually labeled a dataset for pixel-wise semantic segmentation into four categories: oil palm plantation, other vegetation, impervious surface/cloud, and others (e.g., water and uncertain pixels). We present an end-to-end deep convolutional neural network (DCNN) for semantic segmentation, followed by a fully connected conditional random field (CRF), and apply an ensemble learning method to improve the localization of boundaries. The overall accuracy and mean IoU of our proposed approach in the test regions are 95.27% and 88.46%, respectively, substantially better than the results of three other common semantic segmentation methods and a patch-based CNN method.
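One simple way to realize the ensemble step mentioned above is soft voting: average the per-pixel softmax maps of several segmentation models before taking the argmax. A minimal NumPy sketch (shapes and the averaging rule are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def ensemble_vote(prob_maps):
    """Fuse per-pixel class probabilities from several models.

    prob_maps: array of shape (n_models, H, W, n_classes) holding each
    model's softmax output. Averaging before argmax is a simple
    soft-voting ensemble; returns an (H, W) label map.
    """
    return np.mean(prob_maps, axis=0).argmax(axis=-1)
```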
Oil palm tree detection is of great significance for improving irrigation, estimating palm oil yield, predicting expansion trends, etc. Existing tree detection methods include traditional image processing, machine learning methods, and sliding-window based deep learning methods. In this paper, we propose a deep learning based end-to-end method for large-scale oil palm detection. First, we built an oil palm sample dataset from 0.1-m-resolution Unmanned Aerial Vehicle (UAV) images. Second, we implemented five state-of-the-art object detection algorithms (Faster R-CNN, VGG-SSD, YOLO-v3, RetinaNet, and MobileNet-SSD) and evaluated their performance in detecting the crown size and location of oil palms. Moreover, we designed an overlapping partition method to improve oil palm detection on UAV images of over 40,000 × 40,000 pixels. Experimental results demonstrate that, in terms of detection accuracy, VGG-SSD achieves the best accuracy of 90.91% on the validation dataset, followed by YOLO-v3, RetinaNet, MobileNet-SSD, and Faster R-CNN. We also compared the detection time of the five algorithms: MobileNet-SSD achieves the highest detection speed (12.81 ms per 500 × 500-pixel image), with speedup ratios of 17.5×, 10.2×, 4.51×, and 17.33× compared with Faster R-CNN, VGG-SSD, YOLO-v3, and RetinaNet, respectively. The results show that our proposed oil palm detection method is of great practical value to precision agriculture in the oil palm industry.
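The overlapping partition idea, covering a very large UAV image with tiles that overlap so that palms cut by one tile border appear whole in a neighboring tile, can be sketched as follows (tile size and overlap below are illustrative parameters, not the paper's settings):

```python
def partition(width, height, tile, overlap):
    """Return top-left corners of overlapping tiles covering the image.

    Tiles advance by (tile - overlap) pixels along each axis; a final
    tile is pinned to the image border so no pixel is left uncovered.
    Per-tile detections would then be merged, e.g. by non-maximum
    suppression across the overlap zones.
    """
    def axis(n):
        step = tile - overlap
        pos = list(range(0, max(n - tile, 0) + 1, step))
        if pos[-1] + tile < n:  # pin a last tile to the border
            pos.append(n - tile)
        return pos

    return [(x, y) for y in axis(height) for x in axis(width)]
```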
A pressure sensor system based on a fiber-optic extrinsic Fabry-Perot interferometer (EFPI) and a fiber Bragg grating (FBG) is designed. Using polyimide-coated optical fiber, the sensor operates in harsh environments of high temperature and high pressure, such as oilfield applications, with the advantages of high sensitivity, quick response, and good reliability. Experiments show that the sensor system has a pressure accuracy of 0.05 MPa over a range of 0.1 MPa to 60 MPa at room temperature; in a mixed environment of high temperature and high pressure, the pressure accuracy reaches 0.07 MPa under the experimental conditions.
Remote sensing image fusion provides a mechanism for integrating multiple remotely sensed images into a new image, using a certain algorithm to improve the spatial quality of the source image with minimal spectral distortion. Many algorithms, known as pan-sharpening algorithms, have been developed to improve the spatial resolution of multispectral (MS) images using a panchromatic (Pan) image. In standard fusion methods, high spectral quality implies low spatial quality and vice versa. The utility of a pan-sharpening model based on a variational model (VM) consisting of several energy terms is tested on very high spatial resolution images. In this model, a geometric structure matching term injects the geometric structure of the Pan image, and a spectral matching term preserves the spectral information. To balance the tradeoff between injecting spatial information and preserving spectral information, static and dynamic weight paradigms are introduced in this paper to control the terms' relative contributions (static-weights VM and dynamic-weights VM). Evaluation on the QuickBird and WorldView-2 datasets shows that the VM-based fusion models outperform principal component analysis, the Brovey transform fusion model, and the wavelet fusion model, and that the dynamic-weights VM performs better than the static-weights VM. VM-based fusion models could be good options for very high spatial resolution remote sensing image fusion.
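The weighted interplay of the two energy terms can be illustrated on a 1-D signal: a geometric term matching the fused band's gradients to the Pan gradients, and a spectral term keeping it close to the upsampled MS band. A minimal gradient-descent sketch under these simplifying assumptions (the actual VM and its weighting schemes are more elaborate):

```python
import numpy as np

def fuse_step(F, pan, ms_up, w_geo, w_spec, lr=0.05):
    """One gradient-descent step on
        E(F) = w_geo * ||diff(F) - diff(Pan)||^2 + w_spec * ||F - MS_up||^2.
    The geometric term injects Pan structure; the spectral term
    preserves the upsampled MS radiometry; w_geo/w_spec set the tradeoff."""
    r = np.diff(F) - np.diff(pan)          # gradient-structure residual
    g_geo = np.zeros_like(F)               # adjoint of diff applied to 2*r
    g_geo[:-1] -= 2 * r
    g_geo[1:] += 2 * r
    g_spec = 2 * (F - ms_up)               # gradient of the spectral term
    return F - lr * (w_geo * g_geo + w_spec * g_spec)
```

A dynamic-weights variant would recompute `w_geo` and `w_spec` per pixel or per iteration instead of keeping them fixed.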