In recent years, remote sensing image observation technology has developed rapidly, and extracting coastlines from remote sensing images has become an indispensable means of measuring port areas. Port segmentation is an important step in coastline extraction and measurement; the images used here are panchromatic remote sensing images. Because remote sensing scenes are complex and feature information differs greatly across scales, traditional segmentation methods cannot extract ports effectively, and it is difficult to segment the coastline accurately. We propose a multiscale feature fusion network for automatic port segmentation from remote sensing images. First, to reduce the redundant parameters and heavy computation of traditional convolutional neural networks, we adopt MobileNetv2 as the backbone for feature extraction, yielding a lightweight model. Then, to address the feature differences of remote sensing images at different scales, we use atrous convolution throughout the network and combine it with a multiscale feature fusion method to improve the feature extraction ability. To reduce false segmentation caused by ships calling at the port being mistaken for port area, we propose a method of eliminating ship regions to reduce this interference. Finally, a comprehensive evaluation on a large set of Google port remote sensing data shows that, compared with existing methods, the proposed method is lightweight and achieves high precision.
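The core ideas mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the single-channel setup, and the fusion-by-stacking scheme are illustrative assumptions. It only shows how atrous (dilated) convolution enlarges the receptive field without adding parameters, and how responses at several dilation rates can be combined into a multiscale feature stack.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """2D atrous (dilated) convolution with valid padding.

    Sampling the input on a grid with (dilation - 1) gaps between
    kernel taps enlarges the receptive field while keeping the same
    number of kernel parameters.
    """
    k = kernel.shape[0]
    eff = k + (k - 1) * (dilation - 1)  # effective kernel size
    h, w = image.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input patch on the dilated grid
            patch = image[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def multiscale_fuse(image, kernel, rates=(1, 2, 4)):
    """Stack same-size responses at several dilation rates.

    Zero-padding each branch so every rate yields an output the size
    of the input, then stacking channel-wise, is one simple way to
    fuse features from different receptive-field scales.
    """
    feats = []
    for d in rates:
        k = kernel.shape[0]
        eff = k + (k - 1) * (d - 1)
        padded = np.pad(image, eff // 2)
        feats.append(dilated_conv2d(padded, kernel, d))
    return np.stack(feats)

# Toy usage: a 5x5 image and a 3x3 box kernel.
img = np.arange(25, dtype=float).reshape(5, 5)
ker = np.ones((3, 3))
print(dilated_conv2d(img, ker).shape)      # (3, 3)
print(multiscale_fuse(img, ker).shape)     # (3, 5, 5)
```

At dilation 2 the 3x3 kernel covers a 5x5 region of the input, which is how the network can capture features at multiple scales with the same parameter count.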
Keywords: Image segmentation, Convolution, Remote sensing, Image fusion, Feature extraction, Data modeling, Image processing