Remote sensing images often contain a significant amount of cloud, which incurs substantial resource costs during transmission and storage; cloud detection can reduce these costs. Although current cloud detection methods perform well at extracting large, thick clouds, issues remain, such as missed detection of small and thin clouds and false detection in non-cloud areas. We therefore propose a deep learning framework called DB-Net. It consists of three main modules: a feature extraction module (FEM), a cascaded feature enhancement module (CFEM), and a feature fusion module (FFM). In the FEM, we leverage the advantages of both convolutional neural networks and Transformers by using two branches to reduce the loss of semantic information. To strengthen the acquisition of multiscale semantic information in the CFEM, regular convolutions are replaced with deformable convolutions to adaptively capture cloud features of various sizes, and a cascaded structure is designed to enhance the interaction of information among different scales. Furthermore, to focus on small and thin cloud information and suppress non-cloud background information, we design the FFM with attention mechanisms to enhance the target information in the features extracted by the FEM and CFEM. Extensive experiments were conducted on the GF1-WHU dataset, with comparisons against mainstream cloud detection networks. The experimental results indicate that the proposed DB-Net reduces cloud information omission, effectively focuses on thin and small clouds, and improves overall cloud detection performance.
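The attention-based fusion described for the FFM can be illustrated with a squeeze-and-excitation-style channel attention over the concatenated branch features. This is a minimal sketch under assumed shapes, with random untrained weights standing in for learned parameters; it is not the paper's actual FFM implementation:

```python
import numpy as np

def channel_attention(feat, reduction=4):
    """SE-style channel attention (illustrative stand-in for the FFM's
    attention). feat: (C, H, W) feature map; weights are random, untrained."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)            # global average pool -> (C,)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # bottleneck layer 1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1   # bottleneck layer 2
    hidden = np.maximum(w1 @ squeeze, 0)                  # ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # sigmoid -> (C,) in (0, 1)
    return feat * weights[:, None, None]                  # reweight channels

def fuse(cnn_feat, transformer_feat):
    """Concatenate the two FEM branches, then reweight with attention."""
    fused = np.concatenate([cnn_feat, transformer_feat], axis=0)
    return channel_attention(fused)

cnn_feat = np.ones((8, 16, 16))        # hypothetical CNN-branch features
tr_feat = np.ones((8, 16, 16))         # hypothetical Transformer-branch features
out = fuse(cnn_feat, tr_feat)
print(out.shape)  # (16, 16, 16)
```

In a trained network, channels carrying cloud evidence would receive weights near 1 while background channels are suppressed, which is the mechanism the abstract invokes for emphasizing thin and small clouds.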
Most remote sensing sea ice classification methods use single-source data, such as synthetic aperture radar (SAR) data or optical remote sensing data. SAR data contain rich sea ice texture information, but the information they provide is comparatively limited, making it difficult to distinguish detailed sea ice categories. Optical data include abundant spatial-spectral information but are often affected by clouds, fog, and severe weather. Hence, the limitations of single-source data prevent further improvement in sea ice classification accuracy. A sea ice classification method based on deep learning and multisource remote sensing data fusion is proposed, utilizing an improved densely connected convolutional neural network (DenseNet) to mine and fuse the multilevel features of sea ice. According to the characteristics of SAR and optical data, a dual-branch network structure based on the improved DenseNet is employed for feature extraction, and a squeeze-and-excitation attention mechanism is introduced to weight the fused features, further enhancing the feature weights that effectively distinguish different types of sea ice. A fully connected network then performs the deep fusion of features and classifies the sea ice. To verify the effectiveness of the proposed method, two sets of sea ice data are utilized for classification. The experimental results show that the proposed method fully exploits and fuses the multilevel characteristics of the heterogeneous data through the improved dual-branch network structure, leverages the complementary characteristics of SAR and optical data, significantly increases sea ice classification accuracy, and effectively mitigates the influence of cloud cover on classification accuracy.
Compared with typical single-source classification methods and other heterogeneous data fusion methods, the proposed method achieves superior overall classification accuracy (98.49% and 98.58% on the two datasets).
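The dense connectivity underlying the DenseNet branches can be sketched as follows: each layer receives the concatenation of all preceding feature maps, so features from every depth remain available for fusion. This is a minimal illustration with 1x1 convolutions and random, untrained weights, not the paper's actual network:

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, seed=0):
    """Dense connectivity sketch (DenseNet idea): every layer sees the
    concatenation of all previous feature maps. Random 1x1 convolutions
    stand in for the real trained layers."""
    rng = np.random.default_rng(seed)
    feats = x                                     # (C, H, W)
    for _ in range(num_layers):
        c_in = feats.shape[0]
        w = rng.standard_normal((growth, c_in)) * 0.1         # 1x1 conv weights
        new = np.maximum(np.einsum("oc,chw->ohw", w, feats), 0)  # conv + ReLU
        feats = np.concatenate([feats, new], axis=0)          # dense concatenation
    return feats

x = np.ones((8, 16, 16))     # hypothetical single-branch input features
out = dense_block(x)
print(out.shape)  # (20, 16, 16): 8 input channels + 3 layers * growth 4
```

In the dual-branch setting described above, one such stack would process SAR patches and another the optical patches, with the concatenated outputs passed to the squeeze-and-excitation weighting and fully connected classifier.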
Sea ice causes some of the most prominent marine disasters in polar and high-latitude regions, and remote sensing technology provides an important means to detect such hazards. The accuracy of sea ice detection depends on the number and quality of labeled samples, but because of the environmental conditions in sea ice regions, acquiring labeled samples can be time-consuming and labor-intensive. To solve this problem, we propose a classification framework for sea ice detection that combines active learning (AL) and semisupervised learning (SSL). First, we acquire the most informative and representative samples by AL; the labeled samples acquired by AL then serve as the initial labeled set for SSL. In this framework, we not only choose the most valuable samples but also exploit the large number of unlabeled samples to enhance classification accuracy. In the AL phase, we use two different sampling strategies: uncertainty and diversity. In the SSL phase, we utilize a sampling function integrating AL to acquire semilabeled samples, and we use a transductive support vector machine as the classification model. We analyze three remote sensing images (hyperspectral and multispectral) and conduct detailed comparative analyses between the proposed method and others. Our proposed method achieves the highest classification accuracies in all three experiments (89.9734%, 97.4919%, and 89.7166%). These results show that the proposed method exhibits better overall performance than the other methods and can be effectively applied to sea ice detection with remote sensing.
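The two AL sampling strategies named above, uncertainty and diversity, can be sketched generically: rank unlabeled samples by the margin between their two most probable classes, then pick a diverse subset by greedy farthest-point selection in feature space. The abstract does not give the paper's exact sampling functions, so this is an assumed, standard formulation:

```python
import numpy as np

def select_informative(probs, feats, k):
    """Uncertainty (small margin between top-two class probabilities)
    combined with diversity (greedy farthest-point in feature space).
    A generic AL sketch, not the paper's exact criterion."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]          # small margin = uncertain
    pool = np.argsort(margin)[: 3 * k]        # most uncertain candidates
    chosen = [int(pool[0])]
    while len(chosen) < k:                    # diversity: farthest point next
        d = np.min(np.linalg.norm(
            feats[pool][:, None] - feats[chosen][None], axis=2), axis=1)
        chosen.append(int(pool[int(np.argmax(d))]))
    return chosen

# toy example: 6 samples, 2 classes, 1-D features
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.5, 0.5],
                  [0.95, 0.05], [0.6, 0.4], [0.8, 0.2]])
feats = np.arange(6, dtype=float)[:, None]
print(select_informative(probs, feats, k=2))  # [2, 5]
```

Sample 2 has the smallest margin (0.5 vs 0.5), and sample 5 is the uncertain candidate farthest from it in feature space; the selected samples would then be labeled and seed the SSL phase.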
The European Galileo global navigation satellite system's four in-orbit validation (IOV) satellites (E11, E12, E19, and E20) are able to calculate position accurately. Analysis of the IOV satellites' measurements can provide insight into the performance of the Galileo system. To evaluate the performance of the IOV satellites, signal-to-noise ratio (SNR) and multipath measurements collected in the Shanghai, China, area are used. We also propose a method to calculate the multipath error on all four frequencies. Compared with global positioning system (GPS) satellites, the IOV satellites' signal strength is higher and their multipath error is smaller. The accuracy of single-point positioning with a combined GPS/Galileo system is analyzed in the Shanghai area under open sky, under trees, and between tall buildings. The positioning results show that the accuracy of the combined GPS/Galileo system is better than that of GPS alone.
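Code multipath error is conventionally estimated with the dual-frequency code-minus-carrier combination, which cancels geometry and the first-order ionospheric delay up to phase ambiguities. The sketch below shows that standard combination; the paper's four-frequency method may differ, and the Galileo E1/E5a frequencies are used only as example values:

```python
def code_multipath(P, L1_m, L2_m, f1, f2):
    """Standard dual-frequency code multipath combination (code minus carrier).
    P: pseudorange on frequency f1 (m); L1_m, L2_m: carrier phases in meters.
    Geometry- and ionosphere-free up to constant phase ambiguities."""
    a = (f1 / f2) ** 2
    return P - (a + 1) / (a - 1) * L1_m + 2 / (a - 1) * L2_m

# Galileo E1 and E5a carrier frequencies (Hz)
f_e1, f_e5a = 1575.42e6, 1176.45e6

# Sanity check: with identical ranges and no ionosphere, multipath, or
# ambiguity, the combination vanishes.
mp = code_multipath(2.3e7, 2.3e7, 2.3e7, f_e1, f_e5a)
print(abs(mp) < 1e-6)  # True
```

In practice the residual ambiguity constant is removed per continuous arc by subtracting the arc mean, leaving multipath plus code noise, which is the quantity compared between IOV and GPS satellites above.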
Rapid damage assessment after an earthquake is vital for an efficient emergency response. With their rapid development, unmanned aerial vehicles (UAVs) can now be used to rapidly assess building damage, with the advantages of real-time operation, flexibility, and low cost. However, UAV images are "big data": UAVs can obtain hundreds of scene images in a short period of time, so it is important to speed up UAV image processing. This paper proposes a parallel processing approach for accelerating automatic three-dimensional (3-D) building damage detection, using a preseismic digital topographical map and postseismic UAV images. In experiments on 3-D building damage detection for the 2013 Ya'an earthquake in Baoxing County, Sichuan province of China, digital surface model generation from the postseismic UAV images with the combined multicore central processing unit (CPU) and graphics processing unit (GPU) implementation is about 11.0 times faster than the single-core CPU implementation.
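The parallelization above rests on the fact that digital surface model (DSM) generation decomposes into independent tiles that can be processed concurrently. The sketch below shows only that tile-level decomposition with a thread pool and a placeholder per-tile step; the paper's implementation uses multicore CPUs plus a GPU, and `interpolate_tile` is a hypothetical stand-in for the real dense-matching and rasterization step:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def interpolate_tile(tile_points):
    """Hypothetical per-tile DSM step (placeholder for dense matching and
    height-grid rasterization): here, just a mean height per tile."""
    return tile_points.mean()

def build_dsm_parallel(tiles, workers=4):
    """Process independent DSM tiles concurrently. Real CPU-bound work would
    use processes or a GPU; a thread pool suffices to show the scheme."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(interpolate_tile, tiles))   # order-preserving

rng = np.random.default_rng(1)
tiles = [rng.random((64, 64)) for _ in range(8)]       # 8 synthetic tiles
heights = build_dsm_parallel(tiles)
print(len(heights))  # 8
```

Because the tiles share no state, the speedup is bounded mainly by core count and the small serial portions (tiling and mosaicking), consistent with the roughly 11x gain reported for the multicore CPU plus GPU implementation.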