Rapid damage assessment after disasters is crucial for humanitarian relief and emergency response. The abrupt and unpredictable nature of disasters causes variations in the time, location, and sensors used for image collection, resulting in significant data disparities in satellite imagery, which poses a major challenge for assessment tasks. To enable a rapid response, training models from scratch on a sufficient amount of on-site data is impractical due to time constraints. Thus, in practical applications, a model with robust adaptability and generalization is essential for autonomously adjusting to data variations. However, the majority of current models are trained and tested on a single dataset, neglecting these issues. To address these challenges, this study created datasets from the Turkey earthquake, characterized by large and densely distributed buildings, and introduced datasets from the Louisiana hurricane, characterized by small and sparsely distributed buildings. These datasets exhibit significant style differences and cover a broader range of building characteristics, providing a comprehensive evaluation of model adaptability. In the building damage assessment task, the model is trained on public datasets and validated on the newly introduced scenario data. Compared with existing assessment models, the U-Net model demonstrates the best adaptability in objectively evaluating damage levels.
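To make the assessment pipeline concrete, below is a minimal sketch of a patch-wise U-Net damage classifier in PyTorch. The channel widths, the 3-band input, and the 4-class damage scale are illustrative assumptions; the study only specifies that a U-Net is trained on public damage datasets and validated on the new Turkey and Louisiana scenes.

```python
# Minimal U-Net sketch for per-pixel building damage assessment (PyTorch).
# Channel widths, 3-band input, and the 4 damage classes are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BatchNorm and ReLU, the standard U-Net block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=4, width=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, width)
        self.enc2 = conv_block(width, width * 2)
        self.enc3 = conv_block(width * 2, width * 4)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(width * 4, width * 8)
        self.up3 = nn.ConvTranspose2d(width * 8, width * 4, 2, stride=2)
        self.dec3 = conv_block(width * 8, width * 4)
        self.up2 = nn.ConvTranspose2d(width * 4, width * 2, 2, stride=2)
        self.dec2 = conv_block(width * 4, width * 2)
        self.up1 = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec1 = conv_block(width * 2, width)
        self.head = nn.Conv2d(width, num_classes, 1)  # per-pixel damage-level logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))  # skip connections
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: one 256x256 satellite patch -> per-pixel damage logits.
logits = UNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 4, 256, 256])
```

In a cross-scenario setting like the one described, such a model would be fit on the public data and then evaluated unchanged on the earthquake and hurricane scenes to measure adaptability.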
Landslide identification is an important task in geologic disaster monitoring and early warning and is of great significance for improving social safety and mitigating the impact of disasters. With the development of computer vision, deep learning is widely used in landslide recognition research. We focus on segmenting landslides from high-resolution optical satellite images using a convolutional neural network. Current deep learning semantic segmentation models still face issues such as neglecting small objects and incorrectly segmenting terrain features with similar shapes and pixel characteristics. In addition, landslides have diverse and complex backgrounds, and the unbalanced category distribution and large differences in scene styles complicate the extraction of key features from remote sensing images. We propose a landslide image semantic segmentation method that fuses DeepLabv3+ with the completed local binary pattern (CLBP-DeepLabv3+). The backbone uses improved inverted residual blocks as its core structure to extract image information at different levels; the extracted landslide features are then fed into an improved DenseASPP that fuses features across levels, attending to both local and global features and capturing contextual information at different scales. The texture and edge features of the image are extracted using CLBP, and a feature aggregation module merges the multi-level features, completing the CLBP-DeepLabv3+ model. Ablation experiments and comparative tests on a self-built dataset show that the proposed method performs best on the validation set, with a mean intersection over union (mIoU) of 88.62%, mean pixel accuracy (mPA) of 94.17%, recall of 90.17%, and an intersection over union (IoU) for landslides of 80.53%. Compared with the original DeepLabv3+ model, the improved DeepLabv3+ increases mIoU by 3.15%, mPA by 3.99%, recall by 4.93%, and IoU by 4.97%. Compared with other semantic segmentation models, the improved DeepLabv3+ also achieves better segmentation accuracy in extracting landslide features.
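As a rough illustration of the texture branch, the following NumPy sketch computes the three CLBP code maps (sign, magnitude, and center) for an 8-neighbor, radius-1 configuration. The radius, neighbor count, and the way the maps are stacked for fusion are assumptions; the paper's feature aggregation module is not reproduced here.

```python
# Completed Local Binary Pattern (CLBP) sketch: sign (CLBP_S), magnitude
# (CLBP_M), and centre (CLBP_C) code maps for a grayscale image.
# Radius-1, 8-neighbour configuration is an assumption.
import numpy as np

def clbp(gray):
    """Return (CLBP_S, CLBP_M, CLBP_C) code maps for a 2-D grayscale image."""
    g = gray.astype(np.float64)
    pad = np.pad(g, 1, mode="edge")
    # Offsets of the 8 neighbours (radius 1), clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    diffs = np.stack(
        [pad[1 + dy:1 + dy + g.shape[0], 1 + dx:1 + dx + g.shape[1]] - g
         for dy, dx in offsets], axis=0)
    weights = (2 ** np.arange(8)).reshape(8, 1, 1)
    clbp_s = ((diffs >= 0) * weights).sum(axis=0)                              # sign of local differences
    clbp_m = ((np.abs(diffs) >= np.abs(diffs).mean()) * weights).sum(axis=0)   # magnitude of local differences
    clbp_c = (g >= g.mean()).astype(np.uint8)                                  # centre pixel vs. global mean
    return clbp_s.astype(np.uint8), clbp_m.astype(np.uint8), clbp_c

# Example: stack the code maps as extra channels, the kind of hand-crafted
# texture/edge input that a feature aggregation module could merge with
# the deep DeepLabv3+ features.
img = np.random.randint(0, 256, (64, 64))
s, m, c = clbp(img)
texture_channels = np.stack([s, m, c * 255], axis=0)  # shape (3, 64, 64)
```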
In recent years, algorithms for fast and accurate recognition and detection of circular markers have become crucial in the field of high-speed videogrammetry. However, most existing techniques require a manually selected region of interest that encompasses the full information of the circular marker; this manual box selection is inefficient and unsuitable for practical engineering applications. To address this issue, we propose a global automatic recognition and detection approach that employs multi-level constraints for identifying circular markers in high-speed videogrammetry. First, an edge detection method based on the Canny algorithm is employed to extract candidate edges containing all circular markers. Subsequently, two geometric constraints, a general geometric condition and a roundness metric constraint, are applied to eliminate a large number of non-circular-marker edges. Finally, pseudo-edges of circular markers are removed, and the corresponding accurate edges are retained by applying an extrema point condition constraint. The performance of the proposed method is evaluated on several high-speed videogrammetry image datasets. Experimental results demonstrate that our method accurately detects and recognizes all circular markers, outperforming comparable methods. The proposed method holds promise for efficient, wide-ranging applications in circular marker recognition and detection for high-speed videogrammetry.
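The multi-level constraint idea can be sketched with standard OpenCV primitives: Canny edges, a coarse size check standing in for the general geometric condition, and the classic roundness metric 4πA/P². The thresholds below are illustrative assumptions, and the extrema point condition constraint is omitted.

```python
# Sketch of constraint-based circular-marker detection: Canny edges,
# a size check, then a roundness test. Thresholds are assumptions.
import cv2
import numpy as np

def detect_circular_markers(gray, min_area=30.0, roundness_thresh=0.85):
    edges = cv2.Canny(gray, 50, 150)                       # candidate edges
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    markers = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area < min_area or perimeter == 0:              # coarse geometric (size) condition
            continue
        roundness = 4.0 * np.pi * area / (perimeter ** 2)  # equals 1.0 for a perfect circle
        if roundness < roundness_thresh:                   # roundness metric constraint
            continue
        (x, y), r = cv2.minEnclosingCircle(c)              # keep centre and radius
        markers.append((x, y, r))
    return markers

# Example usage on one high-speed camera frame (file name is hypothetical).
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
if frame is not None:
    print(detect_circular_markers(frame))
```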
With the rapid development of artificial intelligence technology, deep learning has achieved significant advantages in synthetic aperture radar automatic target recognition (SAR-ATR). However, previous research has shown that adding small perturbations not easily detected by the human eye can cause SAR-ATR models to make recognition errors; that is, the models are vulnerable to adversarial attacks. To address the long computation time of existing SAR sparse adversarial attack algorithms, we propose a SAR fast sparse adversarial attack (FSAA) algorithm. First, an end-to-end sparse adversarial attack framework is developed based on a lightweight ResNet generator that uses two different upsampling modules to control the amplitude and position of the adversarial perturbation. A loss function for the generator is then constructed, consisting mainly of the linear combination of an attack loss, an amplitude distortion loss, and a sparsity loss. Finally, the SAR image is mapped through the trained generator in a single step to generate sparse adversarial perturbations quickly and effectively. Experimental results show that, compared with the existing SAR sparse adversarial attack algorithm, the proposed method generates perturbations at least 30 times faster when the perturbation affects less than 0.05% of the pixels in the entire image, and the recognition rate of the model is >13%.
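The composite generator loss can be sketched as follows; the concrete forms of the three terms and the weighting coefficients are assumptions, since the abstract only states that attack, amplitude distortion, and sparsity losses are added linearly.

```python
# Sketch of a generator loss combining attack, amplitude-distortion, and
# sparsity terms. Term formulations and the weights are assumptions.
import torch
import torch.nn.functional as F

def fsaa_generator_loss(victim_logits, true_labels, perturbation,
                        lambda_dist=1.0, lambda_sparse=1.0):
    # Attack loss: push the victim SAR-ATR model away from the true label
    # (negative cross-entropy, i.e. an untargeted attack objective).
    attack_loss = -F.cross_entropy(victim_logits, true_labels)
    # Amplitude distortion loss: keep perturbation amplitudes small (L2 penalty).
    dist_loss = perturbation.pow(2).mean()
    # Sparsity loss: encourage few perturbed pixels (L1 as a convex surrogate of L0).
    sparse_loss = perturbation.abs().mean()
    return attack_loss + lambda_dist * dist_loss + lambda_sparse * sparse_loss

# Example with dummy tensors: a batch of 4 SAR chips, 10 target classes.
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
delta = torch.randn(4, 1, 128, 128) * 0.01  # generator output (perturbation)
print(fsaa_generator_loss(logits, labels, delta))
```

Because the generator is trained once against the victim model, inference needs only a single forward pass per image, which is what makes the one-step perturbation generation fast.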