The mitigation of energy usage in urban areas, especially in buildings, has recently captured the attention of many city managers. Owing to the limited resolution of thermal images, especially at the edges, creating a high-resolution (HR) surface model from them is challenging. This research proposes a two-phase strategy to generate an HR four-dimensional thermal surface model of building roofs. In the single-source modification phase, an enhanced thermal orthophoto is produced by retraining the enhanced deep residual super-resolution network and then applying state-of-the-art structure-from-motion, semi-global matching, and space-intersection techniques. To overcome the resolution limits of single-source methods, the final surface model's resolution is raised by combining the thermal data with visible unmanned aerial vehicle (UAV) images. To this end, after generating the visible orthophoto and digital surface model, buildings and their boundaries are extracted using a multi-feature semantic segmentation method. Next, in the multi-source modification phase, a fine-registered enhanced thermal orthophoto is generated, and thermal edges are identified around each building boundary. The visible and thermal boundaries are then matched, and any smoothness in the temperature edges is eliminated. The results show that the average positional difference between the thermal edges and the building boundaries is reduced, and temperature smoothness is completely eliminated at the building edges.
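As a rough illustration of the positional-accuracy metric described above, the sketch below computes the mean nearest-neighbour distance between thermal edge points and visible building-boundary points. The function name and toy coordinates are hypothetical; the actual registration and edge-matching pipeline is considerably more involved.

```python
import numpy as np

def mean_boundary_offset(thermal_edges, building_boundary):
    """Mean nearest-neighbour distance (in pixels) from each thermal
    edge point to the visible building boundary."""
    thermal_edges = np.asarray(thermal_edges, dtype=float)
    building_boundary = np.asarray(building_boundary, dtype=float)
    # Pairwise Euclidean distances, shape (n_thermal, n_boundary)
    d = np.linalg.norm(
        thermal_edges[:, None, :] - building_boundary[None, :, :], axis=2
    )
    return d.min(axis=1).mean()

# Toy example: thermal edge points shifted 2 px from a square roof boundary
boundary = np.array([[0, 0], [0, 10], [10, 10], [10, 0]])
edges = boundary + np.array([2, 0])
print(mean_boundary_offset(edges, boundary))  # → 2.0
```

A smaller value after the multi-source modification phase would correspond to the improvement reported in the abstract.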
Multisensor data fusion is one of the most popular topics in remote sensing data classification because it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing community. A deep learning decision fusion approach is presented to classify multisensor urban remote sensing data. After deep features are extracted from joint spectral–spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest through the joint use of the sensors. Finally, context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments is conducted on the widely used dataset of the 2014 IEEE GRSS Data Fusion Contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
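Under simplifying assumptions, the decision-level fusion step can be sketched as a weighted average of per-sensor class-probability outputs. The function name and the averaging rule below are illustrative only, not the paper's exact fusion scheme.

```python
import numpy as np

def fuse_decisions(prob_maps, weights=None):
    """Fuse per-sensor class-probability maps by weighted averaging.
    prob_maps: list of (n_samples, n_classes) arrays, one per sensor."""
    stack = np.stack(prob_maps)                   # (n_sensors, n, c)
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    fused = np.tensordot(weights, stack, axes=1)  # (n, c)
    return fused.argmax(axis=1), fused

# Two sensors disagree on sample 0; fusion resolves it by confidence
p_optical = np.array([[0.6, 0.4], [0.2, 0.8]])
p_lidar   = np.array([[0.3, 0.7], [0.1, 0.9]])
labels, fused = fuse_decisions([p_optical, p_lidar])
print(labels)  # → [1 1] (fused probs: [[0.45, 0.55], [0.15, 0.85]])
```

Weighting each sensor by a validation-set accuracy, rather than uniformly, is a common refinement of this scheme.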
Classification of hyperspectral remote sensing imagery is one of the most popular topics in the field because of its intrinsic potential to gather spectral signatures of materials, offering distinct abilities for object detection and recognition. In the last decade, an enormous number of methods were suggested to classify hyperspectral remote sensing data using spectral features, though many do not use all of the available information and therefore yield poor classification accuracy; on the other hand, the exploration of deep features has recently received considerable attention and has become a research hot spot in the geoscience and remote sensing community as a means of enhancing classification accuracy. A deep learning architecture is proposed to classify hyperspectral remote sensing imagery through the joint utilization of spectral–spatial information. A stacked sparse autoencoder performs unsupervised feature learning to extract high-level representations of the joint spectral–spatial information; then, a soft classifier is employed to train on these high-level features and to fine-tune the deep learning architecture. Comparative experiments are performed on two widely used hyperspectral remote sensing datasets (Salinas and PaviaU) and a coarse-resolution hyperspectral dataset in the long-wave infrared range. The obtained results indicate the superiority of the proposed spectral–spatial deep learning architecture over conventional classification methods.
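A minimal sketch of how joint spectral–spatial input features might be assembled before the autoencoder stage: each pixel's spectrum is concatenated with the mean spectrum of its spatial neighbourhood. The window size and this particular feature construction are illustrative assumptions; the stacked sparse autoencoder itself is omitted.

```python
import numpy as np

def spectral_spatial_features(cube, window=3):
    """Build joint spectral-spatial feature vectors: each pixel's own
    spectrum concatenated with its neighbourhood's mean spectrum.
    cube: (rows, cols, bands) hyperspectral image."""
    r, c, b = cube.shape
    pad = window // 2
    # Edge-replicate padding so border pixels have full windows
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    feats = np.empty((r, c, 2 * b))
    for i in range(r):
        for j in range(c):
            patch = padded[i:i + window, j:j + window]
            feats[i, j] = np.concatenate(
                [cube[i, j], patch.mean(axis=(0, 1))]
            )
    return feats.reshape(r * c, 2 * b)  # one feature vector per pixel

cube = np.random.rand(5, 5, 10)
X = spectral_spatial_features(cube)
print(X.shape)  # → (25, 20)
```

Feature vectors of this form would then be fed to the unsupervised feature-learning stage for high-level representation extraction.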
Interest in the joint use of data from multiple sensors has increased for classification applications, because fusing different sources of information can produce a better understanding of the observed site. In this field of study, the fusion of light detection and ranging (LIDAR) and passive optical remote sensing data for land cover classification has attracted much attention. This paper addresses the combined use of hyperspectral (HS) and LIDAR data for land cover classification. HS images provide a detailed description of the spectral signatures of classes, whereas LIDAR data give detailed height information but no spectral signatures. This paper presents a multiple fuzzy classifier system for the fusion of HS and LIDAR data. The system is based on fuzzy K-nearest neighbor (KNN) classification of the two datasets after feature grouping is applied to each. A fuzzy decision fusion method is then applied to fuse the results of the fuzzy KNN classifiers. An experiment was carried out on the classification of HS and LIDAR data from Houston, USA. The proposed fuzzy classifier ensemble provides interesting conclusions about the effectiveness and potential of the joint use of these two data sources: fuzzy classifier fusion improves the classification results compared with independent single fuzzy classifiers on each dataset, and the proposed fuzzy method achieved the best performance, with an overall accuracy of 93%.
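The two core ingredients can be sketched as follows, assuming a Keller-style fuzzy KNN (inverse-distance-weighted class memberships) and a simple membership-averaging fusion rule; the paper's exact membership and fusion formulas may differ.

```python
import numpy as np

def fuzzy_knn(X_train, y_train, X_test, k=3, m=2):
    """Fuzzy KNN: class memberships weighted by inverse distance
    to the k nearest training samples (Keller-style)."""
    n_classes = int(y_train.max()) + 1
    U = np.zeros((len(X_test), n_classes))
    for i, x in enumerate(X_test):
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]
        # Inverse-distance weights; guard against zero distance
        w = 1.0 / np.maximum(d[nn], 1e-12) ** (2 / (m - 1))
        for wj, idx in zip(w, nn):
            U[i, y_train[idx]] += wj
        U[i] /= U[i].sum()  # normalise memberships to sum to 1
    return U  # (n_test, n_classes)

def fuse(U_a, U_b):
    """Fuzzy decision fusion: average membership matrices, then defuzzify."""
    return ((U_a + U_b) / 2).argmax(axis=1)

# Toy 1-D example standing in for the HS and LIDAR feature groups
X_tr = np.array([[0.0], [0.1], [5.0], [5.1]])
y_tr = np.array([0, 0, 1, 1])
X_te = np.array([[0.05], [5.05]])
U1 = fuzzy_knn(X_tr, y_tr, X_te, k=2)
U2 = fuzzy_knn(X_tr * 1.1, y_tr, X_te, k=2)  # second "sensor"
labels = fuse(U1, U2)
print(labels)  # → [0 1]
```

Because memberships (not hard labels) are fused, a sensor that is only weakly confident for a pixel contributes proportionally less to the final decision.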
Supervised approaches to hyperspectral data classification in general, and statistical algorithms in particular, require training data of high quantity and quality. This requirement, together with the high dimensionality of hyperspectral data, is the most important obstacle to using supervised algorithms. As a solution, unsupervised (clustering) algorithms can be considered to overcome these problems. One emerging clustering algorithm suited to this purpose is kernel-based fuzzy c-means (KFCM), developed by kernelizing the FCM algorithm. Nevertheless, several parameters affect the efficiency of KFCM clustering of hyperspectral data: the kernel parameters, the initial cluster centers, and the number of spectral bands. To address these problems, two new algorithms are developed in which the particle swarm optimization (PSO) method is employed to optimize KFCM with respect to these parameters. The first algorithm optimizes KFCM with respect to the kernel parameters and the initial cluster centers, while the second also selects the optimum discriminative subset of bands in addition to the former parameters. Evaluation of the experimental results shows that the proposed algorithms are more efficient than the standard k-means and FCM algorithms for clustering hyperspectral remotely sensed data.
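The KFCM core (without the PSO wrapper) can be sketched as follows, assuming an RBF kernel, for which the kernel-induced squared distance is d²(x, v) = 2(1 − K(x, v)). The parameter names and the deterministic initialization are illustrative assumptions; in the proposed algorithms, the kernel parameter and initial centers would instead be supplied by the PSO search.

```python
import numpy as np

def kfcm_memberships(X, centers, sigma=1.0, m=2.0):
    """KFCM membership update with an RBF kernel.
    Kernel-induced distance: d^2(x, v) = 2 * (1 - K(x, v))."""
    sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-sq / (2 * sigma ** 2))
    d2 = np.maximum(2 * (1 - K), 1e-12)
    # Standard fuzzy c-means membership formula on kernel distances
    ratio = d2[:, :, None] / d2[:, None, :]      # (n, c, c)
    return 1.0 / (ratio ** (1 / (m - 1))).sum(axis=2)

def kfcm(X, c=2, sigma=1.0, m=2.0, iters=50):
    # Deterministic init for illustration: spread centers across the data
    idx = np.linspace(0, len(X) - 1, c).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        U = kfcm_memberships(X, centers, sigma, m)
        sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-sq / (2 * sigma ** 2))
        W = (U ** m) * K                          # kernel-weighted fuzzy weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, kfcm_memberships(X, centers, sigma, m)

# Toy 1-D data: two clusters near 0 and 10
X = np.array([[0.0], [0.2], [10.0], [10.2]])
centers, U = kfcm(X, c=2, sigma=2.0, iters=30)
labels = U.argmax(axis=1)
print(labels)  # → [0 0 1 1]
```

The clustering quality is visibly sensitive to `sigma` and to the initial centers, which is exactly the sensitivity the PSO-based optimization in the abstract is designed to address.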