Remote sensing imaging from satellites is widely applied to environmental monitoring, such as land cover analysis,1,2 weather analysis,3 and crop phenology monitoring.4 Some of these applications, such as disaster management, crop phenology monitoring, land cover change detection, and air quality monitoring, require better spatial and temporal resolutions.5 Recently, with urban development and industrialization, air pollution has become an important issue that endangers human health6–8 and negatively affects the Earth's energy budget. Thus, the monitoring of air pollutants is essential to understand and mitigate the effects and threats of suspended particles (atmospheric aerosols). Ground stations are widely used to monitor air quality in terms of particulate matter (PM) concentrations near the surface. Although they can examine air quality effectively, the spatial coverage of ground stations is usually too sparse to capture local phenomena.9 With the advantage of wide coverage at regional or global scales, satellite observation becomes a better choice, providing higher spatial information for air quality monitoring via aerosol optical properties.5,10,11 The aerosol optical depth (AOD) can be retrieved from satellite images to further estimate PM concentrations such as PM2.5 and PM10.12,13
In general, the regional emission and transport of pollutants change the air quality both dynamically and locally. Thus, the spatial and temporal resolutions of a single polar-orbiting satellite are insufficient to examine the diurnal variation of air quality; no single satellite can produce images with both high-spatial and high-temporal resolutions. Although WorldView-3 (0.3 to 1.3 m) and Landsat-8 (15 to 30 m) have relatively high spatial resolutions, their temporal resolutions are low, which limits continuous monitoring. On the other hand, satellites with high temporal resolution can be applied to time-critical applications; for example, Brakenridge and Anderson14 applied Terra/Aqua Moderate Resolution Imaging Spectroradiometer (MODIS, 1 to 2 days) images to disaster management, and Imai and Yoshida15 applied the Himawari-8 Advanced Himawari Imager (AHI, 10 min) to weather monitoring. However, their coarse spatial resolutions cannot provide detailed spatial information. Hence, image fusion techniques were proposed to generate high-spatial–temporal resolution imagery,16–20 such as the image-fusion model for enhancing the temporal resolution of Landsat-8 surface reflectance images using MODIS and its further application to surface temperature monitoring.21,22
Himawari-8, the second-generation Asian geostationary satellite launched in 2014, offers observations every 10 min and has been widely applied to investigations of short-term phenomena and dynamic processes despite its low spatial resolution.3,4,15 In addition to the high temporal resolution, its new payload, the AHI, is equipped with 16 bands within the visible and infrared spectra and provides detailed information on both terrestrial and atmospheric parameters. The similarity of the visible bands between the Landsat-8 Operational Land Imager (OLI) and Himawari-8 AHI sensors, as shown in Table 1, offers high potential to fuse the high-spatial resolution (30 m) imagery with the ultrahigh-temporal resolution (10-min windows) imagery for the monitoring of short-term events. Therefore, this research aims at applying a spatial–temporal image fusion method to Landsat-8 and Himawari-8 images for the further application of air quality monitoring. To achieve this objective, two major challenges need to be addressed.
The specifications of Himawari-8 AHI and Landsat-8 OLI.
Overall, three main objectives are proposed in this study. First, we aim at examining the feasibility of fusing Himawari-8 and Landsat-8 satellite images to produce sequential images with ultrahigh-temporal resolution (10-min windows) and high-spatial resolution (30 m). The second objective is to modify the spatial and temporal adaptive reflectance fusion model (STARFM) algorithm16 to preserve the atmospheric information after image fusion. Finally, the fused images are applied to aerosol retrieval for monitoring the air quality/pollution at high spatiotemporal resolutions. To verify the aerosol retrievals from the fused images, the in situ measurements of aerosol properties from the Aerosol Robotic Network (AERONET) are used for comparison.
For the rest of this paper, we first review the literature related to spatial and temporal image fusion methods in Sec. 2. Section 3 introduces the details of the proposed spatial and temporal adaptive reflectance fusion model-aerosol property (STARFM-AP) method and workflow. In Sec. 4, we explain the evaluation results by comparing them with an observed/actual Landsat image and the AOD observations from an AERONET ground station. The overall results are discussed in Sec. 5, and the conclusions and future work are presented in Sec. 6.
The function of the image fusion approach is to produce a better resolution in the spatial and temporal scales from multiple sources of satellite images. There are two general types of image fusion. The first one is spectral and spatial image fusion, which integrates multispectral images (e.g., RGB images) and panchromatic images to enhance the spatial resolution of the multispectral images.23 The second type is spatial and temporal image fusion, which aims at integrating high-spatial resolution imagery with high-temporal resolution imagery, such as STARFM.16
The main concept of STARFM is to estimate the new surface reflectance at a prediction date based on the reflectance differences between the Landsat and MODIS images at a reference date (i.e., the reference image). As a MODIS pixel may contain multiple land cover types (mixel), the STARFM predicts a new reflectance based only on the information from the neighborhood pixels with the same land cover type. Although the STARFM can be effective for regions that do not have significant land cover changes, it is less effective if the land cover changed. Thus, there are some efforts on fixing this issue, such as the spatial–temporal adaptive algorithm for mapping reflectance change (STAARCH)17 and the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM).18 The idea of STAARCH is to find the changed areas from reference image pairs and then use a series of MODIS images to identify the date of the change for the image fusion process. On the other hand, the ESTARFM further assumes that the reflectance changes linearly over time and thus interpolates or extrapolates the variation of reflectance when fusing an image.
In addition to the STARFM community, another spatial and temporal image fusion group focuses on the spatial and temporal data fusion model (STDFM).19 The STDFM first collects the surface reflectance differences between fine-resolution and coarse-resolution images and then calculates the mean surface reflectance of each land cover type based on an assumed relationship between the reflectance of the coarse and fine images. However, as using the mean reflectance for pixels of each land cover type may induce some uncertainties, the enhanced spatial and temporal data fusion model (ESTDFM)20 applies a sliding window to calculate the mean surface reflectance of each land cover type inside that window. Therefore, the ESTDFM can adaptively adjust the reflectance of the fused image according to local information.
However, the existing methods are not suitable for the objective of AOD retrieval because of the following factors. First, the fused images should employ the top-of-atmosphere (TOA) reflectance to keep the atmospheric information, whereas all the existing methods can only fuse the surface reflectance, as they penalize pixels with large reflectance changes by reducing their contribution to the fusion process. Such reflectance changes can be caused by the atmosphere, making them important information that needs to be retained. Second, as air quality usually changes dynamically and nonlinearly, even the methods that consider large reflectance changes, such as the ESTDFM and the STAARCH, rely on assumptions about the reflectance change behavior that are not suitable. Hence, this study extends the STARFM method to solve the above-mentioned issues for the objective of aerosol retrieval.
Materials and Methodology
The satellite images and in situ measurements of AOD at the AERONET NCU_Taiwan site (24.968°N, 121.188°E) are collected for examining the proposed approach, including three dates of Landsat-8 OLI images, nine dates of Himawari-8 AHI images, and AOD measurements, as indicated in Table 2. The total number of tested images is 208. The reference images of the Landsat-8 OLI and Himawari-8 AHI are from November 16, 2015 at 1020 h; they are marked with the red rectangular boxes on the left-hand side of Fig. 5. The Landsat-8 OLI images on September 13, 2015 and October 15, 2015 are employed as the observed images to compare with the fused Himawari-8 AHI images; the results are shown in Figs. 6 and 7, respectively. AERONET is a global aerosol observation network of Cimel Sun photometers.24 Since AERONET provides high-quality aerosol properties, its AOD measurements have become a standard reference for the validation of aerosol retrievals and further applications.25 Thus, the AOD measurements of the AERONET NCU_Taiwan site located in the study area are collected as the reference AOD in this study.
The study area and testing datasets.
In this study, three main stages are proposed in the workflow: the preprocessing stage, the spatiotemporal image fusion stage, and the validation stage, as shown in Fig. 2. Geometric registration, land cover type classification, and bidirectional reflectance correction of the imagery data are implemented in the preprocessing stage. The second stage then performs the spatial and temporal image fusion process to predict the high-spatial resolution image at the prediction time (i.e., the synthetic image). Finally, the third stage retrieves AODs from the fused images and compares them with the in situ observations from AERONET.
Preprocessing of Landsat-8 and Himawari-8 Data
The Landsat-8 and Himawari-8 AHI images with the closest observation times are first collected as the reference pair for fusing the Himawari-8 images acquired at the processing (prediction) time. The preprocessing stage contains the geometric registration of the images, the land cover classification of the Landsat-8 image, and the surface bidirectional reflectance correction of the Himawari-8 images.
Before the fusion process, the images are first geographically registered so that the information in the images can be matched with each other. In this study, we manually select tie points on the images and then apply an affine transformation for the image registration. This simplified approach is selected because polar-orbiting satellites usually have a small field of view, which results in linear geometric distortion that can usually be corrected by an affine transformation. In addition, this stage also performs image classification on the Landsat-8 image to select spectrally similar pixels during the image fusion process. In this study, the k-means algorithm is used for land cover type classification.
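As a minimal sketch of the tie-point registration step, an affine transformation can be fitted to manually selected tie points by least squares. The helper names below are illustrative, not from the original implementation, and the fit assumes at least three non-collinear tie points.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform from manually selected tie points.

    src_pts, dst_pts: (N, 2) arrays of (x, y) coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    design = np.hstack([src, ones])            # (N, 3) design matrix
    # Solve design @ A.T = dst in the least-squares sense.
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return coeffs.T                            # (2, 3)

def apply_affine(affine, pts):
    """Apply a fitted 2x3 affine matrix to (N, 2) points."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    return np.hstack([pts, ones]) @ affine.T
```

With more than three tie points, the least-squares fit averages out small manual picking errors, which is the usual practice for this kind of registration.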
Moreover, as mentioned earlier, the Himawari-8 images are taken at different times during the daytime, which results in differences in the surface reflectivity. These differences would be mistaken for noise or land cover changes in the image fusion process. Thus, the bidirectional reflectance correction of the Himawari-8 images is essential in the preprocessing stage. In this study, we apply diurnal histogram matching to register the surface reflectance of each Himawari-8 image to that of the reference image.
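The histogram matching step can be sketched as a standard cumulative-distribution match between the reflectance of the image to be corrected and that of the reference image; this is a generic implementation of the technique, not necessarily the exact procedure used in the paper.

```python
import numpy as np

def histogram_match(source, reference):
    """Map the reflectance distribution of `source` onto that of `reference`.

    Both inputs are 2-D reflectance arrays; returns `source` with its
    empirical cumulative distribution matched to the reference image.
    """
    src_flat = source.ravel()
    # Rank positions (empirical CDF) of the source pixels.
    src_order = np.argsort(src_flat)
    src_quantiles = np.linspace(0.0, 1.0, src_flat.size)
    # Quantile function (inverse CDF) of the reference image.
    ref_sorted = np.sort(reference.ravel())
    ref_quantiles = np.linspace(0.0, 1.0, ref_sorted.size)
    matched = np.empty_like(src_flat)
    matched[src_order] = np.interp(src_quantiles, ref_quantiles, ref_sorted)
    return matched.reshape(source.shape)
```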
Spatial–Temporal Image Fusion Stage
Two spatial and temporal image fusion methods are examined in this study: the original STARFM model16 and the STARFM-AP proposed here for preserving atmospheric properties, as briefly introduced in Secs. 3.2.1 and 3.2.2, respectively.
Assume that the surface reflectance differences between the fine-resolution and coarse-resolution images at the reference time t0 equal the differences at the prediction time tp. The surface reflectance of a specific pixel at tp can then be estimated from the weighted sum over its spectrally similar neighbors, L(x, y, tp) = Σ W_ijk [C(x_i, y_j, tp) + L(x_i, y_j, t0) − C(x_i, y_j, t0)], where L and C denote the fine- and coarse-resolution reflectance and W_ijk is the weighting determined by the spectral, temporal, and spatial distances; see Gao et al.16 for more details on the STARFM model.
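Under these assumptions, the single-pair STARFM prediction can be sketched as below. This is a simplified illustration of the weighting idea (a fixed spectral-similarity threshold and inverse difference weights, with no spatial-distance term), not the full algorithm of Gao et al.

```python
import numpy as np

def starfm_predict(fine_t0, coarse_t0, coarse_tp, window=31, sigma_spec=0.05):
    """Simplified single-pair STARFM prediction (illustrative sketch).

    fine_t0:   fine-resolution reflectance at the reference time t0.
    coarse_t0: coarse-resolution reflectance at t0, resampled to the fine grid.
    coarse_tp: coarse-resolution reflectance at the prediction time tp.
    For each pixel, spectrally similar neighbors within a moving window are
    weighted, and the coarse-image temporal change is added to the fine image.
    """
    rows, cols = fine_t0.shape
    half = window // 2
    prediction = np.empty_like(fine_t0)
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            f = fine_t0[r0:r1, c0:c1]
            c_t0 = coarse_t0[r0:r1, c0:c1]
            c_tp = coarse_tp[r0:r1, c0:c1]
            # Spectrally similar pixels: close to the center in the fine image.
            similar = np.abs(f - fine_t0[i, j]) <= sigma_spec
            # Weights inversely proportional to spectral and temporal differences.
            spectral = np.abs(f - c_t0) + 1e-6
            temporal = np.abs(c_tp - c_t0) + 1e-6
            w = np.where(similar, 1.0 / (spectral * temporal), 0.0)
            w /= w.sum()
            prediction[i, j] = np.sum(w * (c_tp + f - c_t0))
    return prediction
```

When the coarse image changes uniformly, the prediction reduces to the fine image shifted by the coarse-image difference, which matches the basic STARFM relation above.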
To keep the atmospheric information, the TOA reflectance from the coarse image (Himawari-8) is utilized instead of the surface reflectance to produce the fine-spatial resolution image at the prediction time. However, if we directly apply the TOA reflectance from Landsat and Himawari in the STARFM, the changes caused by atmospheric effects would be treated as land cover changes or noise when generating the aforementioned fine-spatial resolution weightings. Ideally, atmospheric correction should therefore be applied to obtain the surface reflectance. As the atmospheric correction procedure is usually time-consuming, however, it would prevent us from fully utilizing the AHI 10-min monitoring capability in particular. Therefore, a longer wavelength spectral band, which is much less influenced by atmospheric scattering and absorption, is utilized for constructing the base of the surface reflectivity. According to the relationships between the spectral bands in surface reflectivity,26,27 the surface reflectivity of shorter wavelengths can be further obtained. Although the dark target approach is limited over bright surfaces for accurate AOD retrieval due to the variance in the spectral reflectivity ratio between dark and bright surfaces, the linear relationship of surface reflectivity between visible and SWIR bands has been evidenced for each land cover type based on remotely sensed data after atmospheric correction.26 Furthermore, the surface signal used in this study is the relative intensity for the purpose of weighting generation in image fusion. Therefore, the linear relationship between spectral reflectivities is applied to all types of land cover in this study.
In this study, we first process the short-wave infrared (SWIR) spectral band of Landsat-8 OLI, centered at 1608.5 nm, as the reference of surface reflectance after systematic calibration with the Himawari-8 AHI SWIR spectral band. Then the spatial distribution of the SWIR surface reflectance is applied to generate the weightings for the image fusion procedure, with the diurnal variation taken from the Himawari-8 AHI SWIR band. The generated final SWIR weightings, associated with the TOA reflectance in the green band at the prediction time, are applied to Eq. (2) to fuse the fine image in the green band, which contains the atmospheric effect.
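If the linear visible–SWIR surface reflectivity relationship is fitted per land cover class, a sketch might look like the following; the function name, the per-class `np.polyfit` fit, and the input layout are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_visible_swir(vis_surface, swir_surface, labels):
    """Fit per-land-cover linear relations rho_vis ~= a * rho_swir + b.

    vis_surface, swir_surface: atmospherically corrected reflectance samples
    (1-D arrays of matched pixels).
    labels: land cover class of each sample, e.g., from the k-means step.
    Returns {class_id: (slope, intercept)}.
    """
    relations = {}
    for cls in np.unique(labels):
        mask = labels == cls
        # Degree-1 polynomial fit = ordinary least-squares line per class.
        slope, intercept = np.polyfit(swir_surface[mask], vis_surface[mask], 1)
        relations[cls] = (slope, intercept)
    return relations
```

The fitted relations can then be used to translate the SWIR reference reflectance into a visible-band surface reflectivity base for the weighting generation described above.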
The workflow of the proposed STARFM-AP approach is demonstrated in Fig. 3. The main difference between the STARFM and STARFM-AP is whether the surface reflectivity is generated with or without the SWIR spectral band. Taking advantage of the weaker scattering effect in the longer wavelength (SWIR) spectral bands, the atmospheric correction of the surface reflectivity in the shorter wavelength spectral bands is not only straightforward but also benefits further atmospheric applications of the fused image, such as the AOD retrieval introduced in the following section.
Diurnal AOD Retrieval and Evaluation
There are generally two kinds of approaches for AOD retrieval from satellite images: the dark target approach28 and the contrast reduction method.29 Considering the blurring effect of the scattering and absorption of atmospheric aerosols (see also the scheme in Fig. 4), the contrast reduction method is employed to retrieve the diurnal AOD with the fused Himawari-8 AHI images in this study. It is worth noting that the contrast between bright and dark pixels can be enhanced with the higher spatial resolution after image fusion, which is the main reason that the contrast reduction method is favored for AOD retrieval in this study.
Based on the radiative transfer equation, the satellite observation of TOA reflectance [Eq. (3)] can be further derived as Eq. (4)30,31 under the assumption that the surface reflectance and atmospheric transmittance do not change between the reference and prediction times: the aerosol extinction reduces the local contrast (dispersion) of the TOA reflectance, so the AOD at the prediction time can be obtained from the reference AOD and the contrast ratio between the two images.
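A minimal sketch of this contrast-reduction relation is given below. It assumes an unchanged surface and transmittance and uses the image-wide standard deviation as the contrast measure, whereas the paper applies the dispersion within local windows; the logarithmic form and the view-direction air-mass factor `mu_view` follow the general contrast-reduction literature rather than the paper's exact Eq. (4).

```python
import numpy as np

def aod_from_contrast(image_t0, image_tp, aod_t0, mu_view=1.0):
    """Contrast-reduction AOD estimate (illustrative sketch).

    Assuming the surface reflectance and atmospheric transmittance are
    unchanged between t0 and tp, aerosol extinction attenuates the local
    standard deviation (contrast) of the TOA reflectance, so
        tau_tp = tau_t0 - mu_view * ln(sigma_tp / sigma_t0).
    aod_t0 is a reference AOD, e.g., from AERONET on a clear day.
    """
    sigma_t0 = np.std(image_t0)
    sigma_tp = np.std(image_tp)
    return aod_t0 - mu_view * np.log(sigma_tp / sigma_t0)
```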
To evaluate the results of the image fusion from the STARFM and STARFM-AP procedures, direct comparisons between the fused images and the observed images (Landsat-8 OLI) are conducted. By comparing the TOA reflectance differences, the performance in preserving atmospheric properties can be identified. For the assessment of the further application to the retrieval of diurnal AODs, the coincident AOD measurements from AERONET are also compared.
Results of STARFM and STARFM-AP
Figure 5 shows the preliminary results of the fused and original images over the Zhongli District subset at 1100, 1200, 1300, and 1400 h (local time) on November 16, 2015. The differences between the STARFM and STARFM-AP fused images are not obvious at this scale of view. Thus, visual comparison at a closer zoom level and quantified analyses are carried out in the following section.
Validation of Fused Images
TOA reflectance comparisons
The cloud-free and clear-sky image datasets acquired on September 13, 2015 were examined first. By comparing the discrepancy in the TOA reflectance, the efficiency in preserving the atmospheric properties can be assessed as well. The ground measurement of the AOD from the AERONET NCU_Taiwan site on September 13, 2015 is 0.101 (clear-sky-like conditions) in the green band. The fusion results from the STARFM and STARFM-AP models are shown in Fig. 6, where the reference images are from November 16, 2015. In comparison with the observed Landsat-8 OLI image on September 13, 2015 [Fig. 6(a)], both the STARFM and STARFM-AP can provide satisfying fused images, where the average absolute differences are 0.1522 [Fig. 6(b)] and 0.1446 [Fig. 6(c)] in reflectance, respectively. Similar findings were reported by Gao et al.16
The scatter plots of the TOA reflectance, comparing the STARFM and STARFM-AP fused images with the observed image, are displayed in Fig. 7 for the results on October 15, 2015. The reference images are the same as in the previous case (i.e., from November 16, 2015). The obvious difference between the STARFM and STARFM-AP fused images occurs around a TOA reflectance value of 1600 [Fig. 7(b)], where the STARFM images show some systematic error [Fig. 7(a)]. As the STARFM directly applies the green band (with more scattering effect than the SWIR band), the atmospheric effect can reduce the apparent surface reflectance in a way that resembles land cover change. Thus, the image fusion process tends to refer to the reflectance of neighborhood pixels with higher reflectivity. As a result, the STARFM fused images show some homogeneous patterns [horizontal strips in Fig. 7(a)] in local regions. Furthermore, the slopes and offsets of the scatter-plot regressions also indicate better results for STARFM-AP (0.87 and 267.61) than for STARFM (0.82 and 339.08) in producing high-spatial resolution TOA reflectance, which benefits the AOD retrieval.
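The evaluation quantities reported here, the average absolute reflectance difference and the regression slope and offset against the observed image, can be computed as in the following sketch (the function name is illustrative).

```python
import numpy as np

def compare_fused(fused, observed):
    """Evaluate a fused image against the observed Landsat-8 image.

    Returns the average absolute reflectance difference and the slope and
    offset of the least-squares fit observed ~= slope * fused + offset.
    """
    f = np.asarray(fused, dtype=float).ravel()
    o = np.asarray(observed, dtype=float).ravel()
    mean_abs_diff = np.mean(np.abs(f - o))
    slope, offset = np.polyfit(f, o, 1)
    return mean_abs_diff, slope, offset
```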
Evaluation of AOD retrieval
In this study, the contrast reduction method in terms of the dispersion coefficient (see also Sec. 3.3) is applied to retrieve the AOD from the Himawari-8 images and the fused images produced in this study, from 0800 to 1400 h in 10-min steps on seven dates (October 25, 2015, October 27, 2015, November 15, 2015, November 16, 2015, November 27, 2015, December 13, 2015, and December 19, 2015). The green band was employed in the comparison due to its better sensitivity to aerosol effects. Theoretically, the window size is one of the variables in the AOD calculation using the contrast reduction method [Eq. (4)], and it is related to the spatial resolution of the satellite image, the reflectance of the land cover types within the window, and the atmospheric condition. For a fair comparison, the performance on each date was examined over a range of window sizes, since the optimal window size for the dataset on each date could be different. In addition, to avoid the possible influence of environmental noise, we filtered out the highest 20% and lowest 20% of the reflectance values32 for the AOD retrieval and included this variant in the comparison as well.
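The trimmed local contrast computation over a moving window can be sketched as follows; the looping structure and the exact trimming rule (drop the lowest and highest fractions inside each window before taking the standard deviation) are illustrative assumptions.

```python
import numpy as np

def trimmed_local_std(image, window, trim=0.2):
    """Local contrast with the extreme reflectance values filtered out.

    Within each window, the lowest and highest `trim` fractions of the
    reflectance values are discarded before computing the standard
    deviation, to suppress environmental noise. Border pixels without a
    full window are left as NaN.
    """
    rows, cols = image.shape
    half = window // 2
    out = np.full(image.shape, np.nan)
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            block = np.sort(image[i - half:i + half + 1,
                                  j - half:j + half + 1].ravel())
            k = int(trim * block.size)
            kept = block[k:block.size - k] if k > 0 else block
            out[i, j] = np.std(kept)
    return out
```

Sweeping `window` over several sizes and comparing the retrieved AODs against the AERONET reference on each date reproduces the kind of per-date window-size examination described above.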
The results of the AOD retrieval from the Himawari-8 images before and after fusion on each date are displayed in Fig. 8 for comparison. The in situ AOD measurements from the AERONET site are also included as the reference. Regarding the effect of image fusion on AOD retrieval, the performances of STARFM (green circles) and STARFM-AP (blue triangles) are generally much better and more stable than those of the original Himawari-8 data (black squares) when compared with the AERONET AOD measurements at the NCU_Taiwan site (red diamonds). The results indicate that the contrast in TOA reflectance can be intensified with a higher spatial resolution image for AOD retrieval based on the blurring effect. In the special case with heavier aerosol loading on October 27, 2015 (0.4 to 0.9 in AOD), the original Himawari-8 AHI images yielded the best performance, as shown in Fig. 8(b). The possible reason is that the AOD observed by an original Himawari-8 pixel covers many fine pixels after image fusion, which "dilutes" the atmospheric effect when retrieving a local region.
Regarding the preservation of atmospheric scattering and absorption (extinction) for AOD retrieval, most of the cases suggest better results from STARFM-AP (blue triangles) than from STARFM (green circles), although the root-mean-square errors (RMSEs) are similar. In addition to the magnitude, the AOD retrieved from the proposed STARFM-AP approach is also more stable (20.7% in relative error) than that from STARFM (24.4%), as in the case on November 15, 2015 [Fig. 8(c)].
To provide a quantitative analysis, the AODs retrieved from the Himawari-8, STARFM, and STARFM-AP images were compared with the AERONET measurements to calculate the RMSEs and relative errors, which are shown in Table 3. The gray regions indicate the time periods with cloud and shadow effects (visually identified), which were excluded from the statistical analyses. As Table 3 summarizes, the averaged relative errors of the AODs retrieved from the original Himawari-8 data, the STARFM fused images, and the STARFM-AP fused images, compared with the in situ measurements of the AERONET site, are 43.7%, 24.4%, and 20.7%, respectively. Overall, the better performance of the proposed STARFM-AP in retrieving AODs is indicated in most of the cases, implying that the atmospheric effect preservation procedure in STARFM-AP is recommended for retrieving the AOD at a high-spatial resolution.
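The RMSE and relative error against the AERONET measurements can be computed as in this sketch, with the relative error taken as the mean absolute error normalized by the reference AOD (an assumption about the exact definition used).

```python
import numpy as np

def aod_errors(retrieved, reference):
    """RMSE and mean relative error (%) of retrieved AODs vs. AERONET."""
    r = np.asarray(retrieved, dtype=float)
    a = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((r - a) ** 2))
    rel_err = 100.0 * np.mean(np.abs(r - a) / a)
    return rmse, rel_err
```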
The comparisons of the AOD retrieved from the Himawari-8 AHI images and the STARFM and STARFM-AP fused images in RMSE and relative error.
Note: Bold numbers indicate the best results in AOD retrievals.
TOA Reflectance Fusion with SWIR Spectrum
Surface bidirectional reflectance distribution and atmospheric extinction are the primary factors determining the TOA reflectance observed by a satellite sensor. The bidirectional reflectance can be characterized by the empirical diurnal reflectivity from a geostationary satellite, whereas the atmospheric effect mainly fluctuates with the scattering and absorption (extinction) of atmospheric aerosols in the visible spectral bands. Instead of the visible band itself, the SWIR spectrum, which is insensitive to atmospheric extinction, is proposed to eliminate the atmospheric effect on the surface reflectance for the image fusion (of the green band in this study). The better results of image fusion in TOA reflectance based on the SWIR surface reflectance [Fig. 7(b)] are evident compared with those based on the green band [Fig. 7(a)], in both slope and offset. In addition, the temporal variance of the atmospheric effect is much reduced with the SWIR spectrum for the image fusion in the time domain.
Improvement in AOD Retrieval
Air pollutants/aerosols exhibit distinct spatial and temporal distributions, such as the differences between urban and rural areas associated with normal and heavy traffic periods. Sufficient spatiotemporal data are thus essential for monitoring the air quality. Taking advantage of the temporal and spatial resolutions of geostationary and polar-orbiting satellites, respectively, high-spatiotemporal images are produced with the atmospheric information contained in the visible band for aerosol retrieval. Based on the concept of contrast reduction between bright and dark objects caused by atmospheric extinction, the optical depth of suspended aerosols can be derived. Theoretically, the stronger contrast signal from a higher spatial resolution image facilitates a more accurate AOD retrieval; see also the results before (Himawari-8) and after (STARFM/STARFM-AP) fusion in Table 3. On the other hand, the atmospheric effect/information can be appropriately retained within the fused visible-band images based on the SWIR TOA reflectance. Further examinations of the fused images in AOD retrieval with the SWIR reflectance exhibit more accurate retrievals (STARFM-AP). The results in Table 3 also confirm the advantages of using the SWIR band in preserving atmospheric information in the visible bands and in reducing the fluctuation of the atmospheric effect in the time domain for further applications of the fused visible images.
Conclusions and Future Work
Considering the bidirectional reflectance and atmospheric effect, an image fusion procedure for polar-orbiting and geostationary satellites has been examined for high-spatial and high-temporal AOD retrieval in this study. High-spatiotemporal resolution images of TOA reflectance are successfully fused with the proposed STARFM-AP approach from Himawari-8 AHI and Landsat-8 OLI images. The results of the case study and validations demonstrate improvements in both TOA reflectance fusion and AOD retrieval. The achievements of this study can be summarized as the integration of radiative characteristics in the visible and SWIR spectra to (1) prevent the atmospheric effect on surface reflectance during the image fusion of TOA reflectance and (2) preserve the atmospheric information in the fused TOA reflectance for the improvement of AOD retrieval. Therefore, a potential contribution to providing high-spatiotemporal AOD distributions can be expected in monitoring not only the diurnal air quality but also the source regions and pathways of pollutants, with fused images from the sequential observations of a geostationary satellite.
Nevertheless, we find that the fused images can have a larger reflectance than the actual Landsat-8 OLI images, which could be caused by the blurring effect (aerosol loading). Therefore, future work should investigate the blurring effect at different spatial resolutions in relation to the optimal window size for AOD retrieval and incorporate it into the STARFM-AP.
The authors deeply appreciate the United States Geological Survey (USGS) and the Japan Meteorological Agency (JMA) for providing the Landsat and Himawari-8 satellite images, and the in situ AOD measurements from AERONET supported by the National Aeronautics and Space Administration. They are also very grateful to the editor and reviewers for their careful and constructive comments, which have improved the manuscript substantially. The authors would also like to express their appreciation to the editorial board and editorial staff for all the expertise and hard work invested in this paper.
Chih-Yuan Huang received his BS and MS degrees in civil engineering and geoinformation engineering from the National Central University in 2007 and 2008, respectively, and his PhD in geomatics engineering from the University of Calgary in 2014. He is an assistant professor at the Center for Space and Remote Sensing Research, National Central University. He has been the author of more than 10 journal papers since 2013 and has written three book chapters. His current research interests include GIS, geospatial cyber infrastructure, sensor web, and internet of things.
Hsuan-Chi Ho was a graduate student in the Civil Engineering Department, National Central University. He received his BS degree from the Civil Engineering Department, National United University in 2015, and his MS degree from the Civil Engineering Department, National Central University in 2018. His research mainly focuses on spatiotemporal remote sensing image fusion.
Tang-Huang Lin received his PhD from the Institute of Space Science, National Central University, Taoyuan, Taiwan, in 2001. He has been a professor at National Central University since 2006. His research focuses on applying satellite data to retrieve aerosol optical properties for the monitoring of air pollution and the atmospheric environment in relation to public health, global warming, and climate change.