Chlorophyll-a (Chl-a) concentration estimation by remote sensing is an important means of monitoring offshore water quality and eutrophication. In-situ hyperspectral data allow accurate analyses of Chl-a but are not suitable for regional inversion. Satellite remote sensing makes regional inversion possible, but its precision is lower because it depends on the quality of atmospheric correction. Therefore, this work uses machine learning to fuse in-situ hyperspectral data and Sentinel-2 Multispectral Instrument (MSI) images, combining their complementary advantages to improve the precision of regional Chl-a concentration inversion. First, the in-situ spectra were resampled with the satellite spectral response function to obtain equivalent reflectance. Second, the spectral feature bands of Chl-a were determined by correlation analysis. Then three machine learning models, support vector regression, random forest, and back-propagation neural network, were used to establish mapping relationships between the equivalent reflectance and the satellite image reflectance in the feature bands, so as to correct the satellite feature bands. Finally, Chl-a inversion models were constructed from the satellite feature bands before and after correction. The results demonstrate that the corrected inversion model increases R² by 0.25 and decreases the mean relative error by 7.6%. This fusion method effectively improves the accuracy of large-scale Chl-a concentration estimation.
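As a rough illustration of the resampling and band-correction steps described above, the sketch below computes a band-equivalent reflectance by weighting a hyperspectral spectrum with a spectral response function (SRF) and then fits a support vector regression mapping from satellite band reflectance to equivalent reflectance. The arrays, the Gaussian stand-in for the Sentinel-2 MSI band 4 response, and the SVR hyperparameters are illustrative assumptions, not the data or configuration used in the study.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical in-situ hyperspectral measurement (350-1000 nm, 1 nm step).
wavelengths = np.arange(350, 1001, 1.0)                 # nm
rrs_insitu = np.random.rand(len(wavelengths)) * 0.02    # placeholder reflectance spectrum

def band_equivalent(rrs, srf):
    """Band-equivalent reflectance: SRF-weighted mean of the hyperspectral spectrum."""
    return np.sum(rrs * srf) / np.sum(srf)

# Gaussian stand-in for the MSI band 4 (red) response; the official SRF table would be used in practice.
center, width = 665.0, 15.0
srf_b4 = np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
rho_eq_b4 = band_equivalent(rrs_insitu, srf_b4)

# Band correction at the match-up stations:
# X holds the satellite reflectance of one feature band, y the corresponding equivalent reflectance.
X = np.random.rand(50, 1) * 0.05                        # placeholder satellite band reflectance
y = X.ravel() * 0.8 + 0.002                             # placeholder equivalent reflectance
corrector = SVR(kernel="rbf", C=10.0, epsilon=1e-4).fit(X, y)
corrected_band = corrector.predict(X)                   # corrected satellite feature band
```

The random forest and back-propagation neural network correctors would be fitted in the same way, swapping the regressor while keeping the same match-up samples.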
The accuracy of geographic location is important for island investigations by remote sensing. However, many islands are far from land, and accurate ground control points (GCPs) for geometric correction cannot be obtained. We propose a geometric correction method that locates islands accurately without using GCPs. The test data are four SPOT-5 images acquired from the same orbit at the same time; one of these images does not cover any islands but allows one or more GCPs to be acquired. First, we correct that image with its GCPs by using a physical model, the image metadata, and a digital elevation model derived from SRTM data, but the accuracy is only slightly better than 50 m. We then calculate the offset between the corrected image and its GCPs and use this offset to correct the digital elevation model so that its coordinates agree with those from the metadata. Next, we correct the image again using the physical model, the metadata, and the corrected digital elevation model to suppress the relief-induced (hypsographic) distortion. Finally, we use an affine transformation model to calculate the distortion parameters of the corrected image from its GCPs and apply these parameters to the other three images, which have no GCPs. The experiment is encouraging: even for islands 159 km away from land, we achieve a location accuracy better than 5 m.
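The final affine correction step can be illustrated with a minimal sketch: the six affine parameters are estimated by least squares from the GCPs of the reference image and then applied to coordinates from the images without GCPs. The GCP coordinates, the constant offset, and the helper function below are hypothetical placeholders, not the authors' data or implementation.

```python
import numpy as np

# Hypothetical GCPs in the reference image: coordinates after the physical-model
# correction (xy) versus their true locations (xy_ref).
xy = np.array([[1200.0, 3400.0], [5600.0, 800.0], [2500.0, 7200.0], [7800.0, 5100.0]])
xy_ref = xy + np.array([[12.5, -8.0]])   # placeholder: a roughly constant residual offset

# Six-parameter affine model: x_ref = a0 + a1*x + a2*y,  y_ref = b0 + b1*x + b2*y.
A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, xy_ref[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, xy_ref[:, 1], rcond=None)

def apply_affine(points, cx, cy):
    """Apply the fitted affine distortion parameters to coordinates of another image."""
    design = np.column_stack([np.ones(len(points)), points[:, 0], points[:, 1]])
    return np.column_stack([design @ cx, design @ cy])

# Because the four scenes share one orbit and acquisition time, the same parameters
# can be applied to the three island images that have no GCPs of their own.
corrected = apply_affine(xy, coef_x, coef_y)
```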