MACCS (Multi-Mission Atmospheric Correction and Cloud Screening) is a software tool developed by CNES. It is based on a multi-temporal algorithm that makes optimized use of image time series to characterize the atmosphere and detect clouds. We have generated Level-2 Sentinel-2 products over various targets in Europe, as well as over deserts and urban areas with high aerosol optical thickness (AOT). The results are validated by comparison with in-situ AERONET measurements of AOT and water vapor. We also validate ground reflectance directly using the CNES photometer at La Crau. We then detail the joint effort of CNES and DLR to merge their algorithms, MACCS and ATCOR, into the so-called MAJA processing chain, together with the future development and validation plan. Finally, the Sentinel-2 Level-2 production plan is presented in the context of the THEIA land data center.
The presence of haze reduces the accuracy of interpretation of optical data acquired from satellites. Medium- and high-spatial-resolution multispectral data are often degraded by haze, and haze detection and removal remain challenging and important tasks. This work presents an empirical and automatic method for the removal of inhomogeneous haze. The dark-object subtraction method is further developed to calculate a spatially varying haze thickness map. Subtracting the haze thickness map from hazy images allows spectrally consistent haze removal on both calibrated and uncalibrated satellite multispectral data. Spectral consistency is evaluated using hazy and haze-free medium-resolution multispectral remote sensing data.
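The spatially varying dark-object subtraction described above can be sketched as follows. This is a minimal illustration only: the function names, the block-wise minimum search, the block size, and the clipping step are our assumptions, not the authors' exact algorithm.

```python
import numpy as np

def haze_thickness_map(band, block=32):
    """Estimate a spatially varying haze thickness map (HTM) for one band
    by a block-wise dark-object search (illustrative sketch).

    band: 2-D array of reflectance or digital numbers.
    The darkest pixel in each block approximates the local haze level,
    yielding a piecewise-constant HTM.
    """
    h, w = band.shape
    htm = np.empty_like(band, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = band[i:i + block, j:j + block]
            # Local dark-object value: darkest pixel in this tile.
            htm[i:i + block, j:j + block] = tile.min()
    return htm

def dehaze(band, block=32):
    """Subtract the HTM from the hazy band, clipping at zero so that the
    de-hazed values stay physically plausible."""
    return np.clip(band - haze_thickness_map(band, block), 0.0, None)
```

In practice the per-block estimates would be smoothed into a continuous map before subtraction; the hard block boundaries here are kept only to keep the sketch short.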
Pan-sharpening of remote sensing multispectral imagery directly influences the accuracy of interpretation, classification, and other data mining methods. Different tasks of multispectral image analysis and processing require specific properties of the input pan-sharpened data, such as spectral and spatial consistency or low computational complexity of the pan-sharpening method. The quality of a pan-sharpened image is assessed using quantitative measures. These measures are generally borrowed from other areas of image processing (e.g., image similarity indexes), but their applicability basis, i.e., whether a measure provides a correct and undistorted assessment of pan-sharpened imagery, is not checked or proven. Whether a given quantitative measure should be used for pan-sharpening assessment thus remains an open research question: some measures may yield distorted quality assessments, so their suitability for pan-sharpened imagery assessment is in doubt. The aim of the authors is to perform a statistical analysis of widely employed measures for remote sensing pan-sharpening assessment and to show which of them are the most suitable for use. To find and prove which measures are the most suitable, sets of multispectral images are processed by the general fusion framework (GFF) method, a type of general image fusion method, with varying parameters. Varying the parameter set values makes it possible to produce imagery with predefined quality (i.e., spatial and spectral consistency) for subsequent statistical analysis of the assessment measures.
Imagery from several widely used multispectral sensors (Landsat-7 ETM+, IKONOS, and WorldView-2) is used to assess and compare the available quality assessment measures and to illustrate which of them are most suitable for each satellite.
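Two simple quantitative measures often used in such assessments, per-band Pearson correlation (a similarity index) and ERGAS (a pan-sharpening-specific global error), can be computed as follows. This is an illustrative sketch: the array layout and function names are our choices, and the study's actual set of measures is broader.

```python
import numpy as np

def band_correlation(reference, fused):
    """Per-band Pearson correlation between a reference multispectral
    image and its pan-sharpened counterpart (higher is better).
    Both arrays have shape (bands, rows, cols)."""
    return np.array([
        np.corrcoef(ref_b.ravel(), fus_b.ravel())[0, 1]
        for ref_b, fus_b in zip(reference, fused)
    ])

def ergas(reference, fused, ratio=0.25):
    """ERGAS, 'relative dimensionless global error in synthesis'
    (lower is better): 100 * (h/l) * sqrt(mean_k (RMSE_k / mu_k)^2),
    where ratio = h/l is the pan-to-MS pixel size ratio (e.g. 0.25
    for a 1:4 resolution ratio) and mu_k is the band mean."""
    terms = []
    for ref_b, fus_b in zip(reference, fused):
        rmse = np.sqrt(np.mean((ref_b - fus_b) ** 2))
        terms.append((rmse / ref_b.mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))
```

A perfectly spectrally consistent fusion would give correlation 1.0 in every band and an ERGAS of 0; the statistical analysis in the paper examines how reliably such measures rank fusion results of known, controlled quality.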
Information extraction from multi-sensor remote sensing imagery is an important and challenging task for many applications, such as urban area mapping and change detection. A special (orthogonal) acquisition geometry is of great importance for optical and radar data fusion: it minimizes displacement effects caused by inaccuracies of the Digital Elevation Model (DEM) used for data ortho-rectification and by unknown 3D structures in the scene. Final spatial alignment of the data is performed with a recently proposed co-registration method based on a mutual information measure. For combining features originating from different sources, which are quite often non-commensurable, we propose an information fusion framework called INFOFUSE, consisting of three main processing steps: feature fission (feature extraction aiming at a complete description of the scene), unsupervised clustering (complexity reduction and feature representation in a common dictionary), and supervised classification realized by Bayesian or neural networks. An example of urban area classification is presented for an orthogonal acquisition of spaceborne very high resolution WorldView-2 and TerraSAR-X Spotlight imagery over the city of Munich, southern Germany. Experimental results confirm our approach and show great potential for other applications such as change detection as well.
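The three INFOFUSE steps can be illustrated with a toy sketch. This is our simplification, not the authors' implementation: plain k-means stands in for the clustering stage, a per-cluster majority vote replaces the Bayesian or neural-network classifier, and all names and parameters are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means (Lloyd's algorithm), used here to quantize
    heterogeneous optical/SAR features into a common dictionary."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest centroid ("dictionary word").
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return centroids, labels

def infofuse_sketch(optical_feats, sar_feats, train_idx, train_y, k=8):
    """Toy three-step pipeline loosely following INFOFUSE."""
    # Step 1 -- feature fission: stack per-pixel features from both sensors.
    X = np.hstack([optical_feats, sar_feats])
    # Step 2 -- unsupervised clustering into a common dictionary.
    _, words = kmeans(X, k)
    # Step 3 -- supervised step: majority vote of training labels per
    # cluster (a crude stand-in for the Bayesian/NN classifier).
    y_of = dict(zip(train_idx, train_y))
    pred = np.full(len(X), -1)
    for c in range(k):
        members = np.where(words == c)[0]
        labelled = [y_of[i] for i in members if i in y_of]
        if labelled:
            vals, counts = np.unique(labelled, return_counts=True)
            pred[members] = vals[counts.argmax()]
    return pred
```

The point of the common dictionary is that once optical and SAR features are mapped to shared cluster indices, the supervised stage no longer needs to reason about the non-commensurable raw feature scales.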