Many applications of remote sensing, such as urban monitoring, require high resolution image data for a correct determination of object geometry. In many studies, the desired geometry of an object's surface is derived using well-known segmentation techniques. In this study, we evaluate the influence of the quality of analog and digital image data on the results of an image segmentation in eCognition. We compare the suitability of analog medium-format camera data with image data produced by a commercial off-the-shelf digital camera, acquired during two campaigns in 2003 and 2004. Furthermore, the results of a multiresolution classification of an urban test site using both datasets are presented. The paper closes with an outlook on future work on a multiresolution data fusion with hyperspectral data.
The motivation for data fusion is to reduce the limitations and uncertainties associated with data from a single sensor. In the context of remotely sensed data, fusion is often performed by combining high spatial with high spectral resolution imagery at different levels. In contrast to pixel-based approaches such as the IHS transformation, this paper focuses on fusion at the feature level.
In high spatial resolution data the geometry of urban objects can be determined very accurately. However, such data often carries little spectral information, e.g. a three-band RGB image. As a consequence, similar feature values for thematic classes such as water, dark pavement, or dark rooftops lead to classification errors.
If hyperspectral data is used to classify urban materials, endmembers representing those materials must be defined. The problem is that endmembers representing urban surface types are often mixtures of spectrally pure materials, which leads to flat spectra. Consequently, those thematic endmembers can hardly be detected by standard algorithms such as the Pixel Purity Index (PPI), so that standard classification procedures fail.
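The flattening effect of spectral mixing can be illustrated with a small numerical sketch. The spectra below are synthetic and chosen purely for illustration; they are not HyMap measurements:

```python
import numpy as np

# Two synthetic "pure" spectra with pronounced, opposing features
# (arbitrary values, deliberately constructed so the features cancel).
bands = np.linspace(0, np.pi, 8)
roof = 0.5 + 0.4 * np.sin(bands)      # spectrum with a broad peak
asphalt = 0.5 - 0.4 * np.sin(bands)   # spectrum with the inverted feature

# A 50/50 linear mixture, as occurs in mixed urban pixels
mixed = 0.5 * roof + 0.5 * asphalt

# Spectral contrast (max - min reflectance): the mixture is flat,
# so purity-based detectors such as PPI have nothing to latch onto.
print(np.ptp(roof).round(2), np.ptp(asphalt).round(2), np.ptp(mixed).round(2))
```

The contrived 50/50 case yields a perfectly flat mixture; real urban mixtures are rarely this extreme, but the loss of spectral contrast is the same effect that makes thematic endmembers hard to find with PPI.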
In order to improve the classification process, our approach fuses hyperspectral data recorded by the HyMap sensor with high spatial resolution imagery (digital orthophotos) for combined endmember selection, classification, and structural analysis.
The endmembers for the thematic classes are determined in a semi-automatic process. After segmentation of the high spatial resolution dataset, the resulting segments are used to detect those pixels in the hyperspectral dataset that are candidates for the definition of thematic endmembers. The endmembers are stored in a spectral library and used for the classification of the hyperspectral data.
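A minimal sketch of this step, assuming the segmentation result is available as a label image co-registered with the hyperspectral cube (the function name, array layout, and the use of the segment mean as the candidate spectrum are illustrative assumptions, not the project's actual implementation):

```python
import numpy as np

def extract_endmembers(cube, labels, segment_ids):
    """Derive one candidate endmember spectrum per selected segment.

    cube        -- hyperspectral image, shape (rows, cols, bands)
    labels      -- segment label image, shape (rows, cols)
    segment_ids -- segments chosen as thematically pure candidates
    """
    library = {}
    for seg_id in segment_ids:
        pixels = cube[labels == seg_id]        # (n_pixels, bands)
        # Mean spectrum of the segment as the endmember candidate;
        # a purity check (e.g. low within-segment variance) could be
        # applied here before accepting the candidate into the library.
        library[seg_id] = pixels.mean(axis=0)
    return library

# Toy example: a 4x4 cube with 3 bands, split into two segments
cube = np.random.rand(4, 4, 3)
labels = np.zeros((4, 4), dtype=int)
labels[2:, :] = 1
lib = extract_endmembers(cube, labels, [0, 1])
print(sorted(lib), lib[0].shape)
```

In practice the resulting dictionary would be serialized as a spectral library file so that the same reference spectra can be reused across scenes.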
The segments in the high spatial resolution data are then processed based on the classification of the hyperspectral dataset and the application of overlay rules.
The motivation for data fusion is to reduce the limitations and uncertainties associated with single-sensor data. In the context of remotely sensed data, this is often performed by combining images of high spatial resolution with those of high spectral resolution at different levels. In contrast to pixel-based approaches such as Intensity-Hue-Saturation (IHS) or Principal Component (PC) fusion, we focus on image fusion at the feature level. The research in this paper was conducted within the "HyScan" project, whose goal is to develop a GIS-based analysis and mapping of surface characteristics in urban areas using hyperspectral images in combination with remote sensing data of very high spatial resolution.

In most cases the classification of hyperspectral data is performed using methods like the Spectral Angle Mapper (SAM) or Mixture Tuned Matched Filtering (MTMF). Reference spectra for those algorithms are stored in libraries which contain the spectra of pure materials, so-called endmembers. The problem is that endmembers representing urban surface types often display a mixture of spectrally pure materials and thus show flat spectra. As a result, those thematic endmembers can hardly be detected by standard algorithms such as the Pixel Purity Index (PPI), and standard classification procedures consequently fail.

In order to improve the quality of results, we fuse hyperspectral data recorded by the DAIS sensor with high spatial resolution imagery (e.g. HRSC) for combined endmember selection, classification, and structural analysis. After segmentation of the high spatial resolution data, appropriate thematic classes are manually defined. The resulting segments are used to detect a set of pure pixels in the hyperspectral data which represent thematic endmembers. The segments derived from the high spatial resolution data are then processed using the endmember abundances of the hyperspectral data in a combined classification.
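For reference, SAM assigns each pixel the library spectrum with the smallest spectral angle θ = arccos(x·r / (‖x‖‖r‖)). A compact sketch of this decision rule against a small library (function and variable names are illustrative; this is the standard SAM formulation, not the HyScan implementation):

```python
import numpy as np

def sam_classify(cube, library):
    """Spectral Angle Mapper: assign each pixel the index of the
    library spectrum with the smallest spectral angle.

    cube    -- hyperspectral image, shape (rows, cols, bands)
    library -- reference endmember spectra, shape (n_classes, bands)
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    # Cosine of the angle between every pixel and every endmember
    dots = pixels @ library.T
    norms = (np.linalg.norm(pixels, axis=1)[:, None]
             * np.linalg.norm(library, axis=1)[None, :])
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))
    return angles.argmin(axis=1).reshape(rows, cols)

# Toy example: two reference spectra, one near-red test image
library = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
cube = np.full((2, 2, 3), [0.9, 0.1, 0.0])
print(sam_classify(cube, library))   # every pixel maps to class 0
```

Because the angle is invariant to the length of the spectrum vector, SAM is comparatively insensitive to illumination differences, which is one reason it is a common baseline for hyperspectral classification.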
The method and initial results of our fusion approach are presented for endmember selection and classification of urban surface characteristics.