29 September 2014 Synergistic use of Landsat TM and SPOT5 imagery for object-based forest classification
J. of Applied Remote Sensing, 8(1), 083550 (2014). doi:10.1117/1.JRS.8.083550
Abstract
This study evaluated the synergistic use of Landsat5 TM and SPOT5 images for improving forest classification using an object-based image analysis approach. Three image segmentation schemes were examined: (1) segmentation based on both SPOT5 and Landsat5 TM; (2) segmentation based solely on SPOT5; and (3) segmentation based solely on Landsat5 TM. The optimal scale parameters for the TM/SPOT5 and SPOT5 schemes were determined by measuring the topological similarity between segmented objects and reference objects at different scales ranging from 10 to 130. The mean and standard deviation of the pixels within each object in each input layer were used as classification metrics, and a nearest-neighbor classifier was applied to the three segmentation schemes. The results showed that (1) the optimal scales of TM/SPOT5, SPOT5, and TM were 70, 100, and 0.8, respectively, and (2) classification based on the medium-spatial-resolution image alone was not desirable, with an overall accuracy of only 72.35%, while the synergistic use of Landsat5 TM and SPOT5 greatly improved forest classification accuracy, with an overall accuracy of 82.94%.
Sun, Du, Han, Zhou, Lu, Ge, Xu, and Liu: Synergistic use of Landsat TM and SPOT5 imagery for object-based forest classification

1.

Introduction

Object-based image classification is carried out on the premise that adjacent pixels with similar spectral responses are aggregated into objects, segmenting the image into nonoverlapping regions. The processing units in an object-based approach are therefore objects rather than the pixels of the conventional pixel-based classification approach. Object-based classification relies not only on the spectral information of remotely sensed data but also makes full use of spatial information, including geometry, texture, and topographic factors such as slope, aspect, and elevation.1 The technique can reduce the “salt and pepper” effect caused by variation of the spectral responses within the same entity,2,3 especially for very high-spatial-resolution (VHR) imagery, in which the same entity is usually represented by pixels with high spectral heterogeneity. Previous research has demonstrated that better results can be achieved with object-based classification than with pixel-based classification,4–6 and the approach is widely applied in complex forest ecosystems for species classification and information extraction.7–11

With the increased availability of satellite sensor images and the development of remote sensing technology, combining multisource remote sensing images with different temporal, spectral, and spatial resolutions can exploit the advantages of each, and offers clear benefits for the extraction of forest inventory information and quantitative estimation.12,13 The conventional method of forest information extraction from combined multisource data is based on scale transformation (scaling up or scaling down),14 which forces the datasets to a consistent spatial resolution and largely causes information loss and structural damage. In comparison with conventional scale transformation, a hierarchical network of image objects can be constructed through multiscale segmentation in an object-based approach,15 which is capable of integrating multiresolution datasets in the classification process. For example, Ke et al.1 evaluated the synergistic use of QuickBird multispectral imagery and LIDAR data for forest species classification using an object-based approach, with the highest accuracy reaching 91.6%.

Classification accuracy is influenced by segmentation accuracy; thus, selection of the optimal scale parameter in multiscale image segmentation is crucial for the integrated use of multiresolution datasets in an object-based approach. Four methods are mainly used to identify the optimal segmentation scale parameters. (1) Visual inspection: optimal parameters are selected by comparing the segmentation results of many experiments by eye.16–20 This method is relatively simple but lacks quantitative criteria. (2) Statistical parameter method: the standard deviation of object brightness means (SDOM) and the mean of object standard deviations (MOSD) are calculated, and the minimum MOSD and maximum SDOM are used as the main evaluation indices to determine the best segmentation scale parameters. However, this method can only narrow the optimal segmentation scale to a certain range. Lian and Chen21 used it to show that ASTER data were best segmented at scales of 10–30, SPOT data with 10-m spatial resolution at 30–40, SPOT data with 2.5-m spatial resolution at 30–50, and QuickBird at 60; verification showed that the results generated at these scales were mostly consistent with the actual surface entities. (3) Measuring the topological similarity between segmented objects and reference objects:1,22 segmentation quality is evaluated by comparing the segmented objects to the reference objects. Moller et al.22 and Ke et al.1 used the relative position (the ratio of the distance between the center of a reference object and the center of an overlapped region of interest to the maximum distance between the center of the reference object and the most distant overlapped region) or the absolute distance (the distance between the center of the overlapped region of interest and the center of the reference object) to evaluate the topological similarity between segmented and reference objects, and then determined the optimal segmentation scale. Ke et al.1 measured this topological similarity to choose the optimal scale for both QuickBird imagery and LIDAR data; classification at that scale reached a Kappa of 91.6%. (4) Other methods, such as analyzing the relationship between the internal homogeneity within objects and the heterogeneity among objects, the shape characteristics of segmented objects, the relative error of area, and the relative error of perimeter.23,24
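The statistical parameter method in (2) reduces to two aggregate statistics over the segmented objects. A minimal sketch of the two indices in Python (the function name and input layout are illustrative, not from the paper):

```python
import numpy as np

def sdom_mosd(object_means, object_stds):
    """Statistical indices for segmentation-scale evaluation (method 2).

    SDOM: standard deviation of the objects' mean brightness values
          (higher suggests better between-object separability).
    MOSD: mean of the objects' internal standard deviations
          (lower suggests better within-object homogeneity).
    """
    sdom = float(np.std(object_means))
    mosd = float(np.mean(object_stds))
    return sdom, mosd
```

A candidate scale with high SDOM and low MOSD would be preferred, though, as noted above, this only narrows the optimum to a range.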

In this research, we investigated the synergistic use of Landsat5 TM and SPOT5 data for improving forest species classification accuracy. The objectives of this paper were (1) to investigate the optimum scale parameters for image segmentation based on both Landsat5 TM and SPOT5 (labeled as TM/SPOT5), based solely on SPOT5 imagery (labeled as SPOT5), and based solely on Landsat5 TM imagery (labeled as TM), and (2) to analyze and evaluate forest classification accuracy.

2.

Study Area and Dataset

2.1.

Study Area

The study area (Fig. 1) is Shanchuan town, located in the south of Anji County (119°14′ to 119°53′E, 30°23′ to 30°53′N) in Zhejiang Province, China. Anji County is well known for its Moso bamboo forest because of the forest’s large distribution area and its important role in supporting the local economy. Shanchuan town covers an area of 46.72 km², 88.8% of which is covered by forests. The local climate is subtropical oceanic, with yearly precipitation of 1400 mm and a mean temperature of 15.6°C. The land use/land cover includes Moso bamboo, broadleaf, conifer, residential areas, bare land, and water; here, residential areas and bare land are labeled as nonforest.

Fig. 1

Images of study area.


2.2.

Data Collection and Preprocessing

TM imagery was acquired on July 5, 2008, with a spatial resolution of 30 m. SPOT5 imagery was acquired on April 22, 2012, and consisted of a panchromatic band with a spatial resolution of 2.5 m and four multispectral bands with a spatial resolution of 10 m. Although the TM and SPOT5 images were acquired in different years, the land-cover types of the study area were stable during this period; therefore, the different acquisition years did not affect the methodology. A forest map at a scale of 1:10,000 was used for the validation of segmentation and classification. A field survey of the land cover/land use in the accessible regions was carried out in May 2011, and a total of 67 sample sites were located using GPS measurements. The survey enabled us to capture the composition and structure of the land cover categories. The forest map of the study area was produced through manual interpretation of very high-spatial-resolution aerial photographs by the Forest Resources Monitoring Center of Zhejiang Province and the Anji County Forestry Bureau in June 2008.

Each of the three datasets was geometrically corrected using a quadratic polynomial model based on ground control points extracted from the 1:50,000 topographic map. The multispectral and panchromatic images were fused using the IHS transformation to generate a new dataset with a spatial resolution of 2.5 m. Since the SPOT5 image has no blue band with which to display a true color image, a true color image was simulated by taking band1 as the blue band, (band1×3+band3)/4 as the green band, and band2 as the red band.24 The TM imagery with a resolution of 30 m and the fused SPOT5 imagery with a resolution of 2.5 m were used in the experiment. The forest map of Shanchuan town was processed into a vector map with attribute properties including the forest type, the forest compartment number, the subcompartment number, the areas of dominant trees, and others. Ten subcompartments of three forest types extracted from the vector map were used as reference objects for the assessment of the segmentation results.
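The simulated true-color composite can be written as a few lines of array arithmetic. A minimal NumPy sketch, assuming the three SPOT5 multispectral bands are already loaded as arrays and reading the denominator of the green-band formula as the constant 4 (the function name is illustrative):

```python
import numpy as np

def simulate_true_color(band1, band2, band3):
    """Simulate an RGB composite for SPOT5, which lacks a blue band.

    Following the scheme in the text: band1 (green) stands in for blue,
    (3*band1 + band3)/4 approximates green, and band2 is used as red.
    """
    b1 = np.asarray(band1, dtype=np.float64)
    b2 = np.asarray(band2, dtype=np.float64)
    b3 = np.asarray(band3, dtype=np.float64)
    red = b2
    green = (3.0 * b1 + b3) / 4.0
    blue = b1
    return np.dstack([red, green, blue])  # (rows, cols, 3) RGB array
```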

3.

Methods

Three schemes of multiscale segmentation were examined: segmentation based on both TM and SPOT5 images, segmentation based solely on SPOT5, and segmentation based solely on TM. For the first two schemes, different scale parameters were tested and the corresponding segmentation results were compared to determine the optimal scale by measuring the topological similarity between the segmented and reference objects. For TM, the optimal scale was identified by visual inspection because each TM segmentation result was easy to distinguish visually. The optimal scales for the three schemes were then selected, and classification was implemented at those optimal scales.

3.1.

Image Segmentation

The segmentation algorithm used in this study followed the fractal net evolution approach,25 as embedded in the eCognition 8.7.1 software.26 It is a bottom-up region-merging technique in which each pixel initially forms a separate segment. Smaller segments are then merged pairwise into larger ones if the increase in heterogeneity of the new segment, compared to its component segments, is less than a user-defined scale parameter. The scale parameter thus specifies the maximum allowed increase in heterogeneity and controls the merging process; it is critical for the segmentation result because it is directly related to the size of the resultant objects, with a larger scale parameter producing larger objects.27 Segmentation results also depend on other parameters, including the weight of each layer, the weight of color versus shape (shape = 1 − color), and the weight of smoothness versus compactness. The layer weight determines each layer’s contribution to the segmentation, the color weight balances spectral against shape heterogeneity, and the shape factor comprises the compactness and smoothness weights. This research examined the parameters for the TM/SPOT5-based and SPOT5-based segmentations listed in Table 1: all input image layers were weighted equally to obtain homogeneous objects, the color weight was set high (0.9) so that the shape factor contributed little, the smoothness weight was set as listed in Table 1, and 13 scales were tested, ranging from 10 to 130 at an interval of 10. The segmentation parameters for the TM image were color/shape = 0.9/0.1 and compactness/smoothness = 0.2/0.8.
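The merging rule described above — merge adjacent segments only while the heterogeneity increase stays below the scale parameter — can be illustrated with a toy one-dimensional sketch. This is not the eCognition implementation; the heterogeneity measure (size-weighted standard deviation) and the single greedy pass are deliberate simplifications:

```python
import numpy as np

def merge_pass(segments, scale):
    """One greedy pass of pairwise region merging (toy 1-D sketch).

    Each segment is a list of pixel values. Adjacent segments are merged
    when the increase in heterogeneity (size-weighted standard deviation,
    a simplified stand-in for the eCognition criterion) stays below the
    user-defined scale parameter.
    """
    def hetero(seg):
        return len(seg) * float(np.std(seg))

    merged = [list(segments[0])]
    for seg in segments[1:]:
        candidate = merged[-1] + list(seg)
        increase = hetero(candidate) - hetero(merged[-1]) - hetero(seg)
        if increase < scale:
            merged[-1] = candidate  # heterogeneity growth acceptable: merge
        else:
            merged.append(list(seg))  # start a new segment
    return merged
```

A larger `scale` yields fewer, larger segments, mirroring the behavior of the scale parameter described in the text.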

Table 1

Parameters for TM/SPOT5 image-based and SPOT5 image-based segmentations.

Segmentation scheme   Data layers            Weight   Scale parameters   Color   Smoothness
TM/SPOT5              Red (TM)               1        10–130             0.9     0.2
                      Green (TM)             1
                      Blue (TM)              1
                      NIR (TM)               1
                      Short infrared (TM)    1
                      Short infrared (TM)    1
                      Red (SPOT5)            1
                      Green (SPOT5)          1
                      Blue (SPOT5)           1
SPOT5                 Red                    1        10–130             0.9     0.2
                      Green                  1
                      Blue                   1

3.2.

Selection of Optimal Segmentation Scales

The method of measuring the topological similarity between segmented and reference objects was applied to select the optimal scales for the TM/SPOT5 and SPOT5 segmentations, while visual interpretation was used for the TM segmentation because its results were easy to judge by eye. Objects intersecting the reference objects by over 10% of their own areas were considered segmented objects of interest, and the overlapped regions between reference and segmented objects were extracted. Two metrics of topological similarity for selecting the optimal scale parameters were calculated with Eqs. (1) and (2):1

(1)

RA_{or}(\%) = \frac{1}{n} \sum_{i=1}^{n} \frac{A_o(i)}{A_r} \times 100,

(2)

RA_{os}(\%) = \frac{1}{n} \sum_{i=1}^{n} \frac{A_o(i)}{A_s(i)} \times 100,

where RA_{or} is the relative area of an overlapped region to a reference object; RA_{os} is the relative area of an overlapped region to a segmented object; n is the number of segmented objects of interest; A_o(i) is the area of the i’th overlapped region associated with a reference object; A_r is the sum of the areas of the reference objects; and A_s(i) is the area of the i’th segmented object. Reference and segmentation objects in a subset of the SPOT5 image at a specific scale are shown in Fig. 2. Objects created by image segmentation retain characteristics of the original image and provide spectral and spatial attributes and spatial topological relationships for spatial analysis.28
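Once the overlap areas have been extracted from a GIS overlay, Eqs. (1) and (2) are straightforward arithmetic. A small sketch, assuming the overlap and segment areas are already available as numbers (the function name and the handling of the 10% threshold are illustrative):

```python
def topological_similarity(ref_area, segments, min_overlap=0.10):
    """Compute RAor and RAos (in percent) per Eqs. (1) and (2).

    `ref_area` is the total area of the reference object(s); `segments` is
    a list of (overlap_area, segment_area) pairs, one per segmented object.
    Only segments overlapping the reference by more than `min_overlap` of
    their own area are kept as "segmented objects of interest".
    """
    of_interest = [(a_o, a_s) for a_o, a_s in segments
                   if a_s > 0 and a_o / a_s > min_overlap]
    n = len(of_interest)
    if n == 0:
        return 0.0, 0.0
    ra_or = 100.0 * sum(a_o / ref_area for a_o, _ in of_interest) / n
    ra_os = 100.0 * sum(a_o / a_s for a_o, a_s in of_interest) / n
    return ra_or, ra_os
```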

Fig. 2

Image objects segmented at the scale parameter of 50 and the corresponding attribute tables.


3.3.

Classification and Accuracy Assessment

A nearest-neighbor classifier was used for the three segmentation schemes to classify the images into five categories: Moso bamboo, broadleaf, conifer, nonforest, and water. The object-based features used in the classification were the mean and standard deviation of the pixels within each image object in all the input layers. Nine input layers were used for TM/SPOT5, three for SPOT5, and six for TM (Table 1); the number of object features used in the classification for the three schemes was therefore 18, 6, and 12, respectively. According to the measuring grid (Fig. 3), sampling points at an interval of 500 m were used for validation in the accuracy assessment.29,30 Confusion matrices for the TM/SPOT5, SPOT5, and TM classifications were built from the 170 validation sampling points. The reference labels of the 170 sampling points were obtained by visual interpretation, supported by the knowledge of land cover types gained from the 67 field-surveyed sample sites and by the reference forest map.
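The feature construction and nearest-neighbor assignment described above can be sketched as follows; this is a minimal stand-in for eCognition's nearest-neighbor classifier, not its actual implementation, and the function names are illustrative:

```python
import numpy as np

def object_features(pixels_by_object):
    """Per-object features: mean and std of pixel values in each input layer.

    `pixels_by_object` is a list of (n_pixels, n_layers) arrays, so each
    object yields 2 * n_layers features, matching the 18/6/12 feature
    counts quoted in the text for the three schemes.
    """
    feats = []
    for px in pixels_by_object:
        px = np.asarray(px, dtype=float)
        feats.append(np.concatenate([px.mean(axis=0), px.std(axis=0)]))
    return np.vstack(feats)

def nearest_neighbor_classify(train_feats, train_labels, test_feats):
    """Assign each object the label of its closest training sample
    (Euclidean distance in feature space)."""
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    return [train_labels[i] for i in d.argmin(axis=1)]
```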

Fig. 3

Sampling points for the accuracy assessment of classification results.


4.

Results

4.1.

Optimal Scale

The results showed that the optimal scale parameter for the TM/SPOT5 segmentation was 70 [Fig. 4(a)]: the RAor and RAos curves intersected at a scale parameter approaching 70. Overall, RAor increased and RAos decreased as the scale parameter grew from 10 to 130, and the intersection, where the RAor and RAos values were most similar, indicated the best segmentation results. The segmentation results are shown in Fig. 5(a).

Fig. 4

Segmentation quality evaluation using relative area of overlapped region to reference objects (RAor) and relative area of overlapped region to segmented objects (RAos): (a) TM/SPOT5 image-based segmentation and (b) SPOT5 image-based segmentation.


Fig. 5

Segmentation based on TM/SPOT5 and SPOT5 image: (a) TM/SPOT5-based segmentation at scale parameter 70 and (b) SPOT5-based segmentation at scale parameter 100.


The optimal scale parameter for the SPOT5 segmentation was 100 [Fig. 4(b)]. Similar to Fig. 4(a), the RAor and RAos curves intersected at a scale parameter of approximately 100, indicating that the best segmentation results were achieved at this scale. The segmentation results are shown in Fig. 5(b).

Based on visual interpretation of the segmentation results, the optimal scale parameter for TM image segmentation was identified as 0.8. As illustrated in Fig. 6, when the scale parameter was set to 10, an object contained several land cover types, showing poor internal homogeneity; as the scale decreased, objects gradually became more homogeneous.

Fig. 6

Segmentation based on TM image: (a) TM-based segmentation at scale parameter 0.3, (b) TM-based segmentation at scale parameter 0.8, (c) TM-based segmentation at scale parameter 1, (d) TM-based segmentation at scale parameter 10.


4.2.

Forest Classification Results

The classification results of TM/SPOT5, SPOT5, and TM (Fig. 7) show very different spatial patterns. Confusion matrices were built for the accuracy assessment of the classification results (Tables 2 to 4). Classification based solely on TM resulted in the lowest accuracy of the three segmentation schemes, with an overall accuracy of 72.35% and a kappa of 0.5928; SPOT5-based segmentation produced a somewhat higher accuracy, with an overall accuracy of 78.82% and a kappa of 0.6818; and TM/SPOT5-based segmentation gave the highest values, with an overall accuracy of 82.94% and a kappa of 0.7509. For bamboo forest identification, the highest accuracy was observed for the TM/SPOT5 classification, with a user’s accuracy of 84.72% and a producer’s accuracy of 87.14%, followed by the SPOT5 classification, with TM the lowest. Compared with SPOT5 and TM, the user’s accuracy of the bamboo forest using TM/SPOT5 increased by over 5%, and the user’s accuracy of the broadleaved forest using TM/SPOT5 was also the highest, at 90.20%. However, the user’s accuracy of the coniferous forest was highest for SPOT5, with TM/SPOT5 lower and TM the lowest. For Moso bamboo, the producer’s accuracy of the TM/SPOT5 classification was more than 20% higher than that of TM. These results show that the integration of TM and SPOT5 in an object-based classification can improve forest classification accuracy to some extent; the bamboo forest classification accuracy in particular was significantly improved.

Fig. 7

Object-based classification maps: (a) TM/SPOT5, (b) SPOT5, and (c) TM.


Table 2

Confusion matrix for TM/SPOT5 image-based classification at scale parameter 70.

Classified data   Reference data
                  Moso bamboo   Broadleaf   Conifer   Water   Nonforest   Total   UA
Moso bamboo       61            10          1         0       0           72      84.72%
Broadleaf         1             46          4         0       0           51      90.20%
Conifer           4             5           19        0       0           28      67.86%
Water             0             0           0         0       0           0
Nonforest         4             0           0         0       15          19      78.95%
Total             70            61          24        0       15          170
PA                87.14%        75.41%      79.17%            100%
Overall accuracy = 82.94%, kappa = 0.7509

Note: UA: User’s accuracy, PA: Producer’s accuracy. The same below.
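The overall accuracy and kappa reported in Table 2 can be reproduced directly from the confusion matrix:

```python
import numpy as np

def accuracy_and_kappa(matrix):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = classified data, columns = reference data)."""
    m = np.asarray(matrix, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                                # observed agreement
    pe = (m.sum(axis=1) * m.sum(axis=0)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Confusion matrix of Table 2 (TM/SPOT5, scale parameter 70):
tm_spot5 = [[61, 10,  1, 0,  0],
            [ 1, 46,  4, 0,  0],
            [ 4,  5, 19, 0,  0],
            [ 0,  0,  0, 0,  0],
            [ 4,  0,  0, 0, 15]]
oa, kappa = accuracy_and_kappa(tm_spot5)  # ≈ 0.8294 and ≈ 0.7509
```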

Table 3

Confusion matrix for SPOT5 image-based classification at scale parameter 100.

Classified data   Reference data
                  Moso bamboo   Broadleaf   Conifer   Water   Nonforest   Total   UA
Moso bamboo       60            8           3         0       5           76      78.95%
Broadleaf         8             48          6         0       0           62      77.42%
Conifer           0             3           17        0       0           20      85%
Water             0             0           0         0       0           0
Nonforest         2             0           0         1       9           12      75%
Total             70            59          26        1       14          170
PA                85.71%        81.36%      65.38%    0       64.29%
Overall accuracy = 78.82%, kappa = 0.6818

Table 4

Confusion matrix for TM image-based classification at scale parameter 0.8.

Classified data   Reference data
                  Moso bamboo   Broadleaf   Conifer   Water   Nonforest   Total   UA
Moso bamboo       40            7           4         0       3           54      74.07%
Broadleaf         13            57          6         0       0           76      75%
Conifer           6             1           13        0       0           20      65%
Water             0             1           0         0       0           1       0
Nonforest         4             1           1         0       13          19      68.42%
Total             63            67          24        0       16          170
PA                63.49%        85.07%      54.17%            81.25%
Overall accuracy = 72.35%, kappa = 0.5928

5.

Discussion

As illustrated in Fig. 4(a), the optimal scales detected for TM/SPOT5 lay between scale parameters of 60 and 70, and as shown in Fig. 4(b), those for SPOT5 lay between 100 and 110. The differences among the reference objects (red polygons), segmentation objects (black polygons), and intersected polygons (cyan polygons) at scale parameters of 60 and 70 for TM/SPOT5, and at 100 and 110 for SPOT5, were difficult to detect through visual inspection (compare Fig. 8 with Fig. 5). However, there were clear differences in the number of segmented objects of interest between scale parameters 60 and 70 for TM/SPOT5 and between 100 and 110 for SPOT5 (Table 5).

Fig. 8

Segmentation based on TM/SPOT5 and SPOT5 image: (a) TM/SPOT5-based segmentation at scale parameter 60 and (b) SPOT5-based segmentation at scale parameter 110.


Table 5

Number of segmented objects of interest for TM/SPOT5 and SPOT5.

Scale parameter   Number of segmented objects of interest
                  TM/SPOT5   SPOT5
10                510        979
20                171        311
30                85         175
40                55         107
50                39         67
60                29         45
70                22         41
80                17         38
90                14         31
100               14         29
110               14         26
120               14         24
130               14         23

Table 5 indicates that the number of segmented objects of interest decreased as the scale parameter increased. In general, low RAor values, high RAos values, and a large number of objects of interest indicate oversegmentation at smaller scales, whereas larger scales lead to undersegmentation. In this study, two metrics of topological similarity between segmented and reference objects (RAor and RAos) were adopted to evaluate the optimal scale; the scale at which the RAor and RAos values are most similar yields the best segmentation and avoids both over- and undersegmentation. For TM/SPOT5, the 29 segmented objects of interest detected at a scale parameter of 60 exceeded the 22 detected at 70, indicating residual oversegmentation at 60; Fig. 4(a) likewise shows a gap of about 13.5% between the RAor and RAos values at a scale parameter of 60. For SPOT5, the 26 segmented objects of interest at a scale parameter of 110 were fewer than the 29 at 100, and Fig. 4(b) shows that the RAor and RAos values were more similar at 100 (RAor = 32.01%, RAos = 33.98%) than at 110 (RAor = 35.78%, RAos = 33.33%), indicating undersegmentation at a scale parameter of 110.
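The scale-selection rule applied here — take the scale at which the RAor and RAos curves intersect — can be approximated over sampled scales by minimizing the gap between the two metrics. A small sketch; the values for scale 90 are invented for illustration, while those for 100 and 110 are the SPOT5 values quoted above:

```python
def optimal_scale(curves):
    """Pick the scale where the RAor and RAos curves are closest,
    i.e. where the two topological-similarity metrics intersect.

    `curves` maps scale -> (RAor, RAos), both in percent.
    """
    return min(curves, key=lambda s: abs(curves[s][0] - curves[s][1]))

# SPOT5 values near the optimum (scale 90 is hypothetical):
spot5 = {90: (28.0, 36.0), 100: (32.01, 33.98), 110: (35.78, 33.33)}
```

With these inputs, `optimal_scale(spot5)` selects 100, matching the scale chosen in Sec. 4.1.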

Figure 9 illustrates the differences in classification accuracy between the suboptimal scale parameters (60 for TM/SPOT5 and 110 for SPOT5) and the optimal scales (70 and 100). The overall accuracies, kappa values, and user’s accuracies of Moso bamboo and broadleaf at the optimal scales of 70 (TM/SPOT5) and 100 (SPOT5) were higher than those at 60 and 110, confirming the validity of the selected optimal segmentation scales.

Fig. 9

Classification accuracies for TM/SPOT5 and SPOT5.


The results demonstrated that the accuracy achieved by the integrated use of TM and SPOT5 was better than that of using SPOT5 or TM alone, especially for the bamboo forest, whose accuracy improved by over 10% compared to TM. However, the user’s accuracy of the coniferous forest was much higher for SPOT5 (85%) than for TM/SPOT5 (67.86%). Because of the small size and dispersed distribution of the coniferous forest, poor matches between segmented and reference objects occurred in the coniferous forest during multiscale segmentation, resulting in its low accuracy. Therefore, for forest types with a small area, the classification results of integrating a medium- or low-spatial-resolution image with a high-spatial-resolution image would not be desirable. In this study, since no validation sampling points were located on water, the user’s accuracy of water could not be calculated.

Previous studies showed that the accuracies of Moso bamboo, broadleaf, and conifer exceeded 80% for TM using a conventional pixel-based classification approach.31 However, the classification accuracies of these three forest types were only 65%–74% using the multiscale, object-based approach. This finding illustrates that an object-based classification method might not be appropriate for forest classification with medium-spatial-resolution imagery such as TM. The object-based segmentation process aggregates adjacent pixels with similar spectral responses into an object. In a VHR image, the same entity may be represented by pixels with high spectral variance, so image segmentation can reduce the misclassification that results from assigning pixels of the same spectrally heterogeneous object to different classes, producing better classification results than a pixel-based approach. In medium- or low-spatial-resolution imagery, however, the phenomenon of the “same object with different spectra” is rare and the variance of the spectral responses of pixels representing the same entity is low. Additionally, incorrect segmentation assigns pixels of different species to a single object, thus decreasing classification accuracy. Therefore, better results may not be achieved with an object-based classification approach for satellite images with medium or low spatial resolution.

6.

Conclusion

This paper analyzed the synergistic use of medium-spatial-resolution TM images and higher-spatial-resolution SPOT5 images for improving forest classification accuracy using an object-based approach. The results showed that the object-based segmentation technique is appropriate for segmenting high-spatial-resolution images, and the best result was acquired when both TM and SPOT5 were used for segmentation and classification. Although the classification accuracy of TM/SPOT5 was superior to that of either the single SPOT5 or the single TM image, the integrated data did not improve the accuracy for forest types with a small area and a dispersed distribution, such as coniferous forest. Scale was one of the most critical parameters, and different scales within the same segmentation scheme generated different classification results; it was therefore important to select the optimal scale parameter for segmentation. In this study, the method of measuring the topological similarity between segmented and reference objects was applied to choose the optimal scale, and a scale parameter of 70 was determined to be optimal for the combined TM/SPOT5 segmentation. The other segmentation parameters in Table 1, such as the weights of the input image layers, color/shape, and compactness/smoothness, were also examined and will be studied further in the future.

Acknowledgments

The research is supported under a grant by the Research Center of Agricultural and Forestry Carbon Sinks and Ecological Environmental Remediation, Zhejiang A & F University; Natural Science Foundation of Zhejiang province (Nos. #LR14C160001 and #LQ13C160002); National Natural Science Foundation (Nos. #31300535, #31370637, #41371411, and #41201365) and Project from innovation team for forestry carbon sequestration and measure of Zhejiang Province (No. #2012R10030-01). We would like to thank Anji Forestry Bureau for the assistance in field work.

References

1. 

Y. H. KeL. J. QuackenbushJ. Im, “Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification,” Remote Sens. Environ. 114(6), 1141–1154 (2010).RSEEA70034-4257http://dx.doi.org/10.1016/j.rse.2010.01.002Google Scholar

2. 

G. Malliniset al., “Object-based classification using Quickbird imagery for delineating forest vegetation polygons in a Mediterranean test site,” ISPRS J. Photogramm. Remote Sens. 63(2), 237–250 (2008).IRSEE90924-2716http://dx.doi.org/10.1016/j.isprsjprs.2007.08.007Google Scholar

3. 

X. C. LinY. S. Li, “Research on multi-scale segmentation based on multi-source in Chengdu plain,” Sci. Surv. Mapp. 35(4), 38–40 (2010) (In Chinese).Google Scholar

4. 

C. BurnettT. Blaschke, “A multi-scale segmentation/object relationship modeling methodology for landscape analysis,” Ecol. Modell. 168(3), 233–249 (2003).ECMODT0304-3800http://dx.doi.org/10.1016/S0304-3800(03)00139-XGoogle Scholar

5. 

G. J. Hayet al., “A comparison of three image-object methods for the multiscale analysis of landscape structure,” ISPRS J. Photogramm. Remote Sens. 57(5–6), 327–345 (2003).IRSEE90924-2716http://dx.doi.org/10.1016/S0924-2716(02)00162-4Google Scholar

6. 

T. Blaschke, “Object-based contextual image classification built on image segmentation,” in Proc. IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, pp. 113–119, IEEE, Greenbelt, MD (2003).http://dx.doi.org/10.1109/WARSD.2003.1295182Google Scholar

7. 

D. FlandersM. Hall-BeyerJ. Pereverzoff, “Preliminary evaluation of eCognition object-based software for cut block delineation and feature extraction,” Can. J. Remote Sens. 29(4), 441–452 (2003).CJRSDP0703-8992http://dx.doi.org/10.5589/m03-006Google Scholar

8. 

N. Kosakaet al., “Forest classification using data fusion of multispectral and panchromatic high-resolution satellite imageries,” in Proc. Int. Geoscience and Remote Sensing Symposium, pp. 2980–2983, IEEE (2005).Google Scholar

9. 

N. Hanet al., “Extraction of Torreya grandis Merrillii based on objeet-oriented method from IKONOS imager,” J. Zhejiang Univ. 35(6), 670–676 (2009) (In Chinese).Google Scholar

10. 

J. C. BrennerZ. ChristmanJ. Rogan, “Segmentation of Landsat thematic mapper imagery improves buffelgrass (Pennisetum ciliare) pasture mapping in the Sonoran Desert of Mexico,” Appl. Geogr. 34, 569–575 (2012).0143-6228http://dx.doi.org/10.1016/j.apgeog.2012.02.008Google Scholar

11. 

Y. M. Fenget al., “Desertification land information extraction based on object-oriented classification method,” Sci. Silvae Sin. 49(1), 126–133 (2013) (In Chinese).Google Scholar

12. 

H. Fuchset al., “Estimating aboveground carbon in a catchment of the Siberian forest tundra: combining satellite imagery and field inventory,” Remote Sens. Environ. 113(3), 518–531 (2009).RSEEA70034-4257http://dx.doi.org/10.1016/j.rse.2008.07.017Google Scholar

13. 

M. L. Clarket al., “Estimation of tropical rain forest aboveground biomass with small-footprint lidar and hyperspectral sensors,” Remote Sens. Environ. 115(11), 2931–2942 (2011).RSEEA70034-4257http://dx.doi.org/10.1016/j.rse.2010.08.029Google Scholar

14. 

Z. Z. Shanget al., “Moso bamboo forest extraction and aboveground carbon storage estimation based on multi-source remote sensor images,” Int. J. Remote Sens. 34(15), 5351–5368 (2013).IJSEDK0143-1161http://dx.doi.org/10.1080/01431161.2013.788260Google Scholar

15. 

N. Hanet al., “Integration of texture and landscape features into object-based classification for delineating Torreya using IKONOS imagery,” Int. J. Remote Sens. 33(7), 2003–2033 (2012).IJSEDK0143-1161http://dx.doi.org/10.1080/01431161.2011.605084Google Scholar

16. 

F. M. B. Van CoillieL. P. C. VerbekeR. R. De Wulf, “Feature selection by genetic algorithms in object-based classification of IKONOS imagery for forest mapping in Flanders, Belgium,” Remote Sens. Environ. 110(4), 476–487 (2007).RSEEA70034-4257http://dx.doi.org/10.1016/j.rse.2007.03.020Google Scholar

17. 

Q. Yu et al., "Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery," Photogramm. Eng. Remote Sens. 72(7), 799–811 (2006). http://dx.doi.org/10.14358/PERS.72.7.799

18. 

D. Stow et al., "Monitoring shrubland habitat changes through object-based change identification with airborne multispectral imagery," Remote Sens. Environ. 112(3), 1051–1061 (2008). http://dx.doi.org/10.1016/j.rse.2007.07.011

19. 

C. Lu, "Research on information extraction of WorldView-2 imagery with object-oriented technology," MS Thesis, Zhejiang University (2012).

20. 

M. M. Zhang, "The method of object-oriented building features extraction from high resolution remote sensing images," MS Thesis, Taiyuan University of Technology (2012).

21. 

L. Lian and J. Chen, "Research on segmentation scale of multi-resources remote sensing data based on object-oriented," Procedia Earth Planet. Sci. 2, 352–357 (2011). http://dx.doi.org/10.1016/j.proeps.2011.09.055

22. 

M. Möller, L. Lymburner, and M. Volk, "The comparison index: a tool for assessing the accuracy of image segmentation," Int. J. Appl. Earth Obs. Geoinf. 9(3), 311–321 (2007). http://dx.doi.org/10.1016/j.jag.2006.10.002

23. 

X. G. Tian, "Object-oriented information extraction from high resolution remote sensing imagery," MS Thesis, Chinese Academy of Surveying and Mapping (2007).

24. 

C. G. Li, Research on Object-Oriented Classification for Forest Covers of Remote Sensing Imagery and Its Application, China Forestry Press, Beijing (2009).

25. 

M. Baatz and A. Schäpe, "Multiresolution segmentation: an optimization approach for high quality multi-scale image segmentation," in Angewandte Geographische Informationsverarbeitung XII. Beiträge zum AGIT-Symposium (Applied Geographic Information Processing), T. Strobl, T. Blaschke, and G. Griesebner, Eds., pp. 12–23, Wichmann Verlag, Karlsruhe (2000).

26. 

Definiens Imaging, "eCognition Developer Software 8.7.1 User Guide," http://www.ecognition.com/ (2012).

27. 

U. C. Benz et al., "Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information," ISPRS J. Photogramm. Remote Sens. 58(3–4), 239–258 (2004). http://dx.doi.org/10.1016/j.isprsjprs.2003.10.002

28. 

N. Han, "Application of spatial information in object-based classification: a case study on delineating Torreya using IKONOS imagery," PhD Thesis, Zhejiang University (2011).

29. 

C. L. Ling, "Object-based research in forest information extraction," PhD Thesis, Kunming Univ. of Science and Technology (2010).

30. 

B. Z. Sun, "Multi-scale segmentation technique in high resolution image information extraction application research," PhD Thesis, Xi'an Univ. of Science and Technology (2011).

31. 

H. Q. Du et al., "Bamboo information extraction based on Landsat TM data," J. Northeast For. Univ. 36(3), 35–38 (2008) (in Chinese).

Biography

Xiaoyan Sun is a postgraduate student at the School of Environmental and Resources Science in Zhejiang A & F University. She focuses on object-based image analysis and forest resources monitoring using multisource remotely sensed data.

Huaqiang Du is a professor at the School of Environmental and Resources Science, Zhejiang A & F University. He received a BS in forestry and an MS in forest management, both from Northeast Forestry University in China, in 1999 and 2002, respectively, and a PhD from Beijing Forestry University in 2005. His research interests include digital image processing, forest resources monitoring using multisource remotely sensed data, and forest carbon estimation with remote sensing techniques.

Ning Han is a lecturer in the School of Environmental and Resources Science, Zhejiang A & F University. She received her PhD from the Institute of Remote Sensing and Information System Application, College of Environment and Resource Science, Zhejiang University, in 2011. Her research focuses on land use/cover mapping, object-based image analysis, and forest carbon estimation based on multisource remotely sensed data.

Guomo Zhou is a professor in the School of Environmental and Resources Science and the president of Zhejiang A & F University. He received a BS in forestry from Zhejiang A & F University, Zhejiang, China, in 1982, an MS in forest management from Beijing Forestry University, Beijing, China, in 1987, and a PhD in soil science from Zhejiang University, Zhejiang, China, in 2006. His research interests are forest carbon monitoring, climate change, carbon management, and sustainable forest management.

Dengsheng Lu is a professor at Zhejiang A & F University. He received a PhD in physical geography from Indiana State University in 2001. His research topics focus on land-use/cover change, biomass/carbon estimation, human-environment interaction, soil erosion, and urban impervious surface mapping.

Hongli Ge is a professor in the School of Environmental and Resources Science, Zhejiang A & F University. He received both an MS and a PhD in forest management from Beijing Forestry University, Beijing, China, in 1993 and 2004, respectively. He worked in the East China Forest Inventory and Planning Institute, National Forestry Bureau, from 1981 to 2001 and in the Zhejiang Forest Inventory and Planning Institute from 2004 to 2005. His research interests include growth and yield modeling, sampling techniques, digital image processing, and remote sensing applications.

Xiaojun Xu is a lecturer in the School of Environmental and Resources Science, Zhejiang A & F University. He received a BS from Fujian A&F University, Fujian, China, in 2006, an MS in forest management from Zhejiang A & F University, Zhejiang, China, in 2009, and a PhD from Beijing Forestry University, Beijing, China, in 2014. His research interests include model-based forest carbon estimation using field inventory and ancillary data.

Lijuan Liu is a lecturer in the School of Environmental and Resources Science, Zhejiang A & F University. She majored in forest management and received her PhD in 2011. She focuses on quantitative inversion of forest parameters based on remotely sensed data and remote sensing applications for wetlands.

Keywords: Image segmentation; Image classification; Earth observing sensors; Landsat; Spatial resolution; Optical inspection; Accuracy assessment