Open Access
19 August 2017
Toward an operational framework for fine-scale urban land-cover mapping in Wallonia using submeter remote sensing and ancillary vector data
Benjamin Beaumont, Tais Grippa, Moritz Lennert, Sabine G. Vanhuysse, Nathalie R. Stephenne, Eléonore Wolff
Abstract
Encouraged by the EU INSPIRE directive requirements and recommendations, the Walloon authorities, like other EU regional or national authorities, want to develop operational land-cover (LC) and land-use (LU) mapping methods using existing geodata. Urban planners and environmental monitoring stakeholders of Wallonia currently have to rely on outdated, mixed, and incomplete LC and LU information; the reference map is 10 years old. Two object-based classification methods, i.e., a rule-based and a classifier-based method, are compared for detailed regional urban LC mapping. The added value of using the different existing geospatial datasets in the process is assessed. This includes a comparison between satellite and aerial optical data in terms of mapping accuracies, visual quality of the map, costs, processing, data availability, and property rights. The combination of spectral, tridimensional, and vector data provides accuracy values close to 0.90 for mapping the LC into nine categories with a minimum mapping unit of 15 m². Such a detailed LC map offers opportunities for fine-scale environmental and spatial planning activities. Still, the regional application poses challenges regarding automation, big data handling, and processing time, which are discussed.

1.

Introduction

Land cover (LC) refers to the composition and characteristics of land surface elements, such as vegetation, buildings, or water bodies.1 A detailed definition of LC categories and a fine-scale LC map, i.e., a small minimum mapping unit (MMU), are required for city or regional spatial planning and for the management of citizen well-being related to environmental changes.2 Examples include the monitoring of soil sealing, which increases the risk of flooding,3,4 and of urban green spaces, which affect heat and air pollution.5,6

Despite the availability of a very rich catalog of Earth observation (EO) and vector geodata, there is no recent and complete LC information covering the whole Walloon region of Belgium (also referred to as Wallonia). Moreover, the LC is not mapped separately from the land use (LU) and is insufficiently detailed over urban environments. The LCLU map of Wallonia [Carte d'Occupation des Sols de Wallonie (COSW)] has not been updated since 2007, whereas urbanized areas increased by 6.2% between 2005 and 2014.7 As such, Wallonia does not comply with the INSPIRE (2007/2/EC) directive, which requests distinct LC and LU products.

The fine-scale and thematically detailed characterization of complex urban LC by remote sensing requires submeter resolution data.8 Such characterization using only traditional optical imagery is challenging for the following reasons: (1) urban areas are characterized by a heterogeneous mosaic of small features,9 resulting in the so-called “salt-and-pepper” effect;10 (2) different land elements share similar spectral responses, causing frequent misclassifications, such as between buildings and roads or between water bodies and shadows;11 (3) vertical urban structures induce shadows and occlusions that vary with the acquisition sun/viewing angles;12,13 and (4) most submeter sensors lack the mid- and thermal infrared bands that are useful for the characterization of urban materials.14

In the literature, multisource data fusion and object-based image analysis (OBIA) are considered for addressing these challenges.

A number of studies have assessed the added value of multisensor data combination and/or the integration of ancillary geodata. First, among the multisensor approaches, such as the combination of optical and synthetic aperture radar data or the fusion of multiresolution optical data, the combination of optical data with height-related features appears to be one of the most promising techniques for urban LC mapping.15 Photogrammetric methods and light detection and ranging (LiDAR) data provide height information as normalized digital surface models (nDSM). Their use adds a third dimension to the LC classification for discriminating elevated objects. Moreover, being an active sensor, LiDAR is not affected by shadowing or other variations in natural light. According to the review of Ref. 15, LC mapping accuracy improves by 5% to 18% when LiDAR-derived height features are added to multispectral images.16–18

Second, the integration of ancillary geodata enables the production of LU information based on the LC obtained by remote sensing.19 While remote sensing is often used to update vector data,20–22 ancillary vector data can also successfully constrain the LC mapping and improve the results. An example of such an approach was presented in Ref. 23, in which LiDAR, aerial orthophotos (AOP), and ancillary vector data, such as buildings, water bodies, and certain impervious features, are incorporated into an OBIA approach for high-resolution LC mapping. The overall accuracy (OA) of 95% obtained in that study is above the accuracies generally reached by LC products classified using remotely sensed data only.10,13,15,21

OBIA is more efficient than pixel-based methods for classifying the urban LC from submeter remotely sensed data.10,11,24,25 By focusing on relatively homogeneous segments rather than on pixels, OBIA not only creates additional object-based spectral information but also offers the possibility of calculating contextual, geometrical, and textural features that facilitate classification. Two widespread OBIA approaches are distinguished in this paper: (a) the first one, which we call OBIA-automated, segments the images into meaningful objects and then classifies them using supervised machine-learning classifiers, such as random forest (RF)26 or support vector machines (SVMs),27 and (b) the second one, which we call OBIA-rule-based, uses expert knowledge to define hierarchical rules that simultaneously segment and classify images.21,23

LC mapping using multisource data and a high number of object features increases the volume of data to be processed and therefore challenges regional application. A recommended solution for dealing efficiently with big data is parallel processing,23 i.e., dividing the LC feature extraction tasks among multiple cores to reduce processing time.

In addition, to ensure operational readiness, repeatability and automation of the method are also required. As highlighted by Du et al.,28 there is a need for integrated processing chains that combine the different image processing modules necessary to derive useful LCLU maps. These modules should be well documented and should allow efficient processing of various EO and ancillary data sources to ensure the repeatability of the method and the possibility of applying it to different urban environments. Moving toward automation implies limiting the manual intervention of the operator and strengthening processing efficiency. Recently, automated solutions have appeared that avoid the tedious and time-consuming trial and error required for tuning segmentation parameters and selecting the best-performing classifiers.29–32 However, certain tasks remain time-intensive and subjective, e.g., calibrating classification rules in OBIA-rule-based approaches or generating a large set of training samples that are statistically independent, class-balanced, and representative of the target classes in OBIA-automated approaches.26

This study aims at developing an operational and integrated mapping framework to update the LC map of Wallonia (16,844 km²) and to increase its thematic and spatial levels of detail using existing geodata, i.e., data that were not acquired for the purpose of this study. The best set of existing geodata and the best OBIA classification method were identified within these constraints. We proceeded with three subtasks: (1) testing the suitability of two state-of-the-art OBIA classification methods on a 25-km² subset, (2) introducing various existing EO and ancillary vector data into the process and assessing their added value in terms of mapping accuracy and visual map quality, and (3) applying the methods to the larger area of Liège (261 km²) as an indication of their transferability to the whole Walloon region. The research leading to this paper was carried out for and with the active support of various public stakeholders, primarily to ensure compliance with the EU INSPIRE directive mapping obligations.

Regarding the general structure of the paper, Sec. 2 describes the study area, the data, and the two proposed methods. The classification results are presented in Sec. 3, moving from the local to the regional scale. They are discussed in Sec. 4 with regard to the three subtasks. A conclusion is presented in Sec. 5.

2.

Materials and Methods

2.1.

Study Area

This paper discusses the development of an operational urban LC mapping method that is applicable to the whole Walloon region (16,844 km²). In collaboration with end-users, we defined two subsets that are representative of the dense urban landscape of Wallonia (Fig. 1).

Fig. 1

Liège study area and samples for 2012. The blue border delineates the 25-km² representative area (zone A) used for methods development. The red border delineates the 261-km² area (zone B) used for transferability assessment. The validation samples are represented by purple squares. The training samples for the OBIA-automated approach are represented by yellow dots. A true-color composite of aerial photographs from 2012 is used as background.


Zone A is a 25-km² square area selected for developing the two methods and for identifying the data and features of interest. Zone B is used to assess the regional transferability of the chosen methods. It corresponds to a 261-km² subset that includes the city of Liège and eight other municipalities. Zone B is strongly urbanized, with 55% of artificialized areas, well above the Walloon regional mean of 15%.33 Its population density of 1663 inhabitants per km² is much higher than the Walloon average of 213.8 inhabitants per km².34 Zone B covers a large diversity of urban morphologies, ranging from neighborhoods of detached houses to buildings of 10 floors or more. Residential, industrial, and commercial areas cover 55% of the area; crops and pastures account for 30%, seminatural spaces and forests for 13%, and water bodies for 2%.33 Given the inner complexity of urban areas,17 we believe that applying our method first to such dense urban areas and later to rural ones is more straightforward than the reverse approach.

2.2.

Data

An operational LC mapping method should rely on routinely acquired EO data to minimize acquisition costs and to exploit existing data, as recommended by the European legislation on the reuse of public sector information (PSI directive 2003/98/EC, consolidated by 2013/37/EU). In this perspective, this study makes use of existing data even though their acquisition conditions were not always the most suitable for discriminating LC classes. Wallonia, like many other EU regions, acquires remotely sensed data that are particularly interesting for LC mapping. In our comparisons, we integrated three available remotely sensed datasets: (1) visible–near-infrared (VNIR) AOP (4 bands) with a spatial resolution of 0.25 m after pan-sharpening of the panchromatic (PAN: 0.25 m) and multispectral (XS: 0.75 m) bands (leaf-on: May 2012), along with a 1-m resolution nDSM generated by photogrammetry;35 (2) VNIR Pléiades satellite data with a spatial resolution of 2 m for the XS bands and 0.5 m for the PAN band (leaf-on: August 2013); and (3) the digital terrain model (DTM) and digital surface model (DSM) generated by the Walloon authorities from the low-density LiDAR data acquired during the winter of 2012/2013 (leaf-off: 1 to 3 points per m²).

Ancillary vector data consist of the polygonal delineations of building footprints available in the BelMap database (GIM, 2015) and of freshwater courses and water bodies, road networks (i.e., aggregated polygons of the travel lane, sidewalk, planting strip, and on-street parking), and rail networks available in the Projet Informatique de Cartographie Continue (PICC) database.36 Produced using a photogrammetric method, the PICC is the Walloon digital cartographic reference at the 1:1000 scale. It includes landscape elements, such as buildings, networks, infrastructures, and isolated trees, with a planimetric accuracy of 12 cm and an altimetric accuracy of 25 cm. We chose the BelMap database for delineating buildings because it was updated in 2015 on the basis of various commercial datasets, while the PICC has not been updated since the 1990s in some parts of the region. The PICC is currently being updated by the regional authorities, who consider this dataset the future spatial and thematic reference, including for buildings. This will guarantee the availability of up-to-date polygonal data for mapping the LC.

2.3.

Methods

2.3.1.

Data preprocessing

The Walloon Public Service provided the pan-sharpened AOP as a regional mosaic with full geometric (accuracy <0.5 m) and radiometric corrections. The photogrammetric nDSM provided by Claessens et al.35 was resampled to 0.25 m using cubic interpolation.

Radiometric calibration, pan-sharpening, and orthorectification of the raw Pléiades data were performed inside the ENVI/ArcGIS environments. LiDAR DTM and ground control points taken on AOP were used for orthorectification. We obtained a root-mean-square error smaller than 1 pixel.

The LiDAR nDSM layer was computed by subtracting the DTM from the DSM. It was resampled using cubic interpolation to match the spatial resolutions of the AOP and Pléiades data.
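For readers following this preprocessing in GRASS GIS, a minimal sketch of the nDSM preparation is given below; it assumes a running GRASS session, and the raster names (lidar_dsm, lidar_dtm, aop_red) are hypothetical placeholders rather than the datasets actually used in the study.

```python
# Minimal sketch (GRASS GIS Python scripting library) of the nDSM preparation.
import grass.script as gs

# Height above ground: subtract the LiDAR DTM from the DSM.
gs.mapcalc("lidar_ndsm = lidar_dsm - lidar_dtm", overwrite=True)

# Align the computational region to the 0.25-m AOP mosaic, then resample the
# 1-m nDSM with cubic interpolation so it matches the spectral data.
gs.run_command("g.region", raster="aop_red", res=0.25)
gs.run_command("r.resamp.interp", input="lidar_ndsm",
               output="lidar_ndsm_025m", method="bicubic", overwrite=True)
```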

2.3.2.

Classification scheme and minimum mapping unit

The classification scheme and the MMU were defined according to the needs and feedback expressed by a panel of Walloon environmental and spatial planning stakeholders. The classification scheme comprises two hierarchical levels (Table 1).

Table 1

Two-level LC classification scheme.

Level 1 | Level 2 | Abbreviation
Artificial surfaces | Buildings | BU
 | Asphalt surfaces | AS
 | Rail network | RA
Bare soils | Bare soils | BS
Vegetation | Low vegetation (<1 m) | LV
 | Medium vegetation (1 to 7 m) | MV
 | High vegetation deciduous (>7 m) | HVD
 | High vegetation coniferous (>7 m) | HVC
Water bodies | Water bodies | WB
Shadows | Shadows | SH

Five classes characterize level 1: artificial surfaces, bare soils, vegetation, water bodies, and shadows. Level 2 subdivides the artificial surfaces into buildings, asphalt surfaces at ground level, and rail network. The vegetation class of level 1 is subdivided at level 2 according to its height and type into low, medium, high deciduous, and high coniferous classes. Shadows are temporary and will be removed, either directly by the processing performed in the OBIA-rule-based approach, or at a later stage in the OBIA-automated approach through a contextual-rule postprocessing that is still under development.

The reference MMU for building representation in the PICC is 15 m². We used this MMU as it meets end-users' expectations in terms of detail and allows the detection of most anthropic and natural features.

2.3.3.

Sampling scheme and accuracy assessment

Sampling consists of three steps: (i) a stratified random point sampling inside the ancillary vector data, (ii) a random point sampling with visual interpretation on the AOP (2012) for the other LC classes, and (iii) an update of these samples for processing the Pléiades (2013) data.

In step (i), a set of samples is selected within the ancillary vector data of buildings, water courses, water bodies, and road and rail networks. The stratified random sampling scheme ensures a minimum sample size for each class,21 set to 60 points per class for training and 30 points per class for validation. A visual interpretation was performed by an operator with expert knowledge of the study area to remove misregistration cases and points showing LC changes caused by the difference in date between the ancillary vector data and the AOP.

In step (ii), a random sampling was performed for the shadows, the four level 2 vegetation classes, and the bare soils. The labeling of the random samples was manually performed by the visual interpretation of the AOP.

By grouping (i) and (ii), we obtained a total set of 937 samples.

In step (iii), we updated the sample labels and coordinates by visual photointerpretation to match the ground truth depicted in the Pléiades data (August 2013).

For the tests carried out on the 25 km² of zone A, the training and validation samples were taken in two different zones to limit potential spatial autocorrelation (SA).37 The 549 samples used for training the classifiers in the OBIA-automated approach were located outside zone A. The remaining 388 samples, located inside zone A, were used for validation (Fig. 1).

For the application to the 261 km² of zone B, the training and validation samples were redistributed over the whole area. We kept the same distribution of samples per class and the same ratio between training and validation.

For the classifiers in the OBIA-automated approach, we automatically selected the segments containing a training point to create the set of training objects. The advantage of this strategy is that the same labeled set of points can be used with different segmentation results.
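A minimal sketch of this training-object extraction is shown below, assuming a GRASS session in which the segmentation raster is named segments and the labeled points are stored in a vector map training_points with an lc_class column; all map and column names are placeholders, not the names used in the study.

```python
# Hypothetical sketch: attach the id of the segment that contains each training
# point, so the same labeled points can be reused with any segmentation result.
import grass.script as gs

# Add a column and write the category of the "segments" raster under each point.
gs.run_command("v.db.addcolumn", map="training_points", columns="segment_id integer")
gs.run_command("v.what.rast", map="training_points", raster="segments", column="segment_id")

# Export (segment id, label) pairs; each segment inherits the label of the point
# it contains and becomes a training object for the classifiers.
pairs = gs.read_command("v.db.select", map="training_points",
                        columns="segment_id,lc_class", flags="c")
```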

The validation points were used for accuracy assessment in both the OBIA-automated and the OBIA-rule-based classification approaches.

Accuracies were assessed using confusion matrices. User’s accuracy, producer’s accuracy, OA, and Kappa (K) index38 were calculated. McNemar’s nonparametric test was computed to assess the statistical significance of the differences between classifications, compared pairwise.39

The expression of McNemar's test is

Eq. (1)

\chi^2 = \frac{(f_{12} - f_{21})^2}{f_{12} + f_{21}},

where f_{12} is the frequency of samples correctly classified in the first classification but incorrectly classified in the second one, and f_{21} is the frequency of samples correctly classified in the second classification but incorrectly classified in the first one.
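For illustration, the statistic can be computed directly from Eq. (1) given the per-sample agreement of the two classifications, as in the following sketch (SciPy is used only for the chi-square p-value; the inputs are hypothetical boolean vectors).

```python
# Minimal sketch of McNemar's test, implemented directly from Eq. (1).
from scipy.stats import chi2

def mcnemar(correct_1, correct_2):
    """correct_1, correct_2: booleans per validation sample (True = correctly classified)."""
    f12 = sum(c1 and not c2 for c1, c2 in zip(correct_1, correct_2))
    f21 = sum(c2 and not c1 for c1, c2 in zip(correct_1, correct_2))
    if f12 + f21 == 0:
        return 0.0, 1.0                      # the two classifications never disagree
    stat = (f12 - f21) ** 2 / (f12 + f21)
    p_value = chi2.sf(stat, df=1)            # chi-square with one degree of freedom
    return stat, p_value
```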

2.3.4.

OBIA-automated classification

The OBIA-automated processing chain, first introduced by Grippa et al.,40 consists of a suite of modules developed in the open-source software Geographic Resources Analysis Support System Geographical Information System (GRASS GIS). The chain uses the GRASS Python scripting library to combine the modules and is implemented in a “Jupyter notebook” that enables efficient documentation, publication, and sharing of the computer code. The code is available in a dedicated GitHub repository (Ref. 41). The processing chain consists of six main steps.

Step 1 covers data preparation and preprocessing: (a) the definition of the working environment, i.e., the definition of the workspace and of the projection system, the creation of the GRASS GIS mapsets, the download of the needed libraries, etc.; (b) the import of the raw EO and ancillary vector data; and (c) the derivation of various indices. Currently, the processing chain allows the computation of the normalized difference vegetation index (NDVI),42 commonly used for vegetation discrimination; the normalized difference water index (NDWI),43 reported to enhance the detection of water; a brightness index (sum of the visible bands); and a texture index using the “r.texture” algorithm.44,45 This list of indices will hopefully grow in the coming months as more diverse EO data sources are processed by the authors, e.g., hyperspectral APEX and Sentinel-2 data, and as the community contributes to the development of the code. At the time of the production of the results within this research, only the NDVI index was implemented. In the OBIA-automated tests presented in Sec. 3.1, we used it as additional information to the raw input data for performing the segmentation (steps 2 and 3) and/or for computing the object statistics (step 4) prior to the classification (step 5).
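As an illustration of step 1(c), the sketch below derives an NDVI, an NDWI [formulated here as (green − NIR)/(green + NIR)], and a brightness index with the GRASS scripting library; the band raster names (aop.red, aop.nir, aop.green, aop.blue) are hypothetical placeholders.

```python
# Sketch of the index derivation in step 1(c), assuming pan-sharpened AOP bands.
import grass.script as gs

gs.mapcalc("ndvi = float(aop.nir - aop.red) / (aop.nir + aop.red)", overwrite=True)
gs.mapcalc("ndwi = float(aop.green - aop.nir) / (aop.green + aop.nir)", overwrite=True)
gs.mapcalc("brightness = aop.red + aop.green + aop.blue", overwrite=True)
```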

Step 2 uses the new GRASS GIS add-on i.segment.uspo to tune the “threshold” parameter of the segmentation (see step 3).46 This tuning allows a more objective and automated parameter definition than the usual tedious and time-consuming “trial-and-error” approach, which relies on the visual assessment of several naïve segmentation results and on the iterative testing of varying segmentation parameters.32 i.segment.uspo is an unsupervised optimization method; it can therefore assess the quality of a segmented image without a reference map or a priori knowledge, which is a major advantage compared with supervised empirical methods.31 i.segment.uspo relies on tuning functions combining measures of intrasegment homogeneity [weighted variance (WV)] and intersegment heterogeneity [spatial autocorrelation (SA)]. To compare different segmentations, both the WV and SA measures are normalized using the following equation:

Eq. (2)

F(x) = \frac{X_{max} - X}{X_{max} - X_{min}},

where F(x) is the normalized value of WV (or SA), X is the WV (or SA) value of the current segmentation result, and X_{max} and X_{min} are the highest and lowest WV (or SA) values, respectively, over the whole stack of segmentation results assessed.

Two tuning functions can be used to combine the WV and SA measures: either a simple sum of the normalized criteria as defined by Espindola et al.29 or the F-function defined by Johnson et al.31 The expression of the F-function is

Eq. (3)

F = (1 + \alpha^2) \frac{SA_{norm} \times WV_{norm}}{\alpha^2 \times SA_{norm} + WV_{norm}},

where F is the “overall goodness,” ranging from 0 (poor quality) to 1 (high quality). F synthesizes the quality of the segmentation, while the α parameter can be varied to give more weight to WV or to SA.
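The scoring logic can be summarized in a few lines of Python, following Eqs. (2) and (3) directly; the WV and SA values per candidate threshold are assumed to come from a stack of segmentation results such as the one produced by i.segment.uspo.

```python
# Sketch of the unsupervised scoring behind the threshold tuning (Eqs. 2 and 3).
def normalize(values):
    vmin, vmax = min(values), max(values)
    return [(vmax - v) / (vmax - vmin) for v in values]

def f_function(wv_norm, sa_norm, alpha=1.25):
    return ((1 + alpha**2) * sa_norm * wv_norm) / (alpha**2 * sa_norm + wv_norm)

def best_threshold(thresholds, wv, sa, alpha=1.25):
    # wv[i], sa[i]: weighted variance and spatial autocorrelation of the i-th
    # candidate segmentation (one per tested "threshold" value).
    scores = [f_function(w, s, alpha) for w, s in zip(normalize(wv), normalize(sa))]
    return thresholds[scores.index(max(scores))]
```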

To define the segmentation parameters in this study, we ran the i.segment.uspo module on a subset of 10 distinct areas of 300 × 300 m distributed across the diversity of urban environments present in our study area (zone B in Fig. 1). A visual check of these preliminary tests showed only minor differences between the outputs of the two functions, but the simple sum resulted in undersegmentation. To avoid it, we used the F-function and set the α parameter to 1.25. Some recent studies argue that oversegmentation is a minor issue that can be corrected during classification, which is not the case for undersegmentation.47–49

Step 3 consists of running the i.segment module of GRASS GIS,50 which implements image segmentation using a region-growing algorithm (an experimental mean-shift algorithm has recently been implemented). The region-growing segmentation uses two parameters: (a) a standardized “threshold” parameter below which neighboring segments are merged according to their spectral similarity (tuned in step 2) and (b) a “minsize” parameter, entered by the operator, that sets the minimum size of segments. Here, we used our MMU of 15 m² to set the minsize parameter.
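A sketch of this step is given below, assuming an imagery group named aop_group and a threshold value selected in step 2 (both placeholders); a minsize of 240 pixels corresponds to the 15-m² MMU at the 0.25-m resolution of the AOP.

```python
# Sketch of the region-growing segmentation (step 3) in GRASS GIS.
import grass.script as gs

gs.run_command("i.segment", group="aop_group", output="segments",
               threshold=0.05,   # illustrative value; selected by i.segment.uspo in step 2
               minsize=240,      # 15 m² / (0.25 m)² = 240 pixels
               memory=4096, overwrite=True)
```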

In step 4, the i.segment.stats add-on computes the object statistics used in the classification process.51 They include the minimum, maximum, mean, range, standard deviation, sum, and median spectral statistics extracted from the EO data, as well as the area, perimeter, and compactness morphological attributes. This fourth step was sped up using the new r.object.geometry add-on,52 which allows working in raster format, hence avoiding vectorization and saving time.

Based on the object statistics, step 5 uses the v.class.mlR GRASS GIS add-on, built on the “caret” package of the R software, to perform the scene classification.53 At the time of the analyses presented here, this add-on implemented the following machine-learning classifiers: RF, radial SVM (SVMRadial), recursive partitioning (Rpart), and k-nearest neighbors (k-NN) (C5.0 and XGBoost have since been added). The add-on automatically tunes the parameters of each classifier using repeated cross validation on the training samples.

v.class.mlR also provides the possibility of combining the predictions of the individual classifiers using four voting systems:54 (a) the simple (unweighted) majority vote, which retains the most frequent prediction; (b) the simple weighted vote (SWV), where the weighting corresponds to the accuracy of the individual classifiers estimated through the cross validation; (c) the best-worst weighted vote (BWWV), where a zero weight is attributed to the worst-performing classifiers and a linear weighting is applied to the other classifiers; and (d) the quadratic BWWV, with the same design as BWWV except that a squared weighting is applied to the best-performing classifiers.
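As an illustration of the SWV scheme, the following sketch weights each classifier's prediction for an object by its cross-validation accuracy and returns the class with the largest summed weight; the classifier names and accuracy values are purely illustrative.

```python
# Sketch of the simple weighted vote (SWV) for one object.
from collections import defaultdict

def swv(predictions, accuracies):
    """predictions: {classifier: class label}; accuracies: {classifier: CV accuracy}."""
    votes = defaultdict(float)
    for clf, label in predictions.items():
        votes[label] += accuracies[clf]
    return max(votes, key=votes.get)

# Example: RF and SVMRadial outvote k-NN and Rpart because of their higher accuracies.
print(swv({"rf": "BU", "svm_radial": "BU", "knn": "AS", "rpart": "AS"},
          {"rf": 0.82, "svm_radial": 0.80, "knn": 0.52, "rpart": 0.74}))   # -> "BU"
```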

Finally, step 6 performs the accuracy assessment using confusion matrices and the K index generated by the r.kappa tool.55

The rail network class, requested by the end-users, is excluded from this classification approach because the first results showed a high level of confusion with asphalt surfaces and bare soils. This class has not changed much over time and will be integrated using ancillary vector data at a later stage.

2.3.5.

OBIA-rule-based classification

Using the works of O’Neil-Dunne et al.23 and Van De Kerchove et al.56 as references, we developed an LC mapping ruleset in eCognition Developer 9.2. Its general structure is divided into two parts (Fig. 2). Part (a) uses existing ancillary vector data for some LC classes and checks their consistency by comparing them with the EO data. Part (b) updates these classes by adding objects that are not covered by the ancillary vector data and classifies the remaining classes.

Fig. 2

Structure of the ruleset of the hierarchical LC classification: (a) ancillary vector data processing and (b) classification of areas outside ancillary vector data.


For the sake of transferability, rules are kept as simple as possible using normalized and well-documented indices and criteria. The classification integrates four spectral indices derived from AOP VNIR data: the NDVI,42 the NDWI,43 the blue-NDVI (bNDVI),21 and the spectral shape index (SSI).17 The referenced authors reported that bNDVI eases the discrimination between bare soil and asphalt surfaces and that SSI is useful for distinguishing water from shadow.

Part (a) first integrates and then updates the ancillary vector data. First, a coarse chessboard segmentation and class assignment rules integrate the ancillary vector data.23 Second, changes are detected: the removal of buildings is detected using nDSM and NDVI thresholds, and the NDWI is used to flag potential changes from the initial water bodies. The changed areas are then reinjected into the classification process in part (b). Road and rail networks are considered stable in time and are not updated in the process.

Part (b) classifies the areas outside the ancillary vector data or identified as changed. An NDVI threshold distinguishes vegetation from artificial areas, grouping the pixels of each category into objects. An nDSM threshold subdivides the vegetation into low, medium, and high vegetation objects. Small low and/or medium vegetation objects surrounded by higher strata are aggregated into the medium/high vegetation classes. An SSI threshold further distinguishes deciduous trees from conifers. A 2.5-m nDSM threshold reclassifies artificial areas into ground-level and elevated building objects; this threshold value avoids the meaningless detection of low-height features, such as cars in parking lots.57 New buildings are detected. A cleaning step using shape and context features erases elongated objects caused by the small positional shift between the vector and the EO data. An additional step subdivides the buildings into low and small, low and large, and high buildings as an indication for the future LU information derivation. A combination of bNDVI, NDWI, and SSI thresholds and contextual criteria reclassifies ground features into asphalt surfaces, bare soils, and new water bodies.
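A strongly simplified, pixel-level sketch of the first of these rules is given below for illustration only; the actual ruleset operates on objects in eCognition, and the NDVI threshold value is a placeholder, since only the nDSM thresholds (1 m, 7 m, and 2.5 m) are fixed in the text.

```python
# Illustrative numpy sketch of the vegetation/artificial and height rules of part (b).
import numpy as np

def classify_outside_vectors(ndvi, ndsm, ndvi_thr=0.3):
    lc = np.full(ndvi.shape, "AS", dtype=object)   # default: ground-level artificial surface
    veg = ndvi > ndvi_thr                           # ndvi_thr is a scene-dependent placeholder
    lc[veg & (ndsm < 1)] = "LV"                     # low vegetation (<1 m)
    lc[veg & (ndsm >= 1) & (ndsm <= 7)] = "MV"      # medium vegetation (1 to 7 m)
    lc[veg & (ndsm > 7)] = "HV"                     # high vegetation (split later by SSI)
    lc[~veg & (ndsm >= 2.5)] = "BU"                 # elevated artificial objects -> buildings
    return lc
```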

Additionally, the ruleset was adapted to run without ancillary vector data. A classification was performed using this adapted ruleset to assess the gain in accuracy potentially offered by the addition of the different ancillary datasets. The main changes include the use of the NDWI and SSI for the delineation of water bodies at the beginning of the ruleset.

Technically, all tests were run on a 4-core Intel i7-4870HQ CPU at 2.50 GHz with 16 GB of RAM. Processing times were calculated for each test in both the OBIA-rule-based and the OBIA-automated approaches. The application to zone B was performed using eCognition Server.

3.

Results

This section contains four subsections. Sections 3.1, 3.2, and 3.3 present and compare the main results obtained by applying both OBIA approaches to zone A (Fig. 1). Section 3.4 presents the application of the best-performing test of each approach to zone B (Fig. 1).

3.1.

OBIA-Automated Approach Tested on Zone A

We carried out eight tests (A to H in Table 2) to evaluate the performance of the OBIA-automated processing chain and the added value of multisource EO and ancillary vector data. Varying the input data for the segmentation and classification steps enabled us to test the added value not only of the AOP versus the Pléiades data but also of the NDVI, nDSM, and ancillary vector data integration. The accuracy assessment for the four classifiers and for the SWV is presented in Table 2. With the SWV, the decision of each classifier is weighted according to its estimated accuracy. Only the SWV is presented, as it provides the highest accuracies of all the voting schemes tested in this study.

Table 2

OBIA-automated accuracy assessment: K and OA at classification levels 1 and 2 for the eight tests.

Test | Input data (segmentation / classification) | Level | Metric | k-NN | Rpart | SVMRadial | RF | SWV vote
A | AOP (VNIR) | L1 | K | 0.6464 | 0.7314 | 0.8684 | 0.8673 | 0.8747
A | | L1 | OA | 0.7552 | 0.8144 | 0.9072 | 0.9072 | 0.9124
A | | L2 | K | 0.2988 | 0.5046 | 0.6286 | 0.6227 | 0.6198
A | | L2 | OA | 0.3787 | 0.5593 | 0.6701 | 0.6649 | 0.6624
B | Pléiades (VNIR) | L1 | K | 0.4564 | 0.6424 | 0.7011 | 0.6810 | 0.6956
B | | L1 | OA | 0.6005 | 0.7423 | 0.7861 | 0.7706 | 0.7809
B | | L2 | K | 0.2935 | 0.4638 | 0.5612 | 0.5535 | 0.5588
B | | L2 | OA | 0.3737 | 0.5232 | 0.6108 | 0.6031 | 0.6082
C | AOP (VNIR + NDVI) | L1 | K | 0.6895 | 0.8400 | 0.8564 | 0.8788 | 0.8792
C | | L1 | OA | 0.7809 | 0.8566 | 0.8995 | 0.9149 | 0.9149
C | | L2 | K | 0.4027 | 0.6179 | 0.6373 | 0.6606 | 0.6669
C | | L2 | OA | 0.4716 | 0.6598 | 0.6778 | 0.6985 | 0.7036
D | Segmentation: AOP (VNIR); classification: AOP (VNIR + NDVI) + AOP nDSM | L1 | K | 0.7506 | 0.8273 | 0.8653 | 0.8906 | 0.8941
D | | L1 | OA | 0.8247 | 0.8789 | 0.9046 | 0.9227 | 0.9253
D | | L2 | K | 0.4369 | 0.7064 | 0.7065 | 0.7676 | 0.7617
D | | L2 | OA | 0.5026 | 0.7397 | 0.7397 | 0.7938 | 0.7887
E | AOP (VNIR + NDVI) + LiDAR nDSM | L1 | K | 0.6948 | 0.8296 | 0.8759 | 0.8867 | 0.8865
E | | L1 | OA | 0.7835 | 0.8814 | 0.9124 | 0.9201 | 0.9201
E | | L2 | K | 0.4562 | 0.7383 | 0.7702 | 0.7964 | 0.8051
E | | L2 | OA | 0.5180 | 0.7680 | 0.7964 | 0.8196 | 0.8273
F | AOP (VNIR + NDVI) + AOP nDSM | L1 | K | 0.7063 | 0.8253 | 0.8722 | 0.8831 | 0.8907
F | | L1 | OA | 0.7938 | 0.8763 | 0.9098 | 0.9175 | 0.9227
F | | L2 | K | 0.4586 | 0.7332 | 0.7529 | 0.7876 | 0.8023
F | | L2 | OA | 0.5206 | 0.7629 | 0.7809 | 0.8119 | 0.8247
G | Pléiades (VNIR + NDVI) + LiDAR nDSM | L1 | K | 0.4114 | 0.7406 | 0.8095 | 0.8203 | 0.8274
G | | L1 | OA | 0.5799 | 0.8144 | 0.8634 | 0.8711 | 0.8763
G | | L2 | K | 0.3890 | 0.6170 | 0.7186 | 0.7420 | 0.7391
G | | L2 | OA | 0.4588 | 0.6598 | 0.7500 | 0.7706 | 0.7680
H | AOP (VNIR + NDVI) + LiDAR nDSM + ancillary vector data | L1 | K | 0.7560 | 0.8555 | 0.8684 | 0.8825 | 0.8825
H | | L1 | OA | 0.8273 | 0.8995 | 0.9072 | 0.9175 | 0.9175
H | | L2 | K | 0.5197 | 0.7671 | 0.7641 | 0.8020 | 0.8107
H | | L2 | OA | 0.5747 | 0.7938 | 0.7912 | 0.8247 | 0.8325
Note: Tests B and G were carried out on the Pléiades imagery; test H was carried out with the ancillary vector data.

The best accuracy results are achieved by test E, in which the AOP, NDVI, and LiDAR nDSM were used as input to both segmentation and classification. OAs of 0.83 and 0.92 and K values of 0.81 and 0.89 are obtained at levels 2 and 1, respectively, using the SWV voting scheme. The classification of buildings achieves good results, with a K higher than 0.9. The confusion matrices reveal that the main misclassifications occur between bare soils and asphalt surfaces and between the vegetation classes.

The segmentation that integrates multisource EO data (test E) generates objects that accurately delineate the true shape of LC elements. It also provides significantly higher accuracies than test D, which combines these data only at the classification step and segments the VNIR data alone (+0.04 OA and +0.05 K at level 2 for the SWV vote between tests D and E).

Overall, the best-performing classifier is RF, closely followed by SVMRadial, which even outperforms RF in the two tests that use only VNIR data (A and B). The Rpart and k-NN classifiers generally provide significantly lower accuracies and a lower visual map quality than RF and SVMRadial. The SWV vote either slightly improves or does not modify the accuracies obtained by the individual classifiers.

The OA and K index for all tests are summarized in Fig. 3, which shows the added value of the different input datasets.

Fig. 3

General representation of OA and K index for tests running on AOP with tests (A) only AOP VNIR, (C) AOP VNIR + AOP NDVI, (E) AOP VNIR + AOP NDVI + LiDAR nDSM, (F) AOP VNIR + AOP NDVI + AOP nDSM, and (H) AOP VNIR + AOP NDVI + LiDAR nDSM + ancillary vector data. OA and K for SWV vote at (a) classification level 1 and (b) level 2.


In Fig. 3, to assess the best combination of input data, all tests are compared with test A, which classifies only the AOP multispectral information and reaches an OA of 0.91 and a K index of 0.87 at level 1, and of 0.66 and 0.62, respectively, at level 2.

Comparing tests C and A shows that the NDVI yields a gain in accuracy of +0.05 in OA and +0.04 in K at level 2, which is not significant according to McNemar’s test (p-value = 0.1138). There is no improvement at level 1 either.

At level 2, the integration of the LiDAR nDSM (test E) increases OA and K by +0.14 and +0.13, respectively. It drastically improves the mapping of buildings, with the class-specific K value rising from 0.6 without height information (test C) to 0.9 when the LiDAR nDSM is used (test E). The improvements due to the addition of the LiDAR nDSM are statistically significant (p-value < 0.001) compared with the results relying on AOP VNIR + NDVI (test C). Again, the nDSM does not improve the results at level 1. The LiDAR- and AOP-derived nDSMs achieve similar accuracy results. The delineation of elevated features is slightly better with the AOP nDSM owing to the exact spatial matching between the spectral data and the height-above-ground information. Using the LiDAR nDSM as a reference, a mean height difference of 0.47 ± 4.65 m (mean ± standard deviation) with the AOP nDSM was measured over all validation and training samples. The main height differences are observed for the shadows, high vegetation, and water bodies classes. Vegetation growth explains most of the height differences in vegetated areas. Changes in water levels and artifacts caused by scattering of the LiDAR signal on water surfaces explain the height differences over water bodies. The differences in the shadows class are explained by the small positional shift between the two nDSM datasets due to varying camera/plane acquisition angles. Interestingly, the height difference for the buildings class is the smallest of all classes (0.04 ± 1.47 m).

As in the OBIA-rule-based approach (see below), the inclusion of ancillary vector data cleans the delineation of objects (test H). However, it does not significantly improve the OA and K values. This means that most misclassifications occur in areas located outside these database features.

The added value of the submeter Pléiades satellite data is assessed in comparison with the AOP in Table 2 and Fig. 4.

Table 3

Accuracy assessment for both OBIA-rule-based tests, (i) excluding and (ii) including ancillary vector data: K and OA at classification levels 1 and 2.

Test | Input data | Level | K | OA
i | EO data only (no ancillary vector data) | L1 | 0.9066 | 0.9398
i | | L2 | 0.8421 | 0.8624
ii | EO data and ancillary vector data | L1 | 0.9334 | 0.9570
ii | | L2 | 0.8784 | 0.8940

Fig. 4

Comparison of AOP and Pléiades data: (a) AOP 2012, (b) OBIA-automated using AOP VNIR only (test A, SWV vote), (c) OBIA-automated using AOP VNIR + NDVI and LiDAR nDSM (test E, SWV vote), (d) Pléiades 2013, (e) OBIA-automated using Pléiades VNIR only (test B, SWV vote), and (f) OBIA-automated using Pléiades VNIR + NDVI and LiDAR nDSM (test G, SWV vote). Legend: BU, buildings; AS, asphalt surfaces; BS, bare soils; LV, low vegetation; MV, medium vegetation; HVD, high vegetation deciduous; HVC, high vegetation coniferous; WB, water bodies; and SH, shadows.


Using only the VNIR spectral bands, the Pléiades classification (test B) produces OA and K values about 0.05 lower at level 2 and about 0.15 lower at level 1 than the comparable test on the AOP (test A). Visually, the water bodies and shadows classes are largely overrepresented in the Pléiades LC map [Fig. 4(e)]. This can be explained by intense sunlight reflection on water bodies, leading to confusion with artificial surfaces [Fig. 4(d)]. The shadows also appear less contrasted in the Pléiades imagery than in the AOP. The integration of the LiDAR nDSM significantly improves the accuracies and the visual quality of the outputs; it corrects most water misclassifications in test G [Fig. 4(f)]. Accuracy-wise (Table 2), higher OAs are still obtained for test E using the AOP than for test G using Pléiades.

3.2.

OBIA-Rule-Based Approach Tested on Zone A

The urban LC mapping results using the OBIA-rule-based approach with and without ancillary vector data are shown in Fig. 5, and the accuracy measures are presented in Table 3.

Fig. 5

Comparison of the two OBIA-rule-based approaches: (a) AOP 2012, (b) OBIA-rule-based approach running on EO data only (without ancillary vector data), and (c) OBIA-rule-based approach running on EO and ancillary vector data. Legend: BU, buildings; AS, asphalt surfaces; BS, bare soils; LV, low vegetation; MV, medium vegetation; HVD, high vegetation deciduous; and HVC, high vegetation coniferous.


Table 3 shows OA and K values above 0.90 at level 1 and above 0.84 at level 2. The misclassifications mainly occur between the subclasses at level 2 and between the four vegetation classes. The addition of ancillary vector data improves the OA by +0.02 at level 1 and by +0.03 at level 2. At classification level 2, this improvement is statistically significant, with a McNemar p-value of 0.02178 (significance level of 0.05). The addition mostly improves the classification of asphalt surfaces, increasing its class-specific K by +0.13. Interestingly, buildings have a K of 0.97 whether or not ancillary vector data are used. Confusion remains between medium and high vegetation (omission error of 27%) and between deciduous and coniferous vegetation (omission error of 23%).

The addition of ancillary vector data improves the visual quality of the LC map, with an enhanced and smoother delineation of objects [Fig. 5(c)]. However, both maps provide valuable information. The LC map produced without ancillary vector data [Fig. 5(b)] is a direct source of information on tree canopy cover, which may be useful for urban green monitoring. The map produced with ancillary vector data [Fig. 5(c)] keeps information about ground features that are occluded by trees or buildings in the EO data; it therefore appears more suitable for GIS applications, as it eases the link with existing geodatabases. However, constraining the classification using slightly outdated data generates some loss of information: for example, a newly built roundabout is not classified in Fig. 5(c) but is well mapped in Fig. 5(b). Both maps, produced with and without the ancillary data, could be computed for inclusion in a database.

Semantically, the OBIA-rule-based approach allows the automatic reclassification of shadows using contextual rules. For example, the shadows detected alongside buildings were reclassified to the ground-level class (asphalt surface, bare soil, low vegetation, or water) with which they shared the longest border. The approach that does not make use of ancillary vector data does not allow classifying the rail network, as rails consist of a complex mix of asphalt surfaces and bare soils.
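The logic of this contextual reclassification can be sketched as follows, assuming that the neighbors of each shadow object and the lengths of their shared borders are available from the segmentation topology (placeholder inputs here, not the eCognition implementation itself).

```python
# Sketch of the contextual shadow reclassification: a shadow object takes the
# ground-level class of the neighbor with which it shares the longest border.
GROUND_CLASSES = {"AS", "BS", "LV", "WB"}

def reclassify_shadow(neighbors, border_lengths, classes):
    """neighbors: ids of objects adjacent to the shadow; border_lengths/classes: dicts by id."""
    ground = [n for n in neighbors if classes[n] in GROUND_CLASSES]
    if not ground:
        return "SH"                     # keep the shadow label if no ground-level neighbor
    return classes[max(ground, key=lambda n: border_lengths[n])]
```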

The visual and statistical analyses of results show a number of mistakes that are not acceptable in an operational context, such as confusion between bridges and buildings or turbulent water surfaces misclassified as asphalt surfaces. Some postprocessing operations will be required to improve the results.

3.3.

Comparison of OBIA Approaches on Zone A

Figure 6 compares the results of both methods with similar input data, i.e., the OBIA-automated approach using the AOP VNIR, NDVI, and LiDAR nDSM data and the OBIA-rule-based approach running without the ancillary vector data. Table 4 summarizes the per-class accuracies derived from the corresponding confusion matrices. For comparison purposes, all validation points falling into shadows were excluded from the accuracy assessment of the OBIA-automated approach.

Fig. 6

Comparison of the LC for both OBIA approaches: (a) AOP 2012, (b) OBIA-automated using AOP VNIR + NDVI and LiDAR nDSM (Table 2, test E, SWV vote), and (c) OBIA-rule-based approach running with the EO data without ancillary vector data (Table 3, test i). Legend: BU, buildings; AS, asphalt surfaces; BS, bare soils; LV, low vegetation; MV, medium vegetation; HVD, high vegetation deciduous; HVC, high vegetation coniferous; WB, water bodies; and SH, shadows.


Table 4

Detailed accuracy assessment for both OBIA approaches: (a) OBIA-automated with AOP VNIR + NDVI and LiDAR nDSM (Table 2, test E, SWV vote) and (b) OBIA-rule-based using EO data without ancillary vector data (Table 3, test i): per-class user’s and producer’s accuracies, OA, and K at classification level 2.

(a) OBIA-automated using AOP (VNIR + NDVI) and LiDAR nDSM (Table 2, test E, SWV vote) — n = 285, OA = 83%, K = 81%

Class | User’s accuracy | Producer’s accuracy
Buildings (BU) | 92% | 92%
Asphalt surfaces (AS) | 84% | 80%
Low vegetation (LV) | 87% | 85%
Medium vegetation (MV) | 80% | 77%
High vegetation deciduous (HVD) | 65% | 74%
High vegetation coniferous (HVC) | 83% | 91%
Bare soils (BS) | 75% | 71%
Water bodies (WB) | 100% | 100%

(b) OBIA-rule-based using EO data without ancillary vector data (Table 3, test i) — n = 301, OA = 86%, K = 84%

Class | User’s accuracy | Producer’s accuracy
Buildings (BU) | 97% | 89%
Asphalt surfaces (AS) | 79% | 90%
Low vegetation (LV) | 90% | 100%
Medium vegetation (MV) | 100% | 75%
High vegetation deciduous (HVD) | 64% | 97%
High vegetation coniferous (HVC) | 92% | 77%
Bare soils (BS) | 85% | 79%
Water bodies (WB) | 100% | 84%

Visually, the current implementation of the OBIA-rule-based approach provides better results than the OBIA-automated one (Fig. 6). The former already includes shadow reclassification and avoids the overrepresentation of water bodies. It also offers an automated solution for removing many small misclassified objects thanks to contextual rules; these include misclassified urban objects and shadows interpreted as conifers in deciduous forests. Such improvements could be included in future developments of the OBIA-automated approach through postprocessing or additional mapping steps.

Statistically, these visual improvements are reflected in overall higher OA and K values (Table 4). McNemar’s test shows that the difference can be considered significant (p-value = 0.0028).

The technical advantages and disadvantages of each method are developed in Sec. 4. However, processing times have been calculated and can be summarized as follows: on a personal computer, the OBIA-automated approach shows an average processing time of 5 min per km² at 0.5-m spatial resolution (Pléiades) and of 20 to 30 min per km² at 0.25-m resolution (AOP), whereas the OBIA-rule-based approach runs in less than 2 min per km² with ancillary vector data and in less than 1 min per km² without.

3.4.

Application to Zone B

According to Tables 2 and 3, the two best-performing tests of each approach at level 2 are the OBIA-automated test using AOP, NDVI, LiDAR nDSM, and ancillary vector data (Table 2, test H) and the OBIA-rule-based test relying on ancillary vector data (Table 3, test ii). In this section, these two methods are applied to the 261-km² area of Liège. Table 5 details the accuracy assessment for these two tests.

Table 5

Accuracy assessment for the two tests applied to the Liège area: (a) the OBIA-automated approach using AOP + NDVI + LiDAR nDSM + ancillary vector data and (b) the OBIA-rule-based approach relying on ancillary vector data: K and OA at classification levels 1 and 2.

Test | Approach | Level | K | OA
a | OBIA-automated with AOP + NDVI + LiDAR nDSM + ancillary vector data (SWV vote) | L1 | 0.8983 | 0.9268
a | | L2 | 0.8633 | 0.8790
b | OBIA-rule-based (EO and ancillary vector data) | L1 | 0.9027 | 0.9357
b | | L2 | 0.8768 | 0.8929

The analysis of Table 5 shows that (1) OA and K values are 4% to 5% higher for the OBIA-automated approach carried out on zone B than on the smaller zone A (Table 2, test H, and Table 5) and (2) OA and K for the OBIA-rule-based approach at level 1 are lower than those obtained for zone A (Table 3, test ii, and Table 5). The gain in (1) probably results from the new distribution of training and validation samples, which improves their representativeness. The decrease in accuracy in (2) is explained by the definition of thresholds: the threshold values were fine-tuned for zone A and were not updated for zone B, which covers a larger diversity of LC surfaces that affects threshold tuning. The differences in results between the two approaches at levels 1 and 2 are not statistically significant.

As an illustration, Fig. 7 presents the LC map resulting from the OBIA-rule-based approach with ancillary vector data. Three typical urban neighborhoods of Liège are shown: (a) new isolated or adjacent houses, (b) the urban redevelopment in front of the Guillemins rail station in the center of Liège, and (c) the expansion of industrial and commercial areas in the Sart-Tilman area. The surface statistics per class for the Liège area are presented in the same figure. Impervious surfaces cover 29.3% of the area. At level 2, the most represented LC is low vegetation, with 28.8%.

Fig. 7

LC map for the Liège area using the OBIA-rule-based approach with ancillary vector data (1/125,000), surface statistics, and zooms: (a) on a new urban neighborhood in Grâce-Hollogne, (b) on the Guillemins rail station in the center of Liège, and (c) on the extension of the Sart-Tilman industrial and commercial area (1/10,000). Legend: BU, buildings; New BU, new buildings; AS, asphalt surfaces; RA, rail network; LV, low vegetation; MV, medium vegetation; HVD, high vegetation deciduous; HVC, high vegetation coniferous; BS, bare soils; and WB, water bodies.


4.

Discussion

The discussion is structured according to the three subtasks listed in the introduction:

  • 1. We assessed two OBIA classification methods for producing a fine-scale urban LC map. Their performance is qualified below with regard to the literature.

  • 2. We tested the contribution of all data sources: two multispectral EO datasets, various indices, two nDSMs, and ancillary vector data. The added value of each is discussed below.

  • 3. We tested the methods on a 25-km² subset (zone A) and applied them to an area 10 times larger that includes the city of Liège (261 km², zone B). The application to the Walloon region (16,844 km²), 674 times larger than zone A, raises new challenges. Various aspects need to be taken into account, such as data volume, data availability, acquisition costs, and processing capabilities.

4.1.

Performance of the OBIA Approaches

We compared two very distinct OBIA solutions for urban LC mapping. First, we developed an OBIA-automated approach that uses algorithms to semiautomatically segment and then classify EO data according to training samples in two separate steps. Manual intervention is limited to choosing the segmentation parameters according to the i.segment.uspo results and to building the sample sets. In agreement with Ref. 58, i.segment.uspo facilitates the tuning of i.segment. The generated objects are of good visual quality. The image is slightly oversegmented, which is usually advised.5 Building the training samples is the most time-consuming process: it required about 6 h to generate and validate the 549 training points. However, updating the samples to match the Pléiades data took less than 2 h.

Four algorithms were used to classify the segmented objects. We found that SVMRadial and RF significantly outperform the k-NN and Rpart classifiers in terms of classification accuracy, which confirms the suitability of these widely used algorithms for remote sensing applications.26,27 We also found that RF outperforms SVMRadial when multisource data are used in the classification, in agreement with the concluding remark of Ref. 26. However, no general conclusion can be drawn here, as we only tested the radial SVM kernel. Zhang et al.59 showed that other SVM kernels, such as RBF and polynomial, can provide higher accuracies than RF. Li et al.60 showed that the performance of the classifiers is strongly related to the input datasets. In Ref. 59, k-NN was, for example, reported to be as powerful as SVM or RF.

The implemented classification strategy allows the use of voting schemes. In agreement with Ref. 54, we demonstrated the added value of such schemes with the SWV vote, which generally showed a more robust performance than the individual classifiers. Future work could extend the v.class.mlR GRASS GIS add-on to include other classifiers, most notably other SVM kernels,61 and other voting schemes, combination strategies, and classifier ensembles.28,54,62

Ancillary vector data integration is easy in the OBIA-automated approach, but more developments are needed to match the results achieved by the OBIA-rule-based one, such as the use of contextual rules for postprocessing or for updating the ancillary vector data. Flexibility regarding the input datasets is an asset of the OBIA-automated approach.

Second, our OBIA-rule-based approach classifies EO data according to a set of expert-defined rules. These rules efficiently use information derived from the EO and ancillary vector data and from the contextual and geometrical characteristics of objects to iteratively classify and reclassify objects until their final class attribution. We confirm the findings of Chen et al.17 and Dinis et al.21 regarding the mapping possibilities offered by the NDWI, bNDVI, and SSI. These spectral indices prove efficient for distinguishing water bodies from shadows, asphalt surfaces from bare soils, and deciduous from coniferous forests. However, confusion remains between water bodies and some shadows in dense urban areas. Specific methods, such as that presented by Yuan and Sarma,63 should be explored to avoid such confusion.

We also confirm the benefits of combining spectral and height information for urban LC mapping.15 Removing either the spectral or the height-above-ground information would strongly limit the mapping capabilities, such as the number of classes mapped or the resulting classification accuracies. As already demonstrated in Ref. 23, ancillary vector data integration is efficient, and the OBIA-rule-based approach also allows an easy update of these databases.

Regarding the transferability of the OBIA-rule-based approach, manual intervention is limited to the visual definition of thresholds on the four spectral indices used in the ruleset; no training samples are required. However, transferring the ruleset to other scenes, other periods, or other EO data is not straightforward. The behavior of the spectral indices and the threshold definitions may change between EO data sources and acquisitions; they can be influenced by changes in the acquisition period that affect, for example, the greenness of vegetation, the shadow contrast, or the water reflectance values. Data availability could also change: in our case study, if the nDSM information were not available, a completely new ruleset, not relying on height thresholds, would have to be developed. However, we developed this ruleset keeping these limitations in mind and focused on EO data that are part of Wallonia's acquisition framework, which includes a yearly AOP acquisition and a LiDAR acquisition every 5 years. In the end, our ruleset is less sophisticated than those proposed by O’Neil-Dunne et al.23 and Van De Kerchove et al.56 We strongly believe that this simplification benefits the transferability of the method.

We showed that both OBIA methodologies have pros and cons. Considering only the accuracies and the visual quality of the map, it is difficult to recommend one specific approach. However, these are only two preliminary implementations that could be further improved. A potential solution is to combine both approaches, first classifying the LC using trained machine-learning classifiers and then postprocessing the result using expert-defined rules. This option will be tested in future research.

One of our aims is to provide recommendations toward a suitable method for regional application. This implies analyzing parameters other than just accuracy and map quality. This will be discussed in Sec. 4.3.

4.2.

Added Value of Input Data

We tested the added value of the input data in the OBIA-automated approach by starting from the VNIR data and (1) adding the NDVI, (2) adding an nDSM, (3) comparing the AOP and Pléiades data, and (4) integrating ancillary vector data.

Initially, we intended to use the NDVI only for the classification. However, the first tests showed that excluding the NDVI from the segmentation process generates a loss of information for vegetated features, so we decided to include it in the segmentation. However, despite a segmentation that looks better on visual assessment, the NDVI integration did not significantly improve the accuracy of the final LC map.

Despite the very low density of points (1 to 3 pts/m²) used to generate the LiDAR nDSM, its addition significantly improved the accuracy of the LC map, in agreement with Ref. 15. A new acquisition of LiDAR data is currently under negotiation by the Walloon authorities, and first elements indicate a planned density of 7 to 8 pts/m². However, the Walloon EO data acquisition framework will not include a yearly coverage of LiDAR data; an acquisition every 5 years is being discussed. As an alternative, we showed that the nDSM generated from the AOP provides results equivalent to those obtained with the low-density LiDAR. Producing yearly LC maps that benefit from the added value of the nDSM is thus possible in Wallonia.

We demonstrated that the 0.25-m AOP provides higher accuracies than the 0.5-m Pléiades data. The spatial and spectral resolutions and the quality of the data relative to the characteristics of urban features are two factors that can explain this result. Jensen and Cowen64 suggested that the minimum spatial resolution should be one-half the diameter of the smallest object of interest. According to that criterion and to the 15-m² MMU mapping goal, both EO datasets seem suitable. However, both rely on pan-sharpening to derive fine-scale XS data, and pan-sharpening often results in spectral distortions: the spectral quality of the lower-resolution XS data is not fully preserved in the fused data. Currently, no satellite sensor provides a native spatial resolution finer than 0.5 to 1 m for XS data; WorldView-3 (and soon WorldView-4) offers 0.31-m PAN and 1.24-m XS bands.65 The lower positional accuracy of the Pléiades data is also a source of uncertainty, especially when used in a multisource approach.14 The improvement of Pléiades' positional accuracy is currently under discussion with the data provider.

Finally, we assessed the contribution of the integration of ancillary vector data to the classification. These datasets mostly contribute to a better delineation of objects. A dedicated object-based accuracy assessment could be performed in future research to better assess the quality of the generated objects for cartographic purposes. We also noted the complementary interest of the maps obtained with and without ancillary vector data for dedicated applications, such as urban green monitoring.

4.3.

Application to the Walloon Region

Operational mapping of the LC at a regional scale, such as that of Wallonia (16,844 km²), raises method- and data-related challenges.

Regarding the methods, automation seems mandatory. Each approach includes a manual step that limits automation: thresholding in the OBIA-rule-based approach and training-sample collection in the OBIA-automated one. On the one hand, thresholding requires similar acquisition dates for the images acquired over the region, which is not realistic in Wallonia; EO data acquisitions have always been spread across time. For instance, the spectral contrast visible at the bottom of Fig. 1 is due to distinct acquisition periods and thus differing vegetation and illumination conditions. Thresholds could be defined by processing the data separately according to acquisition date or vegetation status. On the other hand, training samples need to be representative of the LC classes for the whole region and for each acquisition. New solutions for automated training, such as active learning methods,66 or the use of open-access training databases/frameworks, such as OpenStreetMap, the Land Use/Cover Area frame Statistical Survey, or GeoWiki, should be envisaged.

Active learning66,67 allows the user to begin with only a few training samples. The algorithm then identifies the unlabeled objects that would most improve the classification and asks the user to label them. This leads to higher classification accuracies with fewer training samples. Very recently, a module implementing the core of active learning was uploaded to the GRASS add-ons repository.68 We will test the integration of this technique into the processing chain in future works.
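The sketch below illustrates the core idea of uncertainty-based active learning on placeholder data; it is independent of the GRASS add-on cited above and uses a generic random forest classifier for illustration only.

```python
# Minimal sketch of uncertainty sampling: the classifier asks for labels on the
# objects it is least certain about. Data here are random placeholders, and the
# "oracle" labels stand in for an interactive user.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 10))          # object features (unlabeled pool)
y_pool = (X_pool[:, 0] > 0).astype(int)      # oracle labels, normally given by the user

order = np.argsort(X_pool[:, 0])
labeled = list(order[:5]) + list(order[-5:]) # start with a few objects of each class

for _ in range(5):                           # a few active-learning iterations
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)
    uncertainty = 1 - proba.max(axis=1)      # least-confident sampling
    uncertainty[labeled] = -1                # do not re-ask already labeled objects
    query = int(np.argmax(uncertainty))      # object presented to the user
    labeled.append(query)                    # the oracle labels it here

print(f"{len(labeled)} labeled objects after 5 queries")
```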

Classifying the LC over a large area with submeter multisensor and multisource data poses the challenge of the volume of data to be processed.23 The AOP data volume for the whole Walloon region exceeds 1.5 TB; the Pléiades coverage is lighter, at around 60 GB; and the LiDAR nDSM is more than 50 GB. Enhanced storage capabilities, as well as tiling and automation of the mapping scheme using parallel processing, are required for improved efficiency.23 Tiling limits temporary memory requirements, and tools for tiling are already available in eCognition Server and GRASS GIS. For increased time efficiency, however, an implementation of parallel processing is envisaged in future research. Currently, the OBIA-rule-based approach maps the LC at a rate of 2  min/km2 on our personal computer, which means that it would take 23 days to cover the whole of Wallonia. Given the 20- to 30-min/km2 rate of the OBIA-automated approach, it would take about a year to map the LC of Wallonia using EO data at a spatial resolution of 0.25 m.
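The processing-time figures quoted above follow directly from the per-square-kilometer rates and the area of Wallonia, as this back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the processing-time figures quoted in the text.
AREA_WALLONIA = 16844                    # km2
for rate_min_per_km2 in (2, 20, 30):     # rule-based vs. automated approach
    days = AREA_WALLONIA * rate_min_per_km2 / 60 / 24
    print(f"{rate_min_per_km2} min/km2 -> {days:.0f} days")
# 2 min/km2 -> ~23 days; 20-30 min/km2 -> ~234-351 days (i.e., up to about a year)
```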

Several elements of the toolchain already use parallel processing, e.g., the automatic segmentation parameter optimization in i.segment.uspo and the cross validation in v.class.mlR, but the greatest bottleneck remains the segmentation step. In future developments, we will, therefore, test the possibility of segmenting tiles in parallel and then combining the resulting objects before classification. An important issue in this approach, however, is the possible border effect. Recent works have attempted to overcome it using irregular cutlines.69,70 In future work, we will test these approaches and assess the trade-off between the gain in speed and the possible loss of accuracy at the tile borders.
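A simplified sketch of the envisaged tile-wise parallel segmentation is given below. The tile extents, map names, and parameters are hypothetical; in practice each worker should run in its own GRASS mapset, tiles would overlap, and the merging of objects and the handling of border effects are not shown.

```python
# Simplified sketch: segment tiles in parallel, each worker restricted to a
# tile extent via a per-process region environment.
from multiprocessing import Pool
import grass.script as gs

TILES = {                                   # tile name -> bounding box (n, s, e, w)
    "tile_01": (155000, 150000, 185000, 180000),
    "tile_02": (155000, 150000, 190000, 185000),
}

def segment_tile(item):
    name, (n, s, e, w) = item
    # Restrict the computation to the tile extent for this worker only.
    env = gs.region_env(n=n, s=s, e=e, w=w)
    gs.run_command("i.segment", group="seg_input", output=f"segments_{name}",
                   threshold=0.05, minsize=15, env=env)
    return name

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        done = pool.map(segment_tile, list(TILES.items()))
    print("segmented:", done)
```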

Another option would be to lower the spatial resolution. We carried out tests by resampling the AOP data from 0.25 to 0.5, 1, and 2 m (see Sec. 2.3.5 for the computer specifications). The results show that applying the processing chain to Wallonia would take 34 days at 0.5-m spatial resolution compared with 360 days at 0.25 m. At 1 and 2 m, the processing time would be further reduced to 11 and 2 days, respectively. Resampling the data to a lower spatial resolution is therefore an alternative if the parallel processing implementation fails to reduce processing times at 0.25 m. The related loss in mapping accuracy and in the visual quality of the map at lower spatial resolution would then have to be investigated.
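The resampling itself is straightforward in GRASS GIS, as the following sketch (with hypothetical raster names) illustrates; the full processing chain is then rerun on the coarser rasters.

```python
# Minimal sketch: resample the 0.25-m AOP to coarser resolutions in GRASS GIS.
import grass.script as gs

for res in (0.5, 1, 2):
    # Set a coarser computational region aligned on the requested resolution.
    gs.run_command("g.region", raster="aop.red", res=res, flags="a")
    # Aggregate the fine-resolution values into the coarser cells.
    gs.run_command("r.resamp.stats", input="aop.red",
                   output=f"aop.red_{res}m", method="average")
```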

Regarding the recommendations for the mapping environment, open-source solutions have an obvious cost advantage over proprietary software, such as eCognition. The open-source environment also allows the community to propose frequent improvements to the existing code and even to develop new modules. Proprietary software falls short with respect to research needs in terms of freedom to use, modify, and distribute the coded algorithms.71 However, it often offers proven processing capabilities and, currently, its use is more common within the administration than that of open-source software. The preference for one or the other mapping environment will be discussed with the public authorities.

Recommending either AOP or submeter satellite data for mapping the LC is not easy. Data availability, property rights, and acquisition costs should be taken into account in addition to mapping quality and processing capabilities associated with the volume of data to be processed.

First, the Walloon authorities have agreed on an annual acquisition of fully processed AOP. This annual acquisition framework is constrained by the high sensitivity of optical airborne and spaceborne sensors to weather conditions. For example, the 2016 AOP coverage of Wallonia was supposed to be fully acquired during spring but was not completed until October because of bad weather. The same argument applies to satellite data: the probability of clear sky has been estimated at only 20% over the Netherlands, which has a climate similar to Belgium's.72 As a consequence, data acquisitions covering large territories are spread over multiple weeks, months, or years with changing LC, such as varying vegetation and illumination conditions.

Second, the property rights of satellite images always remain with the provider, whereas they are fully transferred to the end user for aerial images.

Third, budgetary constraints, although not the most important factor, have to be taken into account by the administrations when planning their EO data acquisition framework. The current yearly coverage of AOP costs about 12 € per km2, which represents around 200,000 € to cover Wallonia; the cost of the LiDAR acquisition is similar. Acquisition prices for very high-resolution satellite imagery are not publicly available, but usual prices range from 3 to 5 € up to 30 € per km2, so submeter satellite imagery is not always competitive price-wise. Furthermore, as our acquisition of Pléiades data demonstrated, the framework for obtaining preprocessed satellite data is not straightforward and is more costly. However, the waiting time to access the data is shorter for satellite imagery, as the aerial coverage is usually made available only 6 months to 1 year after the data acquisition in Wallonia.
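The order of magnitude of these figures can be verified with simple arithmetic (illustrative only; actual prices depend on licensing conditions):

```python
# Rough cost check for the figures quoted above.
AREA_WALLONIA = 16844                                   # km2
aop_cost = 12 * AREA_WALLONIA                           # ~202,000 EUR per yearly AOP coverage
sat_low, sat_high = 3 * AREA_WALLONIA, 30 * AREA_WALLONIA
print(f"AOP: {aop_cost} EUR; satellite: {sat_low}-{sat_high} EUR")
```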

Future work includes discussing the present results with the Walloon authorities to identify the suitable approach and then further developing the selected classification approach to guarantee its operational functionality and applicability to Wallonia. Also, the present map will be refined to comply with the INSPIRE Pure Land-Cover Component specifications.73 Finally, the LC map will be combined with ancillary socio-economic and thematic geodata to produce an LU map. Both maps will be integrated in a single geodatabase and presented to the authorities. This will help Wallonia to comply with the INSPIRE directive.

5.

Conclusions

This research assessed the potential of two object-based approaches for operational regional and fine-scale urban LC mapping. On the one hand, an open-source framework was intensively tested. Its flexibility in terms of development and of multisource data integration is a strong advantage and offers great perspectives for future improvements. On the other hand, a rule-based classification approach was developed in eCognition. The expert knowledge required for defining hierarchical classification rules, combined with the straightforward integration of ancillary vector data, proved efficient for mapping the urban LC using remotely sensed data and derived indices. We showed that both approaches have advantages and disadvantages. A viable solution could consist of using them successively: mapping the LC using machine-learning classifiers and then performing postprocessing with expert-defined rules in an open-source framework. This approach will be tested in future research. For an operational framework that is easily reproducible over time, the challenge still resides in limiting the processing time and improving the automation. We therefore highlighted the need for parallel processing when dealing with submeter resolution EO data and for automating the sampling scheme.

Another purpose of this paper was to identify the best set of existing geodata for mapping the LC. Our results show that combining spectral and tridimensional information significantly improves the classification results, while integrating ancillary vector data contributes to a better delineation of LC features. Discussing the results, we also showed the comparative advantage of aerial images over submeter satellite imagery, not only in terms of classification accuracy but also in terms of costs, preprocessing, data availability, and property rights. Tridimensional information can be extracted either from the processing of AOP or from LiDAR point clouds. As the annual or biannual acquisition of AOP becomes more and more common in European regions, our approach could be useful for all regions seeking up-to-date LC data to report an updated LC map under INSPIRE.

The use of submeter EO and ancillary vector data for annual fine-scale urban LC mapping is a viable solution. Once the numerous challenges related to the regional application of the approaches are addressed, the new map will provide a holistic vision of the fast-changing urban environment, contributing to sustainable spatial and environmental management at regional and city scales.

Acknowledgments

The research pertaining to these results received financial aid from (i) Institut Scientifique de Services Public Moerman’s fund and (ii) Belgian Science Policy Office (Belspo) in the frame of the STEREO III program—project SmartPop No. SR/00/313. Service Public de Wallonie is acknowledged for providing AOP, LiDAR, and ancillary vector geodatabases (LIC 160128-1348, all rights reserved to SPW). The Pléiades data were obtained due to Belspo/ASTRIUM collaboration agreement; Pléiades data are copyrighted under the mention “© CNES (2013) et Astrium Services/Spot Image (2016), tous droits réservés. Usage commercial interdit.” The BelMap building database was provided by GIM (all rights reserved to GIM). The authors thank all the members of the SmartPop scientific and steering committees for their feedback on the research and on the paper. Benjamin Beaumont developed the OBIA-rule-based method, tested the OBIA-automated approach on the Liège area, performed the literature review, and wrote the paper. Tais Grippa developed the OBIA-automated method and contributed to the sampling, segmentation, and classification literature review. Moritz Lennert developed the multiple GRASS GIS add-ons and contributed to the technical developments. Sabine Vanhuysse provided support for the OBIA implementation and reviewed the paper. Eléonore Wolff designed, coordinated, and reviewed the research and the paper. Nathalie Stephenne coordinated the SmartPop project. The authors declare no conflict of interest.

References

1. 

J. Cihlar, “Land cover mapping of large areas from satellites: status and research priorities,” Int. J. Remote Sens., 21 (6–7), 1093 –1114 (2000). http://dx.doi.org/10.1080/014311600210092 IJSEDK 0143-1161 Google Scholar

2. 

C. Gómez, J. C. White and M. A. Wulder, “Optical remotely sensed time series data for land cover classification: a review,” ISPRS J. Photogramm. Remote Sens., 116 55 –72 (2016). http://dx.doi.org/10.1016/j.isprsjprs.2016.03.008 IRSEE9 0924-2716 Google Scholar

3. 

S. Du et al., “Quantifying the impact of impervious surface location on flood peak discharge in urban areas,” Nat. Hazards, 76 (3), 1457 –1471 (2015). http://dx.doi.org/10.1007/s11069-014-1463-2 NAHZEL 0921-030X Google Scholar

4. 

P. S. Kaspersen et al., “Influence of urban land cover changes and climate change for the exposure of European cities to flooding during high-intensity precipitation,” Proc. Int. Assoc. Hydrol. Sci., 370 21 –27 (2015). http://dx.doi.org/10.5194/piahs-370-21-2015 Google Scholar

5. 

P. Gamba et al., “Urban climate zone detection and discrimination using object-based analysis of VHR scenes,” in Proc. 4th GEOBIA, 70 –74 (2012). Google Scholar

6. 

T. Zupancic, C. Westmacott and M. Bulthuis, The Impact of Green Space on Heat and Air Pollution in Urban Communities: A Meta-Narrative Systematic Review, 1 –68 David Suzuki Foundation, Vancouver, British Columbia, Canada (2015). Google Scholar

7. 

IWEPS and SPW, “WalStat: Le portail d’informations statistiques locales sur la Wallonie,” (2015), http://walstat.iweps.be/walstat-accueil.php (accessed October 2016). Google Scholar

8. 

A. Puissant and C. Weber, “The utility of very high spatial resolution images to identify urban objects,” Geocarto Int., 17 (1), 33 –44 (2002). http://dx.doi.org/10.1080/10106040208542223 Google Scholar

9. 

R. Mathieu, J. Aryal and A. K. Chong, “Object-based classification of Ikonos imagery for mapping large-scale vegetation communities in urban areas,” Sensors, 7 (11), 2860 –2880 (2007). http://dx.doi.org/10.3390/s7112860 SNSRES 0746-9462 Google Scholar

10. 

T. Blaschke, “Object based image analysis for remote sensing,” ISPRS J. Photogramm. Remote Sens., 65 (1), 2 –16 (2010). http://dx.doi.org/10.1016/j.isprsjprs.2009.06.004 IRSEE9 0924-2716 Google Scholar

11. 

S. W. Myint et al., “Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery,” Remote Sens. Environ., 115 (5), 1145 –1161 (2011). http://dx.doi.org/10.1016/j.rse.2010.12.017 Google Scholar

12. 

P. M. Dare, “Shadow analysis in high-resolution satellite imagery of urban areas,” Photogramm. Eng. Remote Sens., 71 169 –177 (2005). http://dx.doi.org/10.14358/PERS.71.2.169 Google Scholar

13. 

W. Zhou et al., “Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas: a comparison study,” Remote Sens. Environ., 113 (8), 1769 –1777 (2009). http://dx.doi.org/10.1016/j.rse.2009.04.007 Google Scholar

14. 

P. Gamba et al., “Limits and challenges of optical high-resolution satellite remote sensing for urban applications,” Urban Remote Sensing: Monitoring, Synthesis and Modeling in the Urban Environment, 36 –47 Wiley-Blackwell, Hoboken, New Jersey (2011). Google Scholar

15. 

W. Y. Yan, A. Shaker and N. El-Ashmawy, “Urban land cover classification using airborne LiDAR data: a review,” Remote Sens. Environ., 158 295 –310 (2015). http://dx.doi.org/10.1016/j.rse.2014.11.001 Google Scholar

16. 

M. Huang et al., “A knowledge-based approach to urban feature classification using aerial imagery with lidar data,” Photogramm. Eng. Remote Sens., 74 (12), 1473 –1485 (2008). http://dx.doi.org/10.14358/PERS.74.12.1473 Google Scholar

17. 

Y. Chen et al., “Hierarchical object oriented classification using very high resolution imagery and LIDAR data over urban areas,” Adv. Space Res., 43 (7), 1101 –1110 (2009). http://dx.doi.org/10.1016/j.asr.2008.11.008 ASRSDW 0273-1177 Google Scholar

18. 

K. A. Hartfield, K. I. Landau and W. J. D. van Leeuwen, “Fusion of high resolution aerial multispectral and lidar data: land cover in the context of urban mosquito habitat,” Remote Sens., 3 (11), 2364 –2383 (2011). http://dx.doi.org/10.3390/rs3112364 Google Scholar

19. 

C. Aubrecht et al., “Integrating earth observation and GIScience for high resolution spatial and functional modeling of urban land use,” Comput. Environ. Urban Syst., 33 (1), 15 –25 (2009). http://dx.doi.org/10.1016/j.compenvurbsys.2008.09.007 CEUSD5 Google Scholar

20. 

A. P. Carleer and E. Wolff, “Change detection for updates of vector database through region-based classification of VHR satellite data,” Proc. SPIE, 6749 674911 (2007). http://dx.doi.org/10.1117/12.737910 PSISDG 0277-786X Google Scholar

21. 

J. Dinis et al., “Hierarchical object-based classification of dense urban areas by integrating high spatial resolution satellite images and lidar elevation data,” in The Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 6 (2010). Google Scholar

22. 

A. Gressin, “Mise à jour d’une base de données d’occupation du sol à grande échelle en milieux naturels à partir d’une image satellite THR,” (2014). Google Scholar

23. 

J. P. M. O’Neil-Dunne et al., “An object-based system for LiDAR data fusion and feature extraction,” Geocarto Int., 28 (3), 227 –242 (2013). http://dx.doi.org/10.1080/10106049.2012.689015 Google Scholar

24. 

A. P. Carleer and E. Wolff, “Urban land cover multi-level region-based classification of VHR data by selecting relevant features,” Int. J. Remote Sens., 27 (6), 1035 –1051 (2006). http://dx.doi.org/10.1080/01431160500297956 IJSEDK 0143-1161 Google Scholar

25. 

M. Belgiu, L. Draguţ and J. Strobl, “Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery,” ISPRS J. Photogramm. Remote Sens., 87 205 –215 (2014). http://dx.doi.org/10.1016/j.isprsjprs.2013.11.007 IRSEE9 0924-2716 Google Scholar

26. 

M. Belgiu and L. Drăgu, “Random forest in remote sensing: a review of applications and future directions,” ISPRS J. Photogramm. Remote Sens., 114 24 –31 (2016). http://dx.doi.org/10.1016/j.isprsjprs.2016.01.011 IRSEE9 0924-2716 Google Scholar

27. 

G. Mountrakis, J. Im and C. Ogole, “Support vector machines in remote sensing: a review,” ISPRS J. Photogramm. Remote Sens., 66 (3), 247 –259 (2011). http://dx.doi.org/10.1016/j.isprsjprs.2010.11.001 IRSEE9 0924-2716 Google Scholar

28. 

P. Du et al., “Remote sensing image interpretation for urban environment analysis: methods, system and examples,” Remote Sens., 6 (10), 9458 –9474 (2014). http://dx.doi.org/10.3390/rs6109458 Google Scholar

29. 

G. M. Espindola et al., “Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation,” Int. J. Remote Sens., 27 (14), 3035 –3040 (2006). http://dx.doi.org/10.1080/01431160600617194 IJSEDK 0143-1161 Google Scholar

30. 

L. Draguţ et al., “Automated parameterisation for multi-scale image segmentation on multiple layers,” ISPRS J. Photogramm. Remote Sens., 88 119 –127 (2014). http://dx.doi.org/10.1016/j.isprsjprs.2013.11.018 IRSEE9 0924-2716 Google Scholar

31. 

B. Johnson et al., “Image segmentation parameter optimization considering within- and between-segment heterogeneity at multiple scale levels: test case for mapping residential areas using landsat imagery,” ISPRS Int. J. Geo-Inf., 4 (4), 2292 –2305 (2015). http://dx.doi.org/10.3390/ijgi4042292 Google Scholar

32. 

T. Grippa et al., “An open-source semi-automated processing chain for urban object-based classification,” Remote Sens., 9 (4), 358 (2017). http://dx.doi.org/10.3390/rs9040358 Google Scholar

33. 

SPW/DGO3 (DGARNE), “La Carte d’Occupation du Sol de Wallonie (COSW),” (2015), http://www.wallonie.be/fr/outils/carte-doccupation-des-sols-de-wallonie (accessed April 2016). Google Scholar

34. 

Belgium.be, “Un aperçu chiffré de la population,” (2016), http://www.belgium.be/fr/la_belgique/connaitre_le_pays/Population (accessed November 2016). Google Scholar

35. 

H. Claessens et al., “Intégration d'un monitoring des secteurs de cours d'eau dans le cadre des programmes d'actions sur les rivières par une approche intégrée et sectorisée (PARIS),” 1 –90 (2015). Google Scholar

36. 

SPW - SG - DGM - DGEO, “Projet informatique de cartographie continue,” (2016), http://geoportail.wallonie.be/catalogue/b795de68-726c-4bdf-a62a-a42686aa5b6f.html (accessed December 2016). Google Scholar

37. 

A. Brenning, “Spatial cross-validation and bootstrap for the assessment of prediction rules in remote sensing: the R package sperrorest,” in IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS’12), 5372 –5375 (2012). http://dx.doi.org/10.1109/IGARSS.2012.6352393 Google Scholar

38. 

R. G. Congalton, “A review of assessing the accuracy of classifications of remotely sensed data,” Remote Sens. Environ., 37 (1), 35 –46 (1991). http://dx.doi.org/10.1016/0034-4257(91)90048-B Google Scholar

39. 

G. M. Foody, “Classification accuracy comparison: hypothesis tests and the use of confidence intervals in evaluations of difference, equivalence and non-inferiority,” Remote Sens. Environ., 113 (8), 1658 –1663 (2009). http://dx.doi.org/10.1016/j.rse.2009.03.014 Google Scholar

40. 

T. Grippa et al., “An open-source semi-automated processing chain for urban object-based classification,” Remote Sens., 9 (4), 358 (2017). http://dx.doi.org/10.3390/rs9040358 Google Scholar

41. 

T. Grippa et al., “GitHub repository for the open-source semi-automated processing chain for urban OBIA classification,” (2017), https://github.com/tgrippa/Opensource_OBIA_processing_chain (accessed June 2017). Google Scholar

42. 

J. W. Rouse Jr. et al., “Monitoring the vernal advancements and retrogradation (green wave effect) of natural vegetation,” Greenbelt, Maryland (1974). Google Scholar

43. 

S. K. McFeeters, “Using the normalized difference water index (NDWI) within a geographic information system to detect swimming pools for mosquito abatement: a practical approach,” Remote Sens., 5 (7), 3544 –3561 (2013). http://dx.doi.org/10.3390/rs5073544 Google Scholar

44. 

G. Antoniol, “r.texture command geographic resources analysis support system (GRASS) software, version 7.3,” (2017), https://grass.osgeo.org/grass73/manuals/r.texture.html (accessed February 2017). Google Scholar

45. 

R. M. Haralick, K. Shanmugam and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst. Man Cybern., SMC-3 (6), 610 –621 (1973). http://dx.doi.org/10.1109/TSMC.1973.4309314 ISYMAW 0018-9472 Google Scholar

46. 

L. Lefèvre, “Addon r.object.activelearning. Geographic resources analysis support system (GRASS) software, version 7.2,” (2017). Google Scholar

47. 

A. Carleer, O. Debeir and E. Wolff, “Assessment of very high spatial resolution satellite image segmentations,” Photogramm. Eng., 71 (11), 1285 –1294 (2005). http://dx.doi.org/10.14358/PERS.71.11.1285 Google Scholar

48. 

M. Belgiu and L. Drǎguţ, “Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery,” ISPRS J. Photogramm. Remote Sens., 96 67 –75 (2014). http://dx.doi.org/10.1016/j.isprsjprs.2014.07.002 IRSEE9 0924-2716 Google Scholar

49. 

O. Csillik, “Fast segmentation and classification of very high resolution remote sensing data using SLIC superpixels,” Remote Sens., 9 (243), 19 (2017). http://dx.doi.org/10.3390/rs9030243 Google Scholar

50. 

E. Momsen and M. Metz, “Module i.segment. Geographic resources analysis support system (GRASS) software, version 7.0,” (2015), https://grass.osgeo.org/grass73/manuals/i.segment.html (accessed June 2016). Google Scholar

51. 

M. Lennert, “Addon i.segment.stats. Geographic resources analysis support system (GRASS) software, version 7.0,” (2016), https://grass.osgeo.org/grass70/manuals/addons/i.segment.stats.html (accessed June 2016). Google Scholar

52. 

M. Metz and M. Lennert, “Addon r.object.geometry. Geographic resources analysis support system (GRASS) software, version 7.3,” (2016), https://grass.osgeo.org/grass72/manuals/addons/r.object.geometry.html (accessed February 2017). Google Scholar

53. 

M. Lennert, “Addon v.class.mlR. Geographic resources analysis support system (GRASS) software, version 7.0,” (2016), https://grass.osgeo.org/grass70/manuals/addons/v.class.mlR.html (accessed June 2016). Google Scholar

54. 

F. Moreno-Seco et al., “Comparison of classifier fusion methods for classification in pattern recognition tasks,” Lect. Notes Comput. Sci., 4109 705 –713 (2006). http://dx.doi.org/10.1007/11815921 LNCSD9 0302-9743 Google Scholar

55. 

T. Wen, “r.kappa,” (2014), https://grass.osgeo.org/grass64/manuals/r.kappa.html (accessed December 2016). Google Scholar

56. 

R. Van De Kerchove et al., “Cartographie des surfaces imperméables en Wallonie,” 1 –39 (2015). Google Scholar

57. 

B. Yu et al., “Automated derivation of urban building density information using airborne LiDAR data and object-based method,” Landscapes Urban Plann., 98 (3–4), 210 –219 (2010). http://dx.doi.org/10.1016/j.landurbplan.2010.08.004 LUPLEZ 0169-2046 Google Scholar

58. 

T. Grippa et al., “An open-source semi-automated processing chain for urban OBIA classification,” in Proc. of the GEOBIA 2016 Conf.: Mach-2: Machine Learning and Automation II, 1 –6 (2016). Google Scholar

59. 

C. Zhang, D. Selch and H. Cooper, “A framework to combine three remotely sensed data sources for vegetation mapping in the central Florida Everglades,” Wetlands, 36 (2), 201 –213 (2016). http://dx.doi.org/10.1007/s13157-015-0730-7 Google Scholar

60. 

L. Li, Y. Wu and M. Ye, “Experimental comparisons of multi-class classifiers,” Informatica, 39 (1), 71 –85 (2015). INFREN 0892-3876 Google Scholar

61. 

M. Pal, “Kernel methods in remote sensing: a review,” ISH J. Hydraul. Eng., 15 (Suppl. 1), 194 –215 (2009). http://dx.doi.org/10.1080/09715010.2009.10514975 Google Scholar

62. 

P. Du et al., “Multiple classifier system for remote sensing image classification: a review,” Sensors, 12 (4), 4764 –4792 (2012). http://dx.doi.org/10.3390/s120404764 SNSRES 0746-9462 Google Scholar

63. 

X. Yuan and V. Sarma, “Automatic urban water-body detection and segmentation from sparse ALSM data via spatially constrained model-driven clustering,” IEEE Geosci. Remote Sens. Lett., 8 (1), 73 –77 (2011). http://dx.doi.org/10.1109/LGRS.2010.2051533 Google Scholar

64. 

J. R. Jensen, D. C. Cowen, “Remote sensing of urban/suburban infrastructure and socio-economic attributes,” The Map Reader: Theories of Mapping Practice and Cartographic Representation, 153 –163 John Wiley & Sons, Ltd., Chichester, United Kingdom (2011). http://dx.doi.org/10.1002/9780470979587.ch22 Google Scholar

65. 

Satellite Imaging Corporation, “Satellite sensors,” (2016), http://www.satimagingcorp.com/satellite-sensors/ (accessed December 2016). Google Scholar

66. 

D. Tuia et al., “Supervised remote sensing image classification,” IEEE J. Sel. Top. Signal Process., 5 (3), 606 –617 (2011). http://dx.doi.org/10.1109/JSTSP.2011.2139193 Google Scholar

67. 

L. Bruzzone and C. Persello, “Active learning for classification of remote sensing images,” in IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS’09), (2009). http://dx.doi.org/10.1109/IGARSS.2009.5417857 Google Scholar

68. 

L. Lefèvre and GRASS Development Team, “Add-on r.object.activelearning,” (2017). https://grass.osgeo.org/grass72/manuals/addons/r.object.activelearning.html Google Scholar

69. 

T. S. Körting, E. F. Castejon, L. M. G. Fonseca, “The divide and segment method for parallel image segmentation,” in Int. Conf. on Advanced Concepts for Intelligent Vision Systems, 504 –515 (2013). http://dx.doi.org/10.1007/978-3-319-02895-8_45 Google Scholar

70. 

A. R. Soares, T. S. Korting and L. M. G. Fonseca, “Improvements of the divide and segment method for parallel image segmentation,” in Proc. XVI GEOINFO, 222 –232 (2015). http://dx.doi.org/10.13140/RG.2.1.1593.7363 Google Scholar

71. 

S. Steiniger and G. J. Hay, “Free and open source geographic information tools for landscape ecology,” Ecol. Inform., 4 (4), 183 –195 (2009). http://dx.doi.org/10.1016/j.ecoinf.2009.07.004 Google Scholar

72. 

T. van der Wal et al., “Fieldcopter: unmanned aerial systems for crop monitoring services,” Precision Agriculture ’13, 169 –175 Wageningen Academic Publishers, Wageningen (2013). Google Scholar

73. 

INSPIRE, “D2.8.II.2 Data Specification on Land cover-Draft Technical Guidelines,” (2013). http://inspire.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_LC_v3.0.pdf Google Scholar

Biography

Benjamin Beaumont is a researcher at the Institut Scientifique de Services Public (ISSeP) and at the Université Libre de Bruxelles (ULB). He received his MSc degree in bioengineering, specialization in science and technology of the environment from the Université Catholique de Louvain, Belgium. His current research interest focuses on exploiting Earth observation (EO) data for deriving useful thematic information, namely, in the field of LCLU mapping, climate reporting, and coal heaps and landfills monitoring.

Tais Grippa is a researcher and a PhD candidate in the Department of Geosciences, Environment and Society (DGES), ULB. He received his MSc degree in human geography in 2006. His research topic is focused on mapping urban LCLU from satellite imagery and intraurban human population densities modeling.

Moritz Lennert is a researcher at DGES, ULB, with a license in geography from ULB. He received his MSc degree in globalization and development from Queen Mary’s. He has coordinated European projects in economic geography, regional, and urban development and has specialized in GIS and remote sensing. He is a member of the GRASS GIS Project Steering Committee and a charter member of OSGeo.

Sabine Vanhuysse is a researcher in DGES, ULB. She has a background in geomatics, and her research focuses on the application of remote sensing and geospatial analysis to different areas, including humanitarian demining and spatial epidemiology. She has been involved in several interdisciplinary international projects, and her present research addresses the variation of malaria risk within and across sub-Saharan cities.

Nathalie Stephenne received a degree in geography, a master’s degree in development studies, and a PhD from UCL, Belgium. She worked as a team leader on EO and GIS topics in private companies (I-mage consult/EUROSENSE). At the Institut de Gestion de l'Environnement et d'Aménagement du Territoire, ULB, she worked on remote sensing projects. Her postdoctoral research at the Joint Research Centre, Italy, was on GIS spatial modeling. She is the leader of the Remote Sensing and Geodata unit at ISSeP.

Eléonore Wolff received a master’s degree in geography from ULB, Belgium, then specialized in cartography and remote sensing, and achieved her PhD from UCL, Belgium. After a short working time at the Belgian National Geographic Institute, she started to teach at ULB. She is currently a full-time professor of geography and leads a research group specialized in very high-resolution object-based image analysis remote sensing at ULB.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Benjamin Beaumont, Tais Grippa, Moritz Lennert, Sabine G. Vanhuysse, Nathalie R. Stephenne, and Eléonore Wolff "Toward an operational framework for fine-scale urban land-cover mapping in Wallonia using submeter remote sensing and ancillary vector data," Journal of Applied Remote Sensing 11(3), 036011 (19 August 2017). https://doi.org/10.1117/1.JRS.11.036011
Received: 15 February 2017; Accepted: 21 July 2017; Published: 19 August 2017