Extracting roads based on Retinex and improved Canny operator with shape criteria in vague and unevenly illuminated aerial images
3 December 2012
J. of Applied Remote Sensing, 6(1), 063610 (2012). doi:10.1117/1.JRS.6.063610
Abstract
An automatic road extraction method for vague aerial images is proposed in this paper. First, a high-resolution but low-contrast image is enhanced with a Retinex-based algorithm. Then the enhanced image is segmented with an improved Canny edge detection operator that automatically thresholds the image into a binary edge image. Subsequently, the linear and curved road segments are regularized by the Hough line transform and extracted based on several thresholds of road size and shape, using a number of morphological operators such as thinning (skeletonization), junction detection, and endpoint detection. In the experiments, a number of vague aerial images with poor illumination uniformity were selected for testing. Similarity- and discontinuity-based algorithms, such as Otsu thresholding, merge and split, and edge detection-based algorithms, as well as the graph-based algorithm, are compared with the new method. The experimental and comparison results show that the studied method can enhance vague, low-contrast, and unevenly illuminated color aerial road images; it detects most road edges with fewer disturbing elements and traces roads with good quality. The method in this study is promising.
Ronggui, Weixing, and Sheng: Extracting roads based on Retinex and improved Canny operator with shape criteria in vague and unevenly illuminated aerial images

1.

Introduction

Using aerial or remote-sensing images to obtain Earth-surface information is an important means of gathering geographic information. High-resolution images provide accurate information and are convenient for city planning, mapping, military inspection, change detection, and geographic information system (GIS) updating. High-resolution color aerial images contain vast amounts of information, so quickly and accurately extracting specific geo-information is becoming more significant. Hence, road extraction from high-resolution color aerial images is one of the hot topics in the road-recognition research area.

Recently, road extraction research has produced many new theories and technical innovations, and institutes and academic departments have carried out more and more work in this field. Road extraction methods can be divided into semi-automatic and fully automatic. Although fully automatic methods have made some progress, semi-automatic methods are still the most advanced and widely used, such as Peking University's road extraction system; however, they still need improved accuracy and automation. Spectral, grayscale, and texture features have been used in both semi-automatic and fully automatic methods. In high-resolution aerial or remote-sensing images, road shape features are very important for road tracing.

Thirty years ago, a method for road extraction from low-resolution images was proposed in Ref. 1. Methods that use line features to extract roads, which adapt only to linear roads, were proposed in Ref. 2. In remote-sensing images, texture information has been used for road tracing.3 Recently, the level set has been applied to road image segmentation.4 In addition, mathematical morphology was introduced in road extraction approaches in Ref. 5, and Gauss–Markov theory and the support vector machine (SVM) were applied in Ref. 6. Roads can also be traced by road footprints,7 and extraction of a road based on its spectral and shape features is possible.8 Some researchers have proposed that automatically extracting roads is impossible because of the variety of images and the diversity of roads, and some have thought that road extraction must involve human recognition to guide the computer toward more accurate computation and recognition;9 practice has proven, however, that automatic identification is definitely possible in some cases. Road detection based on multiple-information fusion is available for remote-sensing images, as Ref. 10 reported, and road shape features are used as the main criterion for road extraction in Ref. 11. During the last 10 years there has been some theoretical advancement, but much work remains before practical application in road-traffic development.

As described above, each of these algorithms or methods has both advantages and limitations. Because a road is a combination of lines and curves, the road is a linear object, and edge detection-based road tracing algorithms therefore have advantages for road identification.12–15 Before edge detection, the image may be preprocessed to enhance roads and smooth away noise;16,17 for vague images in particular, image enhancement is needed.18–21 Shape features of road objects play important roles in road extraction from high-resolution images.22–24 A road is a rock or rock-like object: its surface is rough and its environment is complicated, although its color is uniform in most cases. In this study, an automatic road-extraction method is proposed that segments the image after enhancement with a Retinex operator, exploits the principle of local gray-value discontinuity, and integrates road shape features. First, a non-uniformly illuminated road image is enhanced by a Retinex algorithm; then the image is processed by an improved Canny edge detector. Subsequently, the lines and curves in the image are thinned and evaluated, small or irregularly shaped objects are smoothed away, endpoints of the remaining objects are detected, and the endpoints are connected to each other according to their distances and curvatures or orientations. Finally, several road shape features are used to judge road segments and regularize roads by combining the edge information with the original image information. In the experiments, vague aerial road images with both low contrast and uneven illumination were chosen, and the results prove that the method in this paper is promising for images with stripe-like and thin roads.

2.

Road-Tracing Method in Aerial Images

In this section, three typical images are presented, and the road characteristics and image properties are described and analyzed. To illustrate the difficulties of image segmentation, the traditional similarity- or discontinuity-based algorithms are applied to the images, together with the recently discussed graph-based algorithm. The test results show that none of the tested image segmentation methods obtains a satisfactory segmentation.

Based on the above image tests and descriptions, a new image segmentation method is proposed. The new method includes image enhancement using the Retinex algorithm, road edge detection based on the improved Canny edge detection operator, and post-processing according to a number of thresholds of road size and shape. Detailed descriptions of the algorithms and functions follow.

2.1.

Properties of Aerial Images and Characteristics of Roads

The quality of aerial images of roads can be affected by factors such as camera vibration, weather (fog or sandstorm), and light variation. These factors often give images non-uniform illumination, which creates difficulties for image segmentation and road tracing; therefore, analysis of road characteristics is a very important first step. In general, the road features in an aerial image can be summarized as follows:

  • (1) A road has a stripe feature; the width of a road does not suddenly vary—the width changes from thin to thick gradually.

  • (2) The surface gray values or colors of a road do not vary much in a certain distance range, but the gray values or colors are very different from those of the neighboring non-road areas such as trees, buildings, and grass.

  • (3) A road has obvious basic edge information, e.g., road edge and road lanes (with white/yellow lines).

  • (4) A road can be divided into different segments thresholded by curvature or length, and each segment can be a line or a curve.

  • (5) A road has a certain length that is not too short, and a road is an elongated object in which the ratio between length and width is very large.

  • (6) After image segmentation, road segments can be linked and composed into a long road or road network.

  • (7) A long road or road network may appear interrupted due to vehicles, lanes, grass or tree shadows in road sides, or marked road signs or labels on the road surface.

  • (8) Because of uneven illumination or object shadows, the colors or gray values on a road surface vary much, which makes image segmentation and road tracing difficult.

  • (9) Bad weather factors make aerial images vague or fuzzy and blur roads.

To use the above advantages and overcome the disadvantages, the new method for road tracing in an aerial image is to enhance the image by the Retinex algorithm, accomplish image segmentation based on the improved Canny edge detection algorithm, and do a number of post-processing operations for road tracing in a binary image.

Figure 1(a) is an aerial image (resolution 0.14 m/pixel) including a main road, a large area of water, several small roads, a building block, several tree and grass blocks, and other areas. The image has low contrast and is very blurry (taken in cloudy weather), with uneven illumination. With the simple Otsu thresholding algorithm,25 the result in Fig. 1(b) is poor: the areas of interest cannot be segmented well. The reason is that the image consists of multiple areas with different colors/grayscales and textures, so roads cannot be separated from the background. Graph-based image segmentation algorithms are suitable for images with low contrast between objects and background.26–29 With a graph-based algorithm, the segmentation result in Fig. 1(c) is much better than that in Fig. 1(b); even though the main road is separated from the background, however, it is split into several regions that are not easy to merge, and the small roads are missing. The merge and split algorithm and the merge algorithm have also been applied to the image. Figure 1(d), obtained with a manually chosen threshold of 30, shows that the merge algorithm can give a good result, but the threshold is difficult to choose automatically.

Fig. 1

Comparison between different image segmentation algorithms for image 1. (a) Original image. (b) Otsu thresholding result. (c) Result of graph-based segmentation algorithm. (d) Result of merge algorithm.

JARS_6_1_063610_f001.png

The image (resolution 0.16 m/pixel) in Fig. 2(a) is another kind of color aerial road image. It was taken in foggy weather, so the image is unclear. Compared to the image in Fig. 1(a), it has multiple roads with different widths, and its background is not complicated, consisting mainly of green blocks and white or yellow blocks. It seems easy to segment, but the test results in Fig. 2(b) through 2(d) show its difficulties. Only part of the main road is extracted by the Otsu algorithm,25 the white or yellow background blocks are extracted as objects too, and most of the roads are missing or partly connected to non-road regions.

Fig. 2

Comparison between different image segmentation algorithms for image 2. (a) Original image. (b) Result of Otsu algorithm. (c) Result of graph algorithm. (d) Result of edge-based algorithm.

JARS_6_1_063610_f002.png

Recently, a number of papers have addressed image segmentation by graph-based algorithms.26–29 In this test, the graph-based algorithm segments most of the roads into different parts [Fig. 2(c)], and some of the roads disappear. With the Canny edge detection-based algorithm, only part of the roads are roughly delineated [Fig. 2(d)].

Figure 3(a) is an even more vague road image (resolution 0.21 m/pixel), in which several roads have different colors and widths and the image quality is worse than in the above two images. The same algorithms have been applied to this image: the Otsu algorithm extracts only three roads, which are disconnected in some parts; the graph-based algorithm detects most of the roads, but a single road comprises several regions, so a further region-merging routine is needed; and the merge algorithm segments the bottom part of the image better than the top. These segmentation results are difficult to use for further road tracing.

Fig. 3

Comparison between different image segmentation algorithms for image 3. (a) Original image. (b) Result of Otsu algorithm. (c) Result of graph algorithm. (d) Result of merge algorithm.

JARS_6_1_063610_f003.png

As the above three images show, these kinds of color aerial road images are unclear and include different textures and colors (grayscales). Similarity-based image segmentation algorithms (e.g., thresholding) cannot be used for such images; discontinuity-based algorithms have trouble extracting the roads completely; and the more advanced graph-based algorithms can extract only parts of the roads from the complicated background, with the extracted roads cut into regions or pieces that are difficult to merge and trace further. Therefore, new algorithms are needed for vague road image segmentation and road tracing, and a new method for road extraction is presented in the following sections.

2.2.

Image Enhancement with Retinex Algorithm

Aerial images suffer significant losses in visual quality (compared to direct observation of the scene by the human eye) when there are spatial or spectral variations in illumination. The visibility of color and detail in shadows is quite poor in recorded images, and a spectral shift in illumination toward either blue or red reduces the overall visibility of scene detail and color. These lighting defects are quite common. Likewise, for scenes with some white surfaces (clouds or snow, for example), the visibility of color and detail in the non-white zones of the image is poor. Therefore, a general-purpose automatic computation is needed to routinely improve such aerial images, and Retinex algorithms are the principal methods for enhancing them.18–21

Retinex theory was introduced by Edwin H. Land in 1971 to explain human color constancy. Many researchers have demonstrated the great dynamic-range compression, increased sharpness and color, and accurate scene rendition produced by the multiscale Retinex with color restoration, specifically for aerial images under smoke/haze conditions. Overall, Retinex performs automatically and consistently better than the other methods: whereas those methods may work well on occasional cooperative images, it is easy to find images where they perform poorly, while Retinex performs well even in cases where the other methods are clearly not appropriate. To apply Retinex to the studied images, a single-scale Retinex (SSR) algorithm is extended into a multiscale Retinex (MSR) algorithm by referring to previous research work.18–21

The simple description of an SSR can be expressed as

(1)

R_i(x,y) = log I_i(x,y) − log[F(x,y) ∗ I_i(x,y)]
and

(2)

R_i(x,y) = log{ I_i(x,y) / [F(x,y) ∗ I_i(x,y)] } = log[ I_i(x,y) / Ī_i(x,y) ],
where I_i(x,y) is the image in the ith spectral band; R_i(x,y) is the Retinex output in the ith spectral band; F(x,y) = K·exp[−(x² + y²)/σ²] is the Gaussian surround function; K is determined by the normalization ∬ F(x,y) dx dy = 1; and σ is the Gaussian surround space constant.

Because of the tradeoff between dynamic-range compression and color rendition, one has to choose a good scale σ for F(x,y) in SSR. If one does not want to sacrifice either dynamic-range compression or color rendition, the MSR, a weighted combination of SSR outputs at different scales, is a good solution:

(3)

R_MSR,i = Σ_{n=1}^{N} ω_n R_{n,i},
where N is the number of scales, R_{n,i} is the ith spectral component of the nth-scale SSR output, and ω_n is the weight of the nth scale. The obvious questions about MSR concern the number of scales needed, the scale values, and the weight values. Experiments have shown that three scales are enough for most images and that the weights can be equal. Generally, fixed scales of 15, 80, and 250 can be used, or scales set as fixed portions of the image size; these choices are more experimental than theoretical, because the true scale of the scene behind an image is unknown. The weights can be adjusted to favor either dynamic-range compression or color rendition. Figures 4(a), 5(a), and 6(a) are MSR results for the aerial road image examples; they show significant dynamic-range compression at the boundaries between lighted and dark parts, with reasonable color rendition over the whole image.
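As a concrete illustration, Eqs. (1) through (3) can be sketched in a few lines of Python. The Gaussian surround is implemented with a standard filter; the equal weights, the fixed scales of 15, 80, and 250, and the small offset that guards against log(0) are implementation choices consistent with the discussion above, not prescriptions from this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(band, sigmas=(15, 80, 250), eps=1.0):
    """Multiscale Retinex (Eq. 3) for a single spectral band.

    sigmas are the fixed surround scales suggested in the text; equal
    weights and the eps offset (to avoid log(0)) are assumptions.
    """
    band = band.astype(np.float64) + eps
    log_band = np.log(band)
    out = np.zeros_like(band)
    for sigma in sigmas:
        # F(x,y) * I(x,y): the Gaussian surround of Eq. (1)
        surround = gaussian_filter(band, sigma)
        # Eq. (3): equally weighted sum of the SSR outputs of Eq. (1)
        out += (log_band - np.log(surround)) / len(sigmas)
    return out

# Usage on a synthetic, unevenly lit gradient image
img = np.outer(np.linspace(10, 200, 64), np.ones(64))
enhanced = multiscale_retinex(img)
print(enhanced.shape)  # (64, 64)
```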

Fig. 4

Example of road tracing procedure on image 1 from Fig. 1. (a) Result of Retinex algorithm on Fig. 1(a). (b) Result of merge algorithm on 4(a). (c) Edge detection by new algorithm. (d) After smoothing and noise filtering. (e) End point detection on 4(d). (f) Final road tracing result.

JARS_6_1_063610_f004.png

Fig. 5

Example of road tracing procedure on image 2 from Fig. 2. (a) Result of Retinex algorithm on Fig. 2(a). (b) After edge detection by new algorithm. (c) After smoothing and filtering away noise. (d) Final road tracing result.

JARS_6_1_063610_f005.png

Fig. 6

Example of road tracing procedure on image 3 from Fig. 3(a). (a) Result of Retinex algorithm on Fig. 3(a). (b) Edge detection by new algorithm. (c) After smoothing and filtering away noise. (d) Final road tracing result.

JARS_6_1_063610_f006.png

2.3.

Image Segmentation on Improved Canny Edge Detector

After the above image enhancement by the Retinex algorithm, the roads in the image appear clearly. The image is then segmented using the improved Canny edge detection algorithm,14,15 in which the double thresholds are obtained by maximizing the cross-entropy between classes, and the object boundaries are finally traced by a rough road tracing procedure.

In this study, Bayesian and cross-entropy theories are used to determine the thresholds on a gradient-magnitude image. An image (grayscale, or one band of a color image) is divided into two classes, objects (o) and background (b), and can be assumed to follow two normal distributions, whose parameters can be obtained from the histogram of the original image:

(4)

p(g|i) = {1 / [√(2π) σ_i(t)]} exp{ −[g − μ_i(t)]² / [2σ_i²(t)] },  i = o, b,
where t is a threshold, μ_i(t) is the class mean, σ_i²(t) is the class variance, and g is a gray value. The variances of the two classes are estimated as follows:

(5)

σ_o²(t) = (1/P_o) Σ_{g=0}^{t} h(g)[g − μ_o(t)]²;  σ_b²(t) = (1/P_b) Σ_{g=t+1}^{L} h(g)[g − μ_b(t)]²,
where L is the gray-level upper bound and h(g) is the gray-level histogram. The a priori probability of the object class is P_o = Σ_{g=0}^{t} h(g), and that of the background class is P_b = Σ_{g=t+1}^{L} h(g); the within-class means are μ_o(t) = (1/P_o) Σ_{g=0}^{t} g·h(g) and μ_b(t) = (1/P_b) Σ_{g=t+1}^{L} g·h(g), respectively. The posterior probability is then obtained from the Bayesian probability formula:

(6)

p(i|g) = P_i p(g|i) / Σ_{j=o,b} P_j p(g|j).
The optimal threshold is obtained via the maximum posterior probability of pixels in the different regions. The inter-class cross-entropy based on the posterior probability of a single pixel is

(7)

D(o:b; s) = (1/3)[1 + p(o|s)] ln{[1 + p(o|s)] / [1 + p(b|s)]} + (1/3)[1 + p(b|s)] ln{[1 + p(b|s)] / [1 + p(o|s)]}.
This measures the difference between the classes. Replacing the pixel grayscale s with the gray value g to simplify the calculation, i.e., replacing the probability distribution with the gray-level histogram, it can be rewritten as

(8)

D(o:b; T) = Σ_{g=0}^{T} [h(g)/P_o] D(o:b; g) + Σ_{g=T+1}^{L} [h(g)/P_b] D(o:b; g),
where L is the gray-level upper bound and T is a gray-value threshold.

To obtain the optimal threshold T* based on maximum cross-entropy between classes, one can carry out a searching operation:

(9)

D(o:b; T*) = max_T D(o:b; T).
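A direct, unoptimized sketch of the threshold search in Eqs. (4) through (9) follows. The histogram normalization, the skipping of degenerate splits, and the small numerical guard in the denominator are implementation details added here, not part of the paper's formulation.

```python
import numpy as np

def optimal_threshold(hist):
    """Search for T* maximizing the inter-class cross-entropy, Eq. (9).

    hist is a normalized gray-level histogram h(g), g = 0..L.  Variable
    names follow Eqs. (4)-(8) in the text.
    """
    L = len(hist) - 1
    g = np.arange(L + 1, dtype=np.float64)
    best_T, best_D = None, -np.inf
    for t in range(1, L):
        Po, Pb = hist[:t + 1].sum(), hist[t + 1:].sum()
        if Po <= 0 or Pb <= 0:
            continue  # degenerate split: one class is empty
        mu_o = (g[:t + 1] * hist[:t + 1]).sum() / Po
        mu_b = (g[t + 1:] * hist[t + 1:]).sum() / Pb
        # Eq. (5): within-class variances
        var_o = (hist[:t + 1] * (g[:t + 1] - mu_o) ** 2).sum() / Po
        var_b = (hist[t + 1:] * (g[t + 1:] - mu_b) ** 2).sum() / Pb
        if var_o <= 0 or var_b <= 0:
            continue
        # Eq. (4): class-conditional normal densities over all gray values
        p_go = np.exp(-(g - mu_o) ** 2 / (2 * var_o)) / np.sqrt(2 * np.pi * var_o)
        p_gb = np.exp(-(g - mu_b) ** 2 / (2 * var_b)) / np.sqrt(2 * np.pi * var_b)
        # Eq. (6): Bayesian posterior probabilities
        denom = Po * p_go + Pb * p_gb
        denom[denom == 0] = 1e-12
        p_o, p_b = Po * p_go / denom, Pb * p_gb / denom
        # Eq. (7): cross-entropy between classes at each gray value
        D_g = ((1 + p_o) * np.log((1 + p_o) / (1 + p_b))
               + (1 + p_b) * np.log((1 + p_b) / (1 + p_o))) / 3.0
        # Eq. (8): histogram-weighted sum over the two regions
        D = ((hist[:t + 1] / Po * D_g[:t + 1]).sum()
             + (hist[t + 1:] / Pb * D_g[t + 1:]).sum())
        if D > best_D:
            best_D, best_T = D, t
    return best_T

# Usage: a bimodal histogram with peaks near g = 40 and g = 200
h = np.zeros(256)
h[30:50] = 1.0
h[190:210] = 1.0
h /= h.sum()
t_star = optimal_threshold(h)
print(t_star)
```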
Based on the above description, the image segmentation procedure using the improved Canny edge detection operator can be briefly illustrated in the following 10 steps. Steps (1) through (8) form the algorithm for one band of a color image or for a grayscale image; steps (9) and (10) extend it to the three color bands:

  • (1) Input an original color aerial road image f(x,y).

  • (2) Enhance the image f(x,y) with the Retinex algorithm; the resulting image is presented as g(x,y).

  • (3) Smooth g(x,y) twice by using an adaptive filter,25 and the smoothed image s(x,y) is obtained.

  • (4) Calculate directional derivatives and amplitudes for all the pixels in the image s(x,y), then the directional derivative matrices Dx(x,y), Dy(x,y) and gradient magnitude matrix (image) M(x,y) are obtained.

  • (5) Apply non-maximum suppression to Dx(x,y), Dy(x,y), and M(x,y).

  • (6) Find out the high threshold Th and low threshold Tl by using the algorithm of maximum cross-entropy between classes and by using Bayesian judgment.

  • (7) Search for edge points on the image M(x,y) by using the thresholds Th and Tl.

  • (8) Output edge image v(x,y).

  • (9) Do the above steps for R, G, and B (three bands in the color image) and output the three bands’ images: vR(x,y), vG(x,y), and vB(x,y), respectively.

  • (10) Produce the binary image B(x,y)=Max[vR(x,y),vG(x,y),vB(x,y)] for the color image.
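The per-band structure of steps (3) through (10) can be sketched as follows. Note that a plain Gaussian filter, percentile-based double thresholds, and a one-pass dilation-based hysteresis stand in for the paper's adaptive filter, cross-entropy thresholds, and full Canny edge linking, so this is only an outline of the data flow, not the paper's exact detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, binary_dilation

def band_edges(band):
    """Steps (3)-(7) in miniature for one band: smooth, take directional
    derivatives, and double-threshold the gradient magnitude.  Percentile
    thresholds are an assumption standing in for Th and Tl."""
    s = gaussian_filter(band.astype(np.float64), sigma=1.0)  # step (3), simplified
    dx = sobel(s, axis=1)            # Dx(x,y)
    dy = sobel(s, axis=0)            # Dy(x,y)
    m = np.hypot(dx, dy)             # gradient magnitude M(x,y)
    t_high = np.percentile(m, 90)    # stand-in for the cross-entropy Th
    t_low = 0.5 * t_high             # stand-in for Tl
    strong = m >= t_high
    weak = m >= t_low
    # simplified hysteresis: keep weak pixels adjacent to a strong pixel
    return weak & binary_dilation(strong, np.ones((3, 3), bool))

def color_edges(rgb):
    """Steps (9)-(10): detect edges per band and combine with a maximum,
    which is a logical OR for binary edge maps."""
    bands = [band_edges(rgb[..., i]) for i in range(3)]
    return np.maximum.reduce(bands)

# Usage on a synthetic color image containing a bright vertical stripe
img = np.zeros((32, 32, 3))
img[:, 12:18, :] = 180.0
edges = color_edges(img)
print(edges.shape)
```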

In aerial road (multiple-object) images or linear-object images, the result at this point is a map of all object boundaries in the segmented image. To trace the boundaries of objects, the gaps between edges need to be linked; this requires extracting information about the attributes of edge endpoints, in particular their orientations and neighborhood relationships.

In the rough road tracing step, after the above edge detection the edges are thinned to a width of one pixel, but some gaps remain in the edges and noise is still present in the image. To smooth the object contours, the Hough line transform is applied to the line objects. For the remaining objects, the edges (object boundaries) must be traced. To do this, the procedure first detects significant endpoints of curves (or lines); it then estimates a direction for each endpoint from the local directions of the edge pixels; finally, it traces boundaries according to the direction of each newly detected pixel (new endpoint) and an intensity cost function. Edge tracing starts from a detected endpoint and examines which neighbor has the highest gray value; when a new pixel is accepted as an edge point, it becomes the new endpoint. If no endpoint or suitable connecting point can be found, the threshold values are changed until a new endpoint is determined.

The tracing procedure continues until an object boundary is fully traced before it starts tracing from another detected endpoint. When no detected endpoint remains for further tracing, the edge tracing procedure stops.
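Endpoint and junction detection on a thinned edge map is commonly done by counting 8-neighbors on the skeleton. The following sketch uses that standard rule, which is consistent with, but not necessarily identical to, the morphological operators used in this study.

```python
import numpy as np
from scipy.ndimage import convolve

def endpoints_and_junctions(skeleton):
    """On a one-pixel-wide skeleton, a pixel with exactly one 8-neighbor
    is an endpoint; a pixel with three or more is a junction candidate."""
    sk = skeleton.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.uint8)
    neighbors = convolve(sk, kernel, mode='constant')  # 8-neighbor count
    endpoints = (sk == 1) & (neighbors == 1)
    junctions = (sk == 1) & (neighbors >= 3)
    return endpoints, junctions

# Usage: a T-shaped skeleton; expect three endpoints and a small cluster
# of junction pixels where the stem meets the bar
sk = np.zeros((7, 7), dtype=bool)
sk[3, 1:6] = True   # horizontal bar
sk[4:6, 3] = True   # vertical stem
ep, jc = endpoints_and_junctions(sk)
print(int(ep.sum()), int(jc.sum()))
```

With 8-connectivity, pixels diagonally adjacent to a junction also exceed the neighbor-count threshold, so junctions are detected as small clusters rather than single pixels; a labeling or thinning pass can reduce each cluster to one point.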

2.4.

Postprocessing of Road Tracing on Road Shape and Size Features

After the above edge detection and rough road tracing, most of the roads are extracted, but some non-road objects remain in the image and some roads are not fully traced. To overcome these two problems, post-processing based on road shape and size information is needed. Methods using shape features to extract roads are proposed in Refs. 22–24; they fit larger road networks in some cases, but in most cases the features they use are too simple to adapt to roads of different shapes, and in certain situations the procedures lose too much road-segment information to fit real road situations. To overcome these problems, a new post-processing method based on road information is presented as follows.

In this study, the following features of an object in a binary image are presented as parameters based on the best-fit rectangle,22 and this feature information is used to trace roads and remove non-road objects.

Object area (A): After edge detection and rough road tracing, most of the endpoints are connected to form relatively large linear objects. A road's area is not too small, so an area threshold can filter out small objects. Define T as the area threshold, and set T by considering the image resolution and the small objects that need to be removed.

Roundness of object (E):

(10)

E = P² / (4πA),
where P is the perimeter of the object. E describes the complexity of an object shape, i.e., the length of the object perimeter per unit area: the larger the value of E, the longer the perimeter per unit area and the more dispersed the object.

Ratio between length and width (R, elongation of an object):

(11)

R = 100·(W/L),
where L and W are the length and width of the smallest external (best-fit) rectangle of the object, as described below.

Fill degree of object (F, irregularity):

(12)

F = 100·(A/A_rectangle) = 100·A/(L·W),
where A_rectangle is the area of the smallest external rectangle of the object.

Grade of lineation of object (V):

(13)

V = 100·(P/A).
From the above equation, the larger V is, the closer the object is to a one-pixel-wide line or curve, i.e., the more linear it is.

Ratio between object length and perimeter (Q):

(14)

Q = 100·(L/P).
If a road is too rugged, Q will be small.
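A minimal sketch of Eqs. (10) through (14) follows. The best-fit rectangle is approximated here by rotating the pixel coordinates into the axis of least second moments and taking the bounding extents, in the spirit of the method described in the text; the boundary-pixel count used for P is a simplification.

```python
import numpy as np

def shape_features(mask):
    """Compute the descriptors of Eqs. (10)-(14) for one binary object.

    The rotated bounding box and the boundary-pixel perimeter are
    approximations, not the paper's exact best-fit rectangle routine.
    """
    ys, xs = np.nonzero(mask)
    A = len(xs)  # object area in pixels
    # rough perimeter: object pixels with at least one 4-neighbor off the object
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    P = (mask & ~interior).sum()
    # orientation of the axis of least second moments
    x, y = xs - xs.mean(), ys - ys.mean()
    theta = 0.5 * np.arctan2(2 * (x * y).sum(), (x * x).sum() - (y * y).sum())
    u = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    v = -x * np.sin(theta) + y * np.cos(theta)
    L = np.ptp(u) + 1                            # best-fit rectangle length
    W = np.ptp(v) + 1                            # best-fit rectangle width
    if W > L:
        L, W = W, L
    return dict(A=A, P=P, L=L, W=W,
                E=P * P / (4 * np.pi * A),      # Eq. (10) roundness
                R=100 * W / L,                  # Eq. (11) elongation
                F=100 * A / (L * W),            # Eq. (12) fill degree
                V=100 * P / A,                  # Eq. (13) lineation
                Q=100 * L / P)                  # Eq. (14) length/perimeter

# Usage: a 40x4 horizontal bar is elongated (small R) and well filled (F near 100)
bar = np.zeros((20, 60), dtype=bool)
bar[8:12, 10:50] = True
f = shape_features(bar)
print(round(f['R']), round(f['F']))
```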

To use the above parameters, reasonable rules are needed. In general, if a measurement method meets four basic conditions (rotational invariance, elongation, angularity, and simplicity of use), it will be stable and repeatable, and the size and shape measurements of an object will be reproducible.

The best-fit rectangle method22 was modified from the Feret algorithm and solves the problem of rotational invariance with the least-second-moment method. Although the best-fit rectangle algorithm is not simple, it meets the conditions of rotational invariance, elongation, and angularity, and compared with other existing object size and shape measurement methods it has obvious advantages. Here, the best-fit rectangle algorithm is used to measure the main size and shape of a road object. The approach, a combination of the Feret method and least-second-moment minimization, requires calculating only three moments about the center of gravity, the maximum and minimum coordinates in a coordinate system oriented along the axis of least second moments, and a simple area ratio. An example with 18 objects of different characteristics is shown in Fig. 7. The obvious road objects are 1, 2, 3, 5, 7, and 18 [as labeled in Fig. 7(a)]; the others are non-road objects. The best-fit rectangles are marked in Fig. 7(b), and the object boundaries are presented in Fig. 7(c), where each object has a best-fit rectangle. The above parameters were calculated for each object, and the results are listed in Table 1.

Fig. 7

Road object samples, including 18 objects. Objects 1, 2, 3, 5, 7, and 18 are roads; see Table 1. (a) Labeled road objects. (b) Best-fit rectangles of objects. (c) Labeled road-contour objects. (d) Junction points (white spots) on object middle lines.

JARS_6_1_063610_f007.png

Table 1

Parameters of 18 objects in Fig. 7(a)–7(b).

Object | A    | P      | L      | W     | R     | E     | V      | F      | Q     | Road?
1      | 4007 | 465.8  | 180.29 | 33.16 | 18.39 | 4.31  | 11.63  | 67.02  | 38.71 | Yes
2      | 1295 | 572.55 | 230.95 | 46.06 | 19.94 | 20.14 | 44.21  | 12.17  | 40.34 | Yes
3      | 622  | 471.83 | 188.53 | 29.78 | 15.80 | 28.48 | 75.86  | 11.08  | 39.96 | Yes
4      | 1842 | 240.08 | 53.53  | 51.8  | 96.77 | 2.49  | 13.03  | 66.43  | 22.30 | No
5      | 2976 | 528.3  | 175.7  | 86.31 | 49.12 | 7.46  | 17.75  | 19.63  | 33.26 | Yes
6      | 986  | 692.71 | 127.57 | 34.2  | 26.81 | 38.73 | 70.26  | 22.60  | 18.42 | No
7      | 695  | 411.91 | 136.67 | 44.79 | 32.77 | 19.43 | 59.27  | 11.35  | 33.18 | Yes
8      | 1403 | 297.01 | 65     | 31.05 | 47.76 | 5     | 21.17  | 69.52  | 21.89 | No
9      | 12   | 18.83  | 9.03   | 2.9   | 32.11 | 2.35  | 156.92 | 45.82  | 47.96 | No
10     | 2    | 2.83   | 2      | 1     | 50.00 | 0.32  | 141.50 | 100.00 | 70.67 | No
11     | 615  | 464.29 | 75.84  | 36.06 | 47.55 | 27.89 | 75.49  | 22.49  | 16.34 | No
12     | 2    | 2.83   | 2      | 1     | 50.00 | 0.32  | 141.50 | 100.00 | 70.67 | No
13     | 5    | 7.83   | 4.16   | 1.93  | 46.49 | 0.98  | 156.60 | 62.28  | 53.13 | No
14     | 7    | 10.83  | 5.47   | 2.01  | 36.79 | 1.33  | 154.71 | 63.67  | 50.51 | No
15     | 1    | 1.41   | 1      | 1     | 100.00| 0.16  | 141.00 | 100.00 | 70.92 | No
16     | 3    | 3.83   | 3      | 1     | 33.33 | 0.39  | 127.67 | 100.00 | 78.33 | No
17     | 4    | 5.83   | 3.21   | 1.95  | 60.64 | 0.68  | 145.75 | 63.90  | 55.06 | No
18     | 889  | 482.05 | 215.65 | 14.9  | 6.91  | 20.8  | 54.23  | 27.67  | 44.74 | Yes

According to Table 1, when A < 100, objects 9, 10, and 12 to 17 are smoothed away; when E < 6 or E > 35, objects 4, 6, and 8 are removed; and when Q < 25, the object is not a road, which removes object 11. The remaining objects, 1 to 3, 5, 7, and 18, are road objects.
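The size and shape rules quoted above can be collected into a small predicate. The threshold values are the ones used for this example image and, as the text notes, they would have to be retuned for images of other resolutions and qualities.

```python
def is_road(params, a_min=100, e_min=6, e_max=35, q_min=25):
    """Apply the Table 1 discussion's rules: discard small objects (A below
    a_min), objects whose roundness E falls outside [e_min, e_max], and
    rugged objects with Q below q_min.  Defaults follow this example image."""
    return (params['A'] >= a_min
            and e_min <= params['E'] <= e_max
            and params['Q'] >= q_min)

# Usage with two rows from Table 1: object 2 (a road) and object 8 (not a road)
obj2 = dict(A=1295, E=20.14, Q=40.34)
obj8 = dict(A=1403, E=5.00, Q=21.89)
print(is_road(obj2), is_road(obj8))  # True False
```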

To add more information for evaluating the detected targets, the object skeletons are obtained first, junction points are then found by a junction-detection algorithm, and the junction points are used to cut the lines or curves apart. Relabeling each piece can then be used to judge whether an object is a road [see Fig. 7(d)]. After this processing, the small lines or curves are removed from Fig. 7(d) to make sure the road detection is correct, and the result supplements the final road tracing.

Using all the above features, the road segments can be extracted from a binary image. To obtain more accurate road segments, appropriate thresholds for the above parameters must be chosen based on the image resolution and object sizes.

3.

Experimental Results

The method, consisting of several sub-algorithms, improves road tracing comprehensively and accurately. The detailed procedure is illustrated in Fig. 8, where the four gray boxes present the main steps.

Fig. 8

Procedure for image segmentation and road tracing in a color aerial road image.

JARS_6_1_063610_f008.png

The sample aerial road images (Figs. 1–3) are of high resolution and include a road network or multiple roads; the resolution of the images is between 0.14 and 0.21 m/pixel; and the images are vague because of bad weather, camera vibration, etc. The program development platform is VC++ 2010. The thresholds of road object sizes and shapes are defined according to a large number of experiments at given levels of resolution and quality; some thresholds fluctuate within a small range because of the different resolutions. The initial images in Figs. 1–3 show large differences in gray/color values between the roads and the background, and the images are of low contrast and high vagueness.

In Fig. 4, after image enhancement by the Retinex algorithm, the vague image in Fig. 1(a) becomes very clear: the main roads, the pool at the bottom, the building block at the lower left, the large tree area at the lower right, and the other tree and grass blocks are obviously shown with real colors.

To avoid texture effects, the enhanced image is then processed by a merge algorithm if needed [Fig. 4(b)]; although textures are thereby flattened and the detailed information of the image is reduced, the main road becomes smoother, which can be better for the next step of edge detection. With the new edge detection algorithm, the edges in the main blocks are detected [Fig. 4(c)], but they include a lot of noise and road gaps. To resolve these problems, the new post-processing method is applied, and the resulting image is shown in Fig. 4(d). To make sure that there are no gaps between road lines or curves, endpoint detection is carried out again, as shown in Fig. 4(e). After a linking procedure, the final road tracing result is presented in Fig. 4(f).

As described in Sec. 2, the image in Fig. 2(a) is another kind of aerial road image. Compared to the image in Fig. 1(a), Fig. 2(a) has a wide-stripe road and about 10 narrow roads; most of the background is green with some small soil-colored blocks; half of the roads are unclear; and the top part of the image is vague.

Figure 5(b) illustrates that the edge image is very clear and all 10 roads are detected, but they are disconnected (including many gaps). Following the procedure illustrated in Figs. 4 and 8, endpoint detection and short-gap connection are performed on the roads, the Hough line transform is applied to repair them, and non-road objects are removed using the rules of road size and shape described in Sec. 2. The resulting image at this step, shown in Fig. 5(c), is much better than that in Fig. 5(b). Finally, the result in Fig. 5(d) is obtained, mainly by using the road shape criteria to smooth out irregularly shaped objects.

The image in Fig. 3(a) has low contrast and is vague. It includes three main wide-stripe roads and ten small or thin roads. Trucks, cars, and other vehicles are moving on the main roads, and many roadside trees might affect the road tracing. The new edge-detection result [Fig. 6(b)] contains much noise of different colors; the main stripe-like roads include middle lines, and their edges are disturbed by the trees and other noise. After noise removal (with small thresholds of road size and shape) and gap closing based on the post-processing rules, the result is presented in Fig. 6(c). Using the Hough line transform, the straight roads are detected; for the densely packed parallel lines, the middle lines are deleted, and the noise or non-road lines are removed. The final result is shown in Fig. 6(d).
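The size- and shape-threshold filtering mentioned above can be illustrated by keeping only connected components whose bounding box is long and thin, since roads are elongated while noise blobs are compact. The thresholds below are illustrative defaults, not the paper's calibrated values:

```python
from collections import deque

def road_like_components(binary, min_area=5, min_elongation=3.0):
    """Keep 8-connected components that are large and elongated (road-like)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    kept = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:                      # BFS flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            yy, xx = y + dy, x + dx
                            if 0 <= yy < h and 0 <= xx < w \
                                    and binary[yy][xx] and not seen[yy][xx]:
                                seen[yy][xx] = True
                                q.append((yy, xx))
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                bh = max(ys) - min(ys) + 1
                bw = max(xs) - min(xs) + 1
                elongation = max(bh, bw) / min(bh, bw)
                if len(comp) >= min_area and elongation >= min_elongation:
                    kept.append(comp)
    return kept
```

In practice such thresholds would be tied to the image resolution and the expected road width, as the paper notes for its own criteria.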

The above three examples show that the studied method works satisfactorily for such aerial road images, but they cannot represent all types of aerial road images. Some notes for discussion follow.

For a vague color image with low contrast and uneven illumination, some preprocessing might be needed before the Retinex-based image enhancement, such as Gaussian or adaptive smoothing for spot or stripe noise suppression; the road edges might also be enhanced by sharpening algorithms such as fractional differentiation.16 For an image with too much noise, which can be evaluated by edge density,15 a region merge after the Retinex step is needed to smooth the image and reduce the number of objects, as shown in Fig. 4(b). As tested on the images in Figs. 1–3, the graph-based image segmentation algorithm26–29 works well for aerial road images in some cases and can supplement road tracing, but it should be improved to suit aerial road images better.
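The exact edge-density measure of Ref. 15 is not given in this excerpt; a minimal proxy is the fraction of edge pixels in the edge map, with a threshold (illustrative, not from the paper) deciding whether the region-merge step of Fig. 4(b) should be applied:

```python
def edge_density(edge_map):
    """Fraction of pixels flagged as edges; a simple proxy for image noisiness."""
    total = sum(len(row) for row in edge_map)
    edges = sum(1 for row in edge_map for v in row if v)
    return edges / total if total else 0.0

def needs_region_merge(edge_map, threshold=0.25):
    """Illustrative rule: merge regions first when the edge map is too dense."""
    return edge_density(edge_map) > threshold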

In the above post-processing procedure and morphological operations, besides thinning, junction detection, and endpoint detection, other operators such as dilation/erosion and the distance map can also be used in binary image processing. For some binary images, holes and single-line edges (mostly the boundaries between different gray or color blocks) should be eliminated.
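The endpoint and junction detection mentioned above can be sketched with a standard neighbor-counting rule on a thinned, one-pixel-wide skeleton: a skeleton pixel with exactly one 8-neighbor is an endpoint, and one with three or more is a junction (diagonal contacts near a junction may also be flagged; the paper's exact rule is not stated here):

```python
def classify_skeleton_points(skel):
    """On a one-pixel-wide skeleton: 1 neighbor -> endpoint, >=3 -> junction."""
    h, w = len(skel), len(skel[0])
    endpoints, junctions = [], []
    for y in range(h):
        for x in range(w):
            if not skel[y][x]:
                continue
            n = sum(skel[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))
                    if (yy, xx) != (y, x))
            if n == 1:
                endpoints.append((y, x))
            elif n >= 3:
                junctions.append((y, x))
    return endpoints, junctions
```

Endpoint pairs found this way are the candidates for the gap-linking step, and junctions mark where road branches meet in the traced network.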

4.

Conclusions

A road extraction method based on the Retinex algorithm, an improved Canny edge detection algorithm, and a morphological post-processing procedure is proposed in this paper for aerial road images of high resolution, low contrast, and uneven illumination. The experiments show that this method obtains better road extraction results on three different types of images than the traditional similarity- and discontinuation-based algorithms, and even the newly discussed graph-based algorithm. As tested, the conclusion is that if a road surface has much disturbing information or large grayscale or color differences, the images can be smoothed and enhanced first; they can then be segmented by the new edge detection algorithm, which automatically thresholds the images into binary edge images. Finally, in the post-processing of the binary images, size and shape thresholds for road objects are used; these thresholds can be determined from the road size and shape at a given image resolution. The next step is to add graph-based algorithms into the road tracing method. Image quality evaluation at the beginning is important for subsequent processing and parameter determination, so a smart image quality evaluation algorithm using neural networks and fuzzy mathematics is worth studying. In addition, better morphological methods for linking road parts into a road network will be studied further. Finally, more performance criteria for road tracing should be considered in a scientific, reasonable, and unified way.

Acknowledgments

This research is financially supported by the National Natural Science Foundation of China (Grant No. 61170147), by the Xi’an City Science & Technology Fund in China (No. CX1252(8)), and by the Special Fund for Basic Scientific Research of Central Colleges (Innovation Team) of the Ministry of Education at Chang’an University in China (Grant No. 2011xx26).

References

1. M. A. Fischler, J. M. Tenenbaum, and H. C. Wolf, “Detection of roads and linear structures in low-resolution aerial imagery using a multisource knowledge integration technique,” Comput. Graph. Image Process. 15(3), 201–223 (1981), http://dx.doi.org/10.1016/0146-664X(81)90056-3.

2. J. Yang and R. Wang, “Scene perception and classified deletion for roads in remote sensing images,” J. Comput. Aided Des. Comput. Graph. 19(3), 334–339 (2007).

3. A. Cord and S. Chambon, “Automatic road defect detection by textural pattern recognition based on AdaBoost,” Comput. Aided Civil Infrastr. Eng. 27(4), 244–259 (2012), http://dx.doi.org/10.1111/mice.2012.27.issue-4.

4. M. Rajeswari et al., “Automatic road extraction based on level set, normalized cuts and mean shift methods,” Int. J. Comput. Sci. Iss. 8(3-2), 250–257 (2011).

5. C. Zhu et al., “Road extraction from high-resolution remotely sensed image based on morphological segmentation,” Acta Geodaet. Cartograph. Sin. 33(4) (2004).

6. M. Wang, J. Luo, and D. Ming, “Transportation centers extraction from high spatial resolution remote sensed imagery,” Comput. Eng. Appl. 23, 20–23 (2004).

7. J. Hu et al., “Road network extraction and intersection detection from aerial images by tracking road footprints,” IEEE Trans. Geosci. Rem. Sens. 45(12), 4144–4157 (2007), http://dx.doi.org/10.1109/TGRS.2007.906107.

8. Q. Luo, Q. Yin, and D. Kuang, “Research on extraction road based on its spectral feature and shape feature,” Rem. Sens. Technol. Appl. 22(2), 339–343 (2007).

9. M. Yan and X. Lei, “Deriving city road from high resolution satellite image Ikonos,” Rem. Sens. Technol. Appl. 19(2), 85–89 (2004).

10. X. Li et al., “Road extraction from high-resolution remote sensing images base on multiple information fusion,” Acta Geodaet. Cartograph. Sin. 37(2), 178–184 (2008).

11. C. Wiedemmann, C. Heipke, and H. Mayer, “Empirical evaluation of automatically extracted road axes,” in Proc. CVPR Workshop Empirical Eval. Methods Comput. Vis., pp. 172–187, John Wiley & Sons, Los Alamitos, California (1998).

12. W. Wang, F. Bergholm, and B. Yang, “Froth delineation based on image classification,” Miner. Eng. 16(11), 1183–1192 (2003), http://dx.doi.org/10.1016/j.mineng.2003.07.014.

13. S. Yi et al., “A shearlet approach to edge analysis and detection,” IEEE Trans. Image Process. 16(11), 929–941 (2009), http://dx.doi.org/10.1109/TIP.2009.2013082.

14. S. Berlemont and J.-C. Olivo-Marin, “Combining local filtering and multiscale analysis for edge, ridge, and curvilinear objects detection,” IEEE Trans. Image Process. 19(1), 74–84 (2010), http://dx.doi.org/10.1109/TIP.2009.2030968.

15. W. Wang, “Fragment size estimation without image segmentation,” Imag. Sci. J. 56(2), 91–96 (2008), http://dx.doi.org/10.1179/174313108X268312.

16. W. X. Wang, W. S. Li, and X. Yu, “Fractional differential algorithms for rock fracture images,” Imag. Sci. J. 60(2), 103–111 (2012), http://dx.doi.org/10.1179/174313112X13197110618234.

17. W. Wang and D. Luo, “Algorithm for image automatic registration on Harris-Laplace feature,” J. Appl. Remote Sens. 3(1), 033554 (2009), http://dx.doi.org/10.1117/1.3256135.

18. J. M. Morel, A. B. Petro, and C. Sbert, “A PDE formalization of Retinex theory,” IEEE Trans. Image Process. 19(11), 2825–2837 (2010), http://dx.doi.org/10.1109/TIP.2010.2049239.

19. A. Chandra, B. Acharya, and M. I. Khan, “Retinex image processing: improving the visual realism of color images,” Int. J. Info. Technol. Knowl. Manag. 4(2), 371–377 (2011).

20. Z. Rahman, D. J. Jobson, and G. A. Woodell, “Investigating the relationship between image enhancement and image compression in the context of the multi-scale Retinex,” J. Vis. Commun. Image Rep. 22(3), 237–250 (2011), http://dx.doi.org/10.1016/j.jvcir.2010.12.006.

21. Ø. Kolas, I. Farup, and A. Rizzi, “Spatio-temporal Retinex-inspired envelope with stochastic sampling: a framework for spatial color algorithms,” J. Imag. Sci. Technol. 55(4), 040503 (2011), http://dx.doi.org/10.2352/J.ImagingSci.Technol.2011.55.4.040503.

22. W. Wang, “Image analysis of particles by modified Ferret method: best-fit rectangle,” Powder Technol. 165(1), 1–10 (2006), http://dx.doi.org/10.1016/j.powtec.2006.03.017.

23. W. X. Wang, “Image analysis of aggregates,” Comput. Geosci. 25(1), 71–81 (1999), http://dx.doi.org/10.1016/S0098-3004(98)00109-5.

24. W. X. Wang, “Binary image segmentation of aggregates based on polygonal approximation and classification of concavities,” Pattern Recogn. 31(10), 1503–1524 (1998), http://dx.doi.org/10.1016/S0031-3203(97)00145-3.

25. W. Wang, “Colony image acquisition system and segmentation algorithms,” Opt. Eng. 50(12), 123001 (2011), http://dx.doi.org/10.1117/1.3662398.

26. V. Gopalakrishnan, Y. Hu, and D. Rajan, “Random walks on graphs for salient object detection in images,” IEEE Trans. Image Process. 19(12), 3232–3242 (2010), http://dx.doi.org/10.1109/TIP.2010.2053940.

27. M. B. Salah, A. Mitiche, and I. B. Ayed, “Multiregion image segmentation by parametric kernel graph cuts,” IEEE Trans. Image Process. 20(2), 545–557 (2011), http://dx.doi.org/10.1109/TIP.2010.2066982.

28. D. Shi, L. Zheng, and J. Liu, “Advanced Hough transform using a multilayer fractional Fourier method,” IEEE Trans. Image Process. 19(6), 1558–1566 (2010), http://dx.doi.org/10.1109/TIP.2010.2042102.

29. B. Peng et al., “Image segmentation by iterated region merging with localized graph cuts,” Pattern Recogn. 44(10–11), 2527–2538 (2011), http://dx.doi.org/10.1016/j.patcog.2011.03.024.

Biography


Ma Ronggui is a professor in optical engineering. He obtained his PhD degree in 2008 at Chang’an University in China. His interests involve optical engineering, information processing for road traffic, image processing and analysis, and computer vision. In the last 20 years, he has developed a number of real-time systems in industry.


Wang Weixing is a professor in information engineering. He obtained his PhD degree in 1997 at the Royal Institute of Technology in Sweden, where he has been a PhD supervisor since 2001. He is now a visiting professor at Chang’an University. His interests involve information engineering, image processing and analysis, pattern recognition, and computer vision.


Liu Sheng is a PhD student at the School of Information Engineering, Chang’an University, China. Her interests involve image processing and analysis, pattern recognition, and computer vision.
