Multiscale corner detection of contour images using wavelet transform
Xinting Gao, Farook Sattar, Venkarteswarlu Ronda, Azhar Quddus
Abstract
A new multiscale corner detection method is proposed based on the dyadic wavelet transform (WT) of the orientation function of a contour image. As the decomposition of the dyadic WT is complete and its scales are sparse, all the scales are defined as natural scales for corner detection. The points that are wavelet transform modulus maxima (WTMM) at different scales are taken as corner candidates. For each corner candidate, the sum of the corresponding normalized WTMM at all the natural scales is used as a significance measure of "cornerness." The utilization of the complete information makes the performance of the proposed detector independent of the type of input image. The decomposition scales of the WT are restricted by the contour length, which makes the algorithm adaptable for both long and short contours. Both subjective and objective evaluations demonstrate the better performance of the proposed corner detector compared to conventional methods.

1. Introduction

Since corner detection has many applications in shape representation and image analysis, a number of methods have been proposed to detect corners of different sizes and extents in contour images. In general, they can be categorized into two groups: the support-region-based methods1 and the multiscale-based methods.2, 3 For the support-region-based methods, for example, Teh-Chin’s algorithm,1 the support region of each point (its natural scale) is determined individually for that point. As a result, the natural scale is optimal for the corresponding point. However, the determination of the support region is based on the raw data, which include quantization errors and possible noise. Thus, these methods are not robust. For the multiscale-based methods, there exist scale-space-based analysis and WT-based analysis. Both of them provide a simple hierarchical framework for corner detection and are robust due to their inherent smoothing property. But there remain some problems with the existing multiscale-based methods. The scale-space-based methods are computationally inefficient because they require computation at continuous scales. Furthermore, they either utilize information at one or several heuristically selected natural scales4 or utilize only location information in the transformed domain.2 Without a priori information, no particular scales should be presupposed.5 Thus, the performance of the existing scale-space-based methods is not satisfactory for different types of test images. Recently, Quddus and Gabbouj3 proposed a fast and robust wavelet-based corner detection technique. The technique requires computing the singular value decomposition (SVD) of the dyadic WT to estimate the natural scales of the contour. However, in a few cases the stop criterion for the selection of natural scales does not work. Moreover, computing the SVD introduces some computational overhead.

To overcome the above problems, we propose a new contour corner detection method using the dyadic WT. Although the decomposition scales of the dyadic WT are sparse, the decomposition is complete. All the possible decomposition scales of the dyadic WT are considered in this paper as natural scales for corner detection, because the natural scales are the ones containing most or all of the important information. For each candidate, the sum of the normalized WTMM from different scales is taken as the significance measure to differentiate the corners from the noise. The inherent smoothing and localization properties of the WT make this method effective and accurate. In addition, the technique is fast due to the fast implementation of the dyadic WT.

The paper is organized as follows. In Sec. 2, the proposed algorithm is presented. Section 3 shows simulation results and performance evaluation. The conclusion is given in Sec. 4.

2. Proposed Corner Detection Algorithm Using Dyadic WT

Corners are defined as high-curvature points on a contour. To estimate the curvature, we select the dyadic WT with the quadratic spline mother wavelet6 to decompose the orientation function because it satisfies the following necessary conditions and has several desirable properties. First, the dyadic WT is shift invariant, which is a necessary condition for feature extraction. Second, the quadratic spline mother wavelet has one vanishing moment, so the WT acts as a first-order differential operator on a smoothed signal. Accordingly, the curvature is approximated when the transform is applied to the orientation function. Third, the dyadic WT is complete. Thus, it provides the decomposition at a sparse set of appropriate scales, which simplifies the subsequent analysis and computation. Last, it has a fast implementation algorithm, which makes the proposed algorithm computationally efficient.

In the proposed algorithm, the preprocessing steps described in Ref. 3 are adopted to obtain the orientation function of the contour image. Then the dyadic WT is applied to the orientation function to estimate the curvature at all possible scales, because no scale should be preferred without a priori information.
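The preprocessing of Ref. 3 is not reproduced here. As a rough, hypothetical sketch (the function name and the unwrapping step are our assumptions, not the authors' implementation), an orientation profile could be derived from an 8-connected Freeman chain code as follows:

```python
import numpy as np

# Angles (radians) of the eight Freeman chain-code directions 0..7.
CHAIN_ANGLES = np.arange(8) * np.pi / 4.0

def orientation_profile(chain_code):
    """Map an 8-connected chain code to an unwrapped orientation
    function along the contour (illustrative stand-in for the
    preprocessing of Ref. 3)."""
    angles = CHAIN_ANGLES[np.asarray(chain_code) % 8]
    # Unwrap so that crossings of the 2*pi boundary do not appear
    # as spurious discontinuities (false "corners").
    return np.unwrap(angles)
```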

The range of the decomposition scales $2^j$ of the WT is determined by the inherent property of the dyadic WT and the length of the signal, $N$,6

Eq. 1

$$1 < 2^j \leq N, \qquad j = 1, 2, \ldots, J,$$

where $J$ is the maximum level of the WT. According to Eq. 1, the decomposition scales of the WT are restricted by the signal length $N$, which makes the algorithm adaptable for both long and short contours.
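A direct consequence of Eq. 1 is that the number of natural scales $J$ follows from the contour length alone; a minimal sketch (the function and variable names are ours) could read:

```python
import numpy as np

def max_level(n):
    """Largest J such that 2**J <= N, per Eq. (1); assumes N > 2."""
    return int(np.floor(np.log2(n)))

# Example: the 45-point "figure-8" contour of Sec. 3 gives J = 5,
# i.e., natural scales 2, 4, 8, 16, and 32.
```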

Subsequently, all the decomposition scales are taken as natural scales for the corner detection. The WTMM are extracted, and the points with WTMM are taken as corner candidates. A modulus maximum is any point whose absolute wavelet coefficient is greater than that of one of its neighbors and not less than that of the other neighbor,6 i.e.,

Eq. 2

$$\bigl|Wf(u_0,s)\bigr| > \bigl|Wf(u_1,s)\bigr|, \quad u_1 \ \text{one neighbor of}\ u_0,$$
$$\bigl|Wf(u_0,s)\bigr| \geq \bigl|Wf(u_2,s)\bigr|, \quad u_2 \ \text{the other neighbor of}\ u_0,$$

where $|Wf|$ represents the modulus of $Wf$.
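A minimal sketch of the modulus-maxima test of Eq. 2, assuming the wavelet coefficients at one scale are held in a 1-D array (the function name is ours):

```python
import numpy as np

def modulus_maxima(w):
    """Return the indices u0 whose modulus |W f(u0, s)| is strictly
    larger than one neighbor and not smaller than the other (Eq. 2)."""
    a = np.abs(np.asarray(w, dtype=float))
    left, mid, right = a[:-2], a[1:-1], a[2:]
    is_max = ((mid > left) & (mid >= right)) | ((mid > right) & (mid >= left))
    return np.where(is_max)[0] + 1  # +1 maps interior positions back to array indices
```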

At a certain scale, the candidates with acute angles produce large WTMM, while the candidates with obtuse angles produce small WTMM. To make each scale contribute equally to the final significance measure, the WTMM values are normalized by the respective maximum value at each scale.

Then, the values at different scales corresponding to a particular corner candidate are constructed as a WTMM sequence along the scales.

To detect corners of different subtended angles and sizes, the normalized values of each candidate at all the natural scales are used to compute the measurement as

Eq. 3

$$\hat{W}f = \frac{1}{J}\sum_{j=1}^{J} W_{2^j} f,$$

where $\hat{W}f$ denotes the average and $W_{2^j} f$ denotes the normalized wavelet coefficient of the discrete orientation profile $f$ at scale $2^j$ for each corner candidate. This average includes the information considered at all the natural scales. It also suppresses the noise while strengthening the corner points.
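Read this way, Eq. 3 amounts to a per-scale normalization followed by an average over the $J$ natural scales; a minimal sketch, with the array layout and names being our assumptions:

```python
import numpy as np

def significance_measure(wtmm):
    """wtmm: (J, N) array holding the WTMM values of each contour point
    at each natural scale (zero where the point is not a maximum).
    Each scale is normalized by its own maximum, then averaged (Eq. 3)."""
    wtmm = np.abs(np.asarray(wtmm, dtype=float))
    scale_max = wtmm.max(axis=1, keepdims=True)
    scale_max[scale_max == 0] = 1.0      # guard against scales with no maxima
    return (wtmm / scale_max).mean(axis=0)
```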

Finally, the corners are detected by applying a predefined threshold to the significance measure.

The steps of the proposed algorithm are presented as follows; a compact illustrative sketch is given after the list.

  • 1 Preprocessing steps that have been described in Ref. 3:

    • (a) Track the boundary and obtain the chain code of it;

    • (b) Compute the orientation profile.

  • 2 Determine the natural scales according to the length of the contour image.

  • 3 Compute the WT of the orientation profile at the natural scales.

  • 4 Detect the corner candidates by selecting the WTMM. Normalize the values at each scale.

  • 5 Construct the WTMM sequence for each corner candidate along the scales.

  • 6 Compute the average of the normalized values of WTMM from all natural scales as the significance measure.

  • 7 Normalize the significance measure by its maximum value so that all the measure values lie in the range [0, 1]. Then suppress the false corner points by setting a threshold.
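The sketch below strings steps 1-7 together. It is only illustrative: a box-smoothed centered difference at dyadic widths stands in for the quadratic spline dyadic WT of Ref. 6, and it reuses the hypothetical helpers orientation_profile, modulus_maxima, and significance_measure sketched above.

```python
import numpy as np

def detect_corners(chain_code, threshold=0.08):
    """Illustrative pipeline for steps 1-7 (not the authors' code).
    A moving-average smoothing followed by a first difference is used
    as a crude proxy for the quadratic spline dyadic WT."""
    f = orientation_profile(chain_code)              # step 1: orientation profile
    n = len(f)
    levels = int(np.floor(np.log2(n)))               # step 2: natural scales, Eq. (1)
    wtmm = np.zeros((levels, n))
    for j in range(1, levels + 1):                   # steps 3-4: transform and WTMM
        width = 2 ** j
        smoothed = np.convolve(f, np.ones(width) / width, mode="same")
        w = np.abs(np.gradient(smoothed))            # curvature proxy at scale 2**j
        for idx in modulus_maxima(w):
            wtmm[j - 1, idx] = w[idx]
    measure = significance_measure(wtmm)             # steps 5-6: significance, Eq. (3)
    if measure.max() > 0:
        measure = measure / measure.max()            # step 7: normalize to [0, 1]
    return np.where(measure > threshold)[0]          # indices of detected corners
```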

3. Simulation Results and Performance Evaluation

3.1. Subjective Evaluation

The simulation results of the proposed method are shown in Figs. 1 and 2. The fixed threshold, which is set empirically, is 0.08 for all the simulations shown in this paper. The lengths of the curves in Figs. 1(a), 1(b), 1(c), and 1(d) are 45, 60, 102, and 120, respectively. The lengths of the curves in Figs. 2(a), 2(b), 2(c), and 2(d) are 563, 854, 872, and 1104, respectively.

Fig. 1

Results of the proposed method. The corners are indicated by squares and connected into polygons. (a) The “figure-8” curve, (b) the “chromosome” curve, (c) the “semicir” curve, (d) the “leaf” curve.


Fig. 2

Results of the proposed method. The corners are indicated by asterisks.


The test contours in Fig. 1 are quite short, and consequently, the proposed method detects fewer corners there than for the longer contours in Fig. 2. From the simulation results, we see that the proposed method provides satisfactory performance for both long and short contours, which makes it more suitable for practical applications.

3.2. Objective Comparison Using Rosin’s Method

The quantitative measurements using Rosin’s evaluation method7 are shown in Table 1 for the test images in Fig. 1. Merit2 and Merit∞ measure the compression ratio together with the approximation error, for which the integral square error E2 and the maximum deviation error E∞ are adopted, respectively. For both Merit2 and Merit∞, the larger the value, the better the performance.
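For completeness, and as we understand Rosin’s definitions (Ref. 7 should be consulted for the authoritative statement), the figure of merit combines a fidelity term and an efficiency term:

$$\mathrm{Fidelity} = \frac{E_{\mathrm{opt}}}{E_{\mathrm{approx}}}\times 100, \qquad \mathrm{Efficiency} = \frac{M_{\mathrm{opt}}}{M_{\mathrm{approx}}}\times 100, \qquad \mathrm{Merit} = \sqrt{\mathrm{Fidelity}\times\mathrm{Efficiency}},$$

where E_approx and M_approx are the error and the number of detected dominant points of the tested algorithm, and the "opt" quantities refer to the optimal polygonal approximation (the same number of points for Fidelity, the same error for Efficiency). Merit2 and Merit∞ result from taking the error to be E2 and E∞, respectively.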

Table 1

Quantitative results of the test curves in Fig. 1 using E2 and E∞, respectively. Dashes indicate that Quddus-Gabbouj’s method fails on the “figure-8” curve (see text).

Test curve      Algorithm                             Merit2    Merit∞
“figure-8”      Proposed method                       86.84     88.55
                Teh-Chin’s method (Ref. 1)            46.0      49.8
                Rattarangsi-Chin’s method (Ref. 2)    74.0      67.6
                Quddus-Gabbouj’s method (Ref. 3)      –         –
“chromosome”    Proposed method                       100       96.16
                Teh-Chin’s method (Ref. 1)            61.8      84.2
                Rattarangsi-Chin’s method (Ref. 2)    58.5      71.9
                Quddus-Gabbouj’s method (Ref. 3)      87.6      92.6
“semicir”       Proposed method                       35.4      63.9
                Teh-Chin’s method (Ref. 1)            44.9      60.0
                Rattarangsi-Chin’s method (Ref. 2)    57.7      63.7
                Quddus-Gabbouj’s method (Ref. 3)      33.4      56.7
“leaf”          Proposed method                       86.27     93.82
                Teh-Chin’s method (Ref. 1)            50.1      59.3
                Rattarangsi-Chin’s method (Ref. 2)    68.9      77.8
                Quddus-Gabbouj’s method (Ref. 3)      77.3      77.8

From the results, we see that the performance of all the selected methods is good. Among them, the proposed method provides the best performance in general. Teh-Chin’s method is based on support-region determination. This method is classical but less effective than the other three methods, as shown in Table 1. Rattarangsi-Chin’s method2 is based on scale-space analysis. It shows a better performance measure than the proposed method for the “semicir” curve using E2, but the proposed method obtains better results for all the other measures. Quddus-Gabbouj’s method3 is based on the WT and the SVD. However, in a few cases its stop criterion for the selection of natural scales does not work, e.g., for the “figure-8” curve. Moreover, computing the SVD introduces some computational overhead. The length of the “figure-8” curve is 45 pixels; it is relatively short, which is a possible reason why Quddus-Gabbouj’s method fails.

4. Conclusions

In this paper, we propose a new corner detector for contour images based on the dyadic WT. The maximum decomposition level of the dyadic WT is determined by the contour length, which makes the algorithm suitable for both long and short contours. Unlike the existing methods, the information at all dyadic scales has been used for detection, which makes the proposed method independent of the type of contour image. The method is also computationally efficient due to the fast implementation of the dyadic WT. The objective evaluation reveals the improved performance of the proposed method compared to the classical methods.

Acknowledgment

The authors are very thankful to Dr. Paul L. Rosin for providing the source code of his evaluation method and useful discussions. The authors would like to thank the reviewers and editors for their valuable suggestions to improve the paper.

References

1. C.-H. Teh and R. T. Chin, “On the detection of dominant points on digital curves,” IEEE Trans. Pattern Anal. Mach. Intell., 11, 859–872 (1989). https://doi.org/10.1109/34.31447

2. A. Rattarangsi and R. T. Chin, “Scale-based detection of corners of planar curves,” IEEE Trans. Pattern Anal. Mach. Intell., 14, 430–449 (1992). https://doi.org/10.1109/34.126805

3. A. Quddus and M. Gabbouj, “Wavelet-based corner detection technique using optimal scale,” Pattern Recogn. Lett., 23, 215–220 (2002).

4. F. Mokhtarian and R. Suomela, “Robust corner detection through curvature scale space,” IEEE Trans. Pattern Anal. Mach. Intell., 20, 1376–1381 (1998). https://doi.org/10.1109/34.735812

5. T. Lindeberg, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, Norwell, MA (1994).

6. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, New York (1999).

7. P. L. Rosin, “Techniques for assessing polygonal approximations of curves,” IEEE Trans. Pattern Anal. Mach. Intell., 19, 659–666 (1997). https://doi.org/10.1109/34.601253
© 2005 Society of Photo-Optical Instrumentation Engineers (SPIE)
Xinting Gao, Farook Sattar, Venkarteswarlu Ronda, and Azhar Quddus "Multiscale corner detection of contour images using wavelet transform," Journal of Electronic Imaging 14(4), 040502 (1 October 2005). https://doi.org/10.1117/1.2076968