Automatic parsing of lane and road boundaries in challenging traffic scenes
Automatic detection of road boundaries in traffic surveillance imagery can greatly aid subsequent traffic analysis tasks, such as estimating vehicle flow, detecting erratic driving, and locating stranded vehicles. This paper develops an online technique for identifying the dominant road boundaries in video sequences captured by traffic cameras under challenging environmental and lighting conditions, e.g., unlit highways recorded at night. The proposed method runs in real time at up to 20 frames/s and generates a ranked list of road regions that delineate road and lane boundaries. Our method begins by segmenting each frame into a set of superpixels. An adaptive sampling step then approximates superpixel contours by a collection of edge segments. Next, we show how online hierarchical clustering can be used to efficiently organize these edges into clusters of collinear segments. Promising clusters are paired with one another, and we present and prove a statistical ranking measure that, combined with road-activity and perspective cues, identifies the dominant road boundaries. We evaluate the proposed approach on two real-world datasets to test its robustness under camera viewpoint changes and extreme environmental and lighting conditions. Results show that our method outperforms two state-of-the-art techniques in precision, recall, and runtime.
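The clustering stage described above organizes edge segments into collinear groups in an online fashion. A minimal sketch of one way to do this is shown below; the greedy assignment policy, the angle/offset thresholds, and all helper names are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def segment_line(seg):
    """Return (theta, rho) for the infinite line through a segment,
    with orientation theta normalized to [0, pi)."""
    (x1, y1), (x2, y2) = seg
    theta = math.atan2(y2 - y1, x2 - x1) % math.pi
    # signed perpendicular offset of the line from the origin
    rho = -x1 * math.sin(theta) + y1 * math.cos(theta)
    return theta, rho

def collinear(a, b, ang_tol=0.1, off_tol=5.0):
    """True if two segments lie on approximately the same line
    (angular difference and perpendicular offset within tolerances)."""
    ta, ra = segment_line(a)
    tb, rb = segment_line(b)
    dt = abs(ta - tb)
    dt = min(dt, math.pi - dt)  # wrap-around angular distance
    return dt < ang_tol and abs(ra - rb) < off_tol

def online_cluster(segments, ang_tol=0.1, off_tol=5.0):
    """Greedy online clustering: assign each incoming segment to the
    first cluster whose representative (first segment) is collinear
    with it, otherwise start a new cluster."""
    clusters = []
    for seg in segments:
        for cl in clusters:
            if collinear(cl[0], seg, ang_tol, off_tol):
                cl.append(seg)
                break
        else:
            clusters.append([seg])
    return clusters
```

For example, two horizontal segments on the line y = 0 would fall into one cluster, while a parallel segment on y = 5 or a vertical segment would each start their own. A real implementation would maintain a merged line estimate per cluster rather than comparing against the first member only.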
© 2015 SPIE and IS&T 1017-9909/2015/$25.00
Mohamed A. Helala, Faisal Z. Qureshi, and Ken Q. Pu "Automatic parsing of lane and road boundaries in challenging traffic scenes," Journal of Electronic Imaging 24(5), 053020 (6 October 2015).

