VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition

Cited by: 312
Authors
Lee, Seokju [1 ]
Kim, Junsik [1 ]
Yoon, Jae Shin [1 ]
Shin, Seunghak [1 ]
Bailo, Oleksandr [1 ]
Kim, Namil [1 ]
Lee, Tae-Hee [2 ]
Hong, Hyun Seok [2 ]
Han, Seung-Hoon [2 ]
Kweon, In So [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Robot & Comp Vis Lab, Daejeon, South Korea
[2] Samsung Elect DMC R&D Ctr, Seoul, South Korea
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2017
Keywords
BASE-LINE;
DOI
10.1109/ICCV.2017.215
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a unified, end-to-end trainable multi-task network that jointly handles lane and road marking detection and recognition, guided by a vanishing point, under adverse weather conditions. We tackle rainy and low-illumination conditions, which have not been extensively studied until now due to their clear challenges. For example, images taken on rainy days suffer from low illumination, while wet roads cause light reflection and distort the appearance of lane and road markings. At night, color distortion occurs under limited illumination. As a result, no benchmark dataset exists, and only a few algorithms have been developed that work under poor weather conditions. To address this shortcoming, we build a lane and road marking benchmark consisting of about 20,000 images with 17 lane and road marking classes under four different scenarios: no rain, rain, heavy rain, and night. We train and evaluate several versions of the proposed multi-task network and validate the importance of each task. The resulting approach, VPGNet, can detect and classify lanes and road markings, and predict a vanishing point, with a single forward pass. Experimental results show that our approach achieves high accuracy and robustness under various conditions in real time (20 fps). The benchmark and the VPGNet model will be publicly available(1).
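The abstract describes joint training over several tasks (lane detection, road marking recognition, and vanishing point prediction). The paper itself defines the actual objective; as a minimal illustration of the multi-task training idea only, one common pattern is to optimize a weighted sum of per-task losses. The task names and weights below are hypothetical, not taken from VPGNet:

```python
def combined_loss(task_losses, weights):
    """Weighted sum of per-task losses for joint multi-task training.

    task_losses: dict mapping task name -> scalar loss value
    weights:     dict mapping task name -> scalar weight
    """
    return sum(weights[task] * loss for task, loss in task_losses.items())

# Illustrative values only (not from the paper):
losses = {"lane": 0.8, "marking": 1.2, "vanishing_point": 0.5}
weights = {"lane": 1.0, "marking": 1.0, "vanishing_point": 0.5}
total = combined_loss(losses, weights)  # 0.8 + 1.2 + 0.25 = 2.25
```

Weighting lets a secondary task (here, the vanishing point branch) guide shared features without dominating the primary detection objectives.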
Pages: 1965-1973
Page count: 9