RIANet++: Road Graph and Image Attention Networks for Robust Urban Autonomous Driving Under Road Changes

Cited by: 1
Authors
Ha, Taeoh [1,2]
Oh, Jeongwoo [1,2]
Lee, Gunmin [1,2]
Heo, Jaeseok [1,2]
Kim, Do Hyung [3]
Park, Byungkyu [3]
Lee, Chang-Gun [3]
Oh, Songhwai [1,2]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul 08826, South Korea
[2] Seoul Natl Univ, ASRI, Seoul 08826, South Korea
[3] Seoul Natl Univ, Dept Comp Sci & Engn, Seoul 08826, South Korea
Keywords
Autonomous vehicle navigation; imitation learning; sensor fusion; vision-based navigation;
DOI
10.1109/LRA.2023.3320491
CLC Number
TP24 [Robotics];
Discipline Codes
080202; 1405;
Abstract
The structure of roads plays an important role in designing autonomous driving algorithms. We propose a novel road-graph-based driving framework, named RIANet++. The proposed framework considers the road structural scene context by incorporating both graphical features of the road and visual information through an attention mechanism. The framework also addresses the performance degradation caused by road changes and the resulting unreliability of road graph data. For this purpose, we propose a road change detection module that filters out unreliable road graph data by evaluating the similarity between the camera image and the query road graph. In this letter, we present two types of detection methods: semantic matching and graph matching. The semantic matching (resp., graph matching) method computes the similarity score by transforming the road graph data (resp., camera data) into the semantic image domain (resp., road graph domain). In experiments, we test the proposed method in two driving environments: the CARLA simulator and the FMTC real-world environment. The results demonstrate that the proposed driving framework outperforms other baselines and operates robustly under road changes.
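The road change detection idea described in the abstract, scoring agreement between camera-derived semantics and the road graph rendered into the same semantic image domain, then discarding the graph when the score is low, can be sketched as follows. This is a minimal illustration only: the function names, the use of intersection-over-union as the similarity measure, and the threshold value are assumptions for the sketch, not the paper's actual formulation.

```python
import numpy as np

def semantic_matching_score(camera_mask: np.ndarray, graph_mask: np.ndarray) -> float:
    """Similarity between a camera-derived semantic road mask and a mask
    rendered from the query road graph. Both inputs are boolean H x W
    arrays marking drivable-road pixels; the score here is simple
    intersection-over-union (an illustrative choice)."""
    inter = np.logical_and(camera_mask, graph_mask).sum()
    union = np.logical_or(camera_mask, graph_mask).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def road_graph_is_reliable(camera_mask: np.ndarray, graph_mask: np.ndarray,
                           threshold: float = 0.5) -> bool:
    """Keep the road graph only if it still agrees with the observed scene;
    a low score suggests the road has changed and the graph is unreliable."""
    return semantic_matching_score(camera_mask, graph_mask) >= threshold
```

A graph whose rendered mask overlaps the camera mask well passes the filter, while a mask rendered from an outdated graph (e.g. a closed or rerouted lane) scores low and is filtered out before it can mislead the driving policy.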
Pages: 7815-7822
Page count: 8