LF-Net: A Learning-Based Frenet Planning Approach for Urban Autonomous Driving

Cited: 3
Authors
Yu, Zihan [1 ]
Zhu, Meixin [2 ]
Chen, Kehua [3 ]
Chu, Xiaowen [4 ]
Wang, Xuesong [5 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Syst Hub, Guangzhou 510230, Peoples R China
[2] Hong Kong Univ Sci & Technol, Guangdong Prov Key Lab Integrated Commun Sensing, Syst Hub, Guangzhou 510230, Peoples R China
[3] Hong Kong Univ Sci & Technol, Div Emerging Interdisciplinary Areas EMIA, Interdisciplinary Programs Off, Hong Kong, Peoples R China
[4] Hong Kong Univ Sci & Technol, Informat Hub, Guangzhou 510230, Peoples R China
[5] Tongji Univ, Shanghai 200070, Peoples R China
Source
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES | 2024, Vol. 9, No. 1
Funding
National Natural Science Foundation of China
Keywords
Planning; Trajectory; Behavioral sciences; Autonomous vehicles; Transformers; Task analysis; Roads; Autonomous driving; deep learning; Frenet planning; motion planning; transformer; urban driving; AVOIDANCE
DOI
10.1109/TIV.2023.3332885
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning-based approaches hold great potential for motion planning in urban autonomous driving. Compared to traditional rule-based methods, they offer greater flexibility in planning safe and human-like trajectories, drawing on human driver demonstration data and diverse traffic scenarios. Frenet planning is widely applied in autonomous driving motion planning because it represents self-driving vehicle information in a simple form. However, selecting proper terminal states and generating human-like trajectories remain challenging. To address this, we propose a learning-based Frenet planning network (LF-Net) that learns a policy to sample and select the most human-like terminal states and then generates safe trajectories. LF-Net comprises 1) a Transformer-based sub-network that encodes environmental and vehicle-interaction features, 2) a classification and scoring sub-network based on cross-attention that captures the relationship between potential terminal states and environmental features to produce the optimal terminal-state set, and 3) an LQR-based trajectory generator that fits a trajectory between the selected terminal state and the initial state. Experimental results on the real-world large-scale Lyft dataset demonstrate that the proposed method plans safe, human-like driving behavior and outperforms baseline methods. The detailed implementation is available at https://github.com/zyu494/LF-Net.
Pages: 1175-1188
Number of pages: 14
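
To make the three-stage pipeline described in the abstract more concrete, the following is a minimal PyTorch sketch: a generic Transformer scene encoder (stage 1), a cross-attention scorer over candidate terminal states (stage 2), and a double-integrator LQR rollout in the Frenet (s, d) frame toward the selected terminal state (stage 3). All class names, dimensions, cost weights, and the candidate-state format are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of an LF-Net-style pipeline; names and dimensions are assumptions.
import torch
import torch.nn as nn


class SceneEncoder(nn.Module):
    """Stage 1 (sketch): Transformer encoder over scene/agent feature tokens."""

    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):                # tokens: (B, N, d_model)
        return self.encoder(tokens)


class TerminalStateScorer(nn.Module):
    """Stage 2 (sketch): cross-attention between candidate terminal states
    (queries) and encoded scene features (keys/values), then a per-candidate
    score used to pick the most human-like terminal state."""

    def __init__(self, d_model=128, n_heads=4, state_dim=6):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)

    def forward(self, candidates, scene):     # candidates: (B, K, state_dim)
        q = self.embed(candidates)
        ctx, _ = self.cross_attn(q, scene, scene)
        return self.score(ctx).squeeze(-1)    # (B, K) candidate logits


def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR gains via backward Riccati recursion."""
    P, gains = Q, []
    for _ in range(horizon):
        K = torch.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains))


def fit_trajectory(x0, x_goal, horizon=50, dt=0.1):
    """Stage 3 (sketch): steer a double-integrator Frenet state
    [s, s_dot, d, d_dot] from x0 toward the selected terminal state with
    error-state LQR and return the resulting trajectory."""
    A = torch.eye(4)
    A[0, 1] = A[2, 3] = dt
    B = torch.zeros(4, 2)
    B[1, 0] = B[3, 1] = dt                    # controls: [s_ddot, d_ddot]
    Q = torch.diag(torch.tensor([10.0, 1.0, 10.0, 1.0]))
    R = 0.1 * torch.eye(2)
    x, traj = x0.clone(), [x0.clone()]
    for K in lqr_gains(A, B, Q, R, horizon):
        u = -K @ (x - x_goal)
        x = A @ x + B @ u
        traj.append(x.clone())
    return torch.stack(traj)                  # (horizon + 1, 4)


if __name__ == "__main__":
    d_model, n_tokens, n_candidates = 128, 32, 64
    scene = SceneEncoder(d_model)(torch.randn(1, n_tokens, d_model))
    candidates = torch.randn(1, n_candidates, 6)   # dummy terminal-state samples
    scores = TerminalStateScorer(d_model)(candidates, scene)
    best = candidates[0, scores[0].argmax()]       # selected terminal state
    x0 = torch.tensor([0.0, 10.0, 0.0, 0.0])       # current Frenet state
    print(fit_trajectory(x0, best[:4]).shape)      # torch.Size([51, 4])
```

In this sketch the scorer is trained as a classifier over sampled terminal states (as the abstract describes), while the LQR stage is a stand-in for the paper's trajectory generator and simply illustrates fitting a dynamically consistent path between the initial and selected terminal states.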