Equipping With Cognition: Interactive Motion Planning Using Metacognitive-Attribution Inspired Reinforcement Learning for Autonomous Vehicles

Cited by: 1
Authors
Hou, Xiaohui [1 ]
Gan, Minggang [1 ,2 ]
Wu, Wei [1 ]
Ji, Yuan [3 ]
Zhao, Shiyue [4 ]
Chen, Jie [5 ]
Affiliations
[1] Beijing Inst Technol, Sch Automat, Natl Key Lab Autonomous Intelligent Unmanned Syst, Beijing 100081, Peoples R China
[2] Minzu Univ China, Sch Informat Engn, Beijing 100086, Peoples R China
[3] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Nanyang 639798, Singapore
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[5] Harbin Inst Technol, Natl Key Lab Autonomous Intelligent Unmanned Syst, Harbin 150006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous vehicles; Planning; Vehicle dynamics; Heuristic algorithms; Adaptation models; Reinforcement learning; Psychology; Vehicles; Electronic mail; Cognitive processes; Interactive motion planning; reinforcement learning; autonomous vehicles; attribution theory; metacognitive theory; PREDICTION;
DOI
10.1109/TITS.2024.3520514
Chinese Library Classification (CLC)
TU [Architectural Science];
Discipline Classification Code
0813;
Abstract
This study introduces the Metacognitive-Attribution Inspired Reinforcement Learning (MAIRL) approach, designed to address unprotected interactive left turns at intersections, one of the most challenging tasks in autonomous driving. By integrating Metacognitive Theory and Attribution Theory from psychology with reinforcement learning (RL), this study enriches the learning mechanisms of autonomous vehicles with human cognitive processes. Specifically, it applies Metacognitive Theory's three core elements (Metacognitive Knowledge, Metacognitive Monitoring, and Metacognitive Reflection) to enhance the control framework's capabilities in skill differentiation, real-time assessment, and adaptive learning for interactive motion planning. Furthermore, inspired by Attribution Theory, it decomposes the reward in the RL algorithm into three components: 1) skill improvement, 2) existing ability, and 3) environmental stochasticity. This framework emulates how humans learn and adjust their behavior, incorporating a deeper cognitive emulation into RL algorithms to foster a unified cognitive structure and control strategy. Comparative tests conducted in intersection scenarios with differing traffic densities demonstrated the superior performance of the proposed controller, which outperformed baseline algorithms in success rate and incurred fewer collision and timeout incidents. This interdisciplinary approach not only enhances the understanding and applicability of RL algorithms but also represents a meaningful step toward modeling advanced human cognitive processes in autonomous driving.
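The attribution-inspired reward decomposition described in the abstract can be illustrated with a short sketch. The following minimal Python example shows one plausible way to split a scalar RL return into the three components named above (skill improvement, existing ability, and environmental stochasticity); the function name, weights, and the baseline/noise estimates are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of an attribution-style reward decomposition for an
# RL agent handling an unprotected left turn. All names, weights, and the
# way the baseline and noise terms are obtained are illustrative
# assumptions, not the formulation used in the MAIRL paper.

def attribution_reward(episode_return: float,
                       baseline_return: float,
                       env_noise_estimate: float,
                       w_skill: float = 1.0,
                       w_ability: float = 0.5,
                       w_env: float = 0.2) -> float:
    """Combine three attribution components into a single training signal."""
    skill_improvement = episode_return - baseline_return   # progress beyond past performance
    existing_ability = baseline_return                      # credit for already-acquired skill
    environmental_stochasticity = -env_noise_estimate       # down-weight outcomes driven by traffic randomness
    return (w_skill * skill_improvement
            + w_ability * existing_ability
            + w_env * environmental_stochasticity)


if __name__ == "__main__":
    # Example: the agent beats its running-average return of 2.0 by 1.5,
    # while roughly 0.4 of the outcome is attributed to traffic randomness.
    r = attribution_reward(episode_return=3.5,
                           baseline_return=2.0,
                           env_noise_estimate=0.4)
    print(f"shaped reward: {r:.2f}")  # shaped reward: 2.42
```

In this sketch, the running baseline stands in for "existing ability" and the noise estimate for "environmental stochasticity"; in the actual framework these quantities would be supplied by the metacognitive monitoring and reflection mechanisms the abstract describes.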
Pages: 4178 - 4191
Page count: 14
Related Papers
50 records in total
  • [1] Survey of Deep Reinforcement Learning for Motion Planning of Autonomous Vehicles
    Aradi, Szilard
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (02) : 740 - 759
  • [2] Hierarchical Reinforcement Learning for Autonomous Decision Making and Motion Planning of Intelligent Vehicles
    Lu, Yang
    Xu, Xin
    Zhang, Xinglong
    Qian, Lilin
    Zhou, Xing
    IEEE ACCESS, 2020, 8 : 209776 - 209789
  • [3] Hierarchical Motion Planning and Tracking for Autonomous Vehicles Using Global Heuristic Based Potential Field and Reinforcement Learning Based Predictive Control
    Du, Guodong
    Zou, Yuan
    Zhang, Xudong
    Li, Zirui
    Liu, Qi
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (08) : 8304 - 8323
  • [4] Predatory-imminence-continuum-inspired graph reinforcement learning for interactive motion planning in dense traffic
    Hou, Xiaohui
    Gan, Minggang
    Wu, Wei
    Zhao, Tiantong
    Chen, Jie
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 277
  • [5] Intelligent Decision Making in Autonomous Vehicles using Cognition Aided Reinforcement Learning
    Rathore, Heena
    Bhadauria, Vikram
2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022: 524 - 529
  • [6] Safe Reinforcement Learning With Stability Guarantee for Motion Planning of Autonomous Vehicles
    Zhang, Lixian
    Zhang, Ruixian
    Wu, Tong
    Weng, Rui
    Han, Minghao
    Zhao, Ye
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (12) : 5435 - 5444
  • [7] Receding-Horizon Reinforcement Learning Approach for Kinodynamic Motion Planning of Autonomous Vehicles
    Zhang, Xinglong
    Jiang, Yan
    Lu, Yang
    Xu, Xin
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2022, 7 (03): 556 - 568
  • [8] Predictive trajectory planning for autonomous vehicles at intersections using reinforcement learning
    Zhang, Ethan
    Zhang, Ruixuan
    Masoud, Neda
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2023, 149
  • [9] Risk assessment and interactive motion planning with visual occlusion using graph attention networks and reinforcement learning
    Hou, Xiaohui
    Gan, Minggang
    Wu, Wei
    Zhao, Tiantong
    Chen, Jie
    ADVANCED ENGINEERING INFORMATICS, 2024, 62
  • [10] Optimal motion planning by reinforcement learning in autonomous mobile vehicles
    Gomez, M.
    Gonzalez, R. V.
    Martinez-Marin, T.
    Meziat, D.
    Sanchez, S.
    ROBOTICA, 2012, 30 : 159 - 170