Equipping With Cognition: Interactive Motion Planning Using Metacognitive-Attribution Inspired Reinforcement Learning for Autonomous Vehicles

Cited by: 1
Authors
Hou, Xiaohui [1 ]
Gan, Minggang [1 ,2 ]
Wu, Wei [1 ]
Ji, Yuan [3 ]
Zhao, Shiyue [4 ]
Chen, Jie [5 ]
Affiliations
[1] Beijing Inst Technol, Sch Automat, Natl Key Lab Autonomous Intelligent Unmanned Syst, Beijing 100081, Peoples R China
[2] Minzu Univ China, Sch Informat Engn, Beijing 100086, Peoples R China
[3] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Nanyang 639798, Singapore
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[5] Harbin Inst Technol, Natl Key Lab Autonomous Intelligent Unmanned Syst, Harbin 150006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous vehicles; Planning; Vehicle dynamics; Heuristic algorithms; Adaptation models; Reinforcement learning; Psychology; Vehicles; Electronic mail; Cognitive processes; Interactive motion planning; reinforcement learning; autonomous vehicles; attribution theory; metacognitive theory; PREDICTION;
DOI
10.1109/TITS.2024.3520514
Chinese Library Classification
TU [Architecture Science];
Discipline code
0813 ;
Abstract
This study introduces the Metacognitive-Attribution Inspired Reinforcement Learning (MAIRL) approach, designed to address unprotected interactive left turns at intersections, one of the most challenging tasks in autonomous driving. By integrating Metacognitive Theory and Attribution Theory from psychology with reinforcement learning, this study enriches the learning mechanisms of autonomous vehicles with human cognitive processes. Specifically, it applies Metacognitive Theory's three core elements (Metacognitive Knowledge, Metacognitive Monitoring, and Metacognitive Reflection) to enhance the control framework's capabilities in skill differentiation, real-time assessment, and adaptive learning for interactive motion planning. Furthermore, inspired by Attribution Theory, it decomposes the reward system in RL algorithms into three components: 1) skill improvement, 2) existing ability, and 3) environmental stochasticity. This framework emulates human learning and behavior adjustment, incorporating a deeper cognitive emulation into reinforcement learning algorithms to foster a unified cognitive structure and control strategy. Comparative tests conducted in various intersection scenarios with differing traffic densities demonstrated the superior performance of the proposed controller, which outperformed baseline algorithms in success rate and incurred fewer collision and timeout incidents. This interdisciplinary approach not only enhances the understanding and applicability of RL algorithms but also represents a meaningful step towards modeling advanced human cognitive processes in the field of autonomous driving.
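The abstract does not give the actual reward formula, so as a purely illustrative sketch, the attribution-inspired three-way decomposition could be expressed as a weighted sum of the three named components. All class names, parameter names, and weight values below are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass


@dataclass
class AttributionReward:
    """Toy decomposition of an RL reward into the three attribution-inspired
    components named in the abstract: skill improvement, existing ability,
    and environmental stochasticity. Weights are illustrative only."""
    w_skill: float = 1.0    # weight on skill improvement
    w_ability: float = 1.0  # weight on existing ability
    w_env: float = 1.0      # weight on environmental stochasticity

    def __call__(self, skill_improvement: float,
                 existing_ability: float,
                 env_stochasticity: float) -> float:
        # The total reward attributes the outcome to the three sources,
        # so the agent can learn how much of a result was due to learning
        # progress, prior skill, or chance in the environment.
        return (self.w_skill * skill_improvement
                + self.w_ability * existing_ability
                + self.w_env * env_stochasticity)


reward_fn = AttributionReward(w_skill=0.5, w_ability=0.3, w_env=0.2)
r = reward_fn(skill_improvement=1.0, existing_ability=0.5, env_stochasticity=-0.2)
# r = 0.5*1.0 + 0.3*0.5 + 0.2*(-0.2) = 0.61
```

A decomposition like this is one plausible way to credit outcomes to distinct causes; the paper's actual formulation of each component is not reproduced here.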
Pages: 4178 - 4191
Page count: 14
Related Papers
50 records
  • [41] An End-to-End Deep Reinforcement Learning Method for UAV Autonomous Motion Planning
    Cui, Yangjie
    Dong, Xin
    Li, Daochun
    Tu, Zhan
    2022 7TH INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION ENGINEERING, ICRAE, 2022, : 100 - 104
  • [42] Route Planning for Autonomous Mobile Robots Using a Reinforcement Learning Algorithm
    Talaat, Fatma M. M.
    Ibrahim, Abdelhameed
    El-Kenawy, El-Sayed M.
    Abdelhamid, Abdelaziz M. A.
    Alhussan, Amel Ali
    Khafaga, Doaa Sami
    Salem, Dina Ahmed
    ACTUATORS, 2023, 12 (01)
  • [43] A behavior-based scheme using reinforcement learning for autonomous underwater vehicles
    Carreras, M
    Yuh, J
    Batlle, J
    Ridao, P
    IEEE JOURNAL OF OCEANIC ENGINEERING, 2005, 30 (02) : 416 - 427
  • [44] Safe Multiagent Motion Planning Under Uncertainty for Drones Using Filtered Reinforcement Learning
    Safaoui, Sleiman
    Vinod, Abraham P.
    Chakrabarty, Ankush
    Quirynen, Rien
    Yoshikawa, Nobuyuki
    Di Cairano, Stefano
    IEEE TRANSACTIONS ON ROBOTICS, 2024, 40 : 2529 - 2542
  • [45] Object Detection with Deep Neural Networks for Reinforcement Learning in the Task of Autonomous Vehicles Path Planning at the Intersection
    Yudin, D. A.
    Skrynnik, A.
    Krishtopik, A.
    Belkin, I.
    Panov, A. I.
    OPTICAL MEMORY AND NEURAL NETWORKS, 2019, 28 (04) : 283 - 295
  • [46] Developing inverse motion planning technique for autonomous vehicles using integral nonlinear constraints
    Diachuk, Maksym
    Easa, Said M.
    FUNDAMENTAL RESEARCH, 2024, 4 (05) : 1047 - 1062
  • [47] Path Planning Based on Deep Reinforcement Learning for Autonomous Underwater Vehicles Under Ocean Current Disturbance
    Chu, Zhenzhong
    Wang, Fulun
    Lei, Tingjun
    Luo, Chaomin
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (01) : 108 - 120
  • [49] Optimal Motion Planning in Unknown Workspaces Using Integral Reinforcement Learning
    Rousseas, Panagiotis
    Bechlioulis, Charalampos P.
    Kyriakopoulos, Kostas J.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03) : 6926 - 6933
  • [50] Merging planning in dense traffic scenarios using interactive safe reinforcement learning
    Hou, Xiaohui
    Gan, Minggang
    Wu, Wei
    Wang, Chenyu
    Ji, Yuan
    Zhao, Shiyue
    KNOWLEDGE-BASED SYSTEMS, 2024, 290