Sociality Probe: Game-Theoretic Inverse Reinforcement Learning for Modeling and Quantifying Social Patterns in Driving Interaction

Cited by: 1
Authors
Liu, Yiru [1,2]
Zhao, Xiaocong [1,2]
Tian, Ye [1,2]
Sun, Jian [1,2]
Affiliations
[1] Tongji University, Department of Traffic Engineering, Shanghai 201804, People's Republic of China
[2] Tongji University, Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai 201804, People's Republic of China
Keywords
driving behavior; autonomous driving; social interaction; game theory; inverse reinforcement learning; reward function; car-following model; behavior
DOI
10.1109/TITS.2024.3461162
Chinese Library Classification
TU [Building Science]
Subject Classification Code
0813
Abstract
Autonomous vehicles (AVs) are consistently criticized for their inadequacy in interacting harmoniously with human-driven vehicles (HVs), which is primarily attributed to their lack of sociality, a key human trait that balances individual and group rewards. Understanding sociality is essential for smooth AV navigation but remains challenging. To address this, we propose a Game-Theoretic Inverse Reinforcement Learning (GT-IRL) approach to quantify individualized sociality in driving interaction. Our approach identifies the sociality-preference parameters of a pre-designed reward function that integrates the ego agent's rewards with the group rewards shared by all interacting agents. Rather than presuming that an agent is driven solely by maximizing its own rewards, a game-theoretic mechanism is embedded in the IRL structure to capture the fact that human drivers take others' interests into account. We validate our method using human driving data from an unprotected left-turn scenario. The results demonstrate that the proposed GT-IRL outperforms state-of-the-art methods in reproducing the evolution of the left-turn interaction at both the semantic and trajectory levels. Additionally, cross-dataset analysis reveals variations in sociality due to geographical differences (China vs. the U.S.) and the nature of the interacting entities (AV vs. HV or HV vs. HV).
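The record omits the paper's equations, but the abstract describes a reward that blends the ego agent's own reward with a group reward shared by all interacting agents, weighted by a per-driver sociality-preference parameter. A minimal LaTeX sketch of one plausible form, assuming a convex combination with weight \phi_i (the symbols are illustrative, not the authors' notation):

% Illustrative sociality-weighted reward; an assumed form, not the paper's exact one.
% \phi_i \in [0, 1] is driver i's sociality-preference parameter, r_i is the ego
% reward, and the averaged sum is the group reward shared by the N interacting agents.
\[
  R_i(s, a_i, a_{-i};\, \phi_i)
    = (1 - \phi_i)\, r_i(s, a_i)
    + \frac{\phi_i}{N} \sum_{j=1}^{N} r_j(s, a_j)
\]

Here \phi_i = 0 recovers purely self-interested driving and \phi_i = 1 weights only the shared group reward, so the value recovered by fitting equilibrium behavior to observed trajectories locates each driver between the two extremes rather than presuming pure self-interest.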
Pages: 20841-20853
Number of pages: 13