Curricular Subgoals for Inverse Reinforcement Learning

Cited by: 0
Authors
Liu, Shunyu [1 ]
Qing, Yunpeng [2 ]
Xu, Shuqi [3 ]
Wu, Hongyan [4 ]
Zhang, Jiangtao [4 ]
Cong, Jingyuan [2 ]
Chen, Tianhao [4 ]
Liu, Yun-Fu
Song, Mingli [1 ,5 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[3] Alibaba Grp, Hangzhou 310027, Peoples R China
[4] Zhejiang Univ, Coll Software Technol, Hangzhou 310027, Peoples R China
[5] Hangzhou High Tech Zone Binjiang, Inst Blockchain & Data Secur, Hangzhou 310051, Peoples R China
Keywords
Curricular subgoals; inverse reinforcement learning; reward function
DOI
10.1109/TITS.2025.3532519
Chinese Library Classification (CLC)
TU [Building Science]
Discipline code
0813
Abstract
Inverse Reinforcement Learning (IRL) aims to reconstruct the reward function from expert demonstrations to facilitate policy learning, and has achieved remarkable success in imitation learning. To promote expert-like behavior, existing IRL methods mainly focus on learning global reward functions that minimize the trajectory difference between the imitator and the expert. However, these global designs remain limited by redundant noise and error propagation, leading to improper reward assignment and thus degrading agent capability in complex multi-stage tasks. In this paper, we propose a novel Curricular Subgoal-based Inverse Reinforcement Learning (CSIRL) framework that explicitly disentangles one task into several local subgoals to guide agent imitation. Specifically, CSIRL first uses the decision uncertainty of the trained agent over expert trajectories to dynamically select specific states as subgoals, which directly determine the exploration boundaries of the different task stages. To further acquire local reward functions for each stage, we customize a meta-imitation objective based on these curricular subgoals to train an intrinsic reward generator. Experiments on the D4RL and autonomous driving benchmarks demonstrate that the proposed method yields results superior to state-of-the-art counterparts, along with better interpretability. Our code is publicly available at https://github.com/Plankson/CSIRL.
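A minimal sketch of the dynamic subgoal selection described above, assuming the agent's decision uncertainty over each expert state is available as a callable; the function name, `uncertainty` callable, and `threshold` parameter are illustrative assumptions, not the paper's actual interface:

```python
from typing import Callable, Sequence


def select_curricular_subgoal(
    expert_trajectory: Sequence[object],
    uncertainty: Callable[[object], float],
    threshold: float,
) -> int:
    """Return the index of the first expert state whose decision
    uncertainty exceeds `threshold`.

    That state serves as the current-stage subgoal, bounding the
    agent's exploration; once the agent masters earlier stages, its
    uncertainty there drops and the subgoal advances along the
    trajectory. Falls back to the final state (the task goal) when
    the agent is confident along the whole trajectory.
    """
    for i, state in enumerate(expert_trajectory):
        if uncertainty(state) > threshold:
            return i
    return len(expert_trajectory) - 1


# Toy usage: the agent is uncertain from state 2 onward, so state 2
# becomes the current subgoal.
subgoal_idx = select_curricular_subgoal(
    [0, 1, 2, 3],
    lambda s: 0.9 if s >= 2 else 0.1,
    threshold=0.5,
)
```

As the curriculum progresses, a stage-specific local reward function would then be trained against each selected subgoal (via the paper's meta-imitation objective, not shown here).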
Pages: 3016-3027
Page count: 12