Curricular Subgoals for Inverse Reinforcement Learning

Cited by: 0
Authors
Liu, Shunyu [1 ]
Qing, Yunpeng [2 ]
Xu, Shuqi [3 ]
Wu, Hongyan [4 ]
Zhang, Jiangtao [4 ]
Cong, Jingyuan [2 ]
Chen, Tianhao [4 ]
Liu, Yun-Fu
Song, Mingli [1 ,5 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[3] Alibaba Grp, Hangzhou 310027, Peoples R China
[4] Zhejiang Univ, Coll Software Technol, Hangzhou 310027, Peoples R China
[5] Hangzhou High Tech Zone Binjiang, Inst Blockchain & Data Secur, Hangzhou 310051, Peoples R China
Keywords
Curricular subgoals; inverse reinforcement learning; reward function;
DOI
10.1109/TITS.2025.3532519
CLC Number
TU [Architecture Science];
Discipline Code
0813;
Abstract
Inverse Reinforcement Learning (IRL) aims to reconstruct the reward function from expert demonstrations to facilitate policy learning, and has demonstrated remarkable success in imitation learning. To promote expert-like behavior, existing IRL methods mainly focus on learning global reward functions that minimize the trajectory difference between the imitator and the expert. However, these global designs remain limited by redundant noise and error propagation, which lead to unsuitable reward assignment and thus degrade agent capability in complex multi-stage tasks. In this paper, we propose a novel Curricular Subgoal-based Inverse Reinforcement Learning (CSIRL) framework that explicitly decomposes a task into several local subgoals to guide agent imitation. Specifically, CSIRL first uses the decision uncertainty of the trained agent over expert trajectories to dynamically select specific states as subgoals, which directly determine the exploration boundary of each task stage. To further acquire local reward functions for each stage, we customize a meta-imitation objective based on these curricular subgoals to train an intrinsic reward generator. Experiments on the D4RL and autonomous driving benchmarks demonstrate that the proposed method yields results superior to state-of-the-art counterparts, as well as better interpretability. Our code is publicly available at https://github.com/Plankson/CSIRL.
Pages: 3016 - 3027
Number of pages: 12
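
As a rough illustration of the uncertainty-driven subgoal selection described in the abstract (a minimal sketch only, not the authors' implementation; the entropy-based uncertainty measure, the threshold value, and all names such as select_subgoal and policy_probs are assumptions for this example):

```python
import numpy as np

def select_subgoal(expert_states, policy_probs, threshold=0.5):
    """Pick the next curricular subgoal from one expert trajectory.

    Decision uncertainty is approximated here by the entropy of the
    agent's action distribution at each expert state; the first state
    whose entropy exceeds `threshold` is taken as the subgoal that
    bounds the current stage of exploration.
    """
    for state in expert_states:
        probs = np.asarray(policy_probs(state))          # action distribution
        entropy = -np.sum(probs * np.log(probs + 1e-8))  # decision uncertainty
        if entropy > threshold:
            return state  # agent is uncertain here: mark as the next subgoal
    return expert_states[-1]  # confident along the whole trajectory: final goal
```

Under this reading, each selected subgoal delimits one task stage, for which a local intrinsic reward would then be learned via the meta-imitation objective mentioned in the abstract.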