Contextual Action with Multiple Policies Inverse Reinforcement Learning for Behavior Simulation

Cited: 0
Authors
Alvarez, Nahum [1]
Noda, Itsuki [1]
Affiliation
[1] Natl Inst Adv Ind Sci & Technol, Tokyo, Japan
Source
PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019
Keywords
Inverse Reinforcement Learning; Behavioral Agents; Pedestrian Simulation
DOI
10.5220/0007684908870894
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Machine learning is a discipline with many simulator-driven applications oriented toward learning behavior. However, behavior simulation comes with a number of associated difficulties, such as the lack of a clear reward function, actions that depend on the state of the actor, and the alternation of different policies. We present a method for behavior learning called Contextual Action Multiple Policy Inverse Reinforcement Learning (CAMP-IRL) that tackles those factors. Our method extracts multiple reward functions and generates different behavior profiles from them. We applied our method to a large-scale crowd simulator using intelligent agents to imitate pedestrian behavior, enabling the virtual pedestrians to switch between behaviors depending on their goal and to navigate efficiently across unknown environments.
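The abstract describes recovering several reward functions, one per behavioral context, so that agents can switch behaviors by goal. The paper's actual CAMP-IRL algorithm is not given here; as a rough illustration only, the toy sketch below clusters demonstration trajectories by their goal state and recovers one linear reward per cluster by matching discounted empirical feature expectations (an Abbeel-and-Ng-style surrogate for full IRL). The helper names, the one-hot features, and the 1-D world are all hypothetical, not from the paper.

```python
import numpy as np

def cluster_by_goal(trajectories):
    """Group demonstrations by their final state (a crude stand-in
    for the paper's context detection; an assumption, not CAMP-IRL)."""
    clusters = {}
    for traj in trajectories:
        clusters.setdefault(traj[-1], []).append(traj)
    return clusters

def reward_weights(trajs, features, gamma=0.9):
    """One linear reward per cluster: the normalized discounted
    empirical feature expectation of that cluster's demonstrations."""
    mu = np.zeros(len(next(iter(features.values()))))
    for traj in trajs:
        for t, s in enumerate(traj):
            mu += (gamma ** t) * features[s]
    mu /= len(trajs)
    norm = np.linalg.norm(mu)
    return mu / norm if norm > 0 else mu

# Toy 1-D world: states 0..4 with one-hot features, two behaviors.
features = {s: np.eye(5)[s] for s in range(5)}
demos = [(2, 3, 4), (1, 2, 3, 4),   # "go right" behavior (goal 4)
         (2, 1, 0), (3, 2, 1, 0)]   # "go left" behavior (goal 0)
policies = {goal: reward_weights(trajs, features)
            for goal, trajs in cluster_by_goal(demos).items()}
```

Each entry of `policies` is a separate reward vector; a simulated pedestrian would plan against whichever reward matches its current goal, which is the behavior-switching idea the abstract describes.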
Pages: 887-894
Page count: 8
Related Papers (50 total)
  • [41] Option compatible reward inverse reinforcement learning
    Hwang, Rakhoon
    Lee, Hanjin
    Hwang, Hyung Ju
    PATTERN RECOGNITION LETTERS, 2022, 154 : 83 - 89
  • [42] Inverse reinforcement learning from summary data
    Kangasrääsiö, Antti
    Kaski, Samuel
    MACHINE LEARNING, 2018, 107 (8-10) : 1517 - 1535
  • [43] Neural inverse reinforcement learning in autonomous navigation
    Xia, Chen
    El Kamel, Abdelkader
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2016, 84 : 1 - 14
  • [44] An Ensemble Fuzzy Approach for Inverse Reinforcement Learning
    Pan, Wei
    Qu, Ruopeng
    Hwang, Kao-Shing
    Lin, Hung-Shyuan
    INTERNATIONAL JOURNAL OF FUZZY SYSTEMS, 2019, 21 (01) : 95 - 103
  • [45] Inverse Reinforcement Learning based on Critical State
    Hwang, Kao-Shing
    Cheng, Tien-Yu
    Jiang, Wei-Cheng
    PROCEEDINGS OF THE 2015 CONFERENCE OF THE INTERNATIONAL FUZZY SYSTEMS ASSOCIATION AND THE EUROPEAN SOCIETY FOR FUZZY LOGIC AND TECHNOLOGY, 2015, 89 : 771 - 775
  • [47] Machine Teaching for Human Inverse Reinforcement Learning
    Lee, Michael S.
    Admoni, Henny
    Simmons, Reid
    FRONTIERS IN ROBOTICS AND AI, 2021, 8
  • [48] Deep Inverse Reinforcement Learning by Logistic Regression
    Uchibe, Eiji
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT I, 2016, 9947 : 23 - 31
  • [49] Off-Dynamics Inverse Reinforcement Learning
    Kang, Yachen
    Liu, Jinxin
    Wang, Donglin
    IEEE ACCESS, 2024, 12 : 65117 - 65127
  • [50] Estimating consistent reward of expert in multiple dynamics via linear programming inverse reinforcement learning
    Nakata, Y.
    Arai, S.
    TRANSACTIONS OF THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE, 2019, 34 (06)