Inverse reinforcement learning through logic constraint inference

Cited: 4
Authors
Baert, Mattijs [1 ]
Leroux, Sam [1 ]
Simoens, Pieter [1 ]
Affiliations
[1] Univ Ghent, imec, Dept Informat Technol, IDLab, Technol pk 126, B-9052 Ghent, Belgium
Keywords
Inductive logic programming; Inverse reinforcement learning; Answer set programming; Constraint inference; Constrained Markov decision process;
DOI
10.1007/s10994-023-06311-2
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Autonomous robots are increasingly being integrated into human environments, where explicit and implicit social norms guide the behavior of all agents. To ensure safety and predictability, these artificial agents should act in accordance with the applicable social norms. However, it is not straightforward to define these rules and incorporate them into an agent's policy, particularly because social norms are often implicit and environment specific. In this paper, we propose a novel iterative approach to extract a set of rules from observed human trajectories. This hybrid method combines the strengths of inverse reinforcement learning and inductive logic programming. We experimentally show how our method successfully induces a compact logic program that represents the behavioral constraints applicable in a Tower of Hanoi and a traffic simulator environment. The induced program is adopted as prior knowledge by a model-free reinforcement learning agent to speed up training and prevent any social norm violation during exploration and deployment. Moreover, expressing norms as a logic program provides improved interpretability, an important pillar in the design of safe artificial agents, as well as transferability to similar environments.
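The abstract's idea of adopting induced constraints as prior knowledge during exploration can be sketched as an action shield. This is a minimal illustration, not the authors' implementation: in the paper the constraints are ASP rules induced by inductive logic programming, which we stand in for here with a hand-written Python predicate, and the toy 1-D corridor environment with a forbidden cell is invented for this example.

```python
import random

# Hypothetical stand-in for the induced logic program: the paper learns
# ASP rules from demonstrations; here we hard-code one invented norm
# ("never enter cell 3") for a toy 1-D corridor environment.
FORBIDDEN_CELLS = {3}

def violates_norm(state, action):
    """True if taking `action` in `state` would break the learned constraint."""
    return state + action in FORBIDDEN_CELLS

def shielded_actions(state, actions):
    """Shield: keep only the actions the induced constraints permit."""
    return [a for a in actions if not violates_norm(state, a)]

def shielded_step(state, actions, rng=random.Random(0)):
    """One exploration step of a model-free agent that cannot violate a norm."""
    allowed = shielded_actions(state, actions)
    return state + rng.choice(allowed)

# From cell 2, moving right (+1) would enter the forbidden cell 3,
# so the shield leaves only the move left (-1).
print(shielded_actions(2, [-1, 1]))  # -> [-1]
```

Because the shield filters the action set before the agent samples, norm violations are prevented both during training and at deployment, which is the safety property the abstract highlights.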
Pages: 2593-2618
Page count: 26