Learning Implicit Social Navigation Behavior Using Deep Inverse Reinforcement Learning

Cited by: 0
Authors
Kathuria, Tribhi [1]
Liu, Ke [1]
Jang, Junwoo [1,2]
Yang, X. Jessie [1]
Ghaffari, Maani [1]
Affiliations
[1] University of Michigan, Ann Arbor, MI 48109 USA
[2] Inha University, Department of Smart Mobility Engineering, Incheon 21999, South Korea
Keywords
Navigation; Robots; Trajectory; Reinforcement learning; Geometry; Planning; Training; System recovery; Entropy; Costs; social HRI; learning from demonstration; deep learning methods
DOI
Not available
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
This paper reports on learning a reward map for social navigation in dynamic environments, given agent trajectories and scene geometry, so that the robot can reason about its path at any time. Humans navigating dense, dynamic indoor environments follow several implicit social rules, and a rule-based approach cannot model all possible interactions among humans, robots, and scenes. We propose a novel Smooth Maximum Entropy Deep Inverse Reinforcement Learning (S-MEDIRL) algorithm that extrapolates beyond expert demonstrations to better encode scene navigability from few-shot demonstrations. The agent learns to predict cost maps from trajectory data and scene geometry, and a trajectory sampled from the learned cost map is then executed by a local crowd navigation controller. We present results in a photo-realistic simulation environment in which a robot and a human navigate a narrow crossing scenario. The robot implicitly learns to exhibit social behaviors such as yielding to oncoming traffic and avoiding deadlocks. We compare the proposed approach with the popular model-based crowd navigation algorithm ORCA and with a rule-based agent that yields.
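The training signal the abstract describes follows the general maximum entropy deep IRL recipe: a network predicts a reward (negative cost) map, the expected state-visitation frequencies (SVF) under that reward are compared with the expert's, and their difference drives the gradient. Below is a minimal sketch of one such update, not the paper's method: the grid world, the RewardNet architecture, the local-softmax policy (a crude stand-in for the soft value iteration usually used in MaxEnt IRL), and all shapes are illustrative assumptions, and the smoothness term that distinguishes S-MEDIRL is omitted.

```python
# Minimal MaxEnt deep IRL sketch on a grid cost map (hypothetical names/shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    """Per-cell reward from stacked scene features (geometry, agents, ...)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> reward map: (B, H, W)
        return self.net(feats).squeeze(1)

def expected_svf(reward: torch.Tensor, start: torch.Tensor, steps: int = 50) -> torch.Tensor:
    """Propagate start-state mass for `steps` under a local softmax policy."""
    pad = F.pad(reward, (1, 1, 1, 1), value=-1e9)  # walls off the grid edge
    # Neighbour rewards as seen from each cell: up, down, left, right.
    nbr = torch.stack([pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]])
    pi = torch.softmax(nbr, dim=0)                 # move probabilities, (4, H, W)
    d, svf = start.clone(), torch.zeros_like(reward)
    for _ in range(steps):
        svf = svf + d
        flow = pi * d                              # mass leaving toward each neighbour
        d = torch.zeros_like(d)
        d[:-1, :] += flow[0, 1:, :]                # moving up: arrives one row above
        d[1:, :] += flow[1, :-1, :]                # moving down
        d[:, :-1] += flow[2, :, 1:]                # moving left
        d[:, 1:] += flow[3, :, :-1]                # moving right
    return svf / steps

# One gradient step on toy data. The MaxEnt-IRL gradient w.r.t. the reward
# map is (policy SVF - expert SVF), pushed through the network by autograd.
net = RewardNet(in_channels=3)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

feats = torch.randn(1, 3, 32, 32)                  # stand-in scene features
expert_svf = torch.rand(32, 32)                    # would come from expert demos
expert_svf /= expert_svf.sum()
start = torch.zeros(32, 32)
start[0, 0] = 1.0                                  # all mass starts in one corner

reward = net(feats)[0]
with torch.no_grad():
    mu = expected_svf(reward, start)
loss = ((mu - expert_svf) * reward).sum()          # d(loss)/d(reward) = mu - expert
opt.zero_grad()
loss.backward()
opt.step()
```

In a full pipeline, the expert SVF would be rasterized from the few-shot demonstration trajectories, and the learned cost map would be handed to the local crowd navigation controller mentioned in the abstract.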
Pages: 5146-5153
Page count: 8