Predicting Trust in Human Control of Swarms via Inverse Reinforcement Learning

Cited by: 0
Authors
Nam, Changjoo [1 ]
Walker, Phillip [2 ]
Lewis, Michael [2 ]
Sycara, Katia [1 ]
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[2] Univ Pittsburgh, Sch Informat Sci, Pittsburgh, PA 15260 USA
Source
2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN) | 2017
Keywords: none listed
DOI: not available
CLC number: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In this paper, we study a model of human trust in a setting where an operator remotely controls a robotic swarm during a search mission. Existing trust models for human-in-the-loop systems are based on the task performance of the robots. However, we find that humans tend to base their decisions on the physical characteristics of the swarm rather than on its performance, since the task performance of a swarm is not clearly perceivable by humans. We formulate trust as a Markov decision process whose state space includes the physical parameters of the swarm, and we employ an inverse reinforcement learning algorithm to learn the operator's behavior from a single demonstration. The learned behavior is then used to predict the operator's trust level from the features of the swarm.
Pages: 528-533 (6 pages)
Related papers (50 total)
  • [21] Inverse Reinforcement Learning via Deep Gaussian Process
    Jin, Ming
    Damianou, Andreas
    Abbeel, Pieter
    Spanos, Costas
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI2017), 2017,
  • [22] Understanding Sequential Decisions via Inverse Reinforcement Learning
    Liu, Siyuan
    Araujo, Miguel
    Brunskill, Emma
    Rossetti, Rosaldo
    Barros, Joao
    Krishnan, Ramayya
    2013 IEEE 14TH INTERNATIONAL CONFERENCE ON MOBILE DATA MANAGEMENT (MDM 2013), VOL 1, 2013, : 177 - 186
  • [23] Learning Human-Aware Robot Navigation from Physical Interaction via Inverse Reinforcement Learning
    Kollmitz, Marina
    Koller, Torsten
    Boedecker, Joschka
    Burgard, Wolfram
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 11025 - 11031
  • [24] From inverse optimal control to inverse reinforcement learning: A historical review
    Ab Azar, Nematollah
    Shahmansoorian, Aref
    Davoudi, Mohsen
    ANNUAL REVIEWS IN CONTROL, 2020, 50 : 119 - 138
  • [25] Inverse reinforcement learning control for building energy management
    Dey, Sourav
    Marzullo, Thibault
    Henze, Gregor
    ENERGY AND BUILDINGS, 2023, 286
  • [26] Inverse Reinforcement Learning Control for Linear Multiplayer Games
    Lian, Bosen
    Donge, Vrushabh S.
    Lewis, Frank L.
    Chai, Tianyou
    Davoudi, Ali
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 2839 - 2844
  • [27] Human motion analysis in medical robotics via high-dimensional inverse reinforcement learning
    Li, Kun
    Burdick, Joel W.
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2020, 39 (05): : 568 - 585
  • [28] Human-in-the-Loop Behavior Modeling via an Integral Concurrent Adaptive Inverse Reinforcement Learning
    Wu, Huai-Ning
    Wang, Mi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 11359 - 11370
  • [29] Preference-learning based Inverse Reinforcement Learning for Dialog Control
    Sugiyama, Hiroaki
    Meguro, Toyomi
    Minami, Yasuhiro
    13TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2012 (INTERSPEECH 2012), VOLS 1-3, 2012, : 222 - 225
  • [30] Robust Imitation via Mirror Descent Inverse Reinforcement Learning
    Han, Dong-Sig
    Kim, Hyunseo
    Lee, Hyundo
    Ryu, Je-Hwan
    Zhang, Byoung-Tak
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,