Social interaction for efficient agent learning from human reward

Cited by: 11
Authors
Li, Guangliang [1 ]
Whiteson, Shimon [2 ]
Knox, W. Bradley [3 ]
Hung, Hayley [4 ]
Affiliations
[1] Ocean Univ China, Qingdao, Peoples R China
[2] Univ Oxford, Oxford, England
[3] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[4] Delft Univ Technol, Delft, Netherlands
Funding
China Postdoctoral Science Foundation;
Keywords
Reinforcement learning; Human-agent interaction; Learning from human reward; Gamification; Robot;
DOI
10.1007/s10458-017-9374-8
Chinese Library Classification
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Learning from rewards generated by a human trainer observing an agent in action has proven to be a powerful method for teaching autonomous agents to perform challenging tasks, especially for non-technical users. Since the efficacy of this approach depends critically on the reward the trainer provides, we consider how the interaction between the trainer and the agent should be designed to increase the efficiency of the training process. This article investigates the influence of the agent's socio-competitive feedback on the human trainer's training behavior and on the agent's learning. The results of our user study with 85 participants suggest that the agent's passive socio-competitive feedback (showing the performance and scores of agents trained by other trainers on a leaderboard) substantially increases participants' engagement in the game task and improves the agents' performance, even though the participants do not play the game directly but instead train the agent to do so. Moreover, making this feedback active (sending the trainer her agent's performance relative to others) induces still more participants to train agents longer and further improves the agents' learning. Our further analysis shows that agents trained by participants exposed to both the passive and the active social feedback achieve higher performance under a score mechanism optimized from the trainer's perspective, and that the additional active social feedback keeps participants training their agents toward policies that score higher under this mechanism.
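The training paradigm the abstract builds on, an agent updating a model of the human trainer's reward signal and acting greedily on it, can be sketched roughly as follows. This is a minimal TAMER-style illustration, not the authors' implementation; the one-dimensional task, the simulated trainer, and the learning rate are all assumptions made for the example:

```python
import random
from collections import defaultdict

# Minimal sketch of learning from human reward: the agent learns a
# model H(s, a) of the trainer's reward and acts greedily on it.
ACTIONS = [-1, +1]      # move left / right on a 1-D track (assumed task)
GOAL = 5                # hypothetical goal position
ALPHA = 0.3             # learning rate for the reward model

H = defaultdict(float)  # predicted human reward for each (state, action)

def choose_action(state):
    """Act greedily on the learned human-reward model (ties broken randomly)."""
    best = max(H[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if H[(state, a)] == best])

def simulated_trainer(state, action):
    """Stand-in for a human observer: +1 if the action moves toward the goal."""
    return 1.0 if (GOAL - state) * action > 0 else -1.0

def train(episodes=200, steps=10):
    for _ in range(episodes):
        state = 0
        for _ in range(steps):
            action = choose_action(state)
            reward = simulated_trainer(state, action)
            # Move the prediction H(s, a) toward the trainer's reward signal.
            H[(state, action)] += ALPHA * (reward - H[(state, action)])
            state += action

train()
print(H[(0, +1)] > H[(0, -1)])
```

After training, the learned model prefers the goal-directed action at the start state. The study's manipulation (leaderboards and performance notifications) changes how long and how well the human provides this reward signal, not the update rule itself.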
Pages: 1-25
Page count: 25