Weak Human Preference Supervision for Deep Reinforcement Learning

Cited by: 37
Authors
Cao, Zehong [1 ]
Wong, KaiChiu [2 ,3 ]
Lin, Chin-Teng [4 ,5 ]
Affiliations
[1] Univ South Australia, STEM, Mawson Lakes Campus, Adelaide, SA 5095, Australia
[2] Univ Tasmania, Sch Informat & Commun Technol ICT, Hobart, Tas 7005, Australia
[3] MyState Bank, Hobart, Tas 7000, Australia
[4] Univ Technol Sydney, Australian Artificial Intelligence Inst AAII, Ultimo, NSW 2007, Australia
[5] Univ Technol Sydney, Sch Comp Sci, Ultimo, NSW 2007, Australia
Funding
Australian Research Council;
Keywords
Training; Trajectory; Task analysis; Robots; Supervised learning; Australia; Reinforcement learning; Deep reinforcement learning (RL); scaling; supervised learning; weak human preferences;
DOI
10.1109/TNNLS.2021.3084198
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Current reward learning from human preferences can solve complex reinforcement learning (RL) tasks without access to a reward function by defining a single fixed preference between pairs of trajectory segments. However, such preference judgments are not dynamic and still require human input over thousands of iterations. In this study, we propose a weak human preference supervision framework, for which we develop a human preference scaling model that naturally reflects human perception of the degree of weak choices between trajectories, and we establish a human-demonstration estimator trained by supervised learning to generate predicted preferences and thereby reduce the number of required human inputs. The proposed framework effectively solves complex RL tasks and achieves higher cumulative rewards in simulated robot locomotion (MuJoCo) tasks than single fixed human preferences. Furthermore, the human-demonstration estimator requires human feedback for less than 0.01% of the agent's interactions with the environment and reduces the cost of human inputs by up to 30% compared with existing approaches. To illustrate the flexibility of our approach, we released a video (https://youtu.be/jQPe1OILT0M) comparing the behaviors of agents trained on different types of human input. We believe that naturally inspired human preferences combined with weakly supervised learning benefit precise reward learning and can be applied to state-of-the-art RL systems, such as human-autonomy teaming systems.
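For readers who want a concrete sense of the reward-learning setup the abstract describes, the sketch below shows one way to train a reward model from scaled ("weak") preference labels over pairs of trajectory segments. This is a minimal illustration, not the authors' implementation: the network architecture, the Bradley-Terry style loss, and all names here (RewardModel, preference_loss, the soft label mu) are assumptions introduced for clarity.

```python
# Minimal sketch (not the paper's code): reward learning from scaled
# "weak" human preferences over pairs of trajectory segments.
# Assumption: segments are given as (obs, act) tensors of shape [T, dim],
# and mu in [0, 1] expresses the degree to which seg1 is preferred over
# seg0 (0.5 = indifferent, 1.0 = a hard choice for seg1).
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # Per-step reward; summing over time gives the segment return.
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def preference_loss(reward_model, seg0, seg1, mu):
    """Bradley-Terry style loss with a soft (scaled) preference label.

    seg0, seg1: tuples (obs, act); mu: scalar in [0, 1], the weak
    preference for seg1 over seg0.
    """
    r0 = reward_model(*seg0).sum()
    r1 = reward_model(*seg1).sum()
    # Predicted probability that seg1 is preferred, from segment returns.
    p1 = torch.sigmoid(r1 - r0)
    # Cross-entropy against the scaled label instead of a hard 0/1 choice.
    return -(mu * torch.log(p1 + 1e-8) + (1 - mu) * torch.log(1 - p1 + 1e-8))
```

In this sketch, a hard binary choice corresponds to mu = 0 or 1, while intermediate values encode the degree of weak preference; a supervised estimator in the spirit of the paper's human-demonstration estimator would then supply predicted mu values for unlabeled segment pairs, so only a small fraction of pairs needs direct human feedback.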
Pages: 5369-5378
Page count: 10