Toward Human-in-the-Loop AI: Enhancing Deep Reinforcement Learning via Real-Time Human Guidance for Autonomous Driving

Cited by: 81
Authors
Wu, Jingda [1 ]
Huang, Zhiyu [1 ]
Hu, Zhongxu [1 ]
Lv, Chen [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
Source
ENGINEERING | 2023, Vol. 21
Keywords
Human-in-the-loop AI; Deep reinforcement learning; Human guidance; Autonomous driving; GO; EXPLORATION; GAME;
DOI
10.1016/j.eng.2022.05.017
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
Due to its limited intelligence and abilities, machine learning is currently unable to handle various situations and thus cannot completely replace humans in real-world applications. Because humans exhibit robustness and adaptability in complex scenarios, it is crucial to introduce humans into the training loop of artificial intelligence (AI), leveraging human intelligence to further advance machine learning algorithms. In this study, a real-time human-guidance-based (Hug) deep reinforcement learning (DRL) method is developed for policy training in an end-to-end autonomous driving case. With our newly designed mechanism for control transfer between humans and automation, humans are able to intervene and correct the agent's unreasonable actions in real time when necessary during the model training process. Based on this human-in-the-loop guidance mechanism, an improved actor-critic architecture with modified policy and value networks is developed. The fast convergence of the proposed Hug-DRL allows real-time human guidance actions to be fused into the agent's training loop, further improving the efficiency and performance of DRL. The developed method is validated by human-in-the-loop experiments with 40 subjects and compared with other state-of-the-art learning approaches. The results suggest that the proposed method can effectively enhance the training efficiency and performance of the DRL algorithm under human guidance without imposing specific requirements on participants' expertise or experience. (c) 2022 THE AUTHORS. Published by Elsevier Ltd on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
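To make the abstract's idea concrete, the sketch below shows one plausible way real-time human guidance could be folded into an actor-critic training loop: when the human intervenes, their action overrides the agent's, and guided samples receive an extra imitation term in the actor loss. This is a minimal illustrative sketch only, not the authors' Hug-DRL implementation; the helper functions (human_intervenes, human_action), the toy environment stand-ins, and the imitation weight beta are all assumptions introduced here.

import random
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def human_intervenes():
    """Placeholder for detecting real-time human input (e.g., steering wheel)."""
    return random.random() < 0.1

def human_action():
    """Placeholder for reading the human's control command."""
    return torch.empty(ACTION_DIM).uniform_(-1, 1)

actor, critic = Actor(), Critic()
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer = []   # (state, action, reward, next_state, human_guided_flag)
beta = 1.0    # weight of the imitation term on human-guided samples (assumed)

state = torch.randn(STATE_DIM)           # stand-in for an environment observation
for step in range(200):
    agent_action = actor(state).detach()
    if human_intervenes():               # control transfer: human overrides the agent
        action, guided = human_action(), True
    else:
        action, guided = agent_action, False
    next_state = torch.randn(STATE_DIM)  # stand-in environment transition
    reward = float(-(action ** 2).sum()) # stand-in reward
    buffer.append((state, action, reward, next_state, guided))
    state = next_state

    # One gradient step on a random minibatch from the replay buffer.
    batch = random.sample(buffer, min(32, len(buffer)))
    s = torch.stack([b[0] for b in batch])
    a = torch.stack([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch]).unsqueeze(-1)
    g = torch.tensor([b[4] for b in batch], dtype=torch.float32).unsqueeze(-1)

    # Critic: regress Q(s, a) toward the observed one-step reward
    # (no bootstrapping, to keep the sketch short).
    critic_loss = (critic(s, a) - r).pow(2).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: standard gradient-through-critic objective, plus an imitation
    # term that pulls the policy toward human actions on guided samples.
    pi = actor(s)
    rl_loss = -critic(s, pi).mean()
    imitation_loss = (g * (pi - a).pow(2)).mean()
    actor_loss = rl_loss + beta * imitation_loss
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

In this toy setup the imitation term is active only on transitions flagged as human-guided, which is one simple way to let sparse, real-time human corrections shape the policy without altering the underlying RL objective on autonomous samples.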
Pages: 75-91
Number of pages: 17