Toward Human-in-the-Loop AI: Enhancing Deep Reinforcement Learning via Real-Time Human Guidance for Autonomous Driving

Cited by: 81
Authors
Wu, Jingda [1 ]
Huang, Zhiyu [1 ]
Hu, Zhongxu [1 ]
Lv, Chen [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
Source
ENGINEERING | 2023, Vol. 21
Keywords
Human-in-the-loop AI; Deep reinforcement learning; Human guidance; Autonomous driving; GO; EXPLORATION; GAME;
DOI
10.1016/j.eng.2022.05.017
CLC number
T [Industrial technology];
Subject classification code
08;
Abstract
Due to its limited intelligence and abilities, machine learning is currently unable to handle various situations and thus cannot completely replace humans in real-world applications. Because humans exhibit robustness and adaptability in complex scenarios, it is crucial to introduce humans into the training loop of artificial intelligence (AI), leveraging human intelligence to further advance machine learning algorithms. In this study, a real-time human-guidance-based (Hug) deep reinforcement learning (DRL) method is developed for policy training in an end-to-end autonomous driving case. With our newly designed mechanism for control transfer between humans and automation, humans are able to intervene and correct the agent's unreasonable actions in real time when necessary during the model training process. Based on this human-in-the-loop guidance mechanism, an improved actor-critic architecture with modified policy and value networks is developed. The fast convergence of the proposed Hug-DRL allows real-time human guidance actions to be fused into the agent's training loop, further improving the efficiency and performance of DRL. The developed method is validated by human-in-the-loop experiments with 40 subjects and compared with other state-of-the-art learning approaches. The results suggest that the proposed method can effectively enhance the training efficiency and performance of the DRL algorithm under human guidance without imposing specific requirements on participants' expertise or experience. (c) 2022 THE AUTHORS. Published by Elsevier Ltd. on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
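The control-transfer mechanism described in the abstract can be sketched minimally: when the human intervenes, the human action overrides the agent's action, and the transition is stored with an intervention flag so the learner can treat human-guided samples specially. The function names, buffer layout, and flag-based weighting below are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
def hug_step(agent_action, human_action=None):
    """Select the executed action (hypothetical simplification of the
    control-transfer mechanism): human guidance, when present,
    overrides the agent's proposal."""
    if human_action is not None:
        return human_action, True   # human has taken over control
    return agent_action, False      # agent retains control


def collect_transition(state, agent_action, human_action,
                       next_state, reward, buffer):
    """Store one transition with an intervention flag so a downstream
    learner could weight human-guided samples differently (an assumed
    design choice, not the paper's exact loss formulation)."""
    action, intervened = hug_step(agent_action, human_action)
    buffer.append({
        "state": state,
        "action": action,
        "reward": reward,
        "next_state": next_state,
        "human": intervened,
    })
    return action
```

With this structure, a replay buffer naturally mixes autonomous and human-guided experience, which is the precondition for fusing real-time guidance into the agent's training loop.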
Pages: 75-91
Page count: 17