Deep reinforcement learning for cooperative robots based on adaptive sentiment feedback

Cited by: 6
Authors
Jeon, Haein [1 ]
Kim, Dae-Won [2 ]
Kang, Bo-Yeong [3 ]
Affiliations
[1] Kyungpook Natl Univ, Dept Artificial Intelligence, Daegu 41566, South Korea
[2] Chung Ang Univ, Sch Comp Sci & Engn, Seoul 06974, South Korea
[3] Kyungpook Natl Univ, Dept Robot & Smart Syst Engn, Daegu 41566, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Human-robot interaction; Deep reinforcement learning; Interactive reinforcement learning; Human-in-the-loop; Reward shaping;
DOI
10.1016/j.eswa.2023.121198
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human-robot cooperative tasks have gained importance with the emergence of robotics and artificial intelligence technology. In interactive reinforcement learning, a robot learns a target task by receiving feedback from an experienced human trainer. However, most interactive reinforcement learning studies require a separate process to integrate the trainer's feedback into the training dataset, making it difficult for robots to learn new tasks from humans in real time. Furthermore, previous research limits the types of feedback sentences trainers can use. To address these limitations, this paper proposes a robot teaching strategy that uses deep reinforcement learning via human-robot interaction to learn table-balancing tasks interactively. The proposed system employs a Deep Q-Network with real-time sentiment feedback delivered through the trainer's speech to learn cooperative tasks. We designed a novel reward function that incorporates sentiment feedback from human speech in real time during learning, and we present an improved reward-shaping technique based on subdivided feedback levels and shrinking feedback. This reward function guides the robot toward natural interaction with humans and enables it to learn the tasks effectively. Experimental results demonstrate that the proposed interactive deep reinforcement learning model achieved a success rate of up to 99.06%, outperforming the model without sentiment feedback.
Pages: 11