An adaptive reinforcement learning-based multimodal data fusion framework for human-robot confrontation gaming

Cited by: 36
Authors
Qi, Wen [1 ,2 ]
Fan, Haoyu [1 ]
Karimi, Hamid Reza [3 ]
Su, Hang [4 ]
Affiliations
[1] South China Univ Technol, Sch Future Technol, Guangzhou 511436, Peoples R China
[2] Pazhou Lab, Guangzhou 510330, Peoples R China
[3] Politecn Milan, Dept Mech Engn, I-20156 Milan, Italy
[4] Politecn Milan, Dept Elect Informat & Bioengn, I-20133 Milan, Italy
Keywords
Reinforcement learning; Multimodal data fusion; Human-robot confrontation; Adaptive learning; Multiple sensors fusion; Hand gesture recognition; Vision
DOI
10.1016/j.neunet.2023.04.043
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Game playing between humans and robots has become a widespread human-robot confrontation (HRC) application. Although many approaches have been proposed to enhance tracking accuracy by combining different sources of information, two problems remain unsolved: the limited intelligence of the robot and the poor anti-interference ability of the motion capture system. In this paper, we present an adaptive reinforcement learning (RL) based multimodal data fusion (AdaRL-MDF) framework that teaches a robot hand to play the Rock-Paper-Scissors (RPS) game with humans. It includes an adaptive learning mechanism that updates the ensemble classifier, an RL model that provides decision-making intelligence to the robot, and a multimodal data fusion structure that offers resistance to interference. The corresponding experiments verify these functions of the AdaRL-MDF model. Comparisons of accuracy and computation time demonstrate the high performance of the ensemble model, which combines a k-nearest neighbor (k-NN) classifier and a deep convolutional neural network (DCNN). In addition, the depth-vision-based k-NN classifier achieves 100% identification accuracy, so its predicted gestures can be treated as ground truth. The demonstration illustrates the practical feasibility of the HRC application. The theory underlying this model offers a path toward developing HRC intelligence. (c) 2023 Elsevier Ltd. All rights reserved.
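As a rough illustration of the ideas summarized in the abstract, the sketch below combines weighted late fusion of two gesture classifiers with a tabular Q-learning policy that selects the robot's counter-gesture in Rock-Paper-Scissors. This is a minimal conceptual sketch, not the authors' AdaRL-MDF implementation: the class names, the weight-update rule, the reward scheme, and the simulated per-modality classifier outputs (stand-ins for the depth-based k-NN and the RGB DCNN) are all illustrative assumptions.

```python
# Conceptual sketch (assumptions, not the paper's algorithm): adaptive late fusion
# of two gesture classifiers plus a Q-learning policy choosing the robot's move.
import numpy as np

GESTURES = ["rock", "paper", "scissors"]
# COUNTER[g] is the gesture that beats g.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveFusion:
    """Weighted late fusion of per-modality class probabilities."""
    def __init__(self, n_modalities, lr=0.1):
        self.w = np.ones(n_modalities) / n_modalities  # fusion weights on the simplex
        self.lr = lr

    def predict_proba(self, probs):
        # probs: (n_modalities, n_classes) array of classifier outputs.
        fused = self.w @ probs
        return fused / fused.sum()

    def update(self, probs, true_label):
        # Reward modalities that put high probability on the true class, then renormalize.
        correctness = probs[:, true_label]
        self.w = (1 - self.lr) * self.w + self.lr * correctness
        self.w /= self.w.sum()

class RPSQLearner:
    """Tabular Q-learning: state = predicted human gesture, action = robot gesture."""
    def __init__(self, n_actions=3, alpha=0.2, eps=0.1):
        self.q = np.zeros((n_actions, n_actions))
        self.alpha, self.eps = alpha, eps

    def act(self, state):
        if np.random.rand() < self.eps:
            return int(np.random.randint(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def learn(self, state, action, reward):
        # One-step update treating each RPS round as an independent bandit problem.
        self.q[state, action] += self.alpha * (reward - self.q[state, action])

def reward(robot, human):
    if GESTURES[robot] == COUNTER[GESTURES[human]]:
        return 1.0   # robot wins
    if robot == human:
        return 0.0   # draw
    return -1.0      # robot loses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fusion = AdaptiveFusion(n_modalities=2)
    agent = RPSQLearner()
    for _ in range(1000):
        human = int(rng.integers(3))                  # simulated human gesture
        probs = rng.dirichlet(np.ones(3), size=2)     # simulated k-NN / DCNN outputs
        probs[:, human] += 1.0                        # bias toward the true class
        probs /= probs.sum(axis=1, keepdims=True)
        pred = int(np.argmax(fusion.predict_proba(probs)))
        robot = agent.act(pred)
        agent.learn(pred, robot, reward(robot, human))
        fusion.update(probs, human)
    print("fusion weights:", fusion.w)
    print("greedy policy:", [GESTURES[int(np.argmax(agent.q[s]))] for s in range(3)])
```

With a reliable gesture predictor, the learned greedy policy converges to always playing the counter-gesture of the predicted human move, which mirrors the role of the RL model in the framework described above.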
Pages: 489-496
Number of pages: 8
Related Papers (50 in total; first 10 shown)
  • [1] Cai, Zeyuan; Feng, Zhiquan; Zhou, Liran; Ai, Changsheng; Shao, Haiyan; Yang, Xiaohui. A Framework and Algorithm for Human-Robot Collaboration Based on Multimodal Reinforcement Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022.
  • [2] Liu, Hongyi; Fang, Tongtong; Zhou, Tianyu; Wang, Yuquan; Wang, Lihui. Deep Learning-based Multimodal Control Interface for Human-Robot Collaboration. 51ST CIRP CONFERENCE ON MANUFACTURING SYSTEMS, 2018, 72: 3-8.
  • [3] Sharifi, Mojtaba; Azimi, Vahid; Mushahwar, Vivian K.; Tavakoli, Mahdi. Impedance Learning-Based Adaptive Control for Human-Robot Interaction. IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, 2022, 30 (04): 1345-1358.
  • [4] Fabbri, Cameron; Sattar, Junaed. SMARTTALK: A Learning-based Framework for Natural Human-Robot Interaction. 2016 13TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV), 2016: 376-382.
  • [5] Rabby, Md Khurram Monir; Karimoddini, Ali; Khan, Mubbashar Altaf; Jiang, Steven. A Learning-Based Adjustable Autonomy Framework for Human-Robot Collaboration. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (09): 6171-6180.
  • [6] Cacace, Jonathan; Finzi, Alberto; Lippiello, Vincenzo. A Robust Multimodal Fusion Framework for Command Interpretation in Human-Robot Cooperation. 2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2017: 372-377.
  • [7] Ghadirzadeh, Ali; Butepage, Judith; Maki, Atsuto; Kragic, Danica; Bjorkman, Marten. A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot Interaction. 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), 2016: 2682-2688.
  • [8] Du, Yuwei; Ben Amor, Heni; Jin, Jing; Wang, Qiang; Ajoudani, Arash. Learning-Based Multimodal Control for a Supernumerary Robotic System in Human-Robot Collaborative Sorting. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (04): 3435-3442.
  • [9] Luo, Ren C.; Wu, Y. C.; Lin, P. H. Multimodal Information Fusion for Human-Robot Interaction. 2015 IEEE 10TH JUBILEE INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2015: 535-540.
  • [10] Guo, Mou; Yao, Bitao; Ji, Zhenrui; Xu, Wenjun; Zhou, Zude. Adaptive Admittance Control for Physical Human-Robot Interaction based on Imitation and Reinforcement Learning. 2023 29TH INTERNATIONAL CONFERENCE ON MECHATRONICS AND MACHINE VISION IN PRACTICE, M2VIP 2023, 2023.