Model-Based Reinforcement Learning Variable Impedance Control for Human-Robot Collaboration

Cited by: 0
Authors
Loris Roveda
Jeyhoon Maskani
Paolo Franceschi
Arash Abdi
Francesco Braghin
Lorenzo Molinari Tosatti
Nicola Pedrocchi
Affiliations
[1] Institute of Intelligent Industrial Systems and Technologies for Advanced Manufacturing (STIIMA-CNR), Istituto Dalle Molle di studi sull’Intelligenza Artificiale (IDSIA)
[2] Scuola Universitaria Professionale della Svizzera Italiana (SUPSI)
[3] Università della Svizzera Italiana (USI), IDSIA-SUPSI
[4] School of Industrial and Information Engineering, Politecnico di Milano
Source
Journal of Intelligent & Robotic Systems | 2020, Vol. 100
Keywords
Human-robot collaboration; Machine learning; Industry 4.0; Model-based reinforcement learning control; Variable impedance control
DOI
Not available
Abstract
Industry 4.0 is placing human-robot collaboration at the center of the production environment. Collaborative robots enhance productivity and flexibility while reducing human fatigue and the risk of injuries, exploiting advanced control methodologies. However, there is a lack of real-time model-based controllers that account for the complex human-robot interaction dynamics. To this end, this paper proposes a Model-Based Reinforcement Learning (MBRL) variable impedance controller to assist human operators in collaborative tasks. More in detail, an ensemble of Artificial Neural Networks (ANNs) is used to learn a human-robot interaction dynamics model while capturing uncertainties. The learned model is kept updated during the execution of collaborative tasks. In addition, the learned model is used by a Model Predictive Controller (MPC) with the Cross-Entropy Method (CEM). The aim of the MPC+CEM is to optimize online the stiffness and damping impedance control parameters, minimizing the human effort (i.e., minimizing the human-robot interaction forces). The proposed approach has been validated through an experimental procedure. A lifting task has been considered as the reference validation application (weight of the manipulated part: 10 kg, unknown to the robot controller). A KUKA LBR iiwa 14 R820 has been used as the test platform. Qualitative performance (i.e., a questionnaire on the perceived collaboration) has been evaluated. The achieved results have been compared with previously developed offline model-free optimized controllers and with the robot manual guidance controller. The proposed MBRL variable impedance controller shows improved human-robot collaboration: it is capable of actively assisting the human in the target task, compensating for the unknown part weight. The human-robot interaction dynamics model has been trained with a few initial experiments (30 initial experiments). In addition, the possibility of keeping the learning of the human-robot interaction dynamics active allows accounting for the adaptation of the human motor system.
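Note: the following is a minimal, self-contained sketch (Python/NumPy, not the authors' code) of the scheme described in the abstract: a learned ensemble model of the human-robot interaction dynamics is queried by a CEM-based model predictive planner that selects stiffness and damping gains minimizing the predicted interaction force. The ensemble here is a toy linear stand-in for the ANNs, and all class names, gain bounds, and horizon settings are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

class EnsembleDynamics:
    """Toy stand-in for the ensemble of ANNs: each member maps
    (interaction state, impedance gains) -> predicted interaction force."""
    def __init__(self, n_members=5, state_dim=4, gain_dim=2):
        self.members = [rng.normal(scale=0.1, size=state_dim + gain_dim)
                        for _ in range(n_members)]

    def predict_force(self, state, gains):
        x = np.concatenate([state, gains])
        # Averaging over the members is one simple way to use the
        # uncertainty-aware ensemble prediction mentioned in the abstract.
        return float(np.mean([abs(w @ x) for w in self.members]))

def cem_impedance_mpc(model, state, horizon=10, pop=64, n_elite=8, iters=5):
    """Cross-Entropy Method over sequences of (stiffness, damping) gains:
    sample candidate sequences, roll them out through the learned model,
    keep the lowest-cost elites, and refit the sampling distribution."""
    mu = np.tile([500.0, 50.0], (horizon, 1))       # initial gain guess (assumed)
    sigma = np.tile([200.0, 20.0], (horizon, 1))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, horizon, 2))
        samples = np.clip(samples, [50.0, 5.0], [2000.0, 200.0])   # assumed feasible gain bounds
        costs = np.empty(pop)
        for i, seq in enumerate(samples):
            s, cost = state.copy(), 0.0
            for gains in seq:
                cost += model.predict_force(s, gains)    # cost = predicted human effort
                s = s + 0.01 * rng.normal(size=s.shape)  # placeholder state propagation
            costs[i] = cost
        elites = samples[np.argsort(costs)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu[0]   # receding horizon: apply only the first gain pair

model = EnsembleDynamics()
stiffness, damping = cem_impedance_mpc(model, state=np.zeros(4))
print(f"selected stiffness = {stiffness:.1f} N/m, damping = {damping:.1f} Ns/m")

In the paper the learned model is also retrained as new interaction data are collected during task execution; that update step is omitted here and the toy dynamics are kept fixed.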
Pages: 417 - 433
Page count: 16
Related papers
50 items in total
  • [1] Model-Based Reinforcement Learning Variable Impedance Control for Human-Robot Collaboration
    Roveda, Loris
    Maskani, Jeyhoon
    Franceschi, Paolo
    Abdi, Arash
    Braghin, Francesco
    Tosatti, Lorenzo Molinari
    Pedrocchi, Nicola
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2020, 100 (02) : 417 - 433
  • [2] Q-Learning-based model predictive variable impedance control for physical human-robot collaboration
    Roveda, Loris
    Testa, Andrea
    Shahid, Asad Ali
    Braghin, Francesco
    Piga, Dario
    ARTIFICIAL INTELLIGENCE, 2022, 312
  • [3] Explainable Reinforcement Learning for Human-Robot Collaboration
    Iucci, Alessandro
    Hata, Alberto
    Terra, Ahmad
    Inam, Rafia
    Leite, Iolanda
    2021 20TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2021, : 927 - 934
  • [4] Human-Robot Collaboration Framework Based on Impedance Control in Robotic Assembly
    Zhao, Xingwei
    Chen, Yiming
    Qian, Lu
    Tao, Bo
    Ding, Han
    ENGINEERING, 2023, 30 : 83 - 92
  • [5] A Learning Based Hierarchical Control Framework for Human-Robot Collaboration
    Jin, Zhehao
    Liu, Andong
    Zhang, Wen-An
    Yu, Li
    Su, Chun-Yi
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2023, 20 (01) : 506 - 517
  • [6] Assembly task allocation of human-robot collaboration based on deep reinforcement learning
    Xiong Z.
    Chen H.
    Wang C.
    Yue M.
    Hou W.
    Xu B.
    Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2023, 29 (03): : 789 - 800
  • [7] A reinforcement learning method for human-robot collaboration in assembly tasks
    Zhang, Rong
    Lv, Qibing
    Li, Jie
    Bao, Jinsong
    Liu, Tianyuan
    Liu, Shimin
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2022, 73
  • [8] A Human-Robot Collaboration Framework Based on Human Collaboration Demonstration and Robot Learning
    Peng, Xiang
    Jiang, Jingang
    Xia, Zeyang
    Xiong, Jing
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2024, PT VII, 2025, 15207 : 286 - 299
  • [9] Safety-constrained Deep Reinforcement Learning control for human-robot collaboration in construction
    Duan, Kangkang
    Zou, Zhengbo
    AUTOMATION IN CONSTRUCTION, 2025, 174
  • [10] Impedance learning control for physical human-robot cooperative interaction
    Brahmi, Brahim
    El Bojairami, Ibrahim
    Laraki, Mohamed-Hamza
    El-Bayeh, Claude Ziad
    Saad, Maarouf
    MATHEMATICS AND COMPUTERS IN SIMULATION, 2021, 190 : 1224 - 1242