A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot Interaction

Cited by: 0
Authors
Ghadirzadeh, Ali [1]
Butepage, Judith [1]
Maki, Atsuto [1]
Kragic, Danica [1]
Bjorkman, Marten [1]
Affiliation
[1] KTH Royal Inst Technol, CSC, Comp Vis & Act Percept Lab CVAP, Stockholm, Sweden
Funding
EU Horizon 2020; Swedish Research Council
Keywords
GAUSSIAN-PROCESSES
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Modeling physical human-robot collaboration is generally challenging due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework that enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GPs), which implement both a forward model and an action-value function. Optimal action selection under the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the position of a ball on a plank using vision and force/torque data. Our experimental results demonstrate the suitability of the proposed method in terms of fast, data-efficient model learning, optimal action selection under uncertainty, and equal role sharing between the partners.
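To make the abstract's GP-based, uncertainty-aware action selection concrete, the following is a minimal Python sketch, not the authors' implementation: a Gaussian-process action-value model queried with a UCB-style Bayesian-optimization acquisition over a discrete set of candidate actions. The class name GPActionValue, the kappa parameter, the toy scalar reward, and the use of scikit-learn are all illustrative assumptions; the paper itself learns both a forward model and an action-value function from vision and force/torque data.

    # Minimal sketch (not the authors' code): GP action-value model with a
    # UCB-style Bayesian-optimization acquisition over candidate actions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    class GPActionValue:
        def __init__(self, kappa=2.0):
            # Smooth kernel plus observation noise; kappa trades off
            # exploitation (predictive mean) against exploration (predictive std).
            kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
            self.gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            self.kappa = kappa
            self.X, self.y = [], []   # (state, action) inputs and observed returns

        def update(self, state, action, ret):
            # Store one sensorimotor experience and refit the GP.
            self.X.append([state, action])
            self.y.append(ret)
            self.gp.fit(np.array(self.X), np.array(self.y))

        def select_action(self, state, candidates):
            # Score each candidate action with the UCB acquisition and pick the best.
            queries = np.array([[state, a] for a in candidates])
            mean, std = self.gp.predict(queries, return_std=True)
            return candidates[int(np.argmax(mean + self.kappa * std))]

    # Toy usage: seed with random interactions, then act under the learned uncertainty.
    q = GPActionValue()
    rng = np.random.default_rng(0)
    for _ in range(20):
        s, a = rng.uniform(-1, 1), rng.uniform(-1, 1)
        q.update(s, a, ret=-(s + a) ** 2)   # hypothetical reward: keep s + a near 0
    print(q.select_action(state=0.5, candidates=np.linspace(-1, 1, 41)))

The UCB score (mean plus kappa times standard deviation) favors actions whose value the GP is either confident is high or still uncertain about, which is the Bayesian-optimization behavior the abstract refers to as optimal action selection under uncertainty.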
Pages: 2682-2688
Number of pages: 7