Learning Assistive Strategies from a Few User-Robot Interactions: Model-based Reinforcement Learning Approach

Cited by: 0
Authors
Hamaya, Masashi [1 ,2 ]
Matsubara, Takamitsu [1 ,3 ]
Noda, Tomoyuki [1 ]
Teramae, Tatsuya [1 ]
Morimoto, Jun [1 ]
Affiliations
[1] ATR CNS, Dept Brain Robot Interface, Kyoto, Japan
[2] Osaka Univ, Grad Sch Frontier Bioscience, Osaka, Japan
[3] Nara Inst Sci & Technol, Grad Sch Informat Sci, Nara, Japan
Source
2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) | 2016
Keywords
ORTHOSIS; WALKING; SUIT;
DOI
not available
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Designing an assistive strategy for exoskeletons is a key ingredient in movement assistance and rehabilitation. While several approaches have been explored, most studies rely on mechanical models of the human user, i.e., rigid-body dynamics or the Center of Mass (CoM)-Zero Moment Point (ZMP) inverted pendulum model, or focus only on periodic movements using oscillator models. The interactions between the user and the robot, on the other hand, are often not considered explicitly because they are difficult to model. In this paper, we propose learning assistive strategies directly from interactions between the user and the robot. We formulate the learning of assistive strategies as a policy search problem. To alleviate the heavy burden of data acquisition on the user, we exploit a data-efficient model-based reinforcement learning framework. To validate the effectiveness of our approach, we developed an experimental platform composed of a real subject, an electromyography (EMG) measurement system, and a simulated robot arm, and conducted a learning experiment on an assistive control task with the robot arm. As a result, assistive strategies that achieve the robot control task while reducing the user's EMG signals were acquired from only 30 seconds of interaction.
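The core idea in the abstract, collecting a short batch of real user-robot data, fitting a dynamics model to it, and then running policy search against the learned model rather than the user, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's actual method: a 1-D toy plant replaces the user-robot system, a linear feedback gain replaces the assistive policy, and a simple grid search replaces the policy optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D plant standing in for the user-robot system:
# the "user" exerts a fixed weak effort and the controller adds assistance.
def step(x, u_assist, u_user=0.3, dt=0.1):
    return x + dt * (u_user + u_assist)

def explore(T=20):
    """Short batch of real interaction with random assistance; this is
    the only data the user has to provide."""
    x, data = 0.0, []
    for _ in range(T):
        u = rng.uniform(-1.0, 1.0)
        x_next = step(x, u)
        data.append((x, u, x_next))
        x = x_next
    return data

def fit_model(data):
    """Least-squares linear dynamics model: x' ~ a*x + b*u + c."""
    X = np.array([[x, u, 1.0] for x, u, _ in data])
    y = np.array([xn for _, _, xn in data])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def model_cost(gain, model, target=1.0, T=20):
    """Roll the policy u = gain*(target - x) out on the LEARNED model,
    penalizing tracking error plus assistive effort."""
    a, b, c = model
    x, cost = 0.0, 0.0
    for _ in range(T):
        u = gain * (target - x)
        cost += (target - x) ** 2 + 0.01 * u ** 2
        x = a * x + b * u + c
    return cost

# Model-based policy search: one short real rollout, then all policy
# evaluation happens on the learned model, with no further user burden.
model = fit_model(explore())
candidates = np.linspace(0.0, 8.0, 81)
best_gain = min(candidates, key=lambda k: model_cost(k, model))

# Verify the selected policy on the real plant.
x = 0.0
for _ in range(20):
    x = step(x, best_gain * (1.0 - x))
final_x = x
print(best_gain, final_x)
```

The design point this toy preserves is data efficiency: the "user" appears in a single 20-step exploratory rollout, and the many policy evaluations needed for the search all run on the fitted model.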
Pages: 3346-3351
Number of pages: 6
Related Papers
50 records
  • [1] Learning assistive strategies for exoskeleton robots from user-robot physical interaction
    Hamaya, Masashi
    Matsubara, Takamitsu
    Noda, Tomoyuki
    Teramae, Tatsuya
    Morimoto, Jun
    PATTERN RECOGNITION LETTERS, 2017, 99 : 67 - 76
  • [2] Model-Based Reinforcement Learning For Robot Control
    Li, Xiang
    Shang, Weiwei
    Cong, Shuang
    2020 5TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2020), 2020, : 300 - 305
  • [3] Safe Robot Execution in Model-Based Reinforcement Learning
    Martinez, David
    Alenya, Guillem
    Torras, Carme
    2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2015, : 6422 - 6427
  • [4] A Contraction Approach to Model-based Reinforcement Learning
    Fan, Ting-Han
    Ramadge, Peter J.
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130 : 325 - +
  • [5] An Efficient Approach to Model-Based Hierarchical Reinforcement Learning
    Li, Zhuoru
    Narayan, Akshay
    Leong, Tze-Yun
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3583 - 3589
  • [6] A Model-based Factored Bayesian Reinforcement Learning Approach
    Wu, Bo
    Feng, Yanpeng
    Zheng, Hongyan
    APPLIED SCIENCE, MATERIALS SCIENCE AND INFORMATION TECHNOLOGIES IN INDUSTRY, 2014, 513-517 : 1092 - 1095
  • [7] User-guided reinforcement learning of robot assistive tasks for an intelligent environment
    Wang, Y
    Huber, M
    Papudesi, VN
    Cook, DJ
    IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 2003, : 424 - 429
  • [8] EEG-based classification of learning strategies: model-based and model-free reinforcement learning
    Kim, Dongjae
    Weston, Charles
    Lee, Sang Wan
    2018 6TH INTERNATIONAL CONFERENCE ON BRAIN-COMPUTER INTERFACE (BCI), 2018, : 146 - 148
  • [9] DATA-EFFICIENT MODEL-BASED REINFORCEMENT LEARNING FOR ROBOT CONTROL
    Sun, Ming
    Gao, Yue
    Liu, Wei
    Li, Shaoyuan
    INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2021, 36 (04): : 211 - 218