Reinforcement Learning Control of Robotic Knee With Human-in-the-Loop by Flexible Policy Iteration

Cited by: 31
Authors
Gao, Xiang [1 ]
Si, Jennie [1 ]
Wen, Yue [2 ,3 ]
Li, Minhan [2 ,3 ]
Huang, He [2 ,3 ]
Affiliations
[1] Arizona State Univ, Dept Elect Comp & Energy Engn, Tempe, AZ 85287 USA
[2] North Carolina State Univ, Dept Biomed Engn, Raleigh, NC 27695 USA
[3] Univ North Carolina, Chapel Hill, NC 27599 USA
Funding
U.S. National Science Foundation;
Keywords
Robots; Impedance; Tuning; Prosthetics; Knee; Erbium; Legged locomotion; Adaptive optimal control; data- and time-efficient learning; flexible policy iteration (FPI); human-in-the-loop; reinforcement learning (RL); robotic knee; EXPERIENCE REPLAY; IMPEDANCE CONTROL; PROSTHESIS; SYSTEMS; GAME; EXOSKELETON; GO;
DOI
10.1109/TNNLS.2021.3071727
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We are motivated by the real challenges presented in a human-robot system to develop new designs that are efficient at the data level and provide system-level performance guarantees, such as stability and optimality. Existing approximate/adaptive dynamic programming (ADP) results that treat system performance theoretically do not readily provide practically useful learning control algorithms for this problem, while reinforcement learning (RL) algorithms that address data efficiency usually lack performance guarantees for the controlled system. This study fills these important voids by introducing innovative features into the policy iteration algorithm. We introduce flexible policy iteration (FPI), which can flexibly and organically integrate experience replay and supplemental values from prior experience into the RL controller. We show system-level performance, including convergence of the approximate value function, (sub)optimality of the solution, and stability of the system. We demonstrate the effectiveness of FPI via realistic simulations of the human-robot system. Notably, the problem addressed in this study may be difficult to solve with design methods based on classical control theory, as it is nearly impossible to obtain a customized mathematical model of a human-robot system either online or offline. The results we have obtained also indicate the great potential of RL control for solving realistic and challenging problems with high-dimensional control inputs.
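To illustrate the policy-iteration-with-experience-replay idea referred to in the abstract, the sketch below shows a generic, tabular variant (not the authors' FPI algorithm): stored transitions from a replay buffer are used to build an empirical model of a small discrete plant, and classical policy evaluation and greedy improvement are then alternated on that model. All names, the tabular setting, and the discount factor are illustrative assumptions.

import numpy as np

def empirical_model(replay, n_states, n_actions):
    """Estimate transition probabilities and mean rewards from stored
    (state, action, reward, next_state) tuples in the replay buffer."""
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sum = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    for s, a, r, s_next in replay:
        counts[s, a, s_next] += 1
        reward_sum[s, a] += r
        visits[s, a] += 1
    visits = np.maximum(visits, 1.0)        # unvisited pairs keep a zero model
    P = counts / visits[:, :, None]         # empirical transition kernel
    R = reward_sum / visits                 # empirical mean reward
    return P, R

def policy_iteration(P, R, gamma=0.95):
    """Classical policy iteration on the empirical model."""
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[np.arange(n_states), policy]
        R_pi = R[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily on one-step lookahead Q-values.
        Q = R + gamma * (P @ V)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

# Hypothetical usage: replay holds (s, a, r, s') tuples from past trials.
# replay = [(0, 1, 0.5, 2), (2, 0, -0.1, 1), ...]
# P, R = empirical_model(replay, n_states=5, n_actions=3)
# policy, V = policy_iteration(P, R)

In this tabular sketch the linear solve evaluates the current policy exactly, which is the kind of value-function convergence that policy-iteration-based methods rely on; the paper's setting differs in that the value function is approximated from data for a continuous human-robot system rather than computed from a known finite model.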
Pages: 5873-5887
Page count: 15
Related Papers
50 records in total
[41]   On The Convergence Of Policy Iteration-Based Reinforcement Learning With Monte Carlo Policy Evaluation [J].
Winnicki, Anna ;
Srikant, R. .
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023
[42]   Robotic Leg Illusion: System Design and Human-in-the-Loop Evaluation [J].
Penner, Dimitri ;
Abrams, Anna M. H. ;
Overath, Philipp ;
Vogt, Joachim ;
Beckerle, Philipp .
IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2019, 49 (04) :372-380
[43]   Human-in-the-Loop Robot Learning for Smart Manufacturing: A Human-Centric Perspective [J].
Chen, Hongpeng ;
Li, Shufei ;
Fan, Junming ;
Duan, Anqing ;
Yang, Chenguang ;
Navarro-Alarcon, David ;
Zheng, Pai .
IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2025, 22 :11062-11086
[44]   Reinforcement Learning for POMDP: Partitioned Rollout and Policy Iteration With Application to Autonomous Sequential Repair Problems [J].
Bhattacharya, Sushmita ;
Badyal, Sahil ;
Wheeler, Thomas ;
Gil, Stephanie ;
Bertsekas, Dimitri .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (03) :3967-3974
[45]   Human-in-the-Loop Control Using Euler Angles [J].
Perrusquia, Adolfo ;
Yu, Wen .
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2020, 97 (02) :271-285
[46]   adaPARL: Adaptive Privacy-Aware Reinforcement Learning for Sequential Decision Making Human-in-the-Loop Systems [J].
Taherisadr, Mojtaba ;
Stavroulakis, Stelios Andrew ;
Elmalaki, Salma .
PROCEEDINGS 8TH ACM/IEEE CONFERENCE ON INTERNET OF THINGS DESIGN AND IMPLEMENTATION, IOTDI 2023, 2023, :262-274
[47]   ADAS-RL: Adaptive Vector Scaling Reinforcement Learning For Human-in-the-Loop Lane Departure Warning [J].
Ahadi-Sarkani, Armand ;
Elmalaki, Salma .
CPHS'21: PROCEEDINGS OF THE 2021 FIRST ACM INTERNATIONAL WORKSHOP ON CYBER-PHYSICAL-HUMAN SYSTEM DESIGN AND IMPLEMENTATION, 2021, :7-12
[48]   Control of active lower limb prosthesis using human-in-the-loop scheme [J].
Hernandez, Ivan ;
Yu, Wen .
COGENT ENGINEERING, 2022, 9 (01)
[49]   Human-in-the-loop machine learning with applications for population health [J].
Chen, Long ;
Wang, Jiangtao ;
Guo, Bin ;
Chen, Liming .
CCF Transactions on Pervasive Computing and Interaction, 2023, 5 :1-12
[50]   Sensing-Aware Deep Reinforcement Learning With HCI-Based Human-in-the-Loop Feedback for Autonomous Nonlinear Drone Mobility Control [J].
Lee, Hyunsoo ;
Park, Soohyun .
IEEE ACCESS, 2024, 12 :1727-1736