Shared Control of Robot Manipulators With Obstacle Avoidance: A Deep Reinforcement Learning Approach

Cited by: 9
Authors
Rubagotti, Matteo [1 ]
Sangiovanni, Bianca
Nurbayeva, Aigerim
Incremona, Gian Paolo [2 ]
Ferrara, Antonella [3 ]
Shintemirov, Almas [4 ]
Affiliations
[1] Nazarbayev Univ, Robot & Mechatron, Astana 010000, Kazakhstan
[2] Politecn Milan, Automat Control, I-20133 Milan, Italy
[3] Univ Pavia, Automat Control, I-27100 Pavia, Italy
[4] Nazarbayev Univ, Robot & Mechatron, Astana Lab Robot & Intelligent Syst, Astana, Kazakhstan
Source
IEEE CONTROL SYSTEMS MAGAZINE | 2023, Vol. 43, No. 1
Keywords
Deep learning; Systems operation; Space missions; Training data; Surgery; Reinforcement learning; Manipulators;
DOI
10.1109/MCS.2022.3216653
CLC Classification Number
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
The word teleoperation (which, in general, means "working at a distance") is typically used in robotics when a human operator commands a remote agent. A teleoperated robot is often employed to substitute for human beings in conditions where the latter cannot operate. A possible reason is the need to be in contact with dangerous substances; indeed, the first robot teleoperation system was designed in the 1940s for handling nuclear and chemical materials [1]. Other reasons include the difficulty of bringing people on missions to explore deep waters or space [2], [3], as well as the need to work with very high precision, for example, during surgery [4], [5]. In certain cases, the reference provided by the human operator is not directly passed to the robot but is instead used to generate an adaptive motion. This approach is known as semiautonomous teleoperation or shared control [6], and its aim is to reduce the workload of the human operator during the performance of a difficult task that involves controlling a robotic system. © 1991-2012 IEEE.
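The shared-control idea sketched in the abstract, in which the operator's reference is adapted rather than passed through directly, can be illustrated with a minimal hand-crafted blending rule. The sketch below is purely illustrative and is not the paper's method: the article itself learns the adaptation with deep reinforcement learning, whereas here a simple repulsive obstacle-avoidance term (with assumed names `shared_control_step`, `d_safe`, and `gain`) modifies the operator's commanded motion near an obstacle.

```python
import numpy as np

def shared_control_step(operator_ref, ee_pos, obstacle_pos,
                        d_safe=0.3, gain=0.5):
    """Blend the operator's commanded end-effector motion with a
    repulsive obstacle-avoidance term (illustrative sketch only).

    operator_ref : desired motion vector from the human operator
    ee_pos       : current end-effector position
    obstacle_pos : position of the nearest obstacle
    d_safe       : distance below which avoidance activates
    gain         : strength of the repulsive correction
    """
    diff = ee_pos - obstacle_pos
    dist = np.linalg.norm(diff)
    if 0.0 < dist < d_safe:
        # Push away from the obstacle, growing stronger as it gets closer.
        repulsion = gain * (1.0 / dist - 1.0 / d_safe) * diff / dist
    else:
        # Far from any obstacle: pass the operator's reference through.
        repulsion = np.zeros_like(operator_ref)
    return operator_ref + repulsion
```

Far from obstacles the operator retains full authority; close to one, the correction dominates. A learned policy, as in the article, replaces this fixed rule with an adaptation trained from experience.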
Pages: 44-63
Page count: 20
References
58 records
[1] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, "Deep reinforcement learning: A brief survey," IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 26-38, 2017.
[2] G. Buizza Avanzini, A. M. Zanchettin, and P. Rocco, "Constrained model predictive control for mobile robotic manipulators," Robotica, vol. 36, no. 1, pp. 19-38, 2018.
[3] H. Boessenkool, IEEE Transactions on Haptics, vol. 6, p. 2, 2013, doi: 10.1109/TOH.2012.22.
[4] G. Brantner and O. Khatib, "Controlling Ocean One: Human-robot collaboration for deep-sea manipulation," Journal of Field Robotics, vol. 38, no. 1, pp. 28-51, 2021.
[5] E. F. Camacho, Model Predictive Control. Springer, 2013, doi: 10.1007/978-0-85729-398-5.
[6] X. Deng, Z. L. Yu, C. Lin, Z. Gu, and Y. Li, "A Bayesian shared control approach for wheelchair robot with brain machine interface," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 1, pp. 328-338, 2020.
[7] C. Ericson, Real-Time Collision Detection, 2004.
[8] L. Fan, in Proceedings of the Conference on Robot Learning, 2018, p. 767.
[9] T. Faulwasser, T. Weber, P. Zometa, and R. Findeisen, "Implementation of nonlinear model predictive path-following control for an industrial robot," IEEE Transactions on Control Systems Technology, vol. 25, no. 4, pp. 1505-1511, 2017.
[10] H. J. Ferreau, C. Kirches, A. Potschka, H. G. Bock, and M. Diehl, "qpOASES: A parametric active-set algorithm for quadratic programming," Mathematical Programming Computation, vol. 6, no. 4, pp. 327-363, 2014.