Task-oriented safety field for robot control in human-robot collaborative assembly based on residual learning

Cited by: 7
Authors
Zhu, Cheng [1 ]
Yu, Tian [2 ]
Chang, Qing [1 ,2 ]
Affiliations
[1] University of Virginia, Systems & Information Engineering, Charlottesville, VA 22904, USA
[2] University of Virginia, Mechanical & Aerospace Engineering, Charlottesville, VA 22904, USA
Funding
U.S. National Science Foundation
Keywords
Human-Robot Collaboration; Residual Reinforcement Learning; Safety Field; Collision Avoidance; Motion
DOI
10.1016/j.eswa.2023.121946
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
With the development of smart manufacturing, human-robot collaboration (HRC) is seen as the future of manufacturing. In HRC manufacturing environments, safety has received significant attention because the conventional separation between robot and human workspaces is removed. In this paper, a dynamic safety field is constructed that accounts for the collaborative assembly task requirements and the motions of both humans and robots. This safety field guides robot actions, ensuring that the robot carries out the required tasks while maintaining a safe distance from human workers. Based on the task requirements and the safety considerations between humans and robots, a robot motion planning and control problem is formulated. To solve this problem, a hybrid residual reinforcement learning (RRL) control scheme is proposed, in which an RRL method combines the safety field-based control method with a deep reinforcement learning (DRL) method. Numerical studies are conducted to evaluate the performance of the proposed method, which is compared with a pure deep Q-network (DQN) based method and a rapidly exploring random tree (RRT) method. The simulation results show that the proposed method effectively optimizes robot trajectories and outperforms the DQN-based and RRT methods in terms of computational efficiency.
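The abstract does not give the paper's exact safety-field or residual formulations, but the general pattern it describes (final command = safety field-based base action + learned residual correction) can be sketched. The following Python sketch is only an illustration of that pattern: the potential-field form of the safety field, and every function name and gain (safety_field_action, k_att, k_rep, d_safe, residual_policy), are assumptions for illustration, not the authors' method.

    import numpy as np

    def safety_field_action(robot_pos, human_pos, goal_pos,
                            k_att=1.0, k_rep=2.0, d_safe=0.5):
        """Hypothetical base action from a potential-field-style safety
        field: attraction toward the task goal plus repulsion from the
        human inside a safety radius d_safe."""
        to_goal = goal_pos - robot_pos
        attract = k_att * to_goal / (np.linalg.norm(to_goal) + 1e-8)

        to_human = robot_pos - human_pos
        d = np.linalg.norm(to_human)
        repulse = np.zeros_like(robot_pos)
        if d < d_safe:  # repulsion is active only inside the safety radius
            repulse = k_rep * (1.0 / d - 1.0 / d_safe) * to_human / (d**3 + 1e-8)
        return attract + repulse

    def hybrid_rrl_action(robot_pos, human_pos, goal_pos, residual_policy):
        """Residual-RL composition: the commanded action is the base
        (safety field) action plus a learned correction from the policy."""
        base = safety_field_action(robot_pos, human_pos, goal_pos)
        residual = residual_policy(robot_pos, human_pos, goal_pos)
        return base + residual

    if __name__ == "__main__":
        # Placeholder residual policy (e.g., the output head of a trained
        # DQN would replace this zero function).
        zero_residual = lambda *args: np.zeros(2)
        a = hybrid_rrl_action(np.array([0.0, 0.0]),   # robot position
                              np.array([0.4, 0.1]),   # human position
                              np.array([1.0, 1.0]),   # task goal
                              zero_residual)
        print("commanded action:", a)

The design intuition of such a hybrid is that the hand-designed base controller guarantees safe, reasonable behavior from the start, while the learned residual only needs to correct the base action, which typically makes training more sample-efficient than learning the full policy from scratch.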
Pages: 9