Guidewire feeding method based on deep reinforcement learning for vascular intervention robot

Cited by: 1
Authors
Yang, Deai [1 ]
Song, Jingzhou [1 ]
Hu, Yuhang [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Automat, 10 Xitucheng Rd, Beijing, Peoples R China
Source
PROCEEDINGS OF 2022 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2022) | 2022
Keywords
vascular interventional surgery robot; autonomous guidewire feeding; deep reinforcement learning; guidewire detection
DOI
10.1109/ICMA54519.2022.9856351
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Vascular interventional surgery offers minimal trauma, rapid postoperative recovery, and a high safety factor, and is widely used to treat various cardiovascular diseases. To reduce doctors' exposure to X-ray radiation, existing vascular interventional surgery robots are mainly operated by human teleoperation, and achieving autonomous, intelligent guidewire feeding has become an attractive research topic. This paper proposes an autonomous guidewire feeding method for interventional surgery based on the deep reinforcement learning framework SAC (Soft Actor-Critic). We adopt the object detection algorithm YOLOv5s to locate the guidewire tip, which is used to update the state of the SAC agent. Guidewire feeding experiments on a vessel phantom verify that the SAC-based method can complete the guidewire feeding task in interventional surgery: the guidewire tip detection accuracy reaches at least 92.9%, and the guidewire feeding error is less than 5 mm. Further research will combine the guidewire with a guide catheter to tackle more complex surgical tasks.
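To make the abstract's pipeline concrete, the sketch below shows how a SAC agent could be trained on a simplified guidewire feeding task. It is a minimal illustration, not the authors' implementation: the environment name `GuidewireFeedEnv`, the observation layout, the reward shaping, and all numeric values (target position, 5 mm tolerance, step limits) are assumptions introduced here, and it uses the `gymnasium` and `stable-baselines3` packages. In the paper, the tip position that drives the state update would come from YOLOv5s detections on fluoroscopy images rather than the simulated kinematics used below.

```python
# Minimal sketch, NOT the authors' implementation: train a SAC agent to advance a
# guidewire tip toward a target position along a vessel. The environment name,
# reward shaping, observation layout, and all numbers are illustrative assumptions;
# in the paper the tip position would come from YOLOv5s detections on X-ray images,
# not from the simulated kinematics used here.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class GuidewireFeedEnv(gym.Env):
    """Toy 1-D guidewire feeding task: push/pull the wire until its tip reaches a target."""

    def __init__(self, target_mm=100.0, tolerance_mm=5.0, max_steps=200):
        super().__init__()
        self.target_mm = target_mm        # desired tip position along the vessel (mm)
        self.tolerance_mm = tolerance_mm  # success band, echoing the paper's <5 mm feeding error
        self.max_steps = max_steps
        # Action: continuous feed command in mm per step (negative = retract).
        self.action_space = spaces.Box(low=-2.0, high=2.0, shape=(1,), dtype=np.float32)
        # Observation: [tip position, remaining distance to target], both in mm.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)
        self.tip_mm = 0.0
        self.steps = 0

    def _obs(self):
        return np.array([self.tip_mm, self.target_mm - self.tip_mm], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.tip_mm = 0.0
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        # In the real system the new tip position would be re-detected (e.g. by YOLOv5s) here.
        self.tip_mm += float(action[0]) + self.np_random.normal(0.0, 0.1)  # noisy advance
        self.steps += 1
        dist = abs(self.target_mm - self.tip_mm)
        reward = -0.01 * dist                      # dense shaping: closer is better (assumption)
        terminated = dist < self.tolerance_mm
        if terminated:
            reward += 10.0                         # bonus for reaching the target region
        truncated = self.steps >= self.max_steps
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    env = GuidewireFeedEnv()
    model = SAC("MlpPolicy", env, verbose=1)       # off-policy, entropy-regularized actor-critic
    model.learn(total_timesteps=20_000)
```

SAC is a natural fit for this kind of task because the feed command is continuous and its entropy-regularized, off-policy objective tends to be sample-efficient, which matters when each rollout involves a physical vessel phantom.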
Pages: 1287-1293
Number of pages: 7
Related Papers
50 records in total
  • [1] Quan, Hao; Li, Yansheng; Zhang, Yi. A novel mobile robot navigation method based on deep reinforcement learning. International Journal of Advanced Robotic Systems, 2020, 17(03).
  • [2] Wang, T.; Li, A.; Song, H.-L.; Liu, W.; Wang, M.-H. Navigation method for mobile robot based on hierarchical deep reinforcement learning. Kongzhi yu Juece/Control and Decision, 2022, 37(11): 2799-2807.
  • [3] Rao, Shu. Research on Robot Intelligent Control Method Based on Deep Reinforcement Learning. 2022 6th International Symposium on Computer Science and Intelligent Control (ISCSIC), 2022: 221-225.
  • [4] Han, Huiyan; Wang, Jiaqi; Kuang, Liqun; Han, Xie; Xue, Hongxin. Improved Robot Path Planning Method Based on Deep Reinforcement Learning. Sensors, 2023, 23(12).
  • [5] Meng, Haitao; Zhang, Hengrui. Mobile Robot Path Planning Method Based on Deep Reinforcement Learning Algorithm. Journal of Circuits, Systems and Computers, 2022, 31(15).
  • [6] Li, Juncheng; Ran, Maopeng; Wang, Han; Xie, Lihua. A Behavior-Based Mobile Robot Navigation Method with Deep Reinforcement Learning. Unmanned Systems, 2021, 9(03): 201-209.
  • [7] Liu, Yanglong; Chen, Zuguo; Li, Yonggang; Lu, Ming; Chen, Chaoyang; Zhang, Xuzhuo. Robot Search Path Planning Method Based on Prioritized Deep Reinforcement Learning. International Journal of Control, Automation and Systems, 2022, 20(08): 2669-2680.
  • [8] Liu, Chuzhao; Gao, Junyao; Tian, Dingkui; Zhang, Xuefeng; Liu, Huaxin; Meng, Libo. A Disturbance Rejection Control Method Based on Deep Reinforcement Learning for a Biped Robot. Applied Sciences-Basel, 2021, 11(04): 1-17.
  • [9] Long, Yinxin; He, Huajin. Robot path planning based on deep reinforcement learning. 2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS), 2020: 151-154.