Guidewire feeding method based on deep reinforcement learning for vascular intervention robot

Cited by: 1
Authors
Yang, Deai [1 ]
Song, Jingzhou [1 ]
Hu, Yuhang [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Automat, 10 Xitucheng Rd, Beijing, Peoples R China
Source
PROCEEDINGS OF 2022 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2022) | 2022
Keywords
vascular interventional surgery robot; autonomous guidewire feeding; deep reinforcement learning; guidewire detection
DOI
10.1109/ICMA54519.2022.9856351
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
Vascular interventional surgery offers minimal trauma, rapid postoperative recovery, and a high safety factor, and is widely used to treat various cardiovascular diseases. To reduce doctors' exposure to X-ray radiation, existing vascular interventional surgery robots are mainly operated by human teleoperation, so achieving autonomous, intelligent guidewire feeding has become an attractive problem for researchers. This paper proposes an autonomous guidewire feeding method for interventional surgery based on the deep reinforcement learning framework SAC (Soft Actor-Critic). We adopt the object detection algorithm YOLOv5s to locate the guidewire tip, which is used to update the state of the SAC agent. Guidewire feeding experiments on a vessel phantom verified that the SAC method can complete the guidewire feeding task in interventional surgery: the guidewire tip detection accuracy reaches at least 92.9%, and the guidewire feeding error is less than 5 mm. Further research will combine the guidewire with a guide catheter to solve more complex surgical tasks.
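The abstract describes an agent whose state comes from a detected guidewire-tip position and whose actions are feed commands. As a minimal sketch of that interaction loop, the toy 1-D environment below stands in for the vessel phantom; the class name, dynamics, and the proportional policy used in place of a trained SAC actor are all illustrative assumptions, not the authors' implementation (their state comes from YOLOv5s detections on fluoroscopy-like images, and their policy is a trained SAC network).

```python
class GuidewireFeedEnv:
    """Toy 1-D guidewire-feeding environment (illustrative assumption).

    State: detected tip position along the vessel axis, in mm, standing in
    for the YOLOv5s tip detection described in the paper.
    Action: a signed per-step feed increment, in mm.
    """

    def __init__(self, target_mm=100.0, tolerance_mm=5.0, max_steps=200):
        self.target = target_mm       # desired tip position
        self.tol = tolerance_mm       # 5 mm feeding-accuracy goal from the paper
        self.max_steps = max_steps
        self.reset()

    def reset(self):
        self.tip = 0.0
        self.steps = 0
        return self.tip

    def step(self, feed_mm):
        # Clamp the per-step feed, as a hardware safety limit would.
        feed_mm = max(-2.0, min(2.0, feed_mm))
        self.tip += feed_mm
        self.steps += 1
        error = abs(self.target - self.tip)
        reward = -error               # dense reward: closer tip -> higher reward
        done = error < self.tol or self.steps >= self.max_steps
        return self.tip, reward, done


# A trivial proportional "policy" in place of a trained SAC actor:
env = GuidewireFeedEnv()
state, done = env.reset(), False
while not done:
    action = 0.1 * (env.target - state)   # feed toward the target
    state, reward, done = env.step(action)
```

In a real training setup this `reset`/`step` interface is what an off-the-shelf SAC implementation would interact with; the loop above only shows that the environment terminates once the tip is within the 5 mm tolerance.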
Pages: 1287-1293 (7 pages)
Related papers (50 total)
  • [31] Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning
    Hua, Jiang
    Zeng, Liangcai
    Li, Gongfa
    Ju, Zhaojie
    SENSORS, 2021, 21 (04) : 1 - 21
  • [32] Node selection method in federated learning based on deep reinforcement learning
    He W.
    Guo S.
    Qiu X.
    Chen L.
    Zhang S.
    Tongxin Xuebao/Journal on Communications, 2021, 42 (06): : 62 - 71
  • [33] Online Optimization Method of Controller Parameters for Robot Constant Force Grinding Based on Deep Reinforcement Learning Rainbow
    Zhang, Tie
    Yuan, Chao
    Zou, Yanbiao
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 105 (04)
  • [35] Path Planning for Mobile Robot Based on Deep Reinforcement Learning and Fuzzy Control
    Liu, Chunling
    Xu, Jun
    Guo, Kaiwen
    2022 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, COMPUTER VISION AND MACHINE LEARNING (ICICML), 2022, : 533 - 537
  • [36] Robot path planner based on deep reinforcement learning and the seeker optimization algorithm
    Xing, Xiangrui
    Ding, Hongwei
    Liang, Zhuguan
    Li, Bo
    Yang, Zhijun
    MECHATRONICS, 2022, 88
  • [37] Deep Reinforcement Learning Based Collision Avoidance Algorithm for Differential Drive Robot
    Lu, Xinglong
    Cao, Yiwen
    Zhao, Zhonghua
    Yan, Yilin
    INTELLIGENT ROBOTICS AND APPLICATIONS (ICIRA 2018), PT I, 2018, 10984 : 186 - 198
  • [38] SPSD: Semantics and Deep Reinforcement Learning Based Motion Planning for Supermarket Robot
    Cai, Jialun
    Huang, Weibo
    You, Yingxuan
    Chen, Zhan
    Ren, Bin
    Liu, Hong
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2023, E106D (05) : 765 - 772
  • [39] Sensor-based Mobile Robot Navigation via Deep Reinforcement Learning
    Han, Seungho-Ho
    Choi, Ho-Jin
    Benz, Philipp
    Loaiciga, Jorge
    2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP), 2018, : 147 - 154
  • [40] Gearshifting Strategy for Robot-driven Vehicles Based on Deep Reinforcement Learning
    Zhou N.
    Chen G.
    Qiche Gongcheng/Automotive Engineering, 2020, 42 (11): : 1473 - 1481