Toward Force Estimation in Robot-Assisted Surgery using Deep Learning with Vision and Robot State

Cited by: 25
Authors
Chua, Zonghe [1 ]
Jarc, Anthony M. [2 ]
Okamura, Allison M. [1 ]
Affiliations
[1] Stanford Univ, Dept Mech Engn, Stanford, CA 94305 USA
[2] Intuitive Surgical Inc, Sunnyvale, CA 94086 USA
Source
2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021) | 2021
Keywords
FEEDBACK; VALIDATION; SKILLS;
DOI
10.1109/ICRA48506.2021.9560945
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Knowledge of interaction forces during teleoperated robot-assisted surgery could be used to enable force feedback to users and evaluate tissue handling skill. However, direct force sensing at the end-effector is challenging because it requires biocompatible, sterilizable, and cost-effective sensors. Vision-based neural networks are a promising approach for providing useful force estimates, though questions remain about generalization to new scenarios and real-time inference. We present a force estimation neural network that uses RGB images and robot state as inputs. Using a self-collected dataset, we compared the network to variants that included only a single input type, and evaluated how they generalized to new viewpoints, workspace positions, materials, and tools. We found that the vision-only network was sensitive to shifts in viewpoint, while networks with state inputs were sensitive to vertical shifts in the workspace. The network with both state and vision inputs had the highest accuracy for an unseen tool, while the state-only network was most accurate for an unseen material. Through feature removal studies, we found that using only force features produced better accuracy than using only kinematic features as input. The network with both state and vision inputs outperformed a physics-based model in accuracy for the seen material, and it showed accuracy comparable to a recurrent neural network with faster computation times, making it better suited for real-time applications.
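The architecture summarized in the abstract can be pictured as two input branches (RGB image and robot state) fused into a single force-regression head. Below is a minimal sketch, assuming a PyTorch-style implementation with illustrative layer sizes and an assumed robot-state dimension (state_dim); it is not the authors' released code, only an example of the vision-plus-state fusion idea.

import torch
import torch.nn as nn

class VisionStateForceNet(nn.Module):
    def __init__(self, state_dim: int = 28, force_dim: int = 3):
        super().__init__()
        # Vision branch: small convolutional encoder for the RGB endoscope frame.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 64)
        )
        # State branch: MLP over the robot-state vector (dimension is illustrative).
        self.state = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Fusion head: concatenated features -> Cartesian force estimate.
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 64), nn.ReLU(),
            nn.Linear(64, force_dim),
        )

    def forward(self, image: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vision(image), self.state(state)], dim=1)
        return self.head(fused)

# Example usage with dummy inputs: a batch of 4 RGB frames and 4 state vectors.
net = VisionStateForceNet()
force = net(torch.randn(4, 3, 64, 64), torch.randn(4, 28))
print(force.shape)  # torch.Size([4, 3])

Dropping either branch (and its features from the concatenation) yields the vision-only or state-only variants that the paper compares against the combined model.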
Pages: 12335 - 12341
Number of pages: 7