A First Evaluation of a Multi-Modal Learning System to Control Surgical Assistant Robots via Action Segmentation

Cited by: 13
Authors
De Rossi, Giacomo [1 ]
Minelli, Marco [2 ]
Roin, Serena [1 ]
Falezza, Fabio [1 ]
Sozzi, Alessio [3 ]
Ferraguti, Federica [2 ]
Setti, Francesco [1 ]
Bonfe, Marcello [3 ]
Secchi, Cristian [2 ]
Muradore, Riccardo [1 ]
Affiliations
[1] Univ Verona, Dept Comp Sci, I-37134 Verona, Italy
[2] Univ Modena & Reggio Emilia, Dept Engn Sci & Methods, I-42122 Reggio Emilia, Italy
[3] Univ Ferrara, Dept Engn, I-44122 Ferrara, Italy
Source
IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS | 2021, Vol. 3, No. 3
Funding
European Union Horizon 2020
Keywords
Medical robotics; cognitive robotics; R-MIS; action segmentation; model-predictive control;
DOI
10.1109/TMRB.2021.3082210
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that demand high precision and/or involve considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically on every operation. Such an approach, however, inevitably fails to adapt to changing environments or to different human companions. In a surgical scenario, the greatest variability lies in the timing of the actions performed within the same phase. This paper presents a cognitive control architecture in which a multi-modal neural network, trained on a cooperative task performed by human surgeons, produces an action segmentation that supplies the required timing of the actions, while full control of phase execution is retained by a deterministic Supervisory Controller and execution safety is guaranteed by a velocity-constrained Model-Predictive Controller.
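As a concrete illustration of the velocity-constrained model-predictive control mentioned in the abstract, the following minimal sketch poses one MPC step as a convex program in Python with cvxpy. The single-integrator model, horizon, cost weights, velocity bound, and the function name mpc_step are illustrative assumptions made for this sketch; they are not the formulation used in the paper.

# Minimal sketch of one velocity-constrained MPC step (illustrative assumptions,
# not the controller formulated in the paper).
import numpy as np
import cvxpy as cp

def mpc_step(x0, x_ref, dt=0.05, horizon=10, v_max=0.1, q=1.0, r=0.01):
    """Return the first velocity command of a plan driving x0 toward x_ref."""
    n = x0.shape[0]                       # state dimension (e.g., tool-tip position)
    x = cp.Variable((horizon + 1, n))     # predicted states
    v = cp.Variable((horizon, n))         # commanded velocities

    cost = 0
    constraints = [x[0] == x0]
    for k in range(horizon):
        cost += q * cp.sum_squares(x[k + 1] - x_ref)    # tracking error
        cost += r * cp.sum_squares(v[k])                # control effort
        constraints += [x[k + 1] == x[k] + dt * v[k]]   # single-integrator dynamics
        constraints += [cp.norm(v[k], "inf") <= v_max]  # velocity (safety) bound

    cp.Problem(cp.Minimize(cost), constraints).solve()
    return v.value[0]

# Example: move a 3-D tool tip toward a target while respecting the velocity bound.
print(mpc_step(np.zeros(3), np.array([0.05, 0.02, -0.01])))

In a receding-horizon scheme, only the first command of the optimized plan would be applied before re-solving at the next sampling instant; the explicit velocity bound is what keeps the autonomous motion within safe limits regardless of the timing predicted by the action-segmentation network.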
Pages: 714-724
Page count: 11