Video-based robotic surgical action recognition and skills assessment on porcine models using deep learning

Cited by: 0
Authors
Hashemi, Nasseh [1 ,2 ,3 ,9 ]
Mose, Matias [4 ]
Ostergaard, Lasse R. [4 ]
Bjerrum, Flemming [5 ,6 ,7 ]
Hashemi, Mostaan [8 ]
Svendsen, Morten B. S. [5 ]
Friis, Mikkel L. [1 ,2 ]
Tolsgaard, Martin G. [2 ,5 ,7 ]
Rasmussen, Sten [1 ]
Affiliations
[1] Aalborg Univ, Dept Clin Med, Aalborg, Denmark
[2] Aalborg Univ Hosp, Nordsim Ctr Skills Training & Simulat, Aalborg, Denmark
[3] Aalborg Univ Hosp, ROCnord Robot Ctr, Aalborg, Denmark
[4] Aalborg Univ, Dept Hlth Sci & Technol, Aalborg, Denmark
[5] Copenhagen Acad Med Educ & Simulat, Ctr Human Resources & Educ, Copenhagen, Denmark
[6] Copenhagen Univ Hosp Amager & Hvidovre, Surg Sect, Gastrounit, Hvidovre, Denmark
[7] Univ Copenhagen, Dept Clin Med, Copenhagen, Denmark
[8] Aalborg Univ, Dept Comp Sci, Aalborg, Denmark
[9] Aalborg Univ Hosp, Dept Urol, Aalborg, Denmark
Source
SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES | 2025, Vol. 39, No. 3
Keywords
Robot-assisted surgery; Artificial intelligence; Action recognition; Skills assessment; SURGERY; CURVES
DOI
10.1007/s00464-024-11486-3
Chinese Library Classification
R61 [Operative Surgery]
Abstract
Objectives: This study aimed to develop an automated skills assessment tool for surgical trainees using deep learning.
Background: Optimal surgical performance in robot-assisted surgery (RAS) is essential for ensuring good surgical outcomes. This requires effective training of new surgeons, which currently relies on supervision and skill assessment by experienced surgeons. Artificial intelligence (AI) presents an opportunity to augment existing human-based assessments.
Methods: We used a network architecture consisting of a convolutional neural network combined with a long short-term memory (LSTM) layer to create two networks for the extraction and analysis of spatial and temporal features from video recordings of surgical procedures, facilitating action recognition and skill assessment.
Results: 21 participants (16 novices and 5 experienced surgeons) performed 16 different intra-abdominal robot-assisted surgical procedures on porcine models. The action recognition network achieved an accuracy of 96.0% in identifying surgical actions; a Grad-CAM filter was used to enhance model interpretability. The skill assessment network achieved an accuracy of 81.3% in classifying novices versus experienced surgeons. Procedure plots were created to visualize the skill assessment.
Conclusion: Our study demonstrated that AI can be used to automate surgical action recognition and skill assessment. The use of a porcine model enables effective data collection at different levels of surgical performance, which is normally not available in the clinical setting. Future studies need to test how well AI developed in a porcine setting can detect errors and provide feedback and actionable skills assessment in the clinical setting.
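The abstract describes the general pattern of a CNN extracting per-frame spatial features that an LSTM then aggregates over time into a clip-level classification (e.g. novice vs. experienced). The minimal NumPy sketch below illustrates that pattern only; it is not the authors' network. The toy convolution, filter counts, hidden size, and random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive valid 2-D convolution (toy stand-in for a CNN layer)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def frame_features(frame, kernels):
    """CNN stand-in: one conv layer + ReLU + global average pooling."""
    return np.array([np.maximum(conv2d_valid(frame, k), 0).mean() for k in kernels])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, Wx, Wh, b, hidden):
    """Single-layer LSTM over the sequence of per-frame feature vectors."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = Wx @ x + Wh @ h + b  # all four gates stacked: i, f, o, g
        i = sigmoid(z[:hidden])
        f = sigmoid(z[hidden:2 * hidden])
        o = sigmoid(z[2 * hidden:3 * hidden])
        g = np.tanh(z[3 * hidden:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h  # final hidden state summarises the whole clip

# Toy "video": 8 frames of 16x16 grayscale noise.
video = rng.normal(size=(8, 16, 16))
kernels = rng.normal(size=(4, 3, 3))            # 4 toy conv filters
feat_dim, hidden, n_classes = 4, 8, 2           # 2 classes: novice / experienced

Wx = rng.normal(scale=0.1, size=(4 * hidden, feat_dim))
Wh = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
W_out = rng.normal(scale=0.1, size=(n_classes, hidden))

feats = [frame_features(f, kernels) for f in video]  # spatial features per frame
h_final = lstm_forward(feats, Wx, Wh, b, hidden)     # temporal aggregation
logits = W_out @ h_final
probs = np.exp(logits) / np.exp(logits).sum()        # softmax over skill classes
print(probs)
```

In a real system the toy convolution would be replaced by a pretrained CNN backbone and the weights would be learned from labelled surgical video; the point here is only the spatial-then-temporal factorisation.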
Pages: 1709-1719
Page count: 11