Video-based robotic surgical action recognition and skills assessment on porcine models using deep learning

Cited by: 0
Authors
Hashemi, Nasseh [1 ,2 ,3 ,9 ]
Mose, Matias [4 ]
Ostergaard, Lasse R. [4 ]
Bjerrum, Flemming [5 ,6 ,7 ]
Hashemi, Mostaan [8 ]
Svendsen, Morten B. S. [5 ]
Friis, Mikkel L. [1 ,2 ]
Tolsgaard, Martin G. [2 ,5 ,7 ]
Rasmussen, Sten [1 ]
Affiliations
[1] Aalborg Univ, Dept Clin Med, Aalborg, Denmark
[2] Aalborg Univ Hosp, Nordsim Ctr Skills Training & Simulat, Aalborg, Denmark
[3] Aalborg Univ Hosp, ROCnord Robot Ctr, Aalborg, Denmark
[4] Aalborg Univ, Dept Hlth Sci & Technol, Aalborg, Denmark
[5] Copenhagen Acad Med Educ & Simulat, Ctr Human Resources & Educ, Copenhagen, Denmark
[6] Copenhagen Univ Hosp Amager & Hvidovre, Surg Sect, Gastrounit, Hvidovre, Denmark
[7] Univ Copenhagen, Dept Clin Med, Copenhagen, Denmark
[8] Aalborg Univ, Dept Comp Sci, Aalborg, Denmark
[9] Aalborg Univ Hosp, Dept Urol, Aalborg, Denmark
Source
SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES | 2025, Vol. 39, Issue 3
Keywords
Robot-assisted surgery; Artificial intelligence; Action recognition; Skills assessment; SURGERY; CURVES
DOI
10.1007/s00464-024-11486-3
CLC Classification Number
R61 [Operative Surgery]
Subject Classification Number
Abstract
Objectives: This study aimed to develop an automated skills assessment tool for surgical trainees using deep learning.
Background: Optimal surgical performance in robot-assisted surgery (RAS) is essential for ensuring good surgical outcomes. This requires effective training of new surgeons, which currently relies on supervision and skill assessment by experienced surgeons. Artificial intelligence (AI) presents an opportunity to augment existing human-based assessments.
Methods: We used a network architecture consisting of a convolutional neural network combined with a long short-term memory (LSTM) layer to create two networks for the extraction and analysis of spatial and temporal features from video recordings of surgical procedures, facilitating action recognition and skill assessment.
Results: 21 participants (16 novices and 5 experienced) performed 16 different intra-abdominal robot-assisted surgical procedures on porcine models. The action recognition network achieved an accuracy of 96.0% in identifying surgical actions. A Grad-CAM filter was used to enhance model interpretability. The skill assessment network had an accuracy of 81.3% in classifying participants as novices or experienced. Procedure plots were created to visualize the skill assessment.
Conclusion: Our study demonstrated that AI can be used to automate surgical action recognition and skill assessment. The use of a porcine model enables effective data collection at different levels of surgical performance, which is normally not available in the clinical setting. Future studies need to test how well AI developed in a porcine setting can detect errors and provide feedback and actionable skills assessment in the clinical setting.
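The Methods describe a convolutional neural network combined with an LSTM layer, used to build two video classifiers: one for surgical action recognition and one for novice/experienced skill classification. The sketch below is a minimal illustration of that general CNN + LSTM pattern in PyTorch, not the authors' implementation; the ResNet-18 backbone, hidden size, clip length, and class counts (16 actions, 2 skill levels) are assumptions chosen for the example, and the Grad-CAM interpretability step mentioned in the Results is not shown.

```python
# Minimal CNN + LSTM video classifier sketch (illustrative only, not the paper's code).
# A pretrained CNN extracts per-frame spatial features, an LSTM aggregates them over
# time, and a linear head predicts either an action class or a skill label.
import torch
import torch.nn as nn
from torchvision import models


class CNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes: int, hidden_size: int = 256):
        super().__init__()
        # Assumed backbone: ResNet-18 pretrained on ImageNet, with the final
        # fully connected layer removed so it outputs pooled 512-d features.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # LSTM turns the sequence of per-frame features into a temporal representation.
        self.lstm = nn.LSTM(feature_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        frames = clips.reshape(b * t, c, h, w)
        feats = self.backbone(frames).reshape(b, t, -1)  # (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                        # logits per video clip


# Example usage with 16-frame clips at 224x224; one model instance per task.
action_model = CNNLSTMClassifier(num_classes=16)  # hypothetical number of action classes
skill_model = CNNLSTMClassifier(num_classes=2)    # novice vs. experienced
logits = action_model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)                               # torch.Size([2, 16])
```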
Pages: 1709-1719
Number of pages: 11