Video-based robotic surgical action recognition and skills assessment on porcine models using deep learning

Cited by: 0
Authors
Hashemi, Nasseh [1 ,2 ,3 ,9 ]
Mose, Matias [4 ]
Ostergaard, Lasse R. [4 ]
Bjerrum, Flemming [5 ,6 ,7 ]
Hashemi, Mostaan [8 ]
Svendsen, Morten B. S. [5 ]
Friis, Mikkel L. [1 ,2 ]
Tolsgaard, Martin G. [2 ,5 ,7 ]
Rasmussen, Sten [1 ]
Affiliations
[1] Aalborg Univ, Dept Clin Med, Aalborg, Denmark
[2] Aalborg Univ Hosp, Nordsim Ctr Skills Training & Simulat, Aalborg, Denmark
[3] Aalborg Univ Hosp, ROCnord Robot Ctr, Aalborg, Denmark
[4] Aalborg Univ, Dept Hlth Sci & Technol, Aalborg, Denmark
[5] Copenhagen Acad Med Educ & Simulat, Ctr Human Resources & Educ, Copenhagen, Denmark
[6] Copenhagen Univ Hosp Amager & Hvidovre, Surg Sect, Gastrounit, Hvidovre, Denmark
[7] Univ Copenhagen, Dept Clin Med, Copenhagen, Denmark
[8] Aalborg Univ, Dept Comp Sci, Aalborg, Denmark
[9] Aalborg Univ Hosp, Dept Urol, Aalborg, Denmark
Source
SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES | 2025, Vol. 39, No. 3
Keywords
Robot-assisted surgery; Artificial intelligence; Action recognition; Skills assessment; SURGERY; CURVES;
DOI
10.1007/s00464-024-11486-3
Chinese Library Classification (CLC)
R61 [Operative Surgery];
Subject Classification
Abstract
Objectives: This study aimed to develop an automated skills assessment tool for surgical trainees using deep learning.
Background: Optimal surgical performance in robot-assisted surgery (RAS) is essential for ensuring good surgical outcomes. This requires effective training of new surgeons, which currently relies on supervision and skill assessment by experienced surgeons. Artificial intelligence (AI) presents an opportunity to augment existing human-based assessments.
Methods: We used a network architecture consisting of a convolutional neural network (CNN) combined with a long short-term memory (LSTM) layer to create two networks for extracting and analyzing spatial and temporal features from video recordings of surgical procedures, enabling action recognition and skill assessment.
Results: 21 participants (16 novices and 5 experienced) performed 16 different intra-abdominal robot-assisted surgical procedures on porcine models. The action recognition network achieved an accuracy of 96.0% in identifying surgical actions. A Grad-CAM filter was used to enhance model interpretability. The skill assessment network classified novices and experienced surgeons with an accuracy of 81.3%. Procedure plots were created to visualize the skill assessment.
Conclusion: Our study demonstrated that AI can be used to automate surgical action recognition and skill assessment. The use of a porcine model enables effective data collection at different levels of surgical performance, which is normally not available in the clinical setting. Future studies need to test how well AI developed in a porcine setting can detect errors and provide feedback and actionable skills assessment in the clinical setting.
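The CNN+LSTM pipeline described in the Methods can be sketched minimally as follows. This is an illustrative toy only: the paper's actual architecture, layer sizes, training procedure, and input resolution are not given here, and the per-frame "CNN" is stubbed as a fixed random projection. The idea it demonstrates is the one the abstract names: extract a spatial feature vector per video frame, aggregate the sequence of features over time with an LSTM, and classify the clip (e.g. novice vs. experienced) from the final hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single-layer LSTM applied over a sequence of per-frame feature vectors."""
    def __init__(self, in_dim, hidden):
        self.hidden = hidden
        # Stacked weights for the input, forget, cell, and output gates.
        self.W = rng.standard_normal((4 * hidden, in_dim + hidden)) * 0.1
        self.b = np.zeros(4 * hidden)

    def forward(self, seq):
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        for x in seq:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h  # final hidden state summarizes the whole clip

# 30 synthetic grayscale frames stand in for a surgical video clip.
frames = rng.standard_normal((30, 64, 64))
# Stand-in for CNN spatial features: a fixed linear projection per frame.
cnn_proj = rng.standard_normal((128, 64 * 64)) * 0.01
features = [cnn_proj @ f.ravel() for f in frames]

lstm = TinyLSTM(in_dim=128, hidden=32)
head = rng.standard_normal((2, 32))  # two classes: novice vs. experienced
logits = head @ lstm.forward(features)
pred = int(np.argmax(logits))
print(pred)  # predicted class index
```

In practice the random projection would be replaced by a trained CNN backbone, and the whole network would be trained end-to-end on labeled clips; the temporal-aggregation structure, however, is as shown.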
Pages: 1709-1719
Page count: 11
Related papers (50 in total)
  • [31] Multilevel effective surgical workflow recognition in robotic left lateral sectionectomy with deep learning: experimental research
    Liu, Yanzhe
    Zhao, Shang
    Zhang, Gong
    Zhang, Xiuping
    Hu, Minggen
    Zhang, Xuan
    Li, Chenggang
    Zhou, S. Kevin
    Liu, Rong
    INTERNATIONAL JOURNAL OF SURGERY, 2023, 109 (10) : 2941 - 2952
  • [32] Examining validity evidence for a simulation-based assessment tool for basic robotic surgical skills
    Havemann, Maria Cecilie
    Dalsgaard, Torur
    Sorensen, Jette Led
    Rossaak, Kristin
    Brisling, Steffen
    Mosgaard, Berit Jul
    Hogdall, Claus
    Bjerrum, Flemming
    JOURNAL OF ROBOTIC SURGERY, 2019, 13 (01) : 99 - 106
  • [33] A Video-based Automated Tracking and Analysis System of Plaque Burden in Carotid Artery using Deep Learning: A Comparison with Senior Sonographers
    Gao, Wenjing
    Liu, Mengmeng
    Xu, Jinfeng
    Hong, Shaofu
    Chen, Jiayi
    Cui, Chen
    Shi, Siyuan
    Dong, Yinghui
    Song, Di
    Dong, Fajin
    CURRENT MEDICAL IMAGING, 2024, 20
  • [34] Surgical Phase Recognition of Short Video Shots based on Temporal Modeling of Deep Features
    Loukas, Constantinos
    BIOIMAGING: PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES, VOL 2, 2019, : 21 - 29
  • [35] Automated recognition of objects and types of forceps in surgical images using deep learning
    Bamba, Yoshiko
    Ogawa, Shimpei
    Itabashi, Michio
    Kameoka, Shingo
    Okamoto, Takahiro
    Yamamoto, Masakazu
    SCIENTIFIC REPORTS, 2021, 11 (01)
  • [36] Research on Human Action Feature Detection and Recognition Algorithm Based on Deep Learning
    Wu, Zhipan
    Du, Huaying
    MOBILE INFORMATION SYSTEMS, 2022, 2022
  • [37] Video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization
    Pranathi, Kasani
    Jagini, Naga Padmaja
    Ramaraj, Satishkumar
    Jeyaraman, Deepa
    ACTA IMEKO, 2022, 11 (02):
  • [38] Using Feature Visualisation for Explaining Deep Learning Models in Visual Speech Recognition
    Santos, Timothy Israel
    Abel, Andrew
    2019 4TH IEEE INTERNATIONAL CONFERENCE ON BIG DATA ANALYTICS (ICBDA 2019), 2019, : 231 - 235
  • [39] Robotic Minimally Invasive Surgical Skill Assessment based on Automated Video-Analysis Motion Studies
    Jun, Seung-Kook
    Narayanan, Madusudanan Sathia
    Agarwal, Priyanshu
    Eddib, Abeer
    Singhal, Pankaj
    Garimella, Sudha
    Krovi, Venkat
    2012 4TH IEEE RAS & EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL ROBOTICS AND BIOMECHATRONICS (BIOROB), 2012, : 25 - 31
  • [40] Validity of video-based general and procedure-specific self-assessment tools for surgical trainees in laparoscopic cholecystectomy
    Balvardi, Saba
    Semsar-Kazerooni, Koorosh
    Kaneva, Pepa
    Mueller, Carmen
    Vassiliou, Melina
    Al Mahroos, Mohammed
    Fiore, Julio F., Jr.
    Schwartzman, Kevin
    Feldman, Liane S.
    SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES, 2023, 37 (03): : 2281 - 2289