Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning

Cited: 6
Authors
Baldi, Pierre F. [1 ,2 ,3 ,4 ,6 ]
Abdelkarim, Sherif [1 ,2 ]
Liu, Junze [1 ,2 ]
To, Josiah K. [4 ]
Ibarra, Marialejandra Diaz [5 ]
Browne, Andrew W. [3 ,4 ,5 ,6 ]
Affiliations
[1] Univ Calif Irvine, Dept Comp Sci, Irvine, CA USA
[2] Univ Calif Irvine, Inst Genom & Bioinformat, Irvine, CA USA
[3] Univ Calif Irvine, Dept Biomed Engn, Irvine, CA USA
[4] Univ Calif Irvine, Ctr Translat Vis Res, Dept Ophthalmol, Irvine, CA USA
[5] Univ Calif Irvine, Gavin Herbert Eye Inst, Dept Ophthalmol, Irvine, CA USA
[6] Univ Calif Irvine, Dept Comp Sci, 4038 Bren Hall, Irvine, CA 92697 USA
Keywords
artificial intelligence; retina surgery; deep learning; visual function; mobility test; orientation; vision; blind
DOI
10.1167/tvst.12.1.20
Chinese Library Classification
R77 [Ophthalmology]
Discipline Code
100212
Abstract
Purpose: To evaluate the potential for artificial intelligence-based video analysis to determine surgical instrument characteristics when moving in the three-dimensional vitreous space.

Methods: We designed and manufactured a model eye in which we recorded choreographed videos of many surgical instruments moving throughout the eye. We labeled each frame of the videos to describe the surgical tool characteristics: tool type, location, depth, and insertional laterality. We trained two different deep learning models to predict each of the tool characteristics and evaluated model performance on a subset of images.

Results: On the training set, the classification model achieved an accuracy of 84% for the x-y region, 97% for depth, 100% for instrument type, and 100% for laterality of insertion. On the validation dataset, it achieved 83% for the x-y region, 96% for depth, 100% for instrument type, and 100% for laterality of insertion. The close-up detection model runs at 67 frames per second, with precision higher than 75% for most instruments and a mean average precision of 79.3%.

Conclusions: We demonstrated that trained models can track surgical instrument movement in three-dimensional space and determine instrument depth, tip location, insertional laterality, and instrument type. Model performance is nearly instantaneous and justifies further investigation into application to real-world surgical videos.
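As a rough illustration of the per-frame classification task the abstract describes (not the authors' implementation), the four tool characteristics can be framed as a shared backbone feeding one classification head each. The head sizes below (9 x-y regions, 3 depth levels, 5 tool types, 2 insertion sides) and the linear/tanh backbone are hypothetical stand-ins for the paper's CNN:

```python
import numpy as np

# Hypothetical multi-head classifier sketch for the four tool
# characteristics (x-y region, depth, tool type, laterality).
# All sizes and weights here are illustrative, not from the paper.
rng = np.random.default_rng(0)

HEADS = {"xy_region": 9, "depth": 3, "tool_type": 5, "laterality": 2}
EMBED_DIM, FEATURE_DIM = 128, 64

# Shared "backbone" plus one linear head per characteristic.
backbone = rng.normal(size=(EMBED_DIM, FEATURE_DIM))
heads = {name: rng.normal(size=(FEATURE_DIM, n)) for name, n in HEADS.items()}

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(frame_embedding):
    """Map a per-frame embedding to one probability vector per head."""
    features = np.tanh(frame_embedding @ backbone)
    return {name: softmax(features @ w) for name, w in heads.items()}

frame = rng.normal(size=(1, EMBED_DIM))  # stand-in for a CNN frame embedding
probs = predict(frame)
```

Each head yields an independent probability distribution per frame, so one forward pass answers all four questions, which is consistent with the near-real-time performance reported in the Results.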
Pages: 12
Related Articles
(50 records total)
  • [1] Assessing vitreoretinal surgical training experience by leveraging instrument maneuvers and visual attention with deep learning neural networks
    Nespolo, Rogerio
    Nahass, George R.
    Faraji, Mahtab
    Yi, Darvin
    Leiderman, Yannek Isaac
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2024, 65 (07)
  • [2] Feature Tracking and Segmentation in Real Time via Deep Learning in Vitreoretinal Surgery: A Platform for Artificial Intelligence-Mediated Surgical Guidance
    Nespolo, Rogerio Garcia
    Yi, Darvin
    Cole, Emily
    Wang, Daniel
    Warren, Alexis
    Leiderman, Yannek I.
    OPHTHALMOLOGY RETINA, 2023, 7 (03): : 236 - 242
  • [3] Evaluation of Surgical Skills during Robotic Surgery by Deep Learning-Based Multiple Surgical Instrument Tracking in Training and Actual Operations
    Lee, Dongheon
    Yu, Hyeong Won
    Kwon, Hyungju
    Kong, Hyoun-Joong
    Lee, Kyu Eun
    Kim, Hee Chan
    JOURNAL OF CLINICAL MEDICINE, 2020, 9 (06) : 1 - 15
  • [4] A machine learning algorithm for identifying and tracking bacteria in three dimensions using Digital Holographic Microscopy
    Bedrossian, Manuel
    El-Kholy, Marwan
    Neamati, Daniel
    Nadeau, Jay
    AIMS BIOPHYSICS, 2018, 5 (01): : 36 - 49
  • [5] Three-dimensional optical microrobot orientation estimation and tracking using deep learning
    Choudhary, Sunil
    Sadak, Ferhat
    Gerena, Edison
    Haliyo, Sinan
    ROBOTICA, 2024
  • [6] Deep Learning for Instrument Detection and Assessment of Operative Skill in Surgical Videos
    Lam, Kyle
    Lo, Frank P-W
    An, Yujian
    Darzi, Ara
    Kinross, James M.
    Purkayastha, Sanjay
    Lo, Benny
    IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2022, 4 (04): : 1068 - 1071
  • [7] Manual tracking in three dimensions
    Mrotek, Leigh A.
    Gielen, C. C. A. M.
    Flanders, Martha
    EXPERIMENTAL BRAIN RESEARCH, 2006, 171 (01) : 99 - 115
  • [9] Microscale tracking of surgical instrument motion
    Riviere, CN
    Khosla, PK
    MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, MICCAI'99, PROCEEDINGS, 1999, 1679 : 1080 - 1087
  • [10] Musical Instrument Identification Using Deep Learning Approach
    Blaszke, Maciej
    Kostek, Bozena
    SENSORS, 2022, 22 (08)