Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning

Cited by: 6
Authors
Baldi, Pierre F. [1 ,2 ,3 ,4 ,6 ]
Abdelkarim, Sherif [1 ,2 ]
Liu, Junze [1 ,2 ]
To, Josiah K. [4 ]
Ibarra, Marialejandra Diaz [5 ]
Browne, Andrew W. [3 ,4 ,5 ,6 ]
Affiliations
[1] Univ Calif Irvine, Dept Comp Sci, Irvine, CA USA
[2] Univ Calif Irvine, Inst Genom & Bioinformat, Irvine, CA USA
[3] Univ Calif Irvine, Dept Biomed Engn, Irvine, CA USA
[4] Univ Calif Irvine, Ctr Translat Vis Res, Dept Ophthalmol, Irvine, CA USA
[5] Univ Calif Irvine, Gavin Herbert Eye Inst, Dept Ophthalmol, Irvine, CA USA
[6] Univ Calif Irvine, Dept Comp Sci, 4038 Bren Hall, Irvine, CA 92697 USA
Keywords
artificial intelligence; retina surgery; deep learning; visual function; mobility test; orientation; vision; blind
DOI
10.1167/tvst.12.1.20
Chinese Library Classification (CLC) Number
R77 [Ophthalmology]
Subject Classification Code
100212
Abstract
Purpose: To evaluate the potential of artificial intelligence-based video analysis to determine the characteristics of surgical instruments moving in the three-dimensional vitreous space.

Methods: We designed and manufactured a model eye in which we recorded choreographed videos of many surgical instruments moving throughout the eye. We labeled each frame of the videos to describe the surgical tool characteristics: tool type, location, depth, and insertional laterality. We trained two different deep learning models to predict each of the tool characteristics and evaluated model performance on a subset of images.

Results: The accuracy of the classification model on the training set is 84% for the x-y region, 97% for depth, 100% for instrument type, and 100% for laterality of insertion. The accuracy of the classification model on the validation dataset is 83% for the x-y region, 96% for depth, 100% for instrument type, and 100% for laterality of insertion. The close-up detection model runs at 67 frames per second, with precision above 75% for most instruments and a mean average precision of 79.3%.

Conclusions: We demonstrated that trained models can track surgical instrument movement in three-dimensional space and determine instrument depth, tip location, insertional laterality, and instrument type. Model inference is nearly instantaneous, which justifies further investigation into application to real-world surgical videos.
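As a rough illustration only (the abstract does not disclose the network architecture), a multi-task frame classifier of the kind described could be sketched in PyTorch as a shared image backbone with one classification head per tool characteristic (x-y region, depth, instrument type, insertional laterality). All class counts, layer choices, and names below (ToolCharacteristicClassifier, n_regions, and so on) are hypothetical assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision import models


class ToolCharacteristicClassifier(nn.Module):
    """Hypothetical sketch: shared CNN backbone with four classification heads."""

    def __init__(self, n_regions=9, n_depths=3, n_tools=5, n_lateralities=2):
        super().__init__()
        # Shared image backbone; weights=None keeps the sketch self-contained.
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # One linear head per tool characteristic (class counts are assumptions).
        self.region_head = nn.Linear(feat_dim, n_regions)
        self.depth_head = nn.Linear(feat_dim, n_depths)
        self.tool_head = nn.Linear(feat_dim, n_tools)
        self.laterality_head = nn.Linear(feat_dim, n_lateralities)

    def forward(self, frames):
        feats = self.backbone(frames)
        return {
            "region": self.region_head(feats),
            "depth": self.depth_head(feats),
            "tool": self.tool_head(feats),
            "laterality": self.laterality_head(feats),
        }


if __name__ == "__main__":
    model = ToolCharacteristicClassifier()
    frames = torch.randn(4, 3, 224, 224)  # batch of labeled video frames
    logits = model(frames)
    # Training would sum a cross-entropy loss over the four heads.
    print({name: out.shape for name, out in logits.items()})

A separate object detector (for example, a single-stage detector of the YOLO family) would typically be used for the close-up, per-frame instrument detection reported at 67 frames per second; the paper's abstract does not state which detector was used.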
Pages: 12