Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning

Cited: 6
Authors
Baldi, Pierre F. [1 ,2 ,3 ,4 ,6 ]
Abdelkarim, Sherif [1 ,2 ]
Liu, Junze [1 ,2 ]
To, Josiah K. [4 ]
Ibarra, Marialejandra Diaz [5 ]
Browne, Andrew W. [3 ,4 ,5 ,6 ]
Affiliations
[1] Univ Calif Irvine, Dept Comp Sci, Irvine, CA USA
[2] Univ Calif Irvine, Inst Genom & Bioinformat, Irvine, CA USA
[3] Univ Calif Irvine, Dept Biomed Engn, Irvine, CA USA
[4] Univ Calif Irvine, Ctr Translat Vis Res, Dept Ophthalmol, Irvine, CA USA
[5] Univ Calif Irvine, Gavin Herbert Eye Inst, Dept Ophthalmol, Irvine, CA USA
[6] Univ Calif Irvine, Dept Comp Sci, 4038 Bren Hall, Irvine, CA 92697 USA
Keywords
artificial intelligence; retina surgery; deep learning; visual function; mobility test; orientation; vision; blind
DOI
10.1167/tvst.12.1.20
Chinese Library Classification
R77 [Ophthalmology]
Subject Classification Code
100212
Abstract
Purpose: To evaluate the potential for artificial intelligence-based video analysis to determine surgical instrument characteristics when moving in the three-dimensional vitreous space.
Methods: We designed and manufactured a model eye in which we recorded choreographed videos of many surgical instruments moving throughout the eye. We labeled each frame of the videos to describe the surgical tool characteristics: tool type, location, depth, and insertional laterality. We trained two different deep learning models to predict each of the tool characteristics and evaluated model performance on a held-out subset of images.
Results: The accuracy of the classification model on the training set was 84% for the x-y region, 97% for depth, 100% for instrument type, and 100% for laterality of insertion. The accuracy on the validation dataset was 83% for the x-y region, 96% for depth, 100% for instrument type, and 100% for laterality of insertion. The close-up detection model runs at 67 frames per second, with precision higher than 75% for most instruments, achieving a mean average precision of 79.3%.
Conclusions: We demonstrated that trained models can track surgical instrument movement in three-dimensional space and determine instrument depth, tip location, insertional laterality, and instrument type. Inference is nearly instantaneous, which justifies further investigation into applying these models to real-world surgical videos.
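The abstract reports separate accuracies for each predicted tool characteristic (x-y region, depth, instrument type, laterality). As a minimal sketch of how such per-task accuracies are computed, assuming hypothetical frame labels and task names (not the paper's actual data or code):

```python
def per_task_accuracy(preds, labels):
    """Fraction of frames where the predicted label matches the true label,
    computed separately for each tool characteristic (task)."""
    return {
        task: sum(p == t for p, t in zip(preds[task], labels[task])) / len(labels[task])
        for task in labels
    }

# Hypothetical per-frame annotations for the four characteristics described
# in the abstract: x-y region, depth, instrument type, insertional laterality.
labels = {
    "region":     ["q1", "q2", "q2", "q4"],
    "depth":      ["mid", "deep", "mid", "shallow"],
    "instrument": ["forceps", "forceps", "cutter", "cutter"],
    "laterality": ["left", "left", "right", "right"],
}
preds = {
    "region":     ["q1", "q3", "q2", "q4"],   # one region error out of four frames
    "depth":      ["mid", "deep", "mid", "shallow"],
    "instrument": ["forceps", "forceps", "cutter", "cutter"],
    "laterality": ["left", "left", "right", "right"],
}

acc = per_task_accuracy(preds, labels)
# acc["region"] is 0.75; the other three tasks score 1.0
```

Reporting accuracy per task, rather than requiring all four characteristics to be correct simultaneously, matches how the abstract breaks down the results.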
Pages: 12