Development of an Artificial Intelligence-Based Image Recognition System for Time-Sequence Analysis of Tracheal Intubation

Cited by: 2
|
Authors
Wu, Yu-Hwa [1 ]
Huang, Kun-Yi [2 ]
Tseng, Alex Chia-Chih [3 ]
Affiliations
[1] Natl Cheng Kung Univ, Natl Cheng Kung Univ Hosp, Coll Med, Dept Anesthesia, Tainan, Taiwan
[2] Southern Taiwan Univ Sci & Technol, Dept Comp Sci & Informat Engn, Tainan, Taiwan
[3] Natl Cheng Kung Univ, Coll Med, Natl Cheng Kung Univ Hosp, Dept Anesthesiol, 138 Sheng Li Rd, Tainan 704, Taiwan
Source
ANESTHESIA AND ANALGESIA | 2024, Vol. 139, No. 2
Keywords
CERVICAL-SPINE IMMOBILIZATION; ENDOTRACHEAL INTUBATION; VIDEO LARYNGOSCOPY;
DOI
10.1213/ANE.0000000000006934
Chinese Library Classification
R614 [Anesthesiology];
Discipline Code
100217;
Abstract
BACKGROUND: Total intubation time (TIT) is an objective indicator of tracheal intubation (TI) difficulty. However, large variations in TIT caused by diverse initial and end targets make it difficult to compare studies. A video laryngoscope (VLS) can capture images during the TI process. By using artificial intelligence (AI) to detect airway structures, the start and end points can be freely selected, thus eliminating these inconsistencies. Further deconstructing the process and establishing a time-sequence analysis may aid in gaining a deeper understanding of the TI process.
METHODS: We developed a time-sequencing system for analyzing TI performed using a #3 Macintosh VLS. The system was established using 30 easy TIs performed by specialists and further validated using TI videos of intubations performed by a postgraduate-year (PGY) physician. The 30 easy intubation videos were selected from a cohort approved by our institutional review board (B-ER-107-088), and 6 targets were labeled: the lip, epiglottis, laryngopharynx, glottic opening, tube tip, and a black line on the endotracheal tube. We used 887 captured images to develop an AI model trained with You Only Look Once, Version 3 (YOLOv3). Seven cut points were selected for phase division; these cut points were also identified by 7 experts, and the expert-identified cut points were used to validate the AI-identified cut points and time-sequence data. After removal of the tube tip and laryngopharynx targets, the durations between the 5 remaining cut points were calculated, sequentially yielding the durations of 4 intubation phases as well as TIT.
RESULTS: The average and total losses approached 0 within 150 cycles of model training for target identification. The identification rate for all cut points was 92.4% (194 of 210), which increased to 99.4% (179 of 180) after removal of the tube tip target. The 4 phase durations and TIT calculated by the AI model and those determined by the experts exhibited strong Pearson correlations (phase I, r = 0.914; phase II, r = 0.868; phase III, r = 0.964; phase IV, r = 0.949; TIT, r = 0.99; all P < .001). Similar findings were obtained for the TI videos of the PGY physician (r > 0.95; P < .01).
CONCLUSIONS: YOLOv3 is a powerful tool for analyzing images recorded by a VLS. By using AI to detect airway structures, the start and end points can be freely selected, resolving the heterogeneity caused by inconsistent TIT cut points across studies. Time-sequence analysis involving the deconstruction of VLS-recorded TI images into several phases should be conducted in further TI research.
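Illustrative implementation note: the record above does not include the authors' code. The sketch below (Python, written for this summary) shows one plausible way to turn per-frame detections from a trained detector such as YOLOv3 into cut-point timestamps, phase durations, and TIT, and to validate them against expert timings with a Pearson correlation, as described in the abstract. The label list and the detect_targets placeholder are hypothetical assumptions, not the study's actual definitions.

# Minimal sketch (not the authors' code): derive cut-point timestamps and phase
# durations from per-frame detections, then compare them with expert annotations.
import cv2                        # OpenCV, for reading the VLS video
from scipy.stats import pearsonr  # Pearson correlation for validation

# Hypothetical cut-point labels in the order they should appear during intubation;
# the study's actual cut-point definitions are described in the full paper.
CUT_POINT_LABELS = ["lip", "epiglottis", "glottic_opening", "black_line", "tube_advanced"]

def detect_targets(frame):
    """Placeholder for YOLOv3 inference; should return the set of
    airway-structure labels detected in one video frame."""
    raise NotImplementedError

def cut_point_times(video_path):
    """Return the first time (in seconds) at which each cut-point label is detected."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    first_seen = {}
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for label in detect_targets(frame):
            first_seen.setdefault(label, frame_idx / fps)  # keep earliest detection only
        frame_idx += 1
    cap.release()
    return [first_seen[label] for label in CUT_POINT_LABELS]

def phase_durations(times):
    """Durations of the 4 phases between 5 consecutive cut points, plus TIT."""
    phases = [t1 - t0 for t0, t1 in zip(times, times[1:])]
    return phases, times[-1] - times[0]

def validate(ai_durations, expert_durations):
    """Pearson correlation (r, P value) between AI- and expert-derived durations."""
    return pearsonr(ai_durations, expert_durations)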
Pages: 357-365 (9 pages)