Video-Based Deep Learning to Detect Dyssynergic Defecation with 3D High-Definition Anorectal Manometry

Cited by: 6
Authors
Levy, Joshua J. [1 ,2 ,3 ,6 ]
Navas, Christopher M. [1 ]
Chandra, Joan A. [1 ]
Christensen, Brock C. [3 ,4 ,5 ]
Vaickus, Louis J. [6 ]
Curley, Michael [1 ]
Chey, William D. [7 ]
Baker, Jason R. [8 ]
Shah, Eric D. [1 ]
Affiliations
[1] Dartmouth Hitchcock Hlth, Sect Gastroenterol & Hepatol, One Med Ctr Dr, Lebanon, NH 03756 USA
[2] Geisel Sch Med Dartmouth, Quantitat Biomed Sci, Lebanon, NH USA
[3] Geisel Sch Med Dartmouth, Dept Epidemiol, Lebanon, NH USA
[4] Geisel Sch Med Dartmouth, Dept Pharmacol & Toxicol, Lebanon, NH USA
[5] Geisel Sch Med Dartmouth, Dept Community & Family Med, Lebanon, NH USA
[6] Dartmouth Hitchcock Hlth, Emerging Diagnost & Invest Technol, Dept Pathol & Lab Med, Lebanon, NH USA
[7] Michigan Med, Div Gastroenterol & Hepatol, Ann Arbor, MI USA
[8] Atrium Hlth, Atrium Motil Lab, Div Gastroenterol, Charlotte, NC USA
Funding
U.S. National Institutes of Health (NIH);
Keywords
Artificial intelligence; Machine learning; Gastrointestinal motility; Anorectal disorders; Artificial neural network; ARTIFICIAL-INTELLIGENCE; UNITED-STATES; CONSTIPATION; BURDEN;
DOI
10.1007/s10620-022-07759-3
Chinese Library Classification (CLC)
R57 [Diseases of the digestive system and abdomen];
Subject classification code
Abstract
Background: We developed a deep learning algorithm to evaluate defecatory patterns and identify dyssynergic defecation using 3-dimensional high-definition anorectal manometry (3D-HDAM).
Aims: We developed a 3D-HDAM deep learning algorithm to evaluate for dyssynergia.
Methods: Spatial-temporal data were extracted from consecutive 3D-HDAM studies performed between 2018 and 2020 at Dartmouth-Hitchcock Health. The technical procedure and gold-standard definition of dyssynergia were based on the London consensus, adapted to the needs of 3D-HDAM technology. Three machine learning models were generated: (1) traditional machine learning informed by conventional anorectal function metrics, (2) deep learning, and (3) a hybrid approach. Diagnostic accuracy was evaluated using bootstrap sampling to calculate the area under the curve (AUC). To evaluate overfitting, models were validated by adding 502 simulated defecation maneuvers with diagnostic ambiguity.
Results: A total of 302 3D-HDAM studies representing 1208 simulated defecation maneuvers were included (average age 55.2 years; 80.5% women). The deep learning model had diagnostic accuracy [AUC 0.91 (95% confidence interval 0.89-0.93)] comparable to the traditional [AUC 0.93 (0.92-0.95)] and hybrid [AUC 0.96 (0.94-0.97)] predictive models in training cohorts. However, the deep learning model handled ambiguous tests more cautiously than the other models and was more likely to designate an ambiguous test as inconclusive [odds ratio 4.21 (2.78-6.38)] than the traditional/hybrid approaches.
Conclusions: Deep learning is capable of considering the complex spatial-temporal information produced by 3D-HDAM technology. Future studies are needed to evaluate the clinical context of these preliminary findings.
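The abstract reports diagnostic accuracy as an AUC with a 95% confidence interval obtained by bootstrap sampling of the per-maneuver predictions. The short Python sketch below illustrates that evaluation step only; it is a minimal example under stated assumptions, not the authors' implementation. The function name bootstrap_auc, the resample count of 2,000, the percentile-based interval, and the use of NumPy and scikit-learn are all assumptions made for illustration.

    # Minimal sketch: percentile-bootstrap 95% CI for the AUC (illustration only,
    # not the study's code). Inputs: binary labels (1 = dyssynergia) and
    # per-maneuver predicted probabilities from any of the three models.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc(y_true, y_prob, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
        aucs = []
        while len(aucs) < n_boot:
            idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
            if np.unique(y_true[idx]).size < 2:                   # AUC needs both classes present
                continue
            aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
        lo, hi = np.percentile(aucs, [2.5, 97.5])                 # percentile 95% CI
        return roc_auc_score(y_true, y_prob), (lo, hi)

Applied to held-out per-maneuver predictions, a routine of this form returns a point-estimate AUC and an interval of the same shape as the values quoted in the Results (e.g., 0.91, 95% CI 0.89-0.93).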
Pages: 2015-2022
Number of pages: 8