Availability and transparency of artificial intelligence models in radiology: a meta-research study

Times Cited: 0
Authors
Lee, Taehee [1]
Lee, Jong Hyuk [1,2]
Yoon, Soon Ho [1,2]
Park, Seong Ho [3,4]
Kim, Hyungjin [1,2]
Affiliations
[1] Seoul Natl Univ Hosp, Dept Radiol, Seoul, South Korea
[2] Seoul Natl Univ, Dept Radiol, Coll Med, Seoul, South Korea
[3] Univ Ulsan, Asan Med Ctr, Dept Radiol, Coll Med, Seoul, South Korea
[4] Univ Ulsan, Res Inst Radiol, Asan Med Ctr, Coll Med, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Artificial intelligence; Machine learning; Replicability; Reproducibility; Model availability;
DOI
10.1007/s00330-025-11492-6
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Codes
1002; 100207; 1009
Abstract
Objectives: This meta-research study explored the availability of artificial intelligence (AI) models from development studies published in leading radiology journals in 2022, with availability defined as the transparent reporting of the technical details, such as model architecture and weights, needed for independent replication.

Materials and methods: A systematic search of Ovid Medline and Embase was conducted to identify AI model development studies published in five leading radiology journals in 2022. Data were extracted on study characteristics, model details, and code- and model-sharing practices. The proportion of AI studies sharing their models was analyzed, and logistic regression analyses were used to explore associations between study characteristics and model availability.

Results: Of 268 studies reviewed, 39.9% (n = 107) made their models available. Availability was particularly low for deep learning (DL) models: only 11.5% (n = 13) of the 113 DL studies made their models fully available, and only 22.1% (n = 25) provided training code, leaving readers little ability to retrain the models on their own data. In multivariable logistic regression analysis, use of traditional regression-based models was associated with higher availability (odds ratio [OR], 17.11; 95% CI: 5.52, 53.05; p < 0.001), whereas use of a radiomics package was associated with lower availability (OR, 0.27; 95% CI: 0.11, 0.65; p = 0.003).

Conclusion: The availability of AI models in radiology publications remains suboptimal, especially for DL models. Enforcing model-sharing policies, enhancing external validation platforms, addressing commercial restrictions, and providing demos of commercial models in open repositories are needed to improve transparency and replicability in radiology AI research.
Pages: 12
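For readers who want to reproduce the style of analysis the abstract describes, the sketch below fits a multivariable logistic regression of model availability on study characteristics and reports odds ratios with 95% CIs, as in the Results. It is a minimal illustration using Python's statsmodels: the column names and the simulated data are hypothetical stand-ins, since the study's actual extraction sheet and covariate coding are not part of this record.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical extraction sheet: one row per included study (n = 268).
# All variable names are illustrative, not the study's actual covariates.
df = pd.DataFrame({
    "model_available":   rng.binomial(1, 0.4, 268),  # outcome: model shared (1) or not (0)
    "regression_model":  rng.binomial(1, 0.2, 268),  # traditional regression-based model used
    "radiomics_package": rng.binomial(1, 0.3, 268),  # radiomics software package used
})

# Multivariable logistic regression of model availability on study characteristics.
fit = smf.logit("model_available ~ regression_model + radiomics_package",
                data=df).fit(disp=0)

# Exponentiate the fitted log-odds coefficients and their confidence bounds
# to obtain odds ratios with 95% CIs, the scale reported in the abstract.
ci = fit.conf_int()
summary = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI 2.5%": np.exp(ci[0]),
    "CI 97.5%": np.exp(ci[1]),
    "p": fit.pvalues,
})
print(summary.round(3))
```

Exponentiating the coefficients is what yields effect sizes comparable to those in the abstract (e.g., OR 17.11 for regression-based models, OR 0.27 for radiomics package use); with the simulated data above, the printed values will of course differ.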