A Nationwide Network of Health AI Assurance Laboratories

Cited by: 48
Authors
Shah, Nigam H. [1,2,11]
Halamka, John D. [2,3]
Saria, Suchi [2,4,5,6]
Pencina, Michael [2,7]
Tazbaz, Troy [8]
Tripathi, Micky [9]
Callahan, Alison [1]
Hildahl, Hailey [3]
Anderson, Brian [2,10]
Affiliations
[1] Stanford Med, Palo Alto, CA USA
[2] Coalit Hlth AI, Dover, DE USA
[3] Mayo Clin, Mayo Clin Platform, Rochester, MN USA
[4] Bayesian Hlth, New York, NY USA
[5] Johns Hopkins Univ, Baltimore, MD USA
[6] Johns Hopkins Med, Baltimore, MD USA
[7] Duke Univ, Sch Med, Duke AI Hlth, Durham, NC USA
[8] US FDA, Silver Spring, MD USA
[9] US Off Natl Coordinator Hlth IT, Washington, DC USA
[10] MITRE Corp, Bedford, MA USA
[11] Ctr Biomed Informat Res, 3180 Porter Dr,112B, Palo Alto, CA 94305 USA
Source
JAMA - Journal of the American Medical Association | 2024, Vol. 331, No. 3
Keywords
EARLY WARNING SYSTEM;
DOI
10.1001/jama.2023.26930
Chinese Library Classification (CLC)
R5 [Internal Medicine];
Subject Classification Code
1002; 100201;
Abstract
Importance: Given the need for rigorous development and evaluation standards for artificial intelligence (AI) models used in health care, nationally accepted procedures that provide assurance that the use of AI is fair, appropriate, valid, effective, and safe are urgently needed.
Observations: Although several efforts are under way to develop standards and best practices for evaluating AI, there is a gap between having such guidance and applying it to both existing AI models and new models under development. At present, there is no publicly available, nationwide mechanism that enables objective evaluation and ongoing assessment of the consequences of using health AI models in clinical care settings.
Conclusions and Relevance: This article outlines the need for a public-private partnership to support a nationwide network of health AI assurance laboratories. In this network, community best practices could be applied to test health AI models and to produce performance reports that can be widely shared to manage the lifecycle of AI models over time and across the populations and sites where these models are deployed.
Pages: 245-249 (5 pages)