A Taxonomy of Critical AI System Characteristics for Use in Proxy System Testing

Cited by: 2
Authors
DeFranco, Joanna [1 ]
Kassab, Mohamad [1 ]
Laplante, Phil [1 ]
Affiliations
[1] Penn State Engineering, Malvern, PA 19355 USA
Source
2022 IEEE INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING WORKSHOPS (ISSREW 2022) | 2022
Keywords
artificial intelligence; autonomous systems; AI taxonomy; critical systems;
DOI
10.1109/ISSREW55968.2022.00090
Chinese Library Classification
TP31 [Computer Software];
Discipline Classification Codes
081202; 0835;
Abstract
Safety and trust are two of the most important features of a critical system. A critical system is one that must be highly reliable: it must not only complete its mission but also cause no harm to the public. The problem is testing such a system, especially when it employs artificial intelligence (AI). The challenge is that critical AI systems (CAIS) may produce unpredictable events and conditions that cannot be modeled during critical-error testing. Proxy systems (non-critical prototypes) are therefore needed to test the critical system. We present a five-dimensional CAIS taxonomy and a weighting system that maps system characteristics to a testing proxy, in order to identify equivalent proxy systems to build and test. Ultimately, this CAIS taxonomy and weighting system offer a way forward for developing a set of proxy systems to use in critical-error testing.
Pages: 342-346 (5 pages)
References
7 total
[1] American Medical Association. CPT APP S AI TAX MED.
[2] Benjamin M., 2021. What the draft European Union AI regulations means for business.
[3] Department of the Army, 2014. OP ENV ARM LEARN.
[4] Hasan M., 2019. TOP 20 AI MACHINE LE.
[5] Huang H., 2008. 1011I20 NIST SP.
[6] Laplante, Phil. Trusting Digital Twins. COMPUTER, 2022, 55(07): 73-77.
[7] Samoili S., 2020. WATCH DEFINING ARTIF. DOI 10.2760/382730 (JRC118163).