Toward Mission-Critical AI: Interpretable, Actionable, and Resilient AI

Cited by: 0
Authors
Linkov, Igor [1 ]
Stoddard, Kelsey [1 ]
Strelzoff, Andrew [2 ]
Galaitsi, S. E. [1 ]
Keisler, Jeffrey [3 ]
Trump, Benjamin D. [1 ]
Kott, Alexander [4 ]
Bielik, Pavol [5 ]
Tsankov, Petar [5 ]
Affiliations
[1] US Army Corps of Engineers, Concord, MA 01742 USA
[2] US Army Corps of Engineers, Vicksburg, MS USA
[3] University of Massachusetts Boston, Management Science & Information Systems, Dorchester, MA USA
[4] Army Research Laboratory, Adelphi, MD USA
[5] LatticeFlow, Zurich, Switzerland
Source
2023 15th International Conference on Cyber Conflict (CyCon) | 2023
Keywords
artificial intelligence; trust; mission-critical AI;
DOI
10.23919/CYCON58705.2023.10181349
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Artificial intelligence (AI) is widely used in science and practice. However, its use in mission-critical contexts is limited by the lack of appropriate methods for establishing confidence and trust in AI decisions. To bridge this gap, we argue that instead of aiming for Explainable AI, we need to develop Interpretable, Actionable, and Resilient AI (AI3). Our position is that trying to give military commanders and decision-makers an understanding of how AI models make decisions risks constraining AI capabilities to only those reconcilable with human cognition. Instead, complex systems should be designed with features that build trust by bringing decision-analytic perspectives and formal tools into the AI development and application process. AI3 incorporates explicit quantifications and visualizations of user confidence in AI decisions. In doing so, it makes it possible to examine and test AI predictions, establishing a basis for trust in the systems' decision-making and ensuring broad benefits from deploying and advancing their computational capabilities. This paper provides a methodological frame and practical examples of integrating AI into mission-critical use cases and decision-analytical tools.
Pages: 181-197 (17 pages)