Recent Advances in Trustworthy Explainable Artificial Intelligence: Status, Challenges, and Perspectives

Cited by: 83
Authors
Rawal A. [1 ]
McCoy J. [1 ]
Rawat D.B. [1 ]
Sadler B.M. [2 ]
Amant R.S. [2 ]
Affiliations
[1] Howard University, Department of Electrical Engineering and Computer Science, Washington, DC 20059
[2] U.S. Army Research Laboratory, Adelphi, MD 20783
Source: IEEE Transactions on Artificial Intelligence
Keywords
Artificial intelligence (AI); explainability; explainable AI (XAI); machine learning (ML); robust AI
DOI
10.1109/TAI.2021.3133846
Abstract
Artificial intelligence (AI) and machine learning (ML) have come a long way from the early days of conceptual theories to being an integral part of today's technological society. The rapid growth of AI/ML and their penetration into a plethora of civilian and military applications, while successful, have also opened new challenges and obstacles. With almost no human involvement required for some of the new decision-making AI/ML systems, there is now a pressing need to gain better insights into how these decisions are made. This has given rise to a new field of AI research, explainable AI (XAI). In this article, we present a survey of XAI characteristics and properties. We provide an in-depth review of XAI themes and describe the different methods for designing and developing XAI systems, both during and after model development. We include a detailed taxonomy of XAI goals, methods, and evaluation, and sketch the major milestones in XAI research. An overview of XAI for security and of the cybersecurity of XAI systems is also provided. Open challenges are delineated, and measures for evaluating XAI system robustness are described. © 2020 IEEE.
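As context for the post hoc ("after model development") methods the abstract refers to, the sketch below illustrates one widely used model-agnostic technique of that kind, permutation feature importance. The dataset, model, and helper function are illustrative assumptions for this record, not taken from the surveyed paper.

```python
# A minimal sketch (not from the surveyed paper) of one model-agnostic,
# post-hoc explanation technique: permutation feature importance.
# The dataset and model choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when a single feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)          # accuracy with intact features
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-label link
            drops.append(baseline - model.score(X_perm, y))
        importances[j] = np.mean(drops)
    return importances


# Train an opaque model; the explanation above never inspects its internals.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(model, X_te, y_te)
for j in np.argsort(imp)[::-1][:5]:
    print(f"feature {j}: importance {imp[j]:.4f}")
```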
Pages: 852-866
Page count: 14
Related Papers
50 records in total
  • [1] Causality for Trustworthy Artificial Intelligence: Status, Challenges and Perspectives
    Rawal, Atul
    Raglin, Adrienne
    Rawat, Danda B.
    Sadler, Brian M.
    McCoy, James
    ACM COMPUTING SURVEYS, 2025, 57 (06)
  • [2] Recent Advances in Artificial Intelligence and Tactical Autonomy: Current Status, Challenges, and Perspectives
    Hagos, Desta Haileselassie
    Rawat, Danda B.
    SENSORS, 2022, 22 (24)
  • [3] Explainable and Trustworthy Artificial Intelligence
    Alonso-Moral, Jose Maria
    Mencar, Corrado
    Ishibuchi, Hisao
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2022, 17 (01) : 14 - 15
  • [4] Blockchain for explainable and trustworthy artificial intelligence
    Nassar, Mohamed
    Salah, Khaled
    Rehman, Muhammad Habib ur
    Svetinovic, Davor
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2020, 10 (01)
  • [5] Exploring the landscape of trustworthy artificial intelligence: Status and challenges
    Mentzas, Gregoris
    Fikardos, Mattheos
    Lepenioti, Katerina
    Apostolou, Dimitris
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2024, 18 (02) : 837 - 854
  • [6] A Review of Trustworthy and Explainable Artificial Intelligence (XAI)
    Chamola, Vinay
    Hassija, Vikas
    Sulthana, A. Razia
    Ghosh, Debshishu
    Dhingra, Divyansh
    Sikdar, Biplab
    IEEE ACCESS, 2023, 11 : 78994 - 79015
  • [7] An Overview for Trustworthy and Explainable Artificial Intelligence in Healthcare
    Arslanoglu, Kubra
    Institute of Electrical and Electronics Engineers Inc.
  • [8] Artificial Intelligence in Drug Toxicity Prediction: Recent Advances, Challenges, and Future Perspectives
    Van Tran, Thi Tuyet
    Wibowo, Agung Surya
    Tayara, Hilal
    Chong, Kil To
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2023, 63 (09) : 2628 - 2643
  • [9] Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging
    Qian, Jinzhao
    Li, Hailong
    Wang, Junqi
    He, Lili
    DIAGNOSTICS, 2023, 13 (09)
  • [10] An Explainable Artificial Intelligence Approach for a Trustworthy Spam Detection
    Ibrahim, Abubakr
    Mejri, Mohamed
    Jaafar, Fehmi
    2023 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE, CSR, 2023, : 160 - 167