A Survey on Explainable Artificial Intelligence Techniques and Challenges

Cited by: 24
Authors
Hanif, Ambreen [1 ]
Zhang, Xuyun [1 ]
Wood, Steven [2 ]
Affiliations
[1] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
[2] Prospa, Sydney, NSW, Australia
Source
2021 IEEE 25TH INTERNATIONAL ENTERPRISE DISTRIBUTED OBJECT COMPUTING CONFERENCE WORKSHOPS (EDOCW 2021) | 2021
Keywords
Interpretable Machine Learning; Explainable Artificial Intelligence; Survey; Machine Learning; Knowledge-intensive; Trustworthy
DOI
10.1109/EDOCW52865.2021.00036
CLC classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
In the last decade, the world has seen tremendous growth in technology, driven by improved access to data, cloud-computing resources, and the evolution of machine learning (ML) algorithms. Intelligent systems have achieved significant performance gains with this growth, and the state-of-the-art results of these algorithms across domains have increased the popularity of artificial intelligence (AI). However, alongside these achievements, the non-transparency, inscrutability, and uninterpretability of most state-of-the-art techniques are considered an ethical issue. These flaws impede the acceptance of complex ML models in fields such as medicine, banking and finance, security, and education, and have prompted concerns about the security and safety of ML system users. Under current regulations and policies, these systems must be transparent in order to satisfy the right to explanation. Owing to this lack of trust in existing ML-based systems, explainable artificial intelligence (XAI) methods are gaining popularity. Although neither the domain nor the methods are novel, they are attracting attention for their ability to open the black box. XAI methods vary in strength and can provide insights into a system ranging from a single-feature explanation to the interpretability of a sophisticated ML architecture. In this paper, we present a survey of known techniques in the field of XAI. Moreover, we suggest future research directions for developing responsible AI systems, emphasizing the necessity of human knowledge-oriented systems for adopting AI in real-world applications with trust and high fidelity.
Pages: 81-89
Page count: 9
Related papers
50 records in total
  • [31] Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence
    Kasirzadeh, Atoosa
    PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021, : 14 - 14
  • [32] Using Artificial Intelligence for Space Challenges: A Survey
    Russo, Antonia
    Lax, Gianluca
    APPLIED SCIENCES-BASEL, 2022, 12 (10):
  • [33] Condition Monitoring of Wind Turbine Systems by Explainable Artificial Intelligence Techniques
    Astolfi, Davide
    De Caro, Fabrizio
    Vaccaro, Alfredo
    SENSORS, 2023, 23 (12)
  • [34] Applying Explainable Artificial Intelligence Techniques on Linked Open Government Data
    Kalampokis, Evangelos
    Karamanou, Areti
    Tarabanis, Konstantinos
    ELECTRONIC GOVERNMENT, EGOV 2021, 2021, 12850 : 247 - 258
  • [35] A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence
    Stepin, Ilia
    Alonso, Jose M.
    Catala, Alejandro
    Pereira-Farina, Martin
    IEEE ACCESS, 2021, 9 : 11974 - 12001
  • [36] The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
    Paez, Andres
    MINDS AND MACHINES, 2019, 29 (03) : 441 - 459
  • [37] Explainable artificial intelligence for spectroscopy data: a review
    Contreras, Jhonatan
    Bocklitz, Thomas
    PFLUGERS ARCHIV-EUROPEAN JOURNAL OF PHYSIOLOGY, 2024, : 603 - 615
  • [38] An eXplainable Artificial Intelligence tool for statistical arbitrage
    Carta, Salvatore
    Consoli, Sergio
    Podda, Alessandro Sebastian
    Recupero, Diego Reforgiato
    Stanciu, Maria Madalina
    SOFTWARE IMPACTS, 2022, 14
  • [39] Explainable Artificial Intelligence for Urban Planning: Challenges, Solutions, and Future Trends from a New Perspective
    Tong, Shan
    Li, Shaokang
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (07) : 779 - 787
  • [40] Explainable artificial intelligence in pathology
    Klauschen, Frederick
    Dippel, Jonas
    Keyl, Philipp
    Jurmeister, Philipp
    Bockmayr, Michael
    Mock, Andreas
    Buchstab, Oliver
    Alber, Maximilian
    Ruff, Lukas
    Montavon, Gregoire
    Mueller, Klaus-Robert
    PATHOLOGIE, 2024, 45 (02): : 133 - 139