A Survey on Explainable Artificial Intelligence Techniques and Challenges

Cited by: 24
Authors
Hanif, Ambreen [1 ]
Zhang, Xuyun [1 ]
Wood, Steven [2 ]
Affiliations
[1] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
[2] Prospa, Sydney, NSW, Australia
Source
2021 IEEE 25TH INTERNATIONAL ENTERPRISE DISTRIBUTED OBJECT COMPUTING CONFERENCE WORKSHOPS (EDOCW 2021) | 2021
Keywords
Interpretable Machine Learning; Explainable Artificial Intelligence; Survey; Machine Learning; Knowledge-intensive; Trustworthy
DOI
10.1109/EDOCW52865.2021.00036
CLC Number
TP39 [Applications of computers];
Discipline Codes
081203 ; 0835 ;
Abstract
In the last decade, the world has seen tremendous growth in technology, driven by improved access to data, cloud-computing resources, and the evolution of machine learning (ML) algorithms. Intelligent systems have achieved remarkable performance as a result, and the state-of-the-art results of these algorithms across domains have increased the popularity of artificial intelligence (AI). Alongside these achievements, however, the opacity and inscrutability of most state-of-the-art techniques, and their inability to explain or interpret their decisions, raise ethical concerns. These flaws impede the acceptance of complex ML models in fields such as medicine, banking and finance, security, and education, and have prompted many concerns about the security and safety of ML system users. Under current regulations and policies, these systems must be transparent in order to satisfy the right to explanation. Owing to this lack of trust in existing ML-based systems, methods based on explainable artificial intelligence (XAI) are gaining popularity. Although neither the domain nor the methods are novel, they are attracting attention for their ability to open the black box. XAI methods vary in strength and can provide insights into a system ranging from the explanation of a single feature to the interpretability of a sophisticated ML architecture. In this paper, we present a survey of known techniques in the field of XAI. We also suggest future research directions for developing responsible AI systems, and we emphasize the need for human knowledge-oriented systems in order to adopt AI in real-world applications with trust and high fidelity.
Pages: 81-89
Page count: 9
Related Papers
50 in total
  • [41] The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
    Andrés Páez
    Minds and Machines, 2019, 29 : 441 - 459
  • [42] Statistical arbitrage powered by Explainable Artificial Intelligence
    Carta, Salvatore
    Consoli, Sergio
    Podda, Alessandro Sebastian
    Recupero, Diego Reforgiato
    Stanciu, Maria Madalina
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 206
  • [43] Explainable Artificial Intelligence for Cybersecurity
    Sharma, Deepak Kumar
    Mishra, Jahanavi
    Singh, Aeshit
    Govil, Raghav
    Srivastava, Gautam
    Lin, Jerry Chun-Wei
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 103
  • [44] A Review of Explainable Artificial Intelligence
    Lin, Kuo-Yi
    Liu, Yuguang
    Li, Li
    Dou, Runliang
    ADVANCES IN PRODUCTION MANAGEMENT SYSTEMS: ARTIFICIAL INTELLIGENCE FOR SUSTAINABLE AND RESILIENT PRODUCTION SYSTEMS, APMS 2021, PT IV, 2021, 633 : 574 - 584
  • [45] Prediction of disease comorbidity using explainable artificial intelligence and machine learning techniques: A systematic review
    Alsaleh, Mohanad M.
    Allery, Freya
    Choi, Jung Won
    Hama, Tuankasfee
    McQuillin, Andrew
    Wu, Honghan
    Thygesen, Johan H.
    INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, 2023, 175
  • [46] Exploring cross-national divide in government adoption of artificial intelligence: Insights from explainable artificial intelligence techniques
    Wang, Shangrui
    Xiao, Yiming
    Liang, Zheng
    TELEMATICS AND INFORMATICS, 2024, 90
  • [47] Explainable Artificial Intelligence: Importance, Use Domains, Stages, Output Shapes, and Challenges
    Ullah, Naeem
    Khan, Javed Ali
    De Falco, Ivanoe
    Sannino, Giovanna
    ACM COMPUTING SURVEYS, 2025, 57 (04)
  • [48] Toward Explainable and Interpretable Building Energy Modelling: An Explainable Artificial Intelligence Approach
    Zhang, Wei
    Liu, Fang
    Wen, Yonggang
    Nee, Bernard
    BUILDSYS'21: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, 2021, : 255 - 258
  • [49] Predicting coronavirus disease 2019 severity using explainable artificial intelligence techniques
    Ozawa, Takuya
    Chubachi, Shotaro
    Namkoong, Ho
    Nemoto, Shota
    Ikegami, Ryo
    Asakura, Takanori
    Tanaka, Hiromu
    Lee, Ho
    Fukushima, Takahiro
    Azekawa, Shuhei
    Otake, Shiro
    Nakagawara, Kensuke
    Watase, Mayuko
    Masaki, Katsunori
    Kamata, Hirofumi
    Harada, Norihiro
    Ueda, Tetsuya
    Ueda, Soichiro
    Ishiguro, Takashi
    Arimura, Ken
    Saito, Fukuki
    Yoshiyama, Takashi
    Nakano, Yasushi
    Muto, Yoshikazu
    Suzuki, Yusuke
    Edahiro, Ryuya
    Murakami, Koji
    Sato, Yasunori
    Okada, Yukinori
    Koike, Ryuji
    Ishii, Makoto
    Hasegawa, Naoki
    Kitagawa, Yuko
    Tokunaga, Katsushi
    Kimura, Akinori
    Miyano, Satoru
    Ogawa, Seishi
    Kanai, Takanori
    Fukunaga, Koichi
    Imoto, Seiya
    SCIENTIFIC REPORTS, 2025, 15 (1)
  • [50] Bankruptcy prediction: Integration of convolutional neural networks and explainable artificial intelligence techniques
    Lin, Yu-Cheng
    Padliansyah, Roni
    Lu, Yu-Hsin
    Liu, Wen-Rang
    INTERNATIONAL JOURNAL OF ACCOUNTING INFORMATION SYSTEMS, 2025, 56