Explainable artificial intelligence for natural language processing: A survey

Cited by: 0
Authors
Hassan, Mehedi [1 ]
Nag, Anindya [1 ]
Biswas, Riya [2 ]
Ali, Shahin [3 ,4 ]
Zaman, Sadika
Bairagi, Anupam Kumar [1 ]
Kaushal, Chetna [5 ]
Affiliations
[1] Khulna Univ, Comp Sci & Engn Discipline, Khulna 9208, Bangladesh
[2] Adamas Univ, Comp Sci & Engn, Kolkata, India
[3] Islamic Univ, Dept Biomed Engn, Kushtia 7003, Bangladesh
[4] Khulna Univ Engn & Technol, Dept Biomed Engn, Khulna 9203, Bangladesh
[5] Chitkara Univ, Inst Engn & Technol, Patiala 140401, Punjab, India
Keywords
Explainable Artificial Intelligence; Natural language processing; Artificial Intelligence; Machine learning; Interpretability; Black-box
DOI
10.1016/j.datak.2025.102470
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Artificial intelligence has recently gained considerable momentum and is expected to deliver results across a wide range of industries. Explainability, however, remains a major challenge, because modern systems rely on sub-symbolic techniques such as deep neural networks and ensembles that were absent during the earlier booms of AI. This lack of explainability significantly undermines the practical adoption of AI in many application areas. To counter the opacity of AI-based systems, Explainable AI (XAI) aims to increase the transparency of black-box AI models and make their behaviour comprehensible to humans. A variety of XAI strategies have been proposed to address the explainability problem; however, given the size and complexity of the search space, ML developers and data scientists may find it difficult to build XAI applications and to choose the most suitable XAI algorithms. To aid developers, this paper reviews the frameworks, surveys, operations, and explainability methodologies currently available for producing explanations of predictions made by natural language processing models. In addition, a thorough analysis of current work in explainable NLP and AI is undertaken, offering researchers worldwide opportunities for exploration, insight, and idea development. Finally, the authors highlight gaps in the literature and suggest directions for future research in this area.
Pages: 22
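
As a concrete illustration of the kind of perturbation-based, model-agnostic explanation methods that surveys of explainable NLP cover, the following minimal Python sketch scores each word of an input text by how much the predicted class probability drops when that word is removed (leave-one-word-out occlusion). The toy data, the TF-IDF plus logistic regression model, and the occlusion_explanation helper are illustrative assumptions, not taken from the surveyed paper.

# Minimal sketch of occlusion-based word attribution for a text classifier.
# Data, model, and helper name are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment data purely for illustration.
texts = ["the film was wonderful", "a dull and boring plot",
         "great acting and story", "terrible pacing, awful script"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def occlusion_explanation(text, model, target_class=1):
    """Score each token by the drop in P(target_class) when it is removed."""
    tokens = text.split()
    base = model.predict_proba([text])[0][target_class]
    scores = []
    for i in range(len(tokens)):
        reduced = " ".join(tokens[:i] + tokens[i + 1:])
        prob = model.predict_proba([reduced])[0][target_class]
        scores.append((tokens[i], base - prob))  # positive = supports target class
    return sorted(scores, key=lambda s: abs(s[1]), reverse=True)

print(occlusion_explanation("wonderful story but terrible pacing", model))

The same perturb-and-compare idea underlies more elaborate model-agnostic explainers (e.g., LIME-style local surrogate models); the occlusion variant above trades fidelity for simplicity and needs only access to the model's predicted probabilities.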