Rationalization for explainable NLP: a survey

Cited by: 11
Authors
Gurrapu, Sai [1 ]
Kulkarni, Ajay [2 ]
Huang, Lifu [1 ]
Lourentzou, Ismini [1 ]
Batarseh, Feras A. [2,3]
Affiliations
[1] Virginia Tech, Dept Comp Sci, Blacksburg, VA USA
[2] Virginia Tech, Commonwealth Cyber Initiat, Arlington, VA 22203 USA
[3] Virginia Tech, Dept Biol Syst Engn, Blacksburg, VA 24060 USA
Source
Keywords
rationalization; explainable NLP; rationales; abstractive rationale; extractive rationale; large language models; natural language generation; natural language processing
DOI
10.3389/frai.2023.1225093
CLC classification number
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks such as translation, question answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand the internals of a system and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful; however, they are insufficient because they require specialized knowledge. These factors have led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Since rationalization is a relatively new field, its literature remains disorganized. This survey, the first of its kind, analyzes the rationalization literature in NLP from 2007 to 2022. It presents available methods, explainability evaluations, code, and datasets used across various NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), namely Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.
Pages: 19
Related papers
50 records in total
  • [1] From outputs to insights: a survey of rationalization approaches for explainable text classification
    Guzman, Erick Mendez
    Schlegel, Viktor
    Batista-Navarro, Riza
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [2] EXPLAINABOARD: An Explainable Leaderboard for NLP
    Liu, Pengfei
    Fu, Jinlan
    Xiao, Yang
    Yuan, Weizhe
    Chang, Shuaichen
    Dai, Junqi
    Liu, Yixin
    Ye, Zihuiwen
    Neubig, Graham
    ACL-IJCNLP 2021: THE JOINT CONFERENCE OF THE 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING: PROCEEDINGS OF THE SYSTEM DEMONSTRATIONS, 2021, : 280 - 289
  • [3] Evaluating Attribution Methods for Explainable NLP with Transformers
    Bartička, Vojtěch
    Pražák, Ondřej
    Konopík, Miloslav
    Sido, Jakub
    TEXT, SPEECH, AND DIALOGUE (TSD 2022), 2022, 13502 : 3 - 15
  • [4] Hate and Aggression Analysis in NLP with Explainable AI
    Raman, Shatakshi
    Gupta, Vedika
    Nagrath, Preeti
    Santosh, K. C.
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (15)
  • [5] Accurate and Explainable Recommendation via Review Rationalization
    Pan, Sicheng
    Li, Dongsheng
    Gu, Hansu
    Lu, Tun
    Luo, Xufang
    Gu, Ning
    PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 3092 - 3101
  • [6] Flexible Instance-Specific Rationalization of NLP Models
    Chrysostomou, George
    Aletras, Nikolaos
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10545 - 10553
  • [7] ExClaim: Explainable Neural Claim Verification Using Rationalization
    Gurrapu, Sai
    Huang, Lifu
    Batarseh, Feras A.
    2022 IEEE 29TH ANNUAL SOFTWARE TECHNOLOGY CONFERENCE (STC 2022), 2022, : 19 - 26
  • [8] Explainable APT Attribution for Malware Using NLP Techniques
    Wang, Qinqin
    Yan, Hanbing
    Han, Zhihui
    2021 IEEE 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY (QRS 2021), 2021, : 70 - 80
  • [9] Towards Explainable NLP: A Generative Explanation Framework for Text Classification
    Liu, Hui
    Yin, Qingyu
    Wang, William Yang
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 5570 - 5581
  • [10] Coding energy knowledge in constructed responses with explainable NLP models
    Gombert, Sebastian
    Di Mitri, Daniele
    Karademir, Onur
    Kubsch, Marcus
    Kolbe, Hannah
    Tautz, Simon
    Grimm, Adrian
    Bohm, Isabell
    Neumann, Knut
    Drachsler, Hendrik
    JOURNAL OF COMPUTER ASSISTED LEARNING, 2023, 39 (03) : 767 - 786