Rationalization for explainable NLP: a survey

Cited by: 11
Authors
Gurrapu, Sai [1]
Kulkarni, Ajay [2]
Huang, Lifu [1]
Lourentzou, Ismini [1]
Batarseh, Feras A. [2,3]
Affiliations
[1] Virginia Tech, Dept Comp Sci, Blacksburg, VA USA
[2] Virginia Tech, Commonwealth Cyber Initiat, Arlington, VA 22203 USA
[3] Virginia Tech, Dept Biol Syst Engn, Blacksburg, VA 24060 USA
Source
Frontiers in Artificial Intelligence, 2023
Keywords
rationalization; explainable NLP; rationales; abstractive rationale; extractive rationale; large language models; natural language generation; natural language processing
DOI
10.3389/frai.2023.1225093
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks such as translation, question answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand the internals of a system and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful, but they are insufficient because they require specialized knowledge to interpret. These factors have led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Because rationalization is a relatively new field, the literature remains disorganized. This survey, the first on the topic, analyzes rationalization literature in NLP from 2007 to 2022. It presents the available methods, explainability evaluations, code, and datasets used across the NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), namely Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.
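As a minimal sketch of the two rationale styles named in the keywords, extractive (a verbatim span of the input) versus abstractive (a free-form generated justification), the toy Python example below rationalizes a sentiment prediction. The lexicon, function names, and the string template standing in for a generative language model are illustrative assumptions, not methods described in the survey.

```python
from dataclasses import dataclass

@dataclass
class RationalizedOutput:
    prediction: str   # the model's label
    rationale: str    # the natural language justification

# Tiny hand-written sentiment lexicon; purely illustrative.
LEXICON = {"great": 2, "love": 2, "good": 1, "slow": -1, "boring": -1, "bad": -1, "awful": -2}

def polarity(sentence: str) -> int:
    """Sum the lexicon scores of the words in a sentence."""
    return sum(LEXICON.get(word.lower().strip(",!?"), 0) for word in sentence.split())

def extractive_rationalize(text: str) -> RationalizedOutput:
    """Select-then-predict: keep the most polar sentence as a verbatim
    (extractive) rationale and predict the label from that span alone."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rationale = max(sentences, key=lambda s: abs(polarity(s)))
    label = "positive" if polarity(rationale) >= 0 else "negative"
    return RationalizedOutput(prediction=label, rationale=rationale)

def abstractive_rationalize(text: str) -> RationalizedOutput:
    """Abstractive variant: the rationale is newly generated text rather than a
    copied span. A real system would use a generative language model here;
    a string template stands in for that step."""
    extracted = extractive_rationalize(text)
    free_form = (f"The review is judged {extracted.prediction} mainly because it says: "
                 f'"{extracted.rationale}".')
    return RationalizedOutput(prediction=extracted.prediction, rationale=free_form)

if __name__ == "__main__":
    review = "The plot was a bit slow. The acting was great and I love the soundtrack."
    print(extractive_rationalize(review))
    print(abstractive_rationalize(review))
```

Running the script prints the selected sentence as the extractive rationale and a templated justification as the abstractive one; in an actual rationalization system both the predictor and the rationale generator would be learned models.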
Pages: 19