Generating Token-Level Explanations for Natural Language Inference

Cited by: 0
Authors
Thorne, James [1 ]
Vlachos, Andreas [1 ]
Christodoulopoulos, Christos [2 ]
Mittal, Arpit [2 ]
Affiliations
[1] Univ Cambridge, Cambridge, England
[2] Amazon, Cambridge, England
Source
2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1 | 2019
Funding
EU Horizon 2020
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The task of Natural Language Inference (NLI) is widely modeled as supervised sentence pair classification. While there has been a lot of recent work on generating explanations of the predictions of classifiers on a single piece of text, there have been no attempts to generate explanations of classifiers operating on pairs of sentences. In this paper, we show that it is possible to generate token-level explanations for NLI without the need for training data explicitly annotated for this purpose. We use a simple LSTM architecture and evaluate both LIME and Anchor explanations for this task. We compare these to a Multiple Instance Learning (MIL) method that uses thresholded attention to make token-level predictions. The approach we present in this paper is a novel extension of zero-shot single-sentence tagging to sentence pairs for NLI. We conduct our experiments on the well-studied SNLI dataset, which was recently augmented with manual annotation of the tokens that explain the entailment relation. We find that our white-box MIL-based method, while orders of magnitude faster, does not reach the same accuracy as the black-box methods.
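At prediction time, the thresholded-attention idea described in the abstract reduces to keeping the tokens whose attention weight clears a cutoff. Below is a minimal sketch of that step in plain Python; the function name, the example weights, and the 0.1 cutoff are illustrative assumptions, not values taken from the paper:

def attention_explanation(tokens, attention_weights, threshold=0.1):
    """Return the tokens whose attention weight exceeds `threshold`.

    `attention_weights` is assumed to be a normalized distribution over
    the tokens of one sentence (e.g. produced by an LSTM encoder with
    attention); the cutoff is a free hyperparameter in this sketch.
    """
    return [tok for tok, w in zip(tokens, attention_weights) if w > threshold]

# Hypothetical attention distribution over a 5-token hypothesis.
tokens = ["a", "man", "is", "asleep", "."]
weights = [0.03, 0.43, 0.05, 0.41, 0.08]
print(attention_explanation(tokens, weights))  # -> ['man', 'asleep']

A single forward pass yields these weights, whereas black-box methods such as LIME and Anchor must perturb the input and re-query the classifier many times per example, which is consistent with the speed gap the abstract reports.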
Pages: 963-969
Page count: 7
Related Papers
50 in total
  • [21] Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation
    Wei, Jingxuan
    Sun, Linzhuang
    Leng, Yichong
    Tan, Xu
    Yu, Bihui
    Guo, Ruifeng
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 6531 - 6540
  • [22] INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations
    Yu, Jialin
    Cristea, Alexandra I.
    Harit, Anoushka
    Sun, Zhongtian
    Aduragba, Olanrewaju Tahir
    Shi, Lei
    Al Moubayed, Noura
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [23] LLaMA-Annotate: Visualizing Token-Level Confidences for LLMs
    Schultheis, Erik
    John, S. T.
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES-RESEARCH TRACK AND DEMO TRACK, PT VIII, ECML PKDD 2024, 2024, 14948 : 424 - 428
  • [24] Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion
    Sun, Hao
    Tan, Xu
    Gan, Jun-Wei
    Liu, Hongzhi
    Zhao, Sheng
    Qin, Tao
    Liu, Tie-Yan
    INTERSPEECH 2019, 2019, : 2115 - 2119
  • [25] Causal Bayes Nets and Token-Causation: Closing the Gap between Token-Level and Type-Level
    Gebharter, Alexander
    Huettemann, Andreas
    ERKENNTNIS, 2023, 90 (1) : 43 - 65
  • [26] Generating knowledge aware explanation for natural language inference
    Yang, Zongbao
    Xu, Yinxin
    Hu, Jinlong
    Dong, Shoubin
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (02)
  • [27] Token-Level Adaptation of LoRA Adapters for Downstream Task Generalization
    Belofsky, Joshua
    PROCEEDINGS OF 2023 6TH ARTIFICIAL INTELLIGENCE AND CLOUD COMPUTING CONFERENCE, AICCC 2023, 2023, : 168 - 172
  • [28] DeepMet: A Reading Comprehension Paradigm for Token-level Metaphor Detection
    Su, Chuandong
    Fukumoto, Fumiyo
    Huang, Xiaoxi
    Li, Jiyi
    Wang, Rongbo
    Chen, Zhiqun
    FIGURATIVE LANGUAGE PROCESSING, 2020, : 30 - 39
  • [30] CoLLAT: On Adding Fine-grained Audio Understanding to Language Models using Token-Level Locked-Language Tuning
    Silva, Amila
    Whitehead, Spencer
    Lengerich, Chris
    Leather, Hugh
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,