Ask to Know More: Generating Counterfactual Explanations for Fake Claims

Cited by: 9
Authors
Dai, Shih-Chieh [1 ]
Hsu, Yi-Li [2 ,3 ,6 ]
Xiong, Aiping [4 ]
Ku, Lun-Wei [5 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] Acad Sinica, Inst Informat Sci, Taipei, Taiwan
[3] Natl Tsing Hua Univ, Dept Comp Sci, Hsinchu, Taiwan
[4] Penn State Univ, University Pk, PA 16802 USA
[5] Acad Sinica, Inst Informat, Taipei, Taiwan
[6] Acad Sinica, RA, Taipei, Taiwan
Source
PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022 | 2022
Keywords
Fact-checking; XAI; Question-Answering; Counterfactual Explanation; Textual entailment; TRUTH;
D O I
10.1145/3534678.3539205
CLC number
TP [Automation, Computer Technology];
Discipline code
0812 ;
Abstract
Automated fact-checking systems have been proposed to provide veracity predictions quickly and at scale, mitigating the negative influence of fake news on people and public opinion. However, most studies focus on the veracity classifiers of these systems, which merely predict the truthfulness of news articles. We posit that effective fact-checking also relies on people's understanding of the predictions. We propose elucidating fact-checking predictions with counterfactual explanations that help people understand why a specific piece of news was identified as fake. In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately. We frame this research question as contradicted entailment reasoning through question answering (QA). We first ask questions about the false claim and retrieve potential answers from the relevant evidence documents. We then identify the answer most contradictory to the false claim using an entailment classifier. Finally, a counterfactual explanation is created from a matched QA pair in one of three counterfactual explanation forms. Experiments are conducted on the FEVER dataset with both system and human evaluations. Results suggest that the proposed approach generates more helpful explanations than state-of-the-art methods. Our code and data are publicly available.
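The three-step pipeline sketched in the abstract (question the claim, find the most contradictory evidence answer, render a counterfactual) can be illustrated schematically. The sketch below is a hypothetical toy rendering, not the paper's implementation: the `contradiction_score` heuristic stands in for the trained entailment classifier, and the QA pairs are assumed inputs rather than outputs of the paper's QA models.

```python
# Toy sketch of the abstract's pipeline. The lexical-overlap scorer below is a
# stand-in for the paper's entailment classifier; all names are illustrative.

def contradiction_score(claim_answer: str, evidence_answer: str) -> float:
    """Stand-in for an entailment model's 'contradiction' probability:
    the less the two answers overlap lexically, the higher the score."""
    a = set(claim_answer.lower().split())
    b = set(evidence_answer.lower().split())
    if not a or not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def generate_explanation(claim: str, qa_pairs: list) -> str:
    """qa_pairs: (question, answer_from_claim, answer_from_evidence) triples.
    Select the pair whose evidence answer most contradicts the claim's answer,
    then render one possible counterfactual-explanation form."""
    question, claim_ans, evidence_ans = max(
        qa_pairs, key=lambda p: contradiction_score(p[1], p[2])
    )
    return (f"The claim says '{claim_ans}', but for the question '{question}' "
            f"the evidence indicates '{evidence_ans}'; had the claim stated "
            f"'{evidence_ans}', it would not have been identified as fake.")

claim = "The Eiffel Tower was completed in 1925."
qa_pairs = [
    ("When was the Eiffel Tower completed?", "1925", "1889"),
    ("Where is the Eiffel Tower located?", "Paris", "Paris"),
]
print(generate_explanation(claim, qa_pairs))
```

The non-contradictory QA pair (matching answers score 0.0) is discarded, so the explanation is built from the date contradiction, mirroring the abstract's idea of selecting the most contradictory answer before generating the explanation.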
Pages: 2800 - 2810
Page count: 11