ExClaim: Explainable Neural Claim Verification Using Rationalization

Cited by: 2
Authors
Gurrapu, Sai [1 ]
Huang, Lifu [1 ]
Batarseh, Feras A. [2 ]
Affiliations
[1] Virginia Tech, Dept Comp Sci, Blacksburg, VA 24061 USA
[2] Virginia Tech, Dept Biol Syst Engn BSE, Blacksburg, VA USA
Keywords
rationalization; NLP assurance; claim verification; XAI;
DOI
10.1109/STC55697.2022.00012
Chinese Library Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
With the advent of deep learning, text generation language models have improved dramatically and now produce text of a quality comparable to human writing. This can lead to rampant misinformation because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often rely on mainstream news as evidence sources, which can be strongly biased towards a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes: their decision-making process and the steps taken to arrive at a final prediction are hidden from the user. We introduce a novel claim verification approach, ExClaim, that aims to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) that describes the model's decision-making process. ExClaim treats the verdict classification task as a question-answer problem and achieves an F1 score of 0.93. It also provides explanations for its subtasks to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring that claim verification systems are assured, rational, and explainable is an essential step toward improving human-AI trust and the accessibility of black-box systems.
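The abstract describes framing verdict classification as a question-answer-style problem and pairing the verdict with a generated natural language rationale. The following is a minimal sketch of that two-step pattern using off-the-shelf Hugging Face pipelines; the model names, label set, prompt format, and example claim are illustrative assumptions and are not the components used in ExClaim.

```python
# Sketch: verdict classification followed by rationale generation.
# Model choices, labels, and prompt wording below are assumptions for illustration,
# not the ExClaim paper's actual models or prompts.
from transformers import pipeline

# Verdict step: score the claim+evidence pair against candidate verdict labels.
verdict_model = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Rationale step: generate a short natural language explanation of the verdict.
rationale_model = pipeline("text2text-generation", model="google/flan-t5-base")

claim = "The unemployment rate fell to 3.5% in 2019."
evidence = "Official statistics report a 3.5% unemployment rate for 2019."
labels = ["supported", "refuted", "not enough information"]

# 1) Predict a verdict for the claim given the evidence.
verdict = verdict_model(f"Claim: {claim} Evidence: {evidence}", candidate_labels=labels)
predicted = verdict["labels"][0]  # highest-scoring label

# 2) Generate a rationale that justifies the predicted verdict.
prompt = (
    f"Claim: {claim}\nEvidence: {evidence}\nVerdict: {predicted}\n"
    "Explain in one sentence why the evidence leads to this verdict."
)
rationale = rationale_model(prompt, max_new_tokens=60)[0]["generated_text"]

print(predicted, "-", rationale)
```

In this sketch the rationale is generated after the verdict; rationalization approaches can also extract or generate the rationale first and condition the verdict on it, so the ordering here is only one possible design.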
Pages: 19-26
Number of pages: 8