Learning Interpretable Negation Rules via Weak Supervision at Document Level: A Reinforcement Learning Approach

Cited by: 0
Authors
Prollochs, Nicolas [1]
Feuerriegel, Stefan [2]
Neumann, Dirk [3]
Affiliations
[1] Univ Oxford, Oxford Man Inst, Oxford, England
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Univ Freiburg, Freiburg, Germany
Keywords
SPECULATION DETECTION; SENTIMENT
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Negation scope detection is widely performed as a supervised learning task that relies on negation labels at the word level. This approach suffers from two key drawbacks: (1) such granular annotations are costly to obtain, and (2) they are highly subjective, since, in the absence of explicit linguistic resolution rules, human annotators often disagree on the perceived negation scopes. To the best of our knowledge, our work presents the first approach that eliminates the need for word-level negation labels, replacing them with document-level sentiment annotations. To this end, we present a novel strategy for learning fully interpretable negation rules via weak supervision: we apply reinforcement learning to find a policy that reconstructs negation rules from sentiment predictions at the document level. Our experiments demonstrate that this weakly supervised approach can effectively learn negation rules. Furthermore, an out-of-sample evaluation via sentiment analysis reveals consistent improvements (of up to 4.66%) over sentiment analysis with (i) no negation handling and (ii) word-level annotations from human annotators. Moreover, the inferred negation rules are fully interpretable.
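The abstract's core idea, using only document-level sentiment labels as a reward signal to search over candidate negation rules, can be illustrated with a toy sketch. Everything below (the mini lexicon, the documents, the fixed-window rule space "negate the next k tokens after a cue", and the epsilon-greedy bandit learner) is an illustrative assumption, not the paper's actual data, rule space, or RL formulation:

```python
import random

# Toy sentiment lexicon and negation cues (illustrative assumptions).
LEXICON = {"good": 1, "great": 1, "bad": -1, "poor": -1}
NEGATIONS = {"not", "never"}

# Documents carry only a document-level sentiment label (+1/-1) --
# the weak supervision signal; no word-level negation labels exist.
DOCS = [
    (["not", "good"], -1),
    (["not", "bad", "at", "all"], 1),
    (["great", "film"], 1),
    (["never", "great", "plot"], -1),
    (["poor", "acting"], -1),
]

def doc_sentiment(tokens, scope):
    """Predict document sentiment under a candidate negation rule:
    flip the polarity of the `scope` tokens following a negation cue."""
    score, flip_left = 0, 0
    for tok in tokens:
        if tok in NEGATIONS:
            flip_left = scope
            continue
        polarity = LEXICON.get(tok, 0)
        if flip_left > 0:
            polarity, flip_left = -polarity, flip_left - 1
        score += polarity
    return 1 if score >= 0 else -1

def reward(scope):
    """Reward = document-level accuracy of the candidate rule."""
    return sum(doc_sentiment(t, scope) == y for t, y in DOCS) / len(DOCS)

def learn_scope(actions=(0, 1, 2, 3), episodes=200, eps=0.1, seed=0):
    """Epsilon-greedy bandit over candidate scope lengths: a minimal
    stand-in for a reinforcement-learning policy search."""
    rng = random.Random(seed)
    value = {a: reward(a) for a in actions}   # one initial pull per arm
    count = {a: 1 for a in actions}
    for _ in range(episodes):
        a = rng.choice(actions) if rng.random() < eps else max(value, key=value.get)
        r = reward(a)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # incremental mean update
    return max(value, key=value.get)

print(learn_scope())  # -> 1, i.e. "flip polarity of 1 token after a cue"
```

Because the reward is computed only from document-level labels, the learned scope length is itself the interpretable rule, which mirrors the premise of the paper; the actual method differs in both its rule space and its reinforcement-learning formulation.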
Pages: 407-413 (7 pages)