Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets

Citations: 0
Authors
Wu, Yuxiang [1 ]
Gardner, Matt [2 ]
Stenetorp, Pontus [1 ]
Dasigi, Pradeep [3 ]
Affiliations
[1] UCL, London, England
[2] Microsoft Semantic Machines, Berkeley, CA, USA
[3] Allen Institute for AI, Seattle, WA, USA
Source
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers, 2022
DOI: not available
Chinese Library Classification: TP18 (Artificial Intelligence Theory)
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on, while not generalising to different task distributions. We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data. Our approach consists of 1) a method for training data generators to generate high-quality, label-consistent data samples; and 2) a filtering mechanism for removing data points that contribute to spurious correlations, measured in terms of z-statistics. We generate debiased versions of the SNLI and MNLI datasets, and we evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets. Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. On the majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and outperforms previous best results on SNLI-hard and MNLI-hard.
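The abstract's filtering step can be illustrated with a one-proportion z-test: for each surface feature (here, a token), compare the label rate among examples containing that feature against the dataset's label prior, and drop examples carrying features whose deviation is statistically significant. The sketch below is a hypothetical simplification, not the paper's implementation; the tokenisation, threshold, and feature choice are all assumptions.

```python
import math
from collections import Counter, defaultdict

def z_statistic(p_hat, p0, n):
    """One-proportion z-test: deviation of the observed label rate
    p_hat (over n examples containing a feature) from the prior p0."""
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

def filter_spurious(examples, labels, z_threshold=2.0):
    """Hypothetical sketch: remove examples containing any token whose
    conditional label distribution deviates significantly from the
    dataset-level label prior."""
    n_total = len(labels)
    label_set = set(labels)
    prior = {y: labels.count(y) / n_total for y in label_set}
    # token -> per-label occurrence counts (one count per example)
    token_label = defaultdict(Counter)
    for text, y in zip(examples, labels):
        for tok in set(text.split()):
            token_label[tok][y] += 1
    # flag tokens whose label rate is a statistical outlier
    biased = set()
    for tok, counts in token_label.items():
        n = sum(counts.values())
        for y in label_set:
            if abs(z_statistic(counts[y] / n, prior[y], n)) > z_threshold:
                biased.add(tok)
    # keep only examples free of flagged tokens
    return [(t, y) for t, y in zip(examples, labels)
            if not any(tok in biased for tok in t.split())]
```

For instance, if a token co-occurs exclusively with one label across many examples, its z-statistic is large and every example containing it is filtered out, while uniformly distributed tokens are left untouched.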
Pages: 2660-2676 (17 pages)
相关论文
共 44 条
  • [1] [Anonymous], 2020, FINDINGS ACL
  • [2] Bartolo M, 2021, 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), P8830
  • [3] Belinkov Y, 2019, 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), P877
  • [4] Belinkov Yonatan., 2019, P 8 JOINT C LEX COMP, P256, DOI [DOI 10.18653/V1/S19-1028, 10.18653/v1/S19-1028, 10.18653/v1/s19-1028]
  • [5] Bhargava Prajjwal, 2021, P 2 WORKSHOP INSIGHT, P125
  • [6] Bowman Samuel R., 2021, ABS211008300 CORR
  • [7] Bowman Samuel R., 2015, P 2015 C EMPIRICAL M, V334, P632, DOI DOI 10.18653/V1/D15
  • [8] Chen DQ, 2016, PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, P2358
  • [9] Clark C, 2019, 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019), P4069
  • [10] Devlin J., 2019, North American Chapter of the Association for Computational Linguistics, V1, P4171, DOI [DOI 10.48550/ARXIV.1810.04805, DOI 10.18653/V1/N19-1423, 10.48550/ARXIV.1810.04805]