Mitigating Word Bias in Zero-shot Prompt-based Classifiers

Cited by: 0
Authors
Liusie, Adian [1]
Manakul, Potsawee [1]
Gales, Mark J. F. [1]
Affiliations
[1] Univ Cambridge, ALTA Inst, Dept Engn, Cambridge, England
Source
13TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING AND THE 3RD CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, IJCNLP-AACL 2023 | 2023
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Prompt-based classifiers are an attractive approach to zero-shot classification. However, the precise choice of prompt template and label words can strongly influence performance, with semantically equivalent settings often showing notable performance differences. This discrepancy can be partly attributed to word biases, where the classifier may be biased towards particular classes. One remedy is to optimise classification thresholds on a labelled data set; however, doing so sacrifices some of the advantages of prompt-based classifiers. This paper instead approaches the problem by examining the expected marginal probabilities of the classes: probabilities are reweighted, in an unsupervised fashion, to have a uniform prior over classes. Further, we draw a theoretical connection between the class priors and the language model's word priors, which enables setting a threshold in a zero-resource fashion. We show that matching class priors correlates strongly with the oracle upper-bound performance, and demonstrate large, consistent performance gains across prompt settings over a range of NLP tasks.
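The reweighting idea described in the abstract — rescaling per-class probabilities so that their average over a set of unlabelled examples (the expected marginal) becomes uniform — can be sketched with a simple iterative fixed-point update. This is a minimal illustrative sketch, not the paper's exact procedure: the function name `match_class_priors` and the Sinkhorn-style update rule are assumptions for illustration.

```python
import numpy as np

def match_class_priors(probs, n_iters=100, tol=1e-8):
    """Reweight class probabilities so the expected marginal
    (the average over examples) is uniform over classes.

    probs: (N, C) array, each row a per-example class distribution.
    Returns the reweighted (N, C) distributions and the class weights.
    """
    n, c = probs.shape
    w = np.ones(c)
    for _ in range(n_iters):
        p = probs * w
        p /= p.sum(axis=1, keepdims=True)   # renormalise each example
        marginal = p.mean(axis=0)           # current class prior
        if np.abs(marginal - 1.0 / c).max() < tol:
            break
        w *= (1.0 / c) / marginal           # push the prior towards uniform
    p = probs * w
    return p / p.sum(axis=1, keepdims=True), w
```

For instance, a zero-shot classifier biased towards one label word would produce rows like `[0.9, 0.1]` even for ambiguous inputs; after reweighting, the column means are pushed to `1/C` while each row remains a valid distribution, so relative preferences between examples are preserved but the systematic class bias is removed.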
Pages: 327-335
Page count: 9