Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data

Cited by: 0
Authors
Lowell, David [1]
Howard, Brian E. [2]
Lipton, Zachary C. [3]
Wallace, Byron C. [1]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Sciome LLC, Res Triangle Pk, NC USA
[3] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021) | 2021
Funding
US National Institutes of Health
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised Data Augmentation (UDA) is a semi-supervised technique that applies a consistency loss to penalize differences between a model's predictions on (a) observed (unlabeled) examples and (b) corresponding 'noised' examples produced via data augmentation. While UDA has gained popularity for text classification, open questions linger over which of its components are important, and how to extend the method to sequence labeling tasks; this paper addresses these questions. Our main contribution is an empirical study of UDA to establish which components of the algorithm confer benefits in NLP. Notably, although prior work has emphasized use of clever augmentation techniques including back-translation, we find that enforcing consistency between predictions assigned to observed and randomly substituted words often yields comparable (or greater) benefits compared to these more complex perturbation models. Furthermore, we find that applying UDA's consistency loss affords meaningful gains without any unlabeled data at all, i.e., in a standard supervised setting. In short, UDA need not be unsupervised to realize many of its noted benefits, and does not require complex data augmentation to be effective.
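To make the mechanism described in the abstract concrete, the sketch below illustrates a UDA-style consistency loss paired with the naive random word-substitution augmentation the paper studies. It assumes a PyTorch text classifier that maps batches of token ids to class logits; the names (model, vocab_size, substitute_words, consistency_loss) and the substitution probability are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a UDA-style consistency loss with naive word substitution.
# Assumes `model(token_ids)` returns class logits of shape [batch, num_classes];
# all helper names and hyperparameters here are hypothetical.
import torch
import torch.nn.functional as F


def substitute_words(token_ids, vocab_size, p=0.1):
    """Naive augmentation: replace each token with a random vocabulary id
    with probability p (the 'random substitution' perturbation)."""
    noised = token_ids.clone()
    mask = torch.rand_like(token_ids, dtype=torch.float) < p
    random_ids = torch.randint(0, vocab_size, token_ids.shape,
                               device=token_ids.device)
    noised[mask] = random_ids[mask]
    return noised


def consistency_loss(model, token_ids, vocab_size):
    """KL divergence between predictions on the original and noised inputs.
    As in UDA, gradients are not propagated through the clean branch."""
    with torch.no_grad():
        clean_probs = F.softmax(model(token_ids), dim=-1)
    noised_logits = model(substitute_words(token_ids, vocab_size))
    return F.kl_div(F.log_softmax(noised_logits, dim=-1),
                    clean_probs, reduction="batchmean")
```

In the semi-supervised setting this term is computed over unlabeled examples and added, with a weight, to the supervised cross-entropy loss; in the paper's fully supervised variant the same consistency term is simply applied to the labeled examples themselves.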
Pages: 4992-5001
Number of pages: 10