[91]
Sun Y, Wang S, Li Y, et al., ERNIE 2.0: A continual pre-training framework for language understanding, (2019)
[92]
Wang W, Bi B, Yan M, et al., StructBERT: Incorporating language structures into pre-training for deep language understanding, (2019)
[93]
Clark K, Luong MT, Le QV, et al., ELECTRA: Pre-training text encoders as discriminators rather than generators, Proc. of the Int'l Conf. on Learning Representations, (2020)
[94]
Goodfellow I, Pouget-Abadie J, Mirza M, et al., Generative adversarial nets, Advances in Neural Information Processing Systems, pp. 2672-2680, (2014)
[95]
Raffel C, Shazeer N, Roberts A, et al., Exploring the limits of transfer learning with a unified text-to-text transformer, (2019)
[96]
Lewis M, Liu Y, Goyal N, et al., BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, (2019)
[97]
Zhu Y, Kiros R, Zemel R, et al., Aligning books and movies: Towards story-like visual explanations by watching movies and reading books, Proc. of the IEEE Int'l Conf. on Computer Vision, pp. 19-27, (2015)
[98]
Parker R, Graff D, Kong J, et al., English Gigaword fifth edition, Linguistic Data Consortium, (2011)
[99]
Callan J, Hoy M, Yoo C, et al., ClueWeb09 data set, (2009)
[100]
Cui Y, Liu T, Che W, et al., A span-extraction dataset for Chinese machine reading comprehension, Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int'l Joint Conf. on Natural Language Processing (EMNLP-IJCNLP), pp. 5886-5891, (2019)