34 references in total
- [1] Brown T, Mann B, Ryder N, et al., Language Models are Few-Shot Learners, Proceedings of the 34th International Conference on Neural Information Processing Systems, 33, pp. 1877-1901, (2020)
- [2] Thoppilan R, De Freitas D, Hall J, et al., LaMDA: Language Models for Dialog Applications [OL]
- [3] Wang S H, Sun Y, Xiang Y, et al., ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation [OL]
- [4] Zeng W, Ren X, Su T, et al., PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-Parallel Computation [OL]
- [5] Su H, Zhou X, Yu H J, et al., WeLM: A Well-Read Pre-trained Language Model for Chinese [OL]
- [6] Kiela D, Bartolo M, Nie Y X, et al., Dynabench: Rethinking Benchmarking in NLP, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4110-4124, (2021)
- [7] Zhou J, Ke P, Qiu X P, et al., ChatGPT: Potential, Prospects, and Limitations, Frontiers of Information Technology & Electronic Engineering
- [8] van Dis E, Bollen J, Zuidema W, et al., ChatGPT: Five Priorities for Research, Nature, 614, 7947, pp. 224-226, (2023)
- [9] Thorp H H, ChatGPT is Fun, but Not an Author, Science, 379, 6630, (2023)
- [10] Qin C W, Zhang A, Zhang Z S, et al., Is ChatGPT a General-Purpose Natural Language Processing Task Solver? [OL]