202 references in total
[1] Petroni F., et al., Language models as knowledge bases?, (2019)
[2] West P., et al., Symbolic knowledge distillation: From general language models to commonsense models, (2021)
[3] Sclar M., West P., Kumar S., Tsvetkov Y., Choi Y., Referee: Reference-free sentence summarization with sharper controllability through symbolic knowledge distillation, (2022)
[4] Gulcehre C., et al., Reinforced self-training (ReST) for language modeling, (2023)
[5] Zhao W.X., et al., A survey of large language models, (2023)
[6] Min B., et al., Recent advances in natural language processing via large pre-trained language models: A survey, ACM Comput. Surv., 56, 2, pp. 1-40, (2023)
[7] Hadi M.U., et al., Large language models: A comprehensive survey of its applications, challenges, limitations, and future prospects, Authorea Preprints, (2023)
[8] Chang Y., et al., A survey on evaluation of large language models, (2023)
[9] Zan D., et al., Large language models meet NL2Code: A survey, Proc. 61st Annu. Meeting Assoc. Comput. Linguistics (Long Papers), 1, pp. 7443-7464, (2023)
[10] Kasneci E., et al., ChatGPT for good? On opportunities and challenges of large language models for education, Learn. Individual Differences, 103, (2023)