On the Unexpected Abilities of Large Language Models

Cited by: 3
Author
Nolfi, Stefano [1]
Affiliation
[1] Natl Res Council CNR ISTC, Inst Cognit Sci & Technol, Via Giandomen Romagnosi 18a, I-00196 Rome, Italy
Keywords
Large language models; implicit learning; emergence;
DOI
10.1177/10597123241256754
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large Language Models (LLMs) are capable of displaying a wide range of abilities that are not directly connected with the task for which they are trained: predicting the next words of human-written texts. In this article, I review recent research investigating the cognitive abilities developed by LLMs and their relation to human cognition. I discuss the nature of the indirect process that leads to the acquisition of these cognitive abilities, their relation to other indirect processes, and the implications for the acquisition of integrated abilities. Moreover, I propose the factors that enable the development of abilities that are related only very indirectly to the proximal objective of the training task. Finally, I discuss whether the full set of capabilities that LLMs could possibly develop is predictable.
Pages: 493 - 502
Page count: 10