Shared computational principles for language processing in humans and deep language models

Cited by: 158
Authors
Goldstein, Ariel [1,2,3]
Zada, Zaid [1,2]
Buchnik, Eliav [3]
Schain, Mariano [3]
Price, Amy [1,2]
Aubrey, Bobbi [1,2,4]
Nastase, Samuel A. [1,2]
Feder, Amir [3]
Emanuel, Dotan [3]
Cohen, Alon [3]
Jansen, Aren [3]
Gazula, Harshvardhan [1,2]
Choe, Gina [1,2,4]
Rao, Aditi [1,2,4]
Kim, Catherine [1,2,4]
Casto, Colton [1,2]
Fanda, Lora [4]
Doyle, Werner [4]
Friedman, Daniel [4]
Dugan, Patricia [4]
Melloni, Lucia [5]
Reichart, Roi [6]
Devore, Sasha [4]
Flinker, Adeen [4]
Hasenfratz, Liat [1,2]
Levy, Omer [7]
Hassidim, Avinatan [3]
Brenner, Michael [3,8]
Matias, Yossi [3]
Norman, Kenneth A. [1,2]
Devinsky, Orrin [4]
Hasson, Uri [1,2,3]
Affiliations
[1] Princeton Univ, Dept Psychol, Princeton, NJ 08544 USA
[2] Princeton Univ, Neurosci Inst, Princeton, NJ 08544 USA
[3] Google Res, Mountain View, CA 94043 USA
[4] NYU, Grossman Sch Med, New York, NY USA
[5] Max Planck Inst Empir Aesthet, Frankfurt, Germany
[6] Technion Israel Inst Technol, Fac Ind Engn & Management, Haifa, Israel
[7] Tel Aviv Univ, Blavatnik Sch Comp Sci, Tel Aviv, Israel
[8] Harvard Univ, Sch Engn & Appl Sci, Cambridge, MA 02138 USA
Funding
US National Institutes of Health;
Keywords
WORDS; TIMESCALES; COMPONENT; FUTURE;
DOI
10.1038/s41593-022-01026-4
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Departing from traditional linguistic models, advances in deep learning have resulted in a new family of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language. Deep language models have revolutionized natural language processing. This paper identifies three computational principles shared between deep language models and the human brain, which could transform our understanding of the neural basis of language.
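To make the three shared principles concrete, the following minimal Python sketch (not the authors' analysis pipeline) shows how an autoregressive DLM such as GPT-2, accessed here through the Hugging Face transformers library, yields a pre-onset next-word prediction, a post-onset surprisal value, and a contextual embedding for each word; the example context, upcoming word, and variable names are illustrative assumptions.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a pretrained autoregressive language model and its tokenizer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

context = "We listened to the story as it unfolded word by"   # illustrative context
next_word = " word"                                            # illustrative upcoming word

inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# (1) Pre-onset prediction: a probability distribution over the next token,
#     computed from the prior context alone, before the word is heard.
probs = torch.softmax(out.logits[0, -1], dim=-1)

# (2) Post-onset surprise: surprisal = -log p(actual word | context), comparing
#     the pre-onset prediction with the word that actually arrived.
next_id = tokenizer(next_word)["input_ids"][0]   # first sub-token of the word
surprisal = -torch.log(probs[next_id]).item()

# (3) Contextual embedding: the final-layer hidden state of the last token is a
#     context-dependent vector representation of the current word.
contextual_embedding = out.hidden_states[-1][0, -1]

print(f"p(word | context) = {probs[next_id].item():.4f}, surprisal = {surprisal:.2f} nats")
print(f"contextual embedding dimension = {contextual_embedding.shape[0]}")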
Pages: 369+
Page count: 28