Constraint satisfaction in large language models

Cited by: 0
Authors
Jacobs, Cassandra L. [1 ,3 ]
MacDonald, Maryellen C. [2 ]
Affiliations
[1] SUNY Buffalo, Dept Linguist, Buffalo, NY USA
[2] Univ Wisconsin Madison, Dept Psychol, Madison, WI USA
[3] Univ Buffalo, Dept Linguist, Buffalo, NY 14260 USA
Funding
National Science Foundation (USA);
Keywords
Language comprehension; constraint satisfaction; ambiguity; connectionism; large language models; WORD RECOGNITION; LEXICAL ACCESS; EYE-MOVEMENTS; AMBIGUITY; INFORMATION; CONTEXT; RESOLUTION; FIT;
DOI
10.1080/23273798.2024.2364339
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline classification codes
100104; 100213;
Abstract
Constraint satisfaction theories were prominent in the late 20th century and emphasized continuous, rich interaction between many sources of information in a linguistic signal unfolding over time. A major challenge was rigorously capturing these highly interactive comprehension processes and yielding explicit predictions, because the important constraints were numerous and changed in prominence from one context to the next. Connectionist models were conceptually well-suited to this, but researchers had insufficient computing power and lacked sufficiently large corpora to bring these models to bear. These limitations no longer hold, and large language models (LLMs) offer an opportunity to test constraint satisfaction ideas about human language comprehension. We consider how LLMs can be applied to study interactive processes with lexical ambiguity resolution as a test case. We argue that further study of LLMs can advance theories of constraint satisfaction, though gaps remain in our understanding of how people and LLMs combine linguistic information.
Pages: 1231-1248
Page count: 18
    [J]. JOURNAL OF PSYCHOLINGUISTIC RESEARCH, 1988, 17 (02) : 125 - 168