Understanding models understanding language

Cited by: 7
Authors
Søgaard, Anders [1,2]
Affiliations
[1] Univ Copenhagen, Pioneer Ctr Artificial Intelligence, Dept Comp Sci, Lyngbyvej 2, DK-2100 Copenhagen, Denmark
[2] Univ Copenhagen, Dept Philosophy, Lyngbyvej 2, DK-2100 Copenhagen, Denmark
Keywords
Artificial intelligence; Language; Mind; BRAINS;
DOI
10.1007/s11229-022-03931-4
Chinese Library Classification
N09 [History of Natural Science]; B [Philosophy and Religion];
Subject classification codes
01; 0101; 010108; 060207; 060305; 0712;
Abstract
Landgrebe and Smith (Synthese 198(March):2061-2081, 2021) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence (perhaps more widely known as natural language processing): the models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number of misunderstandings in their analysis, and present what I take to be a more adequate analysis of the ability of Transformer models to learn natural language semantics. To avoid confusion, I distinguish between inferential and referential semantics. Landgrebe and Smith's (2021) analysis of the Transformer architecture's expressivity and generalization concerns inferential semantics. This part of their diagnosis is shown to rely on misunderstandings of technical properties of Transformers. Landgrebe and Smith (2021) also claim that referential semantics is unobtainable for Transformer models. In response, I present a non-technical discussion of techniques for grounding Transformer models, giving them referential semantics, even in the absence of supervision. I also present a simple thought experiment to highlight the mechanisms that would lead to referential semantics, and discuss in what sense models that are grounded in this way can be said to understand language. Finally, I discuss the approach Landgrebe and Smith (2021) advocate for, namely manual specification of formal grammars that associate linguistic expressions with logical form.
Pages: 16