Understanding models understanding language

Times cited: 7
Authors
Søgaard, Anders [1,2]
Affiliations
[1] Univ Copenhagen, Pioneer Ctr Artificial Intelligence, Dept Comp Sci, Lyngbyvej 2, DK-2100 Copenhagen, Denmark
[2] Univ Copenhagen, Dept Philosophy, Lyngbyvej 2, DK-2100 Copenhagen, Denmark
Keywords
Artificial intelligence; Language; Mind; Brains
DOI
10.1007/s11229-022-03931-4
CLC classification
N09 [History of natural science]; B [Philosophy, religion];
Discipline classification codes
01 ; 0101 ; 010108 ; 060207 ; 060305 ; 0712 ;
Abstract
Landgrebe and Smith (Synthese 198(March):2061-2081, 2021) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence, perhaps more widely known as natural language processing: the models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is derived mainly from an analysis of the widely used Transformer architecture. Here I address a number of misunderstandings in their analysis and present what I take to be a more adequate analysis of the ability of Transformer models to learn natural language semantics. To avoid confusion, I distinguish between inferential and referential semantics. Landgrebe and Smith's (2021) analysis of the Transformer architecture's expressivity and generalization concerns inferential semantics. This part of their diagnosis is shown to rely on misunderstandings of technical properties of Transformers. Landgrebe and Smith (2021) also claim that referential semantics is unobtainable for Transformer models. In response, I present a non-technical discussion of techniques for grounding Transformer models, giving them referential semantics, even in the absence of supervision. I also present a simple thought experiment to highlight the mechanisms that would lead to referential semantics, and discuss in what sense models that are grounded in this way can be said to understand language. Finally, I discuss the approach Landgrebe and Smith (2021) advocate for, namely manual specification of formal grammars that associate linguistic expressions with logical form.
Pages: 16