How Can Deep Neural Networks Inform Theory in Psychological Science?

Cited by: 1
Authors
McGrath, Sam Whitman [1 ,2 ]
Russin, Jacob [2 ,3 ]
Pavlick, Ellie [3 ,4 ]
Feiman, Roman [2 ,4 ]
Affiliations
[1] Brown Univ, Dept Philosophy, Providence, RI USA
[2] Brown Univ, Dept Cognit & Psychol Sci, Providence, RI 02912 USA
[3] Brown Univ, Dept Comp Sci, Providence, RI USA
[4] Brown Univ, Program Linguist, Providence, RI 02912 USA
Keywords
deep learning; neural networks; large language models; interpretability; psycholinguistics; cognitive development; philosophy of cognitive science; connectionism; language; models
DOI
10.1177/09637214241268098
Chinese Library Classification
B84 [Psychology]
Subject classification codes
04; 0402
Abstract
Over the last decade, deep neural networks (DNNs) have transformed the state of the art in artificial intelligence. In domains such as language production and reasoning, long considered uniquely human abilities, contemporary models have proven capable of strikingly human-like performance. However, in contrast to classical symbolic models, neural networks can be inscrutable even to their designers, making it unclear what significance, if any, they have for theories of human cognition. Two extreme reactions are common. Neural network enthusiasts argue that, because the inner workings of DNNs do not seem to resemble any of the traditional constructs of psychological or linguistic theory, their success renders these theories obsolete and motivates a radical paradigm shift. Neural network skeptics instead take this inability to interpret DNNs in psychological terms to mean that their success is irrelevant to psychological science. In this article, we review recent work that suggests that the internal mechanisms of DNNs can, in fact, be interpreted in the functional terms characteristic of psychological explanations. We argue that this undermines the shared assumption of both extremes and opens the door for DNNs to inform theories of cognition and its development.
Pages: 325-333
Page count: 9