Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream

Cited by: 524
Authors
Guclu, Umut [1 ]
van Gerven, Marcel A. J. [1 ]
Affiliations
[1] Radboud Univ Nijmegen, Donders Inst Brain Cognit & Behav, NL-6525 HR Nijmegen, Netherlands
Source
JOURNAL OF NEUROSCIENCE | 2015, Vol. 35, No. 27
Keywords
deep learning; functional magnetic resonance imaging; neural coding; FUNCTIONAL ARCHITECTURE; INFEROTEMPORAL CORTEX; RECEPTIVE-FIELDS; NATURAL IMAGES; VISUAL AREAS; OBJECT; NEURONS; MODEL; RECOGNITION; PERCEPTION;
DOI
10.1523/JNEUROSCI.5023-14.2015
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream.
Pages: 10005 - 10014
Page count: 10
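
The abstract above describes an encoding-model approach: features from successive layers of a deep neural network are regressed onto voxel responses, and each voxel is assigned the layer that predicts its activity best, yielding a layer-preference map that traces the complexity gradient along the ventral stream. The sketch below illustrates only that layer-assignment idea; it is not the authors' code, and the array shapes, the ridge penalty, and the random placeholder data are all assumptions made for illustration.

```python
# Minimal sketch of a layer-wise encoding analysis (not the published pipeline).
# Random arrays stand in for real DNN features and fMRI responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

n_stimuli, n_voxels = 200, 50
# Hypothetical per-layer feature matrices (stimuli x features), e.g. extracted
# from a pretrained convolutional network; feature counts are placeholders.
layer_features = [rng.standard_normal((n_stimuli, d)) for d in (64, 128, 256, 512, 1024)]
voxel_responses = rng.standard_normal((n_stimuli, n_voxels))  # stimuli x voxels

best_layer = np.zeros(n_voxels, dtype=int)
best_score = np.full(n_voxels, -np.inf)

for layer_idx, X in enumerate(layer_features):
    # Cross-validated linear prediction of every voxel from this layer's features.
    pred = cross_val_predict(Ridge(alpha=1.0), X, voxel_responses, cv=5)
    for v in range(n_voxels):
        r = np.corrcoef(pred[:, v], voxel_responses[:, v])[0, 1]
        if r > best_score[v]:
            best_score[v], best_layer[v] = r, layer_idx

# best_layer holds each voxel's preferred layer; projected onto the cortical
# surface, a shift toward deeper layers in downstream areas would indicate a
# gradient in feature complexity along the ventral stream.
print(np.bincount(best_layer, minlength=len(layer_features)))
```

With real data, the random arrays would be replaced by layer activations for the presented stimuli and the corresponding voxel time courses, and the preferred-layer index would be visualized per voxel rather than summarized with a histogram.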
Related Papers
50 records in total
  • [1] Face Space Representations in Deep Convolutional Neural Networks
    O'Toole, Alice J.
    Castillo, Carlos D.
    Parde, Connor J.
    Hill, Matthew Q.
    Chellappa, Rama
    TRENDS IN COGNITIVE SCIENCES, 2018, 22 (09) : 794 - 809
  • [2] Embedding Complexity of Learned Representations in Neural Networks
    Kuzma, Tomas
    Farkas, Igor
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 518 - 528
  • [3] Deep learning in spiking neural networks
    Tavanaei, Amirhossein
    Ghodrati, Masoud
    Kheradpisheh, Saeed Reza
    Masquelier, Timothee
    Maida, Anthony
    NEURAL NETWORKS, 2019, 111 : 47 - 63
  • [4] Exploring Internal Representations of Deep Neural Networks
    Despraz, Jeremie
    Gomez, Stephane
    Satizabal, Hector F.
    Pena-Reyes, Carlos Andres
    COMPUTATIONAL INTELLIGENCE, IJCCI 2017, 2019, 829 : 119 - 138
  • [5] Dropout Rademacher complexity of deep neural networks
    Gao, Wei
    Zhou, Zhi-Hua
    SCIENCE CHINA-INFORMATION SCIENCES, 2016, 59 (07) : 173 - 184
  • [6] Intracranial Electroencephalography and Deep Neural Networks Reveal Shared Substrates for Representations of Face Identity and Expressions
    Schwartz, Emily
    Alreja, Arish
    Richardson, R. Mark
    Ghuman, Avniel
    Anzellotti, Stefano
    JOURNAL OF NEUROSCIENCE, 2023, 43 (23) : 4291 - 4303
  • [7] Reframing Neural Networks: Deep Structure in Overcomplete Representations
    Murdock, Calvin
    Cazenavette, George
    Lucey, Simon
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (01) : 964 - 979
  • [8] Unsupervised neural network models of the ventral visual stream
    Zhuang, Chengxu
    Yan, Siming
    Nayebi, Aran
    Schrimpf, Martin
    Frank, Michael C.
    DiCarlo, James J.
    Yamins, Daniel L. K.
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (03)