The attentive reconstruction of objects facilitates robust object recognition

Cited by: 0
Authors
Ahn, Seoyoung [1]
Adeli, Hossein [2]
Zelinsky, Gregory J. [3,4]
Affiliations
[1] Univ Calif Berkeley, Dept Mol & Cell Biol, Berkeley, CA 94720 USA
[2] Columbia Univ, Zuckerman Mind Brain Behav Inst, New York, NY USA
[3] SUNY Stony Brook, Dept Psychol, Stony Brook, NY USA
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY USA
Keywords
Top-down facilitation; Visual attention; Neural mechanisms; Bayesian inference; Perception; Model; Shape; Integration; Spotlight; Networks
DOI
10.1371/journal.pcbi.1012159
Chinese Library Classification (CLC)
Q5 [Biochemistry]
Subject classification codes
071010; 081704
Abstract
Humans are extremely robust in our ability to perceive and recognize objects: we see faces in tea stains and can recognize friends on dark streets. Yet neurocomputational models of primate object recognition have focused on the initial feed-forward pass of processing through the ventral stream, and less on the top-down feedback that likely underlies robust object perception and recognition. Aligned with the generative approach, we propose that the visual system actively facilitates recognition by reconstructing the object hypothesized to be in the image. Top-down attention then uses this reconstruction as a template to bias feedforward processing to align with the most plausible object hypothesis. Building on auto-encoder neural networks, our model makes detailed hypotheses about the appearance and location of candidate objects in the image, reconstructing a complete object representation from visual input that may be incomplete due to noise and occlusion. The model then leverages the best object reconstruction, measured by reconstruction error, to direct the bottom-up process of selectively routing low-level features, a top-down biasing that captures a core function of attention. We evaluated our model using the MNIST-C (handwritten digits under corruptions) and ImageNet-C (real-world objects under corruptions) datasets. Not only did our model achieve superior performance on these challenging tasks, designed to approximate real-world noise and occlusion viewing conditions, but it also better accounted for human behavioral reaction times and error patterns than a standard feedforward Convolutional Neural Network. Our model suggests that a complete understanding of object perception and recognition requires integrating top-down attentional feedback, which we propose takes the form of an object reconstruction.

Author summary
Humans can dream and imagine things, which means that the human brain can generate perceptions of things that are not there. We propose that humans evolved this generative capability not solely to have more vivid dreams, but to help us better understand the world, especially when what we see is unclear or missing details (due to occlusion, changing perspective, etc.). Through a combination of computational modeling and behavioral experiments, we demonstrate how the process of generating objects, actively reconstructing the most plausible object representation from noisy visual input, guides attention toward specific features or locations within an image (known functions of top-down attention), thereby enhancing the system's robustness to various types of noise and corruption. We found that this generative attention mechanism could explain not only the time it took people to recognize challenging objects, but also the types of recognition errors people made (seeing an object as one thing when it was really another). These findings contribute to a deeper understanding of the computational mechanisms of attention in the brain and their potential connection to the generative processes that facilitate robust object recognition.
Pages: 28
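
To make the mechanism summarized in the abstract concrete, here is a minimal sketch (not the authors' published code) of reconstruction-guided recognition in PyTorch. The class-conditional autoencoder, the layer sizes, the top-k hypothesis loop, and the multiplicative pixel-level bias are all illustrative assumptions standing in for the paper's object-reconstruction and feature-routing components.

```python
# Minimal, hypothetical sketch of reconstruction-guided attention, assuming a
# class-conditional autoencoder on 28x28 grayscale inputs (MNIST-C-like).
# Module names, sizes, and the biasing rule are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassConditionalAE(nn.Module):
    """Encoder yields class logits; decoder reconstructs the image implied
    by a given class hypothesis (one reconstruction per candidate class)."""
    def __init__(self, n_classes=10, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, latent))
        self.classifier = nn.Linear(latent, n_classes)
        # Decoder input = latent code + one-hot class hypothesis.
        self.decoder = nn.Sequential(
            nn.Linear(latent + n_classes, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), z

def recognize_with_reconstruction(model, x, topk=3):
    """Feedforward pass, then reconstruct the top-k class hypotheses, keep
    the one with the lowest reconstruction error, and use it as a template
    to bias a second, attention-weighted feedforward pass."""
    logits, z = model(x)
    candidates = logits.topk(topk, dim=-1).indices[0]
    best_err, best_recon = float("inf"), None
    for c in candidates:
        onehot = F.one_hot(c, logits.size(-1)).float().unsqueeze(0)
        recon = model.decoder(torch.cat([z, onehot], dim=-1))
        err = F.mse_loss(recon, x.flatten(1)).item()  # reconstruction error
        if err < best_err:
            best_err, best_recon = err, recon
    # Top-down bias: reweight input pixels by the winning reconstruction,
    # a crude stand-in for selectively routing low-level features.
    attended = (x.flatten(1) * best_recon).view_as(x)
    biased_logits, _ = model(attended)
    return biased_logits, best_err

# Usage on an untrained model and a random "image", just to show the loop.
with torch.no_grad():
    model = ClassConditionalAE()
    x = torch.rand(1, 1, 28, 28)
    logits, err = recognize_with_reconstruction(model, x)
    print(logits.argmax(dim=-1).item(), err)
```

In this sketch, the class hypothesis with the lowest reconstruction error wins, and its reconstruction multiplicatively reweights the input before a second feedforward pass; this is a deliberately simple stand-in for the selective routing of low-level features that the abstract attributes to top-down attention.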