A brain-inspired object-based attention network for multiobject recognition and visual reasoning

Cited by: 6
Authors
Adeli, Hossein [1 ]
Ahn, Seoyoung [1 ]
Zelinsky, Gregory J. [1 ,2 ]
Affiliations
[1] SUNY Stony Brook, Dept Psychol, Stony Brook, NY, USA
[2] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY, USA
Keywords
convolutional neural networks; zoom lens; perception; model; mechanisms; gradient; task
DOI
10.1167/jov.23.5.16
Chinese Library Classification (CLC)
R77 [Ophthalmology]
Discipline classification code
100212
Abstract
The visual system uses sequences of selective glimpses of objects to support goal-directed behavior, but how is this attention control learned? Here we present an encoder-decoder model inspired by the interacting bottom-up and top-down visual pathways that make up the recognition-attention system in the brain. At every iteration, a new glimpse is taken from the image and processed through the "what" encoder, a hierarchy of feedforward, recurrent, and capsule layers, to obtain an object-centric (object-file) representation. This representation feeds into the "where" decoder, where the evolving recurrent representation provides top-down attentional modulation to plan subsequent glimpses and to influence routing in the encoder. We demonstrate how the attention mechanism significantly improves the accuracy of classifying highly overlapping digits. In a visual reasoning task requiring comparison of two objects, our model achieves near-perfect accuracy and significantly outperforms larger models in generalizing to unseen stimuli. Our work demonstrates the benefits of object-based attention mechanisms that take sequential glimpses of objects.
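The glimpse loop described in the abstract can be summarized in a short sketch. The code below is not the authors' implementation: the layer sizes, the GRU state standing in for the recurrent/capsule object-file representation, and the soft multiplicative glimpse are all assumptions. It only illustrates the overall structure, in which a "what" encoder updates an object-centric state from each attended glimpse and a "where" decoder maps that state to a spatial attention map that gates the next glimpse.

```python
# Minimal sketch (not the authors' code) of the glimpse-based encoder-decoder loop.
# Assumed details: layer sizes, a GRU cell as the recurrent object-file state,
# and a soft (sigmoid) attention map multiplied into the image for the next glimpse.
import torch
import torch.nn as nn


class GlimpseModel(nn.Module):
    def __init__(self, img_size=28, glimpse_dim=128, n_classes=10):
        super().__init__()
        self.img_size = img_size
        # "What" encoder: feedforward features feeding a recurrent state.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, glimpse_dim),
        )
        self.rnn = nn.GRUCell(glimpse_dim, glimpse_dim)
        # "Where" decoder: maps the evolving state to a spatial attention map
        # over the image, which gates the input for the next glimpse.
        self.where = nn.Linear(glimpse_dim, img_size * img_size)
        self.classifier = nn.Linear(glimpse_dim, n_classes)

    def forward(self, image, n_glimpses=3):
        b = image.size(0)
        state = image.new_zeros(b, self.rnn.hidden_size)
        attn = torch.ones_like(image)  # first glimpse sees the whole image
        logits_per_step = []
        for _ in range(n_glimpses):
            glimpse = image * attn                     # attended input
            state = self.rnn(self.encoder(glimpse), state)
            attn_logits = self.where(state).view(b, 1, self.img_size, self.img_size)
            attn = torch.sigmoid(attn_logits)          # top-down map for next glimpse
            logits_per_step.append(self.classifier(state))
        return torch.stack(logits_per_step, dim=1)     # one prediction per glimpse


if __name__ == "__main__":
    model = GlimpseModel()
    x = torch.randn(2, 1, 28, 28)                      # e.g., overlapping-digit images
    print(model(x, n_glimpses=3).shape)                # torch.Size([2, 3, 10])
```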
Pages: 17