What to Hide from Your Students: Attention-Guided Masked Image Modeling

Cited by: 36
Authors
Kakogeorgiou, Ioannis [1 ]
Gidaris, Spyros [2 ]
Psomas, Bill [1 ]
Avrithis, Yannis [3 ,4 ]
Bursuc, Andrei [2 ]
Karantzalos, Konstantinos [1 ]
Komodakis, Nikos [5 ,6 ]
Affiliations
[1] Natl Tech Univ Athens, Athens, Greece
[2] Valeo Ai, Paris, France
[3] Inst Adv Res Artificial Intelligence IARAI, Vienna, Austria
[4] Athena RC, Athens, Greece
[5] Univ Crete, Iraklion, Greece
[6] IACM Forth, Iraklion, Greece
Source
COMPUTER VISION - ECCV 2022, PT XXX | 2022, Vol. 13690
DOI
10.1007/978-3-031-20056-4_18
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transformers and masked language modeling are quickly being adopted and explored in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that image token masking differs from token masking in text, due to the amount and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and exhibit this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map, which we use to guide masking for the student. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness over random masking for dense distillation-based MIM as well as plain distillation-based self-supervised learning on classification tokens. We confirm that AttMask accelerates the learning process and improves the performance on a variety of downstream tasks. We provide the implementation code at https://github.com/gkakogeorgiou/attmask.
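The core idea of the abstract can be sketched in a few lines: the teacher's [CLS]-token attention over the patch tokens ranks patches by how much the teacher attends to them, and the most-attended patches are hidden from the student. The sketch below is a minimal illustration of that selection step, not the paper's implementation; the function name, the per-image attention input, and the fixed masking ratio are all illustrative assumptions.

```python
import numpy as np

def attmask(cls_attention: np.ndarray, mask_ratio: float = 0.5) -> np.ndarray:
    """Attention-guided masking sketch (illustrative, not the paper's API).

    cls_attention: (B, N) array of the teacher's [CLS] attention over the
                   N patch tokens of each of B images (e.g. averaged over
                   the heads of the last transformer layer).
    Returns a boolean mask of shape (B, N); True marks a patch hidden
    from the student.
    """
    B, N = cls_attention.shape
    num_masked = int(mask_ratio * N)
    # Per image, pick the indices of the most-attended patches.
    top_idx = np.argsort(-cls_attention, axis=1)[:, :num_masked]
    mask = np.zeros((B, N), dtype=bool)
    np.put_along_axis(mask, top_idx, True, axis=1)
    return mask

# Toy usage: 2 images with 16 patches each, random attention weights.
rng = np.random.default_rng(0)
attn = rng.random((2, 16))
attn /= attn.sum(axis=1, keepdims=True)  # normalize like attention weights
mask = attmask(attn, mask_ratio=0.5)     # hides the 8 most-attended patches
```

Replacing this top-k selection with uniform random sampling recovers the random-masking baseline the paper compares against, which is what makes the masking "informed" rather than random.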
Pages: 300-318 (19 pages)