What to Hide from Your Students: Attention-Guided Masked Image Modeling

Cited by: 36
Authors
Kakogeorgiou, Ioannis [1 ]
Gidaris, Spyros [2 ]
Psomas, Bill [1 ]
Avrithis, Yannis [3 ,4 ]
Bursuc, Andrei [2 ]
Karantzalos, Konstantinos [1 ]
Komodakis, Nikos [5 ,6 ]
Affiliations
[1] National Technical University of Athens, Athens, Greece
[2] Valeo.ai, Paris, France
[3] Institute of Advanced Research in Artificial Intelligence (IARAI), Vienna, Austria
[4] Athena Research Center (Athena RC), Athens, Greece
[5] University of Crete, Heraklion, Greece
[6] IACM, FORTH, Heraklion, Greece
Source
COMPUTER VISION - ECCV 2022, PT XXX | 2022 / Vol. 13690
DOI
10.1007/978-3-031-20056-4_18
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transformers and masked language modeling are quickly being adopted and explored in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that image token masking differs from token masking in text, due to the amount and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and exhibit this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map, which we use to guide masking for the student. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness over random masking for dense distillation-based MIM as well as plain distillation-based self-supervised learning on classification tokens. We confirm that AttMask accelerates the learning process and improves the performance on a variety of downstream tasks. We provide the implementation code at https://github.com/gkakogeorgiou/attmask.
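For illustration, below is a minimal PyTorch-style sketch of the attention-guided masking idea described in the abstract, assuming a teacher ViT that exposes per-head attention from the [CLS] token to the patch tokens in its last layer. The function name, tensor shapes, and masking ratio are illustrative assumptions, not the authors' exact implementation; see the linked repository for the reference code.

```python
# Minimal sketch of attention-guided masking (AttMask), under the assumptions
# stated above. Hypothetical helper; not the authors' exact API.
import torch

def attention_guided_mask(cls_attn: torch.Tensor, mask_ratio: float = 0.5) -> torch.Tensor:
    """Mark the patch tokens the teacher attends to most for masking.

    Args:
        cls_attn: [B, H, N] attention weights from the [CLS] token to the N patch
            tokens, one row per head H (taken from the teacher's last layer).
        mask_ratio: fraction of patch tokens to hide from the student.

    Returns:
        Boolean mask of shape [B, N]; True means the token is masked for the student.
    """
    attn = cls_attn.mean(dim=1)                    # [B, N] average over heads
    num_mask = int(mask_ratio * attn.shape[1])     # tokens to hide per image
    top_idx = attn.topk(num_mask, dim=1).indices   # most-attended patches
    mask = torch.zeros_like(attn, dtype=torch.bool)
    mask.scatter_(1, top_idx, True)                # hide the salient tokens
    return mask

# Usage: the student replaces the masked patch embeddings with a learnable
# [MASK] token before encoding, while the teacher sees the full image.
if __name__ == "__main__":
    dummy_attn = torch.rand(2, 6, 196)             # batch=2, heads=6, 14x14 patches
    m = attention_guided_mask(dummy_attn, mask_ratio=0.5)
    print(m.shape, m.sum(dim=1))                   # each image: 98 masked tokens
```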
Pages: 300-318
Number of pages: 19