Mixed Autoencoder for Self-supervised Visual Representation Learning

Citations: 10
Authors
Chen, Kai [1 ]
Liu, Zhili [1 ,2 ]
Hong, Lanqing [2 ]
Xu, Hang [2 ]
Li, Zhenguo [2 ]
Yeung, Dit-Yan [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Huawei Noah's Ark Lab, Montreal, PQ, Canada
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Keywords
DOI
10.1109/CVPR52729.2023.02178
CLC classification number
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks by randomly masking image patches and reconstructing them. However, effective data augmentation strategies for MAE remain an open question, unlike in contrastive learning, where augmentation plays a central role. This paper studies the prevailing mixing augmentation for MAE. We first demonstrate that naive mixing instead degrades model performance due to an increase in mutual information (MI). To address this, we propose homologous recognition, an auxiliary pretext task that not only alleviates the MI increase by explicitly requiring each patch to recognize its homologous patches, but also performs object-aware self-supervised pre-training for better downstream dense perception performance. With extensive experiments, we demonstrate that our proposed Mixed Autoencoder (MixedAE) achieves state-of-the-art transfer results among masked image modeling (MIM) augmentations on different downstream tasks with significant efficiency. Specifically, MixedAE outperforms MAE by +0.3% accuracy, +1.7 mIoU and +0.9 AP on ImageNet-1K, ADE20K and COCO respectively with a standard ViT-Base. Moreover, MixedAE surpasses iBOT, a strong MIM method combined with instance discrimination, while accelerating training by 2x. To the best of our knowledge, this is the first work to consider mixing for MIM from the perspective of pretext task design. Code will be made available.
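To illustrate the idea described in the abstract, the following is a minimal, hypothetical PyTorch sketch of patch-level image mixing together with a per-patch homologous-recognition target. It is not the authors' released code: the function names (`patchify`, `mix_patches`), the fixed 0.5 mixing ratio, and the batch-shuffle pairing are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): mix ViT-style patches from two
# images and build a binary "homologous" map that an auxiliary head could
# be trained to predict, per the homologous-recognition pretext task.
import torch

def patchify(imgs, patch_size=16):
    """(B, C, H, W) -> (B, N, patch_size*patch_size*C) patch tokens."""
    B, C, H, W = imgs.shape
    h, w = H // patch_size, W // patch_size
    x = imgs.reshape(B, C, h, patch_size, w, patch_size)
    x = x.permute(0, 2, 4, 3, 5, 1).reshape(B, h * w, patch_size * patch_size * C)
    return x

def mix_patches(imgs, mix_ratio=0.5, patch_size=16):
    """Replace a random fraction of each image's patches with patches from a
    shuffled partner image; return the mixed tokens and a binary map marking
    which positions are homologous (still from the original image)."""
    patches = patchify(imgs, patch_size)           # (B, N, D)
    B, N, _ = patches.shape
    partner = patches[torch.randperm(B)]           # donor patches per sample
    replace = torch.rand(B, N) < mix_ratio         # which positions get swapped
    mixed = torch.where(replace.unsqueeze(-1), partner, patches)
    homologous = (~replace).long()                 # 1 = original, 0 = mixed-in
    return mixed, homologous

if __name__ == "__main__":
    imgs = torch.randn(4, 3, 224, 224)
    mixed, homologous = mix_patches(imgs)
    print(mixed.shape, homologous.float().mean().item())  # (4, 196, 768), ~0.5
```

In a full pipeline, the `homologous` map would supervise an auxiliary classification loss alongside the usual masked reconstruction loss; the exact architecture and loss weighting are described in the paper itself.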
Pages: 22742 - 22751
Page count: 10