Modality-Agnostic Self-Supervised Learning with Meta-Learned Masked Auto-Encoder

Cited by: 0
Authors
Jang, Huiwon [1 ]
Tack, Jihoon [1 ]
Choi, Daewon [2 ]
Jeong, Jongheon [1 ]
Shin, Jinwoo [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol KAIST, Daejeon, South Korea
[2] Korea Univ, Seoul, South Korea
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
(none)
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Despite its practical importance across a wide range of modalities, recent advances in self-supervised learning (SSL) have primarily focused on a few well-curated domains, e.g., vision and language, often relying on domain-specific knowledge. For example, the Masked Auto-Encoder (MAE) has become a popular architecture in these domains, but its potential in other modalities has been less explored. In this paper, we develop MAE as a unified, modality-agnostic SSL framework. In particular, we argue that meta-learning is the key to interpreting MAE as a modality-agnostic learner, and from this motivation we propose enhancements to MAE that jointly improve its SSL performance across diverse modalities, which we coin MetaMAE. Our key idea is to view the mask reconstruction of MAE as a meta-learning task: masked tokens are predicted by adapting the Transformer meta-learner through the amortization of unmasked tokens. Based on this novel interpretation, we propose to integrate two advanced meta-learning techniques. First, we adapt the amortized latent of the Transformer encoder using gradient-based meta-learning to enhance the reconstruction. Then, we maximize the alignment between the amortized and adapted latents through task contrastive learning, which guides the Transformer encoder to better encode task-specific knowledge. Our experiments demonstrate the superiority of MetaMAE on the modality-agnostic SSL benchmark DABS, where it significantly outperforms prior baselines. Code is available at https://github.com/alinlab/MetaMAE.
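The two meta-learning components described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the frozen linear decoder, the random "amortized" latent, and the hyperparameters (`steps`, `lr`) are placeholders standing in for MetaMAE's Transformer encoder/decoder, chosen only to show gradient-based adaptation of the latent on the visible tokens and a simplified alignment term between the amortized and adapted latents.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def inner_adapt(z, decode, target, mask, steps=10, lr=0.1):
    """Gradient-based meta-learning on the amortized latent z: run a few
    gradient updates on the reconstruction loss of the *visible*
    (unmasked) tokens, treating them as the support set of the task."""
    z = z.detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.mse_loss(decode(z)[~mask], target[~mask])
        (grad,) = torch.autograd.grad(loss, z)
        z = (z - lr * grad).detach().requires_grad_(True)
    return z.detach()

# Toy task: 10 tokens of dim 4; the last 3 are masked out.
tokens = torch.randn(10, 4)
mask = torch.zeros(10, dtype=torch.bool)
mask[7:] = True

# Placeholders for MetaMAE's modules: a frozen linear decoder and a
# random latent standing in for the Transformer encoder's output.
W = torch.randn(4, 4)
decode = lambda z: z @ W
z_amortized = torch.randn(10, 4)

z_adapted = inner_adapt(z_amortized, decode, tokens, mask)

# Task-contrastive alignment (simplified to a cosine term): pull the
# amortized latent toward its adapted version so the encoder is pushed
# to produce task-specific representations directly.
align_loss = 1 - F.cosine_similarity(
    z_amortized.flatten(), z_adapted.flatten(), dim=0)

vis_before = F.mse_loss(decode(z_amortized)[~mask], tokens[~mask]).item()
vis_after = F.mse_loss(decode(z_adapted)[~mask], tokens[~mask]).item()
print(f"visible-token reconstruction: {vis_before:.3f} -> {vis_after:.3f}")
```

In the actual method the outer (meta) update would backpropagate through this inner adaptation into the encoder and decoder weights; here everything except the latent is frozen to keep the sketch short.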
Pages: 19