MVEB: Self-Supervised Learning With Multi-View Entropy Bottleneck

Cited by: 1
Authors
Wen, Liangjian [1 ,2 ]
Wang, Xiasi [3 ]
Liu, Jianzhuang [4 ]
Xu, Zenglin [5 ,6 ]
Affiliations
[1] Southwestern Univ Finance & Econ, Sch Comp & Artificial Intelligence, Chengdu 610074, Peoples R China
[2] Southwestern Univ Finance & Econ, Res Inst Digital Econ & Interdisciplinary Sci, Chengdu 610074, Peoples R China
[3] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[4] Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[5] Harbin Inst Technol Shenzhen, Shenzhen 518055, Peoples R China
[6] Pengcheng Lab, Shenzhen 518066, Peoples R China
Keywords
Task analysis; Entropy; Mutual information; Supervised learning; Feature extraction; Minimal sufficient representation; representation learning; self-supervised learning
DOI
10.1109/TPAMI.2024.3380065
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning aims to learn representations that generalize effectively to downstream tasks. Many self-supervised approaches treat two views of an image as both the input and the self-supervised signal, assuming that both views contain the same task-relevant information and that this shared information is (approximately) sufficient for downstream prediction. Recent studies show that discarding superfluous information not shared between the views can improve generalization. The ideal representation is therefore sufficient for downstream tasks while containing minimal superfluous information, termed the minimal sufficient representation. One can learn this representation by maximizing the mutual information between the representation and the supervised view while eliminating superfluous information. However, the computation of mutual information is notoriously intractable. In this work, we propose an objective termed the multi-view entropy bottleneck (MVEB) to learn the minimal sufficient representation effectively. MVEB simplifies learning the minimal sufficient representation to maximizing both the agreement between the embeddings of two views and the differential entropy of the embedding distribution. Our experiments confirm that MVEB significantly improves performance; for example, it achieves 76.9% top-1 accuracy on ImageNet under linear evaluation with a vanilla ResNet-50 backbone. To the best of our knowledge, this is a new state-of-the-art result with ResNet-50.
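For intuition, here is a minimal PyTorch sketch of a two-view objective in the spirit of MVEB, not the authors' implementation: the agreement term is the mean cosine similarity between the normalized embeddings of the two views, and the differential-entropy term is approximated with a simple k-nearest-neighbor (Kozachenko-Leonenko style) estimate up to constants. The function name mveb_style_loss and the hyperparameters alpha and k are illustrative assumptions; the paper derives its own entropy objective.

```python
import torch
import torch.nn.functional as F

def mveb_style_loss(z1, z2, alpha=1.0, k=5):
    """Illustrative two-view loss in the spirit of MVEB (not the paper's code).

    Maximizes (i) the agreement between the two views' embeddings and
    (ii) the differential entropy of the embedding distribution, with the
    entropy approximated by a k-nearest-neighbor (Kozachenko-Leonenko style)
    estimate up to additive constants.
    """
    z1 = F.normalize(z1, dim=1)  # embeddings on the unit sphere
    z2 = F.normalize(z2, dim=1)

    # Agreement term: mean cosine similarity between paired views.
    agreement = (z1 * z2).sum(dim=1).mean()

    # Entropy surrogate: mean log-distance to the k-th nearest neighbor
    # within the batch (larger spacing => higher estimated entropy).
    dist = torch.cdist(z1, z1)  # (B, B) pairwise distances
    eye = torch.eye(dist.size(0), dtype=torch.bool, device=dist.device)
    dist = dist.masked_fill(eye, float("inf"))  # exclude self-distances
    knn_dist = dist.topk(k, largest=False).values[:, -1]
    entropy = torch.log(knn_dist + 1e-8).mean()

    # Minimize the negative of both terms, i.e., maximize agreement + entropy.
    return -agreement - alpha * entropy


# Usage with random stand-in embeddings; in practice z1 and z2 come from
# an encoder applied to two augmentations of the same image batch.
z1 = torch.randn(256, 128, requires_grad=True)
z2 = torch.randn(256, 128, requires_grad=True)
loss = mveb_style_loss(z1, z2)
loss.backward()
```

In a real pipeline the embeddings would be produced by a backbone such as ResNet-50 with a projection head, and the kNN surrogate above would be replaced by the entropy treatment the paper proposes.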
Pages: 6097-6108
Page count: 12