Rethinking Weak Supervision in Helping Contrastive Learning

Cited by: 0
Authors
Cui, Jingyi [1 ]
Huang, Weiran [2 ,3 ]
Wang, Yifei [4 ]
Wang, Yisen [1 ,5 ]
Affiliations
[1] Peking Univ, Sch Intelligence Sci & Technology, Natl Key Lab Gen Artificial Intelligence, Beijing, Peoples R China
[2] Shanghai Jiao Tong Univ, Qing Yuan Res Inst, Shanghai, Peoples R China
[3] Huawei Noahs Ark Lab, Montreal, PQ, Canada
[4] Peking Univ, Sch Math Sci, Beijing, Peoples R China
[5] Peking Univ, Inst Artificial Intelligence, Beijing, Peoples R China
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202 | 2023 / Vol. 202
Funding
National Natural Science Foundation of China; National Key R&D Program of China
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Contrastive learning has shown outstanding performance in both supervised and unsupervised settings, and has recently been applied to weakly supervised learning problems such as semi-supervised learning and learning with noisy labels. Despite empirical evidence that semi-supervised labels improve the representations learned by contrastive learning, it remains unknown whether noisy supervised information can be used directly in training rather than only after manual denoising. To explore the mechanistic differences between semi-supervised and noisy-labeled information in helping contrastive learning, we establish a unified theoretical framework of contrastive learning under weak supervision. Specifically, we investigate the most intuitive paradigm of jointly training supervised and unsupervised contrastive losses. By translating the weakly supervised information into a similarity graph, under the framework of spectral clustering and based on the posterior probability of the weak labels, we establish a downstream classification error bound. We prove that under this paradigm semi-supervised labels improve the downstream error bound, whereas noisy labels have only a limited effect. Our theoretical findings provide new insights for the community to rethink the role of weak supervision in helping contrastive learning.
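The joint training paradigm the abstract describes — an unsupervised contrastive loss combined with a supervised contrastive term driven by (possibly weak) labels — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact objective: the SimCLR-style InfoNCE loss, the SupCon-style supervised term, and the mixing weight `lam` are assumptions introduced here for concreteness.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Unsupervised contrastive (InfoNCE/SimCLR-style) loss on two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature           # pairwise cosine similarities
    targets = torch.arange(z1.size(0))         # matching views sit on the diagonal
    return F.cross_entropy(logits, targets)

def sup_con_loss(z, labels, temperature=0.5):
    """Supervised contrastive (SupCon-style) loss: same weak label => positive pair."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature
    mask = (labels[:, None] == labels[None, :]).float()
    mask.fill_diagonal_(0)                     # a sample is not its own positive
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp = torch.exp(logits) * (1 - torch.eye(z.size(0)))         # drop self-pairs
    log_prob = logits - torch.log(exp.sum(dim=1, keepdim=True))
    pos_count = mask.sum(dim=1).clamp(min=1)   # avoid division by zero
    return -(mask * log_prob).sum(dim=1).div(pos_count).mean()

def joint_loss(z1, z2, weak_labels, lam=0.5):
    """Joint objective: unsupervised term plus a weakly supervised contrastive term."""
    return info_nce_loss(z1, z2) + lam * sup_con_loss(z1, weak_labels)
```

Here `weak_labels` stands in for either semi-supervised labels (on the labeled subset) or noisy labels; the paper's analysis asks how each choice changes the similarity graph that this supervised term induces.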
Pages: 20