Bootstrap contrastive domain adaptation

Cited by: 0
Authors
Jia, Yan [1 ]
Cheng, Yuqing [2 ]
Qiao, Peng [1 ]
Institutions
[1] Natl Univ Def Technol, Coll Comp Sci & Technol, Changsha 410073, Hunan, Peoples R China
[2] Natl Univ Def Technol, Coll Syst Engn, Changsha 410073, Hunan, Peoples R China
Keywords
Unsupervised domain adaptation; Transfer learning; Contrastive learning; Self-supervised learning; Asymmetric networks
DOI
10.1007/s12293-024-00422-6
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning, particularly contrastive learning, has shown significant promise in vision tasks. Although effective, contrastive learning suffers from false negatives, especially under the domain shifts that arise in domain adaptation scenarios. The Bootstrap Your Own Latent (BYOL) approach, with its asymmetric structure and its avoidance of negative samples, offers a foundation for addressing this issue, yet it remains underexplored in domain adaptation. We introduce an asymmetrically structured network, Bootstrap Contrastive Domain Adaptation (BCDA), which applies contrastive learning to domain adaptation in a novel way. BCDA uses a bootstrap clustering positive-sampling strategy to ensure stable, end-to-end domain adaptation, preventing the model collapse often seen in asymmetric networks. The method not only aligns domains internally through a mean-square loss but also strengthens semantic inter-domain alignment, effectively eliminating false negatives. BCDA represents the first foray into non-contrastive domain adaptation and could serve as a foundational model for future studies; it shows potential to supersede contrastive domain adaptation methods in eliminating false negatives, as evidenced by strong results on three well-known domain adaptation benchmark datasets.
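To make the abstract's two core mechanisms concrete, below is a minimal NumPy sketch of the BYOL-style machinery BCDA builds on: the mean-square loss between L2-normalized online and target embeddings, and the exponential-moving-average (momentum) update that gives the network its asymmetry and guards against collapse without negative samples. This is an illustrative reconstruction of the well-known BYOL components, not the authors' actual BCDA implementation; all names are hypothetical.

```python
import numpy as np

def normalized_mse(p, z):
    """BYOL-style loss: mean squared error between the L2-normalized
    online prediction p and target projection z.
    For unit vectors this equals 2 - 2 * cosine_similarity(p, z),
    so it is 0 for identical directions and 2 for orthogonal ones."""
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    return np.sum((p - z) ** 2, axis=-1)

def ema_update(target_params, online_params, tau=0.99):
    """Momentum update of the target network's parameters.
    The target is a slow exponential moving average of the online
    network; this asymmetry is what lets BYOL-style training avoid
    collapse without any negative pairs."""
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_params, online_params)]
```

As a quick sanity check, `normalized_mse(v, v)` is 0 for any nonzero `v`, while two orthogonal embeddings give the maximum value 2; the target parameters move only a fraction `1 - tau` toward the online parameters per step.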
Pages: 415-427 (13 pages)