Neighborhood-Aware Mutual Information Maximization for Source-Free Domain Adaptation

Cited by: 1
Authors
Zhang, Lin [1 ]
Wang, Yifan [1 ]
Song, Ran [1 ]
Zhang, Mingxin [1 ]
Li, Xiaolei [1 ]
Zhang, Wei [1 ]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, Jinan 250100, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mutual information; Noise measurement; Feature extraction; Noise; Adaptation models; Training; Self-supervised learning; Source-free domain adaptation; domain adaptation; unsupervised learning
DOI
10.1109/TMM.2024.3394971
Chinese Library Classification
TP [automation and computer technology]
Discipline Code
0812
Abstract
Recently, the source-free domain adaptation (SFDA) problem has attracted much attention: a model pre-trained on the source domain is adapted to the target domain in the absence of source data. However, due to domain shift, negative alignment often occurs between samples of the same class, which lowers intra-class feature similarity. To address this issue, we present a self-supervised representation learning strategy for SFDA, named neighborhood-aware mutual information (NAMI), which maximizes the mutual information (MI) between the representations of target samples and those of their neighbors. Moreover, we theoretically show that NAMI can be decomposed into a weighted sum of local MI terms, suggesting that suitably weighting these terms yields a better estimate of NAMI. To this end, we introduce a neighborhood consensus score over sets of weakly and strongly augmented views and a point-wise density based on the neighborhood, both of which determine the weights of the local MI terms by leveraging the neighborhood information of each sample. The proposed method effectively handles domain shift and adaptively reduces noise in the neighborhood of each target sample. In combination with a consistency loss over augmented views, NAMI yields consistent improvements over existing state-of-the-art methods on three popular SFDA benchmarks.
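As a rough illustration of the weighted local-MI idea described in the abstract (not the authors' implementation: the function name, the k-nearest-neighbor memory bank, the InfoNCE-style MI lower bound, and all parameters are assumptions), a minimal PyTorch sketch might look as follows:

import torch
import torch.nn.functional as F

def weighted_neighborhood_mi_loss(feats, bank, weights=None, k=5, temperature=0.07):
    # feats:   (B, D) L2-normalized features of the current target batch
    # bank:    (N, D) L2-normalized features in a memory bank
    # weights: optional (B, k) nonnegative weights for the local MI terms,
    #          e.g., derived from a consensus score over weak/strong augmented
    #          views or a point-wise neighborhood density (both hypothetical here)
    sim = feats @ bank.t()                              # (B, N) cosine similarities
    _, nn_idx = sim.topk(k, dim=1)                      # indices of k nearest neighbors
    log_prob = F.log_softmax(sim / temperature, dim=1)  # (B, N) contrastive log-probabilities
    nn_log_prob = log_prob.gather(1, nn_idx)            # (B, k) log-prob of each neighbor
    if weights is None:
        weights = torch.ones_like(nn_log_prob)          # uniform local MI weights
    weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)
    # maximizing the MI lower bound == minimizing the weighted negative log-likelihood
    return -(weights * nn_log_prob).sum(dim=1).mean()

# Illustrative usage with random features only:
B, N, D = 32, 1024, 256
feats = F.normalize(torch.randn(B, D), dim=1)
bank = F.normalize(torch.randn(N, D), dim=1)
loss = weighted_neighborhood_mi_loss(feats, bank)

Under these assumptions, down-weighting neighbors with low consensus or low density would reduce the influence of noisy neighbors, which is the effect the abstract attributes to the weighting scheme.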
Pages: 9564-9574 (11 pages)