Multi-source fully test-time adaptation

Citations: 0
Authors
Du, Yuntao [1 ]
Luo, Siqi [2 ]
Xin, Yi [2 ]
Chen, Mingcai [2 ]
Feng, Shuai [2 ]
Zhang, Mujie [2 ]
Wang, Chongjun [2]
Affiliations
[1] Beijing Institute for General Artificial Intelligence (BIGAI), Beijing, People's Republic of China
[2] Nanjing University, State Key Laboratory for Novel Software Technology, Nanjing, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Test-time adaptation; Domain adaptation; Transfer learning
DOI
10.1016/j.neunet.2024.106661
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks have significantly advanced various fields. However, these models often generalize poorly when the distribution of test samples differs from that of the training samples. Recently, several fully test-time adaptation methods have been proposed to adapt a trained model to unlabeled test samples before prediction, improving test performance. Despite achieving remarkable results, these methods involve only a single trained model, which can provide only limited side information for the test samples. In real-world scenarios, multiple trained models may be available that are beneficial to the test samples and complementary to each other. To better utilize these trained models, this paper formulates the problem of multi-source fully test-time adaptation, in which multiple trained models are adapted to the test samples. To address this problem, we introduce a simple yet effective method built on a weighted aggregation scheme together with two unsupervised losses: the former adaptively assigns higher weights to more relevant models, while the latter jointly adapts the models using online unlabeled samples. Extensive experiments on three image classification datasets show that the proposed method outperforms baseline methods, demonstrating its effectiveness in adapting multiple models.
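The abstract outlines the core mechanism: predictions from several pretrained source models are combined through adaptive weights, and the models are updated online with unsupervised losses. As a rough illustration only, the following is a minimal PyTorch sketch of what one such adaptation step might look like, assuming a learnable weight vector agg_logits and entropy minimization as a stand-in unsupervised loss; the paper's actual losses and weighting rule are not specified in this record.

    import torch
    import torch.nn.functional as F

    # Hypothetical sketch (not the authors' code): K pretrained source
    # models are aggregated with learnable weights, and all models are
    # adapted online on unlabeled test batches with an entropy-style
    # unsupervised loss, a common choice in fully test-time adaptation.

    def multi_source_tta_step(models, agg_logits, batch, optimizer):
        """One online adaptation step on an unlabeled test batch."""
        probs = [F.softmax(m(batch), dim=1) for m in models]   # per-model predictions
        weights = F.softmax(agg_logits, dim=0)                  # adaptive model weights
        ensemble = sum(w * p for w, p in zip(weights, probs))   # weighted aggregation

        # Unsupervised objective: minimize the entropy of the aggregated
        # prediction so the ensemble becomes confident on test data.
        entropy = -(ensemble * torch.log(ensemble + 1e-8)).sum(dim=1).mean()

        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
        return ensemble.argmax(dim=1)  # predictions for the current batch

In Tent-style fully test-time adaptation, the optimizer would typically cover only the models' normalization-layer affine parameters plus agg_logits, keeping the bulk of each network frozen.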
Pages: 10
Related Papers
50 records in total
  • [1] Sun, Jiachen; Ibrahim, Mark; Hall, Melissa; Evtimov, Ivan; Mao, Z. Morley; Ferrer, Cristian Canton; Hazirbas, Caner. VPA: Fully Test-Time Visual Prompt Adaptation. Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), 2023: 5796-5806.
  • [2] Hu, Minhao; Song, Tao; Gu, Yujun; Luo, Xiangde; Chen, Jieneng; Chen, Yinan; Zhang, Ya; Zhang, Shaoting. Fully Test-Time Adaptation for Image Segmentation. Medical Image Computing and Computer Assisted Intervention (MICCAI 2021), Part III, 2021, 12903: 251-260.
  • [3] Liang, Jian; He, Ran; Tan, Tieniu. A Comprehensive Survey on Test-Time Adaptation Under Distribution Shifts. International Journal of Computer Vision, 2025, 133(1): 31-64.
  • [4] Wang, Ziqiang; Chi, Zhixiang; Wu, Yanan; Gu, Li; Liu, Zhi; Plataniotis, Konstantinos; Wang, Yang. Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams. Computer Vision - ECCV 2024, Part XXIV, 2025, 15082: 332-349.
  • [5] Sun, Shiliang; Shi, Honglei; Wu, Yuanbin. A Survey of Multi-Source Domain Adaptation. Information Fusion, 2015, 24: 84-92.
  • [6] Lin, Hongbin; Zhang, Yifan; Niu, Shuaicheng; Cui, Shuguang; Li, Zhen. MonoTTA: Fully Test-Time Adaptation for Monocular 3D Object Detection. Computer Vision - ECCV 2024, Part XLIV, 2025, 15102: 96-114.
  • [7] Redko, Ievgen; Habrard, Amaury; Sebban, Marc. On the Analysis of Adaptability in Multi-Source Domain Adaptation. Machine Learning, 2019, 108(8-9): 1635-1652.
  • [8] Li, Keqiuyin; Lu, Jie; Zuo, Hua; Zhang, Guangquan. Multi-Source Contribution Learning for Domain Adaptation. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(10): 5293-5307.
  • [9] Karimpour, Morvarid; Noori Saray, Shiva; Tahmoresnezhad, Jafar; Pourmahmood Aghababa, Mohammad. Multi-Source Domain Adaptation for Image Classification. Machine Vision and Applications, 2020, 31(6).
  • [10] Renchunzi, Xie; Pratama, Mahardhika. Automatic Online Multi-Source Domain Adaptation. Information Sciences, 2022, 582: 480-494.