Open-set domain adaptation with visual-language foundation models

Times Cited: 0
Authors
Yu, Qing [1 ]
Irie, Go [2 ]
Aizawa, Kiyoharu [1 ]
Affiliations
[1] Univ Tokyo, Dept Informat & Commun Engn, Tokyo 1138656, Japan
[2] Tokyo Univ Sci, Dept Informat & Comp Technol, Tokyo 1258585, Japan
Keywords
Deep learning; Cross-domain learning; Open-set recognition; Domain adaptation;
DOI
10.1016/j.cviu.2024.104230
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge obtained from a source domain with labeled data to a target domain with unlabeled data. Owing to the lack of labeled data in the target domain and the possible presence of unknown classes, open-set domain adaptation (ODA) has emerged as a potential solution to identify these classes during the training phase. Although existing ODA approaches aim to solve the distribution shifts between the source and target domains, most methods fine-tune ImageNet pre-trained models on the source domain and then adapt them to the target domain. Recent visual-language foundation models (VLFM), such as Contrastive Language-Image Pre-Training (CLIP), are robust to many distribution shifts and, therefore, should substantially improve the performance of ODA. In this work, we explore generic ways to adopt CLIP, a popular VLFM, for ODA. We investigate the performance of zero-shot prediction using CLIP, and then propose an entropy optimization strategy to assist the ODA models with the outputs of CLIP. The proposed approach achieves state-of-the-art results on various benchmarks, demonstrating its effectiveness in addressing the ODA problem.
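The abstract combines CLIP zero-shot prediction with an entropy-based criterion over its outputs. A minimal sketch of that general idea (not the paper's exact method): class probabilities are obtained from cosine similarities between image and class-text embeddings, and high-entropy predictions are treated as candidate unknown-class samples. The feature arrays, temperature, and threshold below are illustrative assumptions, stubbed with NumPy rather than an actual CLIP model.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    # Shannon entropy of each row of class probabilities
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def zero_shot_predict(image_feats, text_feats, temperature=0.01):
    """CLIP-style zero-shot classification: cosine similarity between
    L2-normalized image embeddings and per-class text embeddings,
    scaled by a temperature and passed through softmax."""
    img = image_feats / np.linalg.norm(image_feats, axis=-1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    logits = img @ txt.T / temperature
    return softmax(logits)

def split_known_unknown(probs, threshold):
    """Entropy-based open-set split: confident (low-entropy) predictions
    are kept as known-class samples; high-entropy ones are flagged as
    candidate unknowns. Returns a boolean mask (True = known)."""
    return entropy(probs) <= threshold
```

In this sketch, an image embedding closely aligned with one class-text embedding yields a peaked, low-entropy distribution and is kept as "known", while an embedding equally similar to all class texts yields a near-uniform, high-entropy distribution and is flagged as "unknown".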
Pages: 8