Open-set domain adaptation with visual-language foundation models

Times Cited: 0
Authors
Yu, Qing [1 ]
Irie, Go [2 ]
Aizawa, Kiyoharu [1 ]
Affiliations
[1] Univ Tokyo, Dept Informat & Commun Engn, Tokyo 1138656, Japan
[2] Tokyo Univ Sci, Dept Informat & Comp Technol, Tokyo 1258585, Japan
Keywords
Deep learning; Cross-domain learning; Open-set recognition; Domain adaptation
DOI
10.1016/j.cviu.2024.104230
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised domain adaptation (UDA) has proven highly effective at transferring knowledge from a source domain with labeled data to a target domain with unlabeled data. Because the target domain lacks labels and may contain unknown classes, open-set domain adaptation (ODA) has emerged as a way to identify these unknown classes during training. Although existing ODA approaches aim to address the distribution shift between the source and target domains, most fine-tune ImageNet pre-trained models on the source domain and then adapt them to the target domain. Recent visual-language foundation models (VLFMs), such as Contrastive Language-Image Pre-Training (CLIP), are robust to many distribution shifts and should therefore substantially improve ODA performance. In this work, we explore generic ways to apply CLIP, a popular VLFM, to ODA. We first investigate the performance of zero-shot prediction with CLIP, and then propose an entropy optimization strategy that assists ODA models with CLIP's outputs. The proposed approach achieves state-of-the-art results on various benchmarks, demonstrating its effectiveness on the ODA problem.
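To make the pipeline in the abstract concrete, the following is a minimal sketch of CLIP zero-shot prediction combined with an entropy score over the known-class probabilities, in the spirit of the entropy-based strategy described above. It assumes the openai/CLIP package (pip install git+https://github.com/openai/CLIP); the class list and image path are placeholders, and this is an illustration of the general technique, not the authors' released implementation.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical known (source-domain) classes; replace with the actual label set.
class_names = ["bicycle", "bus", "car"]
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

# Placeholder path to an unlabeled target-domain image.
image = preprocess(Image.open("target_sample.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each class prompt.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_features @ text_features.T  # CLIP's usual logit scale
    probs = logits.softmax(dim=-1)

# Entropy of the zero-shot distribution over known classes: a confident match
# yields low entropy, while a near-uniform distribution (high entropy) can be
# taken as evidence that the sample belongs to an unknown class.
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
print(probs, entropy)
```

Under these assumptions, low-entropy samples can be treated as known-class candidates and high-entropy ones flagged as potential unknowns, which is the kind of signal an entropy optimization strategy can feed back to the ODA model.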
Pages: 8
Related Papers
50 records in total
  • [31] Adversarial Auxiliary Weighted Subdomain Adaptation for Open-Set Deep Transfer Bridge Damage Diagnosis
    Xiao, Haitao
    Dong, Limeng
    Wang, Wenjie
    Ogai, Harutoshi
    SENSORS, 2023, 23 (04)
  • [32] Unknown-Oriented Learning for Open Set Domain Adaptation
    Liu, Jie
    Guo, Xiaoqing
    Yuan, Yixuan
    COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 334 - 350
  • [33] Manifold Regularized Joint Transfer for Open Set Domain Adaptation
    Liu, Jieyan
    He, Hongcai
    Liu, Mingzhu
    Li, Jingjing
    Lu, Ke
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 9356 - 9369
  • [34] Open-Set Cross-Domain Hyperspectral Image Classification Based on Manifold Mapping Alignment
    Zhang, Xiangrong
    Liu, Baisen
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 6241 - 6252
  • [35] Unsupervised Domain Adaptation of Language Models for Reading Comprehension
    Nishida, Kosuke
    Nishida, Kyosuke
    Saito, Itsumi
    Asano, Hisako
    Tomita, Junji
    PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2020), 2020, : 5392 - 5399
  • [36] Open-set Classification of Common Waveforms Using A Deep Feed-forward Network and Binary Isolation Forest Models
    Fredieu, C. Tanner
    Martone, Anthony
    Buehrer, R. Michael
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 2465 - 2469
  • [37] Sample separation and domain alignment complementary learning mechanism for open set domain adaptation
    Long, Sifan
    Wang, Shengsheng
    Zhao, Xin
    Fu, Zihao
    Wang, Bilin
    APPLIED INTELLIGENCE, 2023, 53 (15) : 18790 - 18805
  • [39] Open Set Domain Adaptation With Soft Unknown-Class Rejection
    Xu, Yiming
    Chen, Lin
    Duan, Lixin
    Tsang, Ivor W.
    Luo, Jiebo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (03) : 1601 - 1612
  • [40] Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation
    Zhong, Li
    Fang, Zhen
    Liu, Feng
    Yuan, Bo
    Zhang, Guangquan
    Lu, Jie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (08) : 3859 - 3873