Convert Cross-Domain Classification Into Few-Shot Learning: A Unified Prompt-Tuning Framework for Unsupervised Domain Adaptation

Cited by: 1
Authors
Zhu, Yi [1 ]
Shen, Hui [1 ]
Li, Yun [1 ]
Qiang, Jipeng [1 ]
Yuan, Yunhao [1 ]
Wu, Xindong [2 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou 225009, Peoples R China
[2] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ China, Hefei 230601, Peoples R China
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2025, Vol. 9, No. 1
Funding
National Natural Science Foundation of China
Keywords
Adaptation models; Task analysis; Predictive models; Data models; Representation learning; Recurrent neural networks; Iterative methods; Cross-domain classification; few-shot learning; prompt-tuning; unsupervised domain adaptation;
DOI
10.1109/TETCI.2024.3412998
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised domain adaptation aims to support learning tasks in an unlabeled target domain by leveraging labeled information from a source domain. To bridge the domain discrepancy, most existing methods focus on learning latent features shared by the source and target domains. However, the essential challenge in unsupervised domain adaptation lies in recovering labeled information, specifically the true labels of the data, within the target domain. In this paper, we propose a unified prompt-tuning framework that converts cross-domain classification into few-shot learning, integrating diverse cross-domain classification tasks into an iterative few-shot learning paradigm. The framework unifies the predicted pseudo labels, the templates, and the iteratively tuned models into a single prompt-tuning model, which scales readily to other tuning modalities and reduces the need for extensive fine-tuning. Extensive experiments on several well-known benchmark datasets validate the superior performance of our framework over other domain adaptation and PLM fine-tuning methods, achieving up to 99% accuracy in predicting the true labels of the target domain.
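The abstract describes an iterative loop in which cloze-style templates let a pretrained language model pseudo-label the target domain, and high-confidence pseudo labels then serve as few-shot training examples for the next tuning round. The minimal Python sketch below (using Hugging Face Transformers) illustrates one such round; the backbone model, template, verbalizer words, and confidence threshold are illustrative assumptions, not the configuration reported in the paper.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"          # assumed backbone PLM
TEMPLATE = "{text} It was [MASK]."        # assumed cloze template
VERBALIZER = {"positive": "great", "negative": "terrible"}  # assumed label words

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Map each class label to the vocabulary id of its verbalizer word.
label_ids = {label: tokenizer.convert_tokens_to_ids(word)
             for label, word in VERBALIZER.items()}

@torch.no_grad()
def predict_with_template(texts):
    """Score each label word at the [MASK] position of the filled template."""
    prompts = [TEMPLATE.format(text=t) for t in texts]
    batch = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True)
    logits = model(**batch).logits
    # One [MASK] per prompt; nonzero() yields its (row, position) indices.
    mask_positions = (batch["input_ids"] == tokenizer.mask_token_id).nonzero()
    predictions = []
    for row, pos in mask_positions.tolist():
        probs = logits[row, pos].softmax(dim=-1)
        scores = {label: probs[tid].item() for label, tid in label_ids.items()}
        best = max(scores, key=scores.get)
        predictions.append((best, scores[best]))
    return predictions

def pseudo_label_round(target_texts, threshold=0.9):
    """Keep only high-confidence predictions as new few-shot training pairs."""
    preds = predict_with_template(target_texts)
    return [(text, label)
            for text, (label, confidence) in zip(target_texts, preds)
            if confidence >= threshold]

# Each iteration would then fine-tune the same prompt-tuning model on the
# source data plus the selected pairs (training loop omitted), so pseudo
# labels, templates, and the iterated model stay in one unified framework.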
Pages: 810-821
Number of pages: 12