Iterative Soft Prompt-Tuning for Unsupervised Domain Adaptation

Cited: 0
Authors
Zhu, Yi [1 ]
Wang, Shuqin [1 ]
Qiang, Jipeng [1 ]
Wu, Xindong [2 ]
Affiliations
[1] Yangzhou Univ, Dept Informat Engn, Yangzhou 225127, Peoples R China
[2] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ China, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Adaptation models; Feature extraction; Training; Iterative methods; Data models; Computational modeling; Semantics; Few-shot learning; Tuning; Training data; iterative soft prompt-tuning; prompt-tuning; soft prompt; unsupervised domain adaptation; verbalizer
DOI
10.1109/TKDE.2024.3483903
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised domain adaptation aims to facilitate learning tasks in an unlabeled target domain using knowledge from a related source domain, and it has achieved impressive performance with pre-trained language models (PLMs). Recently, inspired by GPT, prompt-tuning has been widely explored as a way to elicit the rich knowledge in PLMs for language understanding. However, existing prompt-tuning methods still directly apply the model learned in the source domain to the target domain to minimize the discrepancy between the two domains; for example, the prompts or templates are trained separately to learn embeddings that are then transferred to the target domain, which essentially follows the intuition of end-to-end deep approaches. In this paper, we propose an Iterative Soft Prompt-Tuning method (ItSPT) for better unsupervised domain adaptation. On the one hand, the prompt-tuning model learned in the source domain is converted into an iterative model that recovers the true label information in the target domain, so that domain adaptation is recast as a few-shot learning task. On the other hand, instead of hand-crafted templates, ItSPT adopts soft prompts, taking both automatic template generation and classification performance into account. Experiments on both English and Chinese datasets demonstrate that our method surpasses the performance of SOTA methods.
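The abstract describes two mechanisms that the sketch below makes concrete: a trainable soft prompt scored through a verbalizer, and an iterative loop that pseudo-labels confident target examples so adaptation becomes a few-shot tuning task. This is a minimal PyTorch illustration of the general technique, not the authors' implementation; the class and function names, the 20-token prompt length, the 0.9 confidence threshold, and the two-word verbalizer are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForMaskedLM, AutoTokenizer

class SoftPromptClassifier(nn.Module):
    """Trainable prompt vectors are prepended to the input embeddings of a
    frozen masked LM; the [MASK] logits restricted to the verbalizer's
    label-word ids serve as class scores."""

    def __init__(self, backbone, num_prompt_tokens, verbalizer_ids):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze the PLM; only the
            p.requires_grad = False           # soft prompt is tuned
        hidden = backbone.config.hidden_size
        self.prompt = nn.Parameter(0.02 * torch.randn(num_prompt_tokens, hidden))
        self.verbalizer_ids = verbalizer_ids  # one label-word id per class

    def forward(self, input_ids, attention_mask, mask_positions):
        embeds = self.backbone.get_input_embeddings()(input_ids)
        n, k = input_ids.size(0), self.prompt.size(0)
        embeds = torch.cat([self.prompt.unsqueeze(0).expand(n, -1, -1), embeds], dim=1)
        pad = torch.ones(n, k, dtype=attention_mask.dtype, device=attention_mask.device)
        out = self.backbone(inputs_embeds=embeds,
                            attention_mask=torch.cat([pad, attention_mask], dim=1))
        # [MASK] positions shift right by the k prepended prompt vectors
        logits = out.logits[torch.arange(n), mask_positions + k]
        return logits[:, self.verbalizer_ids]

def iterative_prompt_adaptation(model, target_loader, rounds=3, threshold=0.9):
    """Hypothetical iterative step: pseudo-label confident unlabeled target
    examples with the current prompt, refit the prompt on them as a small
    few-shot task, and repeat."""
    optim = torch.optim.AdamW([model.prompt], lr=1e-3)
    for _ in range(rounds):
        batches = []
        model.eval()
        with torch.no_grad():  # 1) harvest confident pseudo-labels
            for ids, mask, pos in target_loader:
                conf, label = model(ids, mask, pos).softmax(-1).max(-1)
                keep = conf > threshold
                if keep.any():
                    batches.append((ids[keep], mask[keep], pos[keep], label[keep]))
        model.train()  # 2) few-shot tuning of the soft prompt only
        for ids, mask, pos, label in batches:
            loss = nn.functional.cross_entropy(model(ids, mask, pos), label)
            optim.zero_grad()
            loss.backward()
            optim.step()

# Usage sketch with an assumed two-class sentiment verbalizer:
# tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# plm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
# label_ids = [tok.convert_tokens_to_ids(w) for w in ("bad", "good")]
# model = SoftPromptClassifier(plm, num_prompt_tokens=20, verbalizer_ids=label_ids)
```

Because only model.prompt receives gradients, each round is cheap and the backbone's pre-trained knowledge stays intact, which matches the few-shot framing of the abstract.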
Pages: 8580-8592
Page count: 13
Related Papers
50 records in total
  • [1] Convert Cross-Domain Classification Into Few-Shot Learning: A Unified Prompt-Tuning Framework for Unsupervised Domain Adaptation
    Zhu, Yi
    Shen, Hui
    Li, Yun
    Qiang, Jipeng
    Yuan, Yunhao
    Wu, Xindong
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2025, 9 (01): 810 - 821
  • [2] Soft prompt-tuning for unsupervised domain adaptation via self-supervision
    Zhu, Yi
    Wang, Shuqin
    Li, Yun
    Yuan, Yunhao
    Qiang, Jipeng
    NEUROCOMPUTING, 2025, 617
  • [3] When Adversarial Training Meets Prompt Tuning: Adversarial Dual Prompt Tuning for Unsupervised Domain Adaptation
    Cui, Chaoran
    Liu, Ziyi
    Gong, Shuai
    Zhu, Lei
    Zhang, Chunyun
    Liu, Hui
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 1427 - 1440
  • [4] Domain Prompt Tuning via Meta Relabeling for Unsupervised Adversarial Adaptation
    Jin, Xin
    Lan, Cuiling
    Zeng, Wenjun
    Chen, Zhibo
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 8333 - 8347
  • [5] Domain adaptation for textual adversarial defense via prompt-tuning
    Li, Zhenglong
    Zhu, Yi
    Hua, Chenqi
    Li, Yun
    Yuan, Yunhao
    Qiang, Jipeng
    NEUROCOMPUTING, 2025, 620
  • [6] Short text classification with Soft Knowledgeable Prompt-tuning
    Zhu, Yi
    Wang, Ye
    Mu, Jianyuan
    Li, Yun
    Qiang, Jipeng
    Yuan, Yunhao
    Wu, Xindong
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 246
  • [7] Unsupervised Domain Adaptation Enhanced by Fuzzy Prompt Learning
    Shi, Kuo
    Lu, Jie
    Fang, Zhen
    Zhang, Guangquan
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2024, 32 (07) : 4038 - 4048
  • [8] Divergence-Agnostic Unsupervised Domain Adaptation by Adversarial Attacks
    Li, Jingjing
    Du, Zhekai
    Zhu, Lei
    Ding, Zhengming
    Lu, Ke
    Shen, Heng Tao
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (11) : 8196 - 8211
  • [9] Multi-View Prototypical Transport for Unsupervised Domain Adaptation
    Lee, Sunhyeok
    Kim, Dae-Shik
    IEEE ACCESS, 2025, 13 : 8482 - 8494
  • [10] Soft Prompt-tuning with Self-Resource Verbalizer for short text streams
    Zhu, Yi
    Wang, Ye
    Li, Yun
    Qiang, Jipeng
    Yuan, Yunhao
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139