Learning Causal Representations for Robust Domain Adaptation

Cited by: 25
Authors
Yang, Shuai [1 ]
Yu, Kui [1 ]
Cao, Fuyuan [2 ]
Liu, Lin [3 ]
Wang, Hao [1 ]
Li, Jiuyong [3 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Key Lab Knowledge Engn Big Data, Minist Educ, Hefei 230601, Peoples R China
[2] Shanxi Univ, Sch Comp & Informat Technol, Taiyuan 030006, Peoples R China
[3] Univ South Australia, UniSA STEM, Adelaide, SA 5095, Australia
Funding
National Natural Science Foundation of China; Australian Research Council;
Keywords
Dogs; Data models; Predictive models; Markov processes; Adaptation models; Training; Sentiment analysis; Domain adaptation; causal discovery; autoencoder; FEATURE-SELECTION; RELEVANCE;
DOI
10.1109/TKDE.2021.3119185
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this study, we investigate a challenging problem, robust domain adaptation, in which data from only a single well-labeled source domain are available during training. To address this problem, under the assumption that the causal relationships between the features and the class variable are robust across domains, we propose a novel causal autoencoder (CAE), which integrates a deep autoencoder with a causal structure learning model to learn causal representations from a single source domain. Specifically, the deep autoencoder learns low-dimensional representations, and the causal structure learning model separates them into two groups: causal representations and task-irrelevant representations. Experiments on three real-world datasets validate the effectiveness of CAE in comparison with eleven state-of-the-art methods.
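The abstract describes an autoencoder whose latent code is partitioned into a causal group, used for prediction, and a task-irrelevant group, used only for reconstruction. The sketch below illustrates that structure in plain NumPy; all dimensions, the fixed split point, and the single linear encoder/decoder are illustrative assumptions, not the paper's method (the paper learns the separation via a causal structure learning model rather than a hard-coded index).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 20 input features, 8-dim latent code,
# of which the first 4 dims play the role of "causal" representations.
n, d, h, k = 100, 20, 8, 4

X = rng.normal(size=(n, d))          # source-domain features
y = (X[:, 0] > 0).astype(float)      # toy labels driven by one feature

# One linear layer each for encoder, decoder, and classifier
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))
W_clf = rng.normal(scale=0.1, size=(k,))  # classifier sees causal dims only

def forward(X):
    z = np.tanh(X @ W_enc)                        # low-dimensional code
    z_causal, z_irrelevant = z[:, :k], z[:, k:]   # the CAE-style split
    X_hat = z @ W_dec        # reconstruction uses *all* latent dims
    logits = z_causal @ W_clf  # prediction uses the causal group only
    return z_causal, z_irrelevant, X_hat, logits

z_c, z_i, X_hat, logits = forward(X)

# The two objectives such a model balances: reconstruct the input,
# and predict the label from the causal representations alone.
recon_loss = np.mean((X - X_hat) ** 2)
p = 1.0 / (1.0 + np.exp(-logits))
clf_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

print(z_c.shape, z_i.shape)
```

The key design point is that only `z_causal` feeds the classifier, so under the paper's assumption that causal relationships are stable across domains, the predictive pathway is insulated from the task-irrelevant latent dimensions.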
Pages: 2750-2764 (15 pages)