Learning Causal Representations for Robust Domain Adaptation

Cited by: 25
Authors
Yang, Shuai [1 ]
Yu, Kui [1 ]
Cao, Fuyuan [2 ]
Liu, Lin [3 ]
Wang, Hao [1 ]
Li, Jiuyong [3 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Key Lab Knowledge Engn Big Data, Minist Educ, Hefei 230601, Peoples R China
[2] Shanxi Univ, Sch Comp & Informat Technol, Taiyuan 030006, Peoples R China
[3] Univ South Australia, UniSA STEM, Adelaide, SA 5095, Australia
Funding
National Natural Science Foundation of China; Australian Research Council;
Keywords
Dogs; Data models; Predictive models; Markov processes; Adaptation models; Training; Sentiment analysis; Domain adaptation; causal discovery; autoencoder; FEATURE-SELECTION; RELEVANCE;
DOI
10.1109/TKDE.2021.3119185
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this study, we investigate a challenging problem, robust domain adaptation, in which only data from a single well-labeled source domain are available during training. To address it, assuming that the causal relationships between the features and the class variable are robust across domains, we propose a novel causal autoencoder (CAE), which integrates a deep autoencoder and a causal structure learning model to learn causal representations from data of a single source domain. Specifically, the deep autoencoder learns low-dimensional representations, and the causal structure learning model separates these representations into two groups: causal representations and task-irrelevant representations. Experiments on three real-world datasets validate the effectiveness of CAE in comparison with eleven state-of-the-art methods.
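To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of the core idea: an autoencoder whose latent code is split into a causal block, which alone feeds the classifier, and a task-irrelevant block used only for reconstruction. This is an illustrative simplification, not the authors' implementation: the paper's causal structure learning component is replaced here by a fixed split of the latent dimensions, and all names, layer sizes, and loss weights (SplitLatentAutoencoder, causal_dim, etc.) are assumptions.

```python
# Sketch of the abstract's idea, NOT the paper's CAE: the causal structure
# learning module is approximated by a fixed split of the latent code.
import torch
import torch.nn as nn

class SplitLatentAutoencoder(nn.Module):  # hypothetical name
    def __init__(self, in_dim=100, causal_dim=16, nuisance_dim=16, n_classes=2):
        super().__init__()
        latent_dim = causal_dim + nuisance_dim
        self.causal_dim = causal_dim
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))
        # The classifier sees only the causal block of the latent code;
        # the task-irrelevant block contributes only to reconstruction.
        self.classifier = nn.Linear(causal_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        z_causal = z[:, :self.causal_dim]
        x_hat = self.decoder(z)          # reconstruction uses the full code
        logits = self.classifier(z_causal)
        return x_hat, logits

# Training on the single source domain: reconstruction + classification loss.
model = SplitLatentAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 100)                 # toy source-domain batch
y = torch.randint(0, 2, (32,))           # toy labels
opt.zero_grad()
x_hat, logits = model(x)
loss = nn.functional.mse_loss(x_hat, x) + nn.functional.cross_entropy(logits, y)
loss.backward()
opt.step()
```

In the actual CAE, the grouping into causal and task-irrelevant representations is learned by the causal structure learning model rather than fixed in advance as it is here.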
Pages: 2750-2764
Number of pages: 15
Related papers
50 records in total
  • [41] Gu, Yanyang; Ge, Zongyuan; Bonnington, C. Paul; Zhou, Jun. Progressive Transfer Learning and Adversarial Domain Adaptation for Cross-Domain Skin Disease Classification. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2020, 24 (05): 1379-1393
  • [42] Liu, Feng; Zhang, Guangquan; Lu, Jie. Heterogeneous Domain Adaptation: An Unsupervised Approach. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (12): 5588-5602
  • [43] Wang, Zengmao; Du, Bo; Guo, Yuhong. Domain Adaptation With Neural Embedding Matching. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (07): 2387-2397
  • [44] Tang, Zhiri; Wong, Hau-San; Yu, Zekuan. Privacy-Preserving Federated Learning With Domain Adaptation for Multi-Disease Ocular Disease Recognition. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (06): 3219-3227
  • [45] Yoo, Donghwi; Choi, Minseok; Oh, Hyunseok; Han, Bongtae. A Simple but Effective Way to Handle Rotating Machine Fault Diagnosis With Imbalanced-Class Data: Repetitive Learning Using an Advanced Domain Adaptation Model. IEEE ACCESS, 2024, 12: 189789-189803
  • [46] Zeng, Jianda; Jiang, Weili; Yi, Zhang; Shi, Yong-Guo; Wang, Jianyong. Learning Robust Representations by Autoencoders With Dynamical Implicit Mapping. IEEE SIGNAL PROCESSING LETTERS, 2025, 32: 1056-1060
  • [47] Zhang, Yaping; Nie, Shuai; Liang, Shan; Liu, Wenju. Robust Text Image Recognition via Adversarial Sequence-to-Sequence Domain Adaptation. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 3922-3933
  • [48] Yang, Yiming; Hu, Weipeng; Lin, Haiqi; Hu, Haifeng. Robust Cross-Domain Pseudo-Labeling and Contrastive Learning for Unsupervised Domain Adaptation NIR-VIS Face Recognition. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32: 5231-5244
  • [49] Zhang, Jin; Zhang, Xi; Zhang, Yanyan; Duan, Yexin; Li, Yang; Pan, Zhisong. Meta-Knowledge Learning and Domain Adaptation for Unseen Background Subtraction. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 9058-9068
  • [50] Wei, Yikang; Yang, Liu; Han, Yahong; Hu, Qinghua. Multi-Source Collaborative Contrastive Learning for Decentralized Domain Adaptation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (05): 2202-2216