Latent Feature Disentanglement for Visual Domain Generalization

Cited by: 4
Authors
Gholami, Behnam [1 ]
El-Khamy, Mostafa [1 ,2 ]
Song, Kee-Bong [1 ]
Affiliations
[1] Samsung Semicond Inc, Samsung Device Solut Res Amer, San Diego, CA 92126 USA
[2] Alexandria Univ, Dept Elect Engn, Alexandria 21544, Egypt
Keywords
Domain generalization; latent feature; feature disentanglement; image-to-image translation; StarGAN; adversarial networks
DOI
10.1109/TIP.2023.3321511
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite remarkable success in a variety of computer vision applications, deep learning is well known to fail catastrophically when presented with out-of-distribution data, where there are usually style differences between the training and test images. To address this challenge, we consider the domain generalization problem, wherein predictors are trained using data drawn from a family of related training (source) domains and then evaluated on a distinct, unseen test domain. Naively training a model on the aggregate set of data (pooled from all source domains) has been shown to perform suboptimally, since the information learned by that model may be domain-specific and generalize imperfectly to test domains. Data augmentation is an effective approach to overcoming this problem, but its application has been limited to enforcing invariance to simple transformations such as rotation and brightness changes. Such perturbations do not necessarily cover plausible real-world variations that preserve the semantics of the input (such as a change in image style). In this paper, taking advantage of multiple source domains, we propose a novel approach to express and formalize robustness to these kinds of real-world image perturbations. The three key ideas underlying our formulation are (1) leveraging disentangled representations of the images to define different factors of variation, (2) generating perturbed images by changing the factors composing those representations, and (3) requiring the learner (classifier) to be invariant to such changes in the images. We use image-to-image translation models to demonstrate the efficacy of this approach. Based on this, we propose a domain-invariant regularization (DIR) loss function that enforces invariant prediction of targets (class labels) across domains, which yields improved generalization performance. We demonstrate the effectiveness of our approach on several widely used datasets for the domain generalization problem, on all of which our results are competitive with the state of the art.
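The DIR idea described in the abstract can be illustrated with a short sketch. The following PyTorch snippet is a minimal illustration under stated assumptions, not the paper's exact method: it assumes a pretrained StarGAN-style translator that re-renders an image in another source domain's style while preserving its content, and the names used here (dir_training_step, translator, dir_weight) as well as the symmetric-KL form of the invariance penalty are illustrative choices.

# Minimal PyTorch sketch of the domain-invariant regularization (DIR) idea from
# the abstract: a classifier trained on pooled source domains is penalized when
# its predictions change after a style perturbation produced by an image-to-image
# translation model (e.g., a StarGAN-style generator). All names below
# (dir_training_step, translator, dir_weight) and the symmetric-KL penalty are
# illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F


def dir_training_step(classifier, translator, images, labels, target_domains,
                      dir_weight=1.0):
    """One training step combining cross-entropy with a DIR consistency term."""
    # Standard supervised loss on the original (pooled-source) images.
    logits = classifier(images)
    cls_loss = F.cross_entropy(logits, labels)

    # Style-perturbed versions of the same images: the translator is assumed to
    # change only domain-specific (style) factors and preserve class semantics.
    with torch.no_grad():
        perturbed = translator(images, target_domains)

    # DIR term: the predictive distribution should be invariant to the style
    # perturbation. A symmetric KL between the two distributions is used here
    # as one plausible instantiation of the invariance penalty.
    logits_p = classifier(perturbed)
    log_p = F.log_softmax(logits, dim=1)
    log_q = F.log_softmax(logits_p, dim=1)
    dir_loss = 0.5 * (
        F.kl_div(log_q, log_p, reduction="batchmean", log_target=True)
        + F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
    )

    return cls_loss + dir_weight * dir_loss

The symmetric KL is only one possible consistency penalty; an L2 distance between logits or a Jensen-Shannon divergence would serve the same invariance principle, and the translator could be trained jointly rather than kept frozen.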
Pages: 5751-5763
Page count: 13
Related Papers
50 records in total
  • [1] Domain Generalization via Frequency-domain-based Feature Disentanglement and Interaction
    Wang, Jingye
    Du, Ruoyi
    Chang, Dongliang
    Liang, Kongming
    Ma, Zhanyu
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 4821 - 4829
  • [2] Domain generalization via feature disentanglement with reconstruction for pathology image segmentation
    Lin, Yu-Hsuan
    Tsai, Hung-Wen
    Shen, Meng-Ru
    Chung, Pau-Choo
    2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI, 2023, : 152 - 153
  • [3] Dual disentanglement domain generalization method for rotating Machinery fault diagnosis
    Zhang, Guowei
    Kong, Xianguang
    Ma, Hongbo
    Wang, Qibin
    Du, Jingli
    Wang, Jinrui
    MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2025, 228
  • [4] Region Feature Disentanglement for Domain Adaptive Object Detection
    Wang, Rui
    Wan, Shouhong
    Jin, Peiquan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VII, 2023, 14260 : 175 - 186
  • [5] Unbiased Semantic Representation Learning Based on Causal Disentanglement for Domain Generalization
    Jin, Xuanyu
    Li, Ni
    Kong, Wangzeng
    Tang, Jiajia
    Yang, Bing
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (08)
  • [6] INSURE: An Information Theory iNspired diSentanglement and pURification modEl for Domain Generalization
    Yu, Xi
    Tseng, Huan-Hsin
    Yoo, Shinjae
    Ling, Haibin
    Lin, Yuewei
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 3508 - 3519
  • [7] Domain Adaptation via Feature Disentanglement for cross-domain image classification
    Wu, Zhi-Ze
    Du, Chang-Jiang
    Wang, Xin-Qi
    Zou, Le
    Cheng, Fan
    Li, Teng
    Nian, Fu-Dong
    Weise, Thomas
    Wang, Xiao-Feng
    APPLIED SOFT COMPUTING, 2025, 172
  • [8] Feature Stylization Adversarial Domain Generalization
    Hu, Zhengzhong
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [9] FEATURE DISENTANGLEMENT FOR CROSS-DOMAIN RETINA VESSEL SEGMENTATION
    Wang, Jie
    Zhong, Chaoliang
    Feng, Cheng
    Sun, Jun
    Yokota, Yasuto
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 26 - 30
  • [10] VARIATIONAL FEATURE DISENTANGLEMENT FOR FEW-SHOT DOMAIN ADAPTATION
    Wang, Weiduo
    Gu, Yun
    Yang, Jie
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2860 - 2864