Respecting Domain Relations: Hypothesis Invariance for Domain Generalization

Cited by: 21
Authors
Wang, Ziqi [1 ]
Loog, Marco [1 ,2 ]
van Gemert, Jan [1 ]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
[2] Univ Copenhagen, Copenhagen, Denmark
Source
2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) | 2021
Keywords
Domain generalization; invariant representation
DOI
10.1109/ICPR48806.2021.9412797
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In domain generalization, multiple labeled, non-independent and non-identically distributed source domains are available during training, while neither the data nor the labels of target domains are. Currently, learning so-called domain invariant representations (DIRs) is the prevalent approach to domain generalization. In this work, we define the DIRs employed by existing works in probabilistic terms and show that learning DIRs imposes overly strict requirements concerning invariance. In particular, DIRs aim to perfectly align the representations of different domains, i.e., their input distributions. This is, however, not necessary for good generalization to a target domain and may even discard valuable classification information. We propose to learn so-called hypothesis invariant representations (HIRs), which relax the invariance assumption by aligning only the posteriors instead of the representations themselves. We report experimental results on public domain generalization datasets showing that learning HIRs is more effective than learning DIRs. In fact, our approach can even compete with approaches that exploit prior knowledge about domains.
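The abstract's distinction between DIRs and HIRs can be sketched numerically: a DIR-style penalty demands that the representation distributions of the source domains coincide, while an HIR-style penalty only demands that a shared classifier's posteriors agree across domains. The sketch below is an illustrative assumption, not the paper's actual objective; the function names and the simple mean-discrepancy penalties are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy source domains: same classification task, but domain B's
# representations are shifted relative to domain A's.
feats_a = rng.normal(0.0, 1.0, size=(100, 4))
feats_b = rng.normal(2.0, 1.0, size=(100, 4))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# A single shared hypothesis (linear classifier) applied to both domains.
W = rng.normal(size=(4, 3))

def dir_penalty(fa, fb):
    # DIR-style invariance: the representation distributions themselves must
    # match; measured crudely here as the gap between domain feature means.
    return float(np.linalg.norm(fa.mean(axis=0) - fb.mean(axis=0)))

def hir_penalty(fa, fb, weights):
    # HIR-style invariance: only the classifier posteriors p(y|x) need to
    # agree across domains, regardless of where the representations live.
    post_a = softmax(fa @ weights).mean(axis=0)
    post_b = softmax(fb @ weights).mean(axis=0)
    return float(np.linalg.norm(post_a - post_b))

print("DIR-style penalty:", dir_penalty(feats_a, feats_b))
print("HIR-style penalty:", hir_penalty(feats_a, feats_b, W))
```

Minimizing only the HIR-style penalty leaves the representations free to differ across domains, as long as the shared hypothesis maps them to matching posteriors; this is the relaxation the abstract argues preserves classification information that strict distribution alignment may discard.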
Pages: 9756 - 9763
Page count: 8
Related Papers
50 records in total
  • [1] Failure to Achieve Domain Invariance With Domain Generalization Algorithms: An Analysis in Medical Imaging
    Korevaar, Steven
    Tennakoon, Ruwan
    Bab-Hadiashar, Alireza
    IEEE ACCESS, 2023, 11 : 39351 - 39372
  • [2] On the benefits of representation regularization in invariance based domain generalization
    Shui, Changjian
    Wang, Boyu
    Gagné, Christian
    MACHINE LEARNING, 2022, 111 (03) : 895 - 915
  • [4] On the Importance of Attention and Augmentations for Hypothesis Transfer in Domain Adaptation and Generalization
    Thomas, Georgi
    Sahay, Rajat
    Jahan, Chowdhury Sadman
    Manjrekar, Mihir
    Popp, Dan
    Savakis, Andreas
    SENSORS, 2023, 23 (20)
  • [5] Learning generalized visual relations for domain generalization semantic segmentation
    Li, Zijun
    Liao, Muxin
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 267
  • [6] Domain Generalization with Interpolation Robustness
    Palakkadavath, Ragja
    Thanh Nguyen-Tang
    Le, Hung
    Venkatesh, Svetha
    Gupta, Sunil
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 222, 2023, 222
  • [7] Semi-supervised incremental domain generalization learning based on causal invariance
    Wang, Ning
    Wang, Huiling
    Yang, Shaocong
    Chu, Huan
    Dong, Shi
    Viriyasitavat, Wattana
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (10) : 4815 - 4828
  • [8] Cross-Domain Gated Learning for Domain Generalization
    Du, Dapeng
    Chen, Jiawei
    Li, Yuexiang
    Ma, Kai
    Wu, Gangshan
    Zheng, Yefeng
    Wang, Limin
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 : 2842 - 2857
  • [9] Inter-domain curriculum learning for domain generalization
    Kim, Daehee
    Kim, Jinkyu
    Lee, Jaekoo
    ICT EXPRESS, 2022, 8 (02): : 225 - 229
  • [10] Domain-Specific Risk Minimization for Domain Generalization
    Zhang, Yi-Fan
    Wang, Jindong
    Liang, Jian
    Zhang, Zhang
    Yu, Baosheng
    Wang, Liang
    Tao, Dacheng
    Xie, Xing
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 3409 - 3421