A Closer Look at Classifier in Adversarial Domain Generalization

Cited by: 4
Authors
Wang, Ye [1]
Chen, Junyang [2]
Wang, Mengzhu [3]
Li, Hao [1]
Wang, Wei [4,6]
Su, Houcheng [5]
Lai, Zhihui [2]
Chen, Zhenghan [7]
Affiliations
[1] Natl Univ Def Technol, Changsha, Hunan, Peoples R China
[2] Shenzhen Univ, Shenzhen, Guangdong, Peoples R China
[3] Hefei Univ Technol, Hefei, Anhui, Peoples R China
[4] Sun Yat Sen Univ, Shenzhen Campus, Shenzhen, Guangdong, Peoples R China
[5] Univ Macau, Taipa, Macao, Peoples R China
[6] Shenzhen MSU BIT Univ, Shenzhen, Guangdong, Peoples R China
[7] Peking Univ, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Keywords
domain generalization; condition-invariant features; smoothing optima;
DOI
10.1145/3581783.3611743
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The task of domain generalization (DG) is to learn a classification model from multiple source domains and generalize it to unknown target domains. The key to domain generalization is learning discriminative domain-invariant features, and adversarial domain generalization is one of the primary techniques for learning such invariant representations. For example, generative adversarial networks have been widely used, but they suffer from low intra-class diversity, which can lead to poor generalization ability. To address this issue, we propose a new method called auxiliary classifier in adversarial domain generalization (CloCls). CloCls improves the diversity of the source domains by introducing an auxiliary classifier. By combining typical task-related losses, e.g., cross-entropy loss for classification and adversarial loss for domain discrimination, our overall goal is to guarantee the learning of condition-invariant features across all source domains while increasing their diversity. Further, we are inspired by the observation that smoothing optima has improved generalization for supervised learning tasks such as classification. We leverage the fact that converging to a smooth minimum with respect to the task loss stabilizes adversarial training, leading to better performance on unseen target domains and effectively enhancing domain adversarial methods. We have conducted extensive image classification experiments on benchmark domain generalization datasets; our model exhibits strong generalization ability and outperforms state-of-the-art DG methods.
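For concreteness, the sketch below (plain PyTorch, not the authors' released code) shows one way the abstract's ingredients could be combined: a feature extractor with a main classifier and an auxiliary classifier, a domain discriminator trained adversarially through a gradient-reversal layer, and a SAM-style two-step update that seeks a smooth minimum of the combined loss. All names, dimensions, and hyperparameters (DGModel, feat_dim, rho, lambd) are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negates and scales the gradient in backward,
    # so the feature extractor learns to fool the domain discriminator.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DGModel(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=256, n_classes=7, n_domains=3):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)      # main task head
        self.aux_classifier = nn.Linear(feat_dim, n_classes)  # auxiliary head (assumed role)
        self.discriminator = nn.Linear(feat_dim, n_domains)   # domain head behind the GRL

    def forward(self, x, lambd=1.0):
        f = self.feature(x)
        return (self.classifier(f),
                self.aux_classifier(f),
                self.discriminator(GradReverse.apply(f, lambd)))

def train_step(model, opt, x, y, d, lambd=1.0, rho=0.05):
    # One update: (1) a SAM-style ascent perturbs the weights toward higher
    # combined loss, (2) descent is taken at the perturbed point, then the
    # perturbation is undone, steering optimization toward flatter minima.
    def combined_loss():
        logits, aux_logits, dom_logits = model(x, lambd)
        return (F.cross_entropy(logits, y)         # classification loss
                + F.cross_entropy(aux_logits, y)   # auxiliary-classifier loss
                + F.cross_entropy(dom_logits, d))  # adversarial domain loss (via GRL)

    combined_loss().backward()
    with torch.no_grad():
        grads = [p.grad.clone() for p in model.parameters()]
        scale = rho / (torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12)
        eps = [g * scale for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)                              # ascend to the worst-case neighbor
    opt.zero_grad()

    loss = combined_loss()
    loss.backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                              # restore the original weights
    opt.step()
    opt.zero_grad()
    return loss.item()

# Smoke test on random features (batch of 32, 3 source domains, 7 classes).
model = DGModel()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
x = torch.randn(32, 2048)
y = torch.randint(0, 7, (32,))
d = torch.randint(0, 3, (32,))
print(train_step(model, opt, x, y, d))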
Pages: 280 - 289
Page count: 10