Domain Generalization with Interpolation Robustness

Cited by: 0
Authors
Palakkadavath, Ragja [1 ]
Thanh Nguyen-Tang [2 ]
Le, Hung [1 ]
Venkatesh, Svetha [1 ]
Gupta, Sunil [1 ]
Affiliations
[1] Deakin Univ, Appl Artificial Intelligence Inst, Geelong, Vic, Australia
[2] Johns Hopkins Univ, Whiting Sch Engn, Baltimore, MD 21218 USA
Source
ASIAN CONFERENCE ON MACHINE LEARNING, VOL 222 | 2023 / Vol. 222
Funding
Australian Research Council;
Keywords
domain generalization; limited data; robustness; latent interpolation; invariant representation;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Domain generalization (DG) uses multiple source (training) domains to learn a model that generalizes well to unseen domains. Existing approaches to DG warrant closer scrutiny of (i) their ability to imagine data beyond the source domains and (ii) their ability to cope with scarce training data. To address these shortcomings, we propose a novel framework, interpolation robustness, in which we view each training domain as a point on a domain manifold and learn class-specific representations that are domain-invariant across all interpolations between domains. Using this representation, we propose a generic domain generalization approach that can be seamlessly combined with many state-of-the-art DG methods. Through extensive experiments, we show that our approach can enhance the performance of several methods in both the conventional and the limited-training-data settings.
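The core idea in the abstract, treating domains as points and enforcing invariance along interpolations between them, can be illustrated with a minimal sketch. The function names, the linear interpolation scheme, and the consistency penalty below are illustrative assumptions, not the paper's actual formulation; they show only the general shape of a latent-interpolation invariance objective.

```python
import numpy as np

def interpolate_latents(z_a, z_b, alpha):
    """Linearly interpolate between latent representations from two domains."""
    return alpha * z_a + (1.0 - alpha) * z_b

def interpolation_consistency_penalty(z_a, z_b, classify, n_points=5, seed=0):
    """Average squared gap between predictions on interpolated latents and the
    correspondingly interpolated endpoint predictions -- one possible proxy
    for invariance along the path between two domains."""
    rng = np.random.default_rng(seed)
    p_a, p_b = classify(z_a), classify(z_b)
    penalty = 0.0
    for _ in range(n_points):
        alpha = rng.uniform()                       # random point on the path
        z_mix = interpolate_latents(z_a, z_b, alpha)
        target = alpha * p_a + (1.0 - alpha) * p_b  # interpolated predictions
        penalty += np.mean((classify(z_mix) - target) ** 2)
    return penalty / n_points
```

For a classifier that is linear in the latent space the penalty is exactly zero, while a nonlinear classifier generally incurs a positive penalty, which is what such a term would push a trained model to reduce.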
Pages: 16