ProD: Prompting-to-disentangle Domain Knowledge for Cross-domain Few-shot Image Classification

Cited by: 19
Authors
Ma, Tianyi [1 ,2 ]
Sun, Yifan [2 ]
Yang, Zongxin [3 ]
Yang, Yi [3 ]
Affiliations
[1] Univ Technol Sydney, Ultimo, Australia
[2] Baidu Inc, Beijing, Peoples R China
[3] Zhejiang Univ, Hangzhou, Peoples R China
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.01892
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper considers few-shot image classification under the cross-domain scenario, where the train-to-test domain gap compromises classification accuracy. To mitigate the domain gap, we propose a prompting-to-disentangle (ProD) method through a novel exploration of the prompting mechanism. ProD adopts the popular multi-domain training scheme and extracts the backbone feature with a standard Convolutional Neural Network. Based on these two common practices, the key point of ProD is using the prompting mechanism in the transformer to disentangle the domain-general (DG) and domain-specific (DS) knowledge from the backbone feature. Specifically, ProD concatenates a DG and a DS prompt to the backbone feature and feeds them into a lightweight transformer. The DG prompt is learnable and shared by all the training domains, while the DS prompt is generated from the domain-of-interest on the fly. As a result, the transformer outputs DG and DS features in parallel with the two prompts, yielding the disentangling effect. We show that: 1) simply sharing a single DG prompt across all the training domains already improves generalization towards the novel test domain; 2) cross-domain generalization can be further reinforced by making the DG prompt neutral towards the training domains; 3) at inference, the DS prompt is generated from the support samples and can capture test-domain knowledge through the prompting mechanism. Combining all three benefits, ProD significantly improves cross-domain few-shot classification. For instance, on CUB, ProD improves the 5-way 5-shot accuracy from 73.56% (baseline) to 79.19%, setting a new state of the art.
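The disentangling step described in the abstract — concatenating a shared DG prompt and an on-the-fly DS prompt to the backbone tokens, then reading the transformer outputs at the two prompt positions — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the single-head attention layer, the pure-Python vectors, the use of the support-feature mean as the DS prompt generator, and the names `attention` and `prod_forward` are all simplifying assumptions.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    # scaled dot-product attention for one query vector over a token sequence
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

def prod_forward(backbone_tokens, dg_prompt, support_features):
    # DS prompt generated on the fly from the domain-of-interest;
    # here (an assumption) it is simply the mean of the support features.
    d = len(dg_prompt)
    ds_prompt = [sum(f[i] for f in support_features) / len(support_features)
                 for i in range(d)]
    # Concatenate [DG prompt, DS prompt, backbone tokens] and run one attention layer.
    seq = [dg_prompt, ds_prompt] + backbone_tokens
    out = [attention(tok, seq, seq) for tok in seq]
    # The transformer outputs at the two prompt positions serve as the
    # disentangled domain-general and domain-specific features.
    dg_feature, ds_feature = out[0], out[1]
    return dg_feature, ds_feature
```

Because the two prompts attend to the same backbone tokens with different queries, they aggregate the shared feature sequence differently, which is the mechanism the paper exploits to separate DG from DS knowledge.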
Pages: 19754-19763
Page count: 10