StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning

Cited by: 42
Authors
Fu, Yuqian [1 ]
Xie, Yu [2 ]
Fu, Yanwei [3 ]
Jiang, Yu-Gang [1 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[2] Purple Mt Labs, Nanjing, Peoples R China
[3] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2023
Funding
National Key R&D Program of China;
Keywords
DOI
10.1109/CVPR52729.2023.02354
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains. It aims to transfer prior knowledge learned on a source dataset to novel target datasets. The CD-FSL task is especially challenging because of the huge domain gap between datasets. Critically, this domain gap largely stems from changes in visual style, and wave-SAN [10] empirically shows that spanning the style distribution of the source data helps alleviate the issue. However, wave-SAN simply swaps the styles of two images. Such a vanilla operation makes the generated styles "real" and "easy", so they still fall within the original set of source styles. Thus, inspired by adversarial learning, we propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method for CD-FSL. In particular, our style attack synthesizes both "virtual" and "hard" adversarial styles for model training by perturbing the original style with the signed style gradients. By continually attacking styles and forcing the model to recognize these challenging adversarial styles, the model gradually becomes robust to visual styles, which boosts its generalization to novel target datasets. Besides the typical CNN-based backbone, we also apply our StyleAdv method to a large-scale pre-trained vision transformer. Extensive experiments on eight diverse target datasets demonstrate the effectiveness of our method. Whether built upon ResNet or ViT, we achieve a new state of the art for CD-FSL. Code is available at https://github.com/lovelyqian/StyleAdv-CDFSL.
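The signed-gradient style attack sketched in the abstract can be made concrete. Below is a minimal PyTorch illustration, assuming the "style" of a feature map is its AdaIN-style per-channel mean and standard deviation, and assuming a generic differentiable classification head; the names `style_attack` and `head` and the value of `epsilon` are illustrative assumptions, not the paper's exact implementation (the published pipeline involves further details, e.g., meta-training episodes and attacks at multiple backbone stages; see the linked repository).

```python
import torch
import torch.nn as nn

def style_stats(feat, eps=1e-6):
    # AdaIN-style statistics: per-channel mean and std over the spatial
    # dimensions of a (B, C, H, W) feature map.
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    return mu, sigma

def restylize(feat, new_mu, new_sigma):
    # Swap the current style (mu, sigma) of `feat` for (new_mu, new_sigma).
    mu, sigma = style_stats(feat)
    return new_sigma * (feat - mu) / sigma + new_mu

def style_attack(feat, labels, head, epsilon=0.08):
    # One FGSM-like step on the style statistics: move (mu, sigma) along the
    # sign of the classification-loss gradient, yielding a "virtual", "hard"
    # style that lies outside the original set of source styles.
    feat = feat.detach()
    mu, sigma = style_stats(feat)
    mu_adv = mu.clone().requires_grad_(True)
    sigma_adv = sigma.clone().requires_grad_(True)
    logits = head(restylize(feat, mu_adv, sigma_adv))
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward()
    with torch.no_grad():
        mu_adv = mu_adv + epsilon * mu_adv.grad.sign()        # signed style gradient
        sigma_adv = sigma_adv + epsilon * sigma_adv.grad.sign()
    # Re-stylize the content with the adversarial statistics and detach,
    # so the attacked features can be fed back into ordinary training.
    return restylize(feat, mu_adv, sigma_adv).detach()
```

In this sketch, `feat` could be the output of an early ResNet block and `head` any differentiable classifier over feature maps (e.g., global average pooling followed by a linear layer); training on both the clean and the attacked features is what forces the model to recognize the adversarial styles.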
Pages: 24575-24584
Number of pages: 10
References (67 items)
[51] Tseng H.-Y., 2020, Proceedings of the International Conference on Learning Representations (ICLR).
[52] Van Horn, Grant; Mac Aodha, Oisin; Song, Yang; Cui, Yin; Sun, Chen; Shepard, Alex; Adam, Hartwig; Perona, Pietro; Belongie, Serge. The iNaturalist Species Classification and Detection Dataset [J]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 8769-8778.
[53] Wang Haoqing, 2021, Cross-Domain Few-Shot Classification via Adversarial Task Augmentation.
[54] Wang ZL, 2021, Methods Mol Biol, V2217, P3, DOI 10.1007/978-1-0716-0962-0_1.
[55] Xie Cihang, 2020, CVPR.
[56] Xu Qiuling, 2021, AAAI.
[57] Zhang, Jun-An; Gu, Liping; Chen, Yongqiang; Zhu, Geng; Ou, Lang; Wang, Liyan; Li, Xiaoou; Zhong, Lichang. Multi-Feature Fusion Emotion Recognition Based on Resting EEG [J]. Journal of Mechanics in Medicine and Biology, 2022, 22(03).
[58] Zhang Renrui, 2021, Tip-Adapter: Training-Free CLIP-Adapter for Better Vision-Language Modeling, P2.
[59] Zhang Xianzhou, 2017, Journal of Resources and Ecology, V8, P5.
[60] Zheng Hao, ICLR.