Augmentation Invariant Training

Cited by: 3
Authors
Chen, Weicong [1]
Tian, Lu [1,2]
Fan, Liwen [3]
Wang, Yu [1]
Affiliations
[1] Tsinghua University, Beijing, People's Republic of China
[2] Xilinx, San Jose, CA, USA
[3] Zhihu, Beijing, People's Republic of China
Source
2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | 2019
DOI: 10.1109/ICCVW.2019.00358
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Data augmentation is widely acknowledged to help deep neural networks generalize better, yet their augmentation transfer ability remains far from satisfactory: networks perform worse when tested with augmentations not used during training, which is itself a symptom of insufficient generalization. To address this problem, we carefully design a novel Augmentation invariant Loss (AiLoss) that helps networks learn augmentation invariance by minimizing intra-augmentation variation. Based on AiLoss, we propose a simple yet efficient training strategy, Augmentation Invariant Training (AIT), to enhance the generalization ability of networks. Extensive experiments show that AIT can be applied to a variety of network architectures and consistently improves their performance on CIFAR-10, CIFAR-100 and ImageNet without increasing computational cost. Extending AIT to multiple networks, we further propose multi-AIT, which learns inter-network augmentation invariance and achieves even better generalization. Moreover, further experiments show that networks trained with our strategy do obtain better augmentation transfer ability and learn features that are invariant to augmentations. Our source code is available on GitHub.
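This record does not reproduce the formal definition of AiLoss, so the following is a minimal sketch of the idea the abstract describes: penalize the distance between a network's features for two differently augmented views of the same image, alongside the usual classification loss. The function names (ai_loss, ait_step), the use of PyTorch, the choice of MSE as the feature distance, the model interface, and the weighting factor lam are all illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def ai_loss(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # Intra-augmentation variation: distance between the embeddings of
        # two augmented views of the same batch of images. MSE is an
        # assumed stand-in for whatever distance the paper uses.
        return F.mse_loss(feats_a, feats_b)

    def ait_step(model, images, labels, augment_a, augment_b, lam=0.1):
        # `model` is assumed to return (features, logits); this interface
        # is a placeholder, not the paper's code.
        feats_a, logits_a = model(augment_a(images))
        feats_b, logits_b = model(augment_b(images))
        # Standard classification loss on both augmented views...
        cls = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
        # ...plus the invariance penalty tying their features together.
        return cls + lam * ai_loss(feats_a, feats_b)

Since both views pass through the same network, this adds no inference-time cost, which is consistent with the abstract's claim that AIT improves performance without increasing computational cost.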
Pages: 2963-2971 (9 pages)