Learning to Weight Samples for Dynamic Early-Exiting Networks

Cited by: 22
Authors
Han, Yizeng [1 ]
Pu, Yifan [1 ]
Lai, Zihang [1 ,2 ]
Wang, Chaofei [1 ]
Song, Shiji [1 ]
Cao, Junfeng [3 ]
Huang, Wenhui [3 ]
Deng, Chao [3 ]
Huang, Gao [1 ]
Affiliations
[1] Tsinghua Univ, Beijing 100084, Peoples R China
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] China Mobile Res Inst, Beijing 100084, Peoples R China
Source
COMPUTER VISION, ECCV 2022, PT XI | 2022 / Vol. 13671
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Sample weighting; Dynamic early exiting; Meta-learning;
DOI
10.1007/978-3-031-20083-0_22
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Early exiting is an effective paradigm for improving the inference efficiency of deep networks. By constructing classifiers with varying resource demands (the exits), such networks allow easy samples to be output at early exits, removing the need to execute deeper layers. While existing works mainly focus on the architectural design of multi-exit networks, the training strategies for such models are largely left unexplored. The current state-of-the-art models treat all samples the same during training; this ignores the early-exiting behavior at test time and leads to a gap between training and testing. In this paper, we propose to bridge this gap by sample weighting. Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training the early classifiers, whereas hard samples, which mostly exit from deeper layers, should be emphasized more by the later classifiers. We adopt a weight prediction network to weight the loss of each training sample at each exit. This weight prediction network and the backbone model are jointly optimized under a meta-learning framework with a novel optimization objective. By bringing the adaptive behavior of inference into the training phase, we show that the proposed weighting mechanism consistently improves the trade-off between classification accuracy and inference efficiency. Code is available at https://github.com/LeapLabTHU/L2W-DEN.
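The sketch below illustrates the sample-weighting idea described in the abstract in PyTorch; it is not the authors' implementation (see the linked repository for the actual code). The backbone TwoExitNet, the weight predictor WeightNet, the choice of per-exit losses as the predictor's input, and all hyper-parameters are assumptions for illustration only, and the meta-learning update of the weight predictor on held-out data is only indicated in comments.

```python
# Minimal sketch: per-sample, per-exit loss weighting for a two-exit network.
# Hypothetical names and sizes; the weight predictor's own meta update is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoExitNet(nn.Module):
    """Toy backbone with one early exit and one final exit."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(32 * 8 * 8, num_classes)   # early classifier
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(64 * 4 * 4, num_classes)   # final classifier

    def forward(self, x):
        h1 = self.stage1(x)
        logits1 = self.exit1(h1.flatten(1))
        h2 = self.stage2(h1)
        logits2 = self.exit2(h2.flatten(1))
        return [logits1, logits2]

class WeightNet(nn.Module):
    """Predicts a weight per sample and per exit from the per-exit losses."""
    def __init__(self, num_exits=2, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(num_exits, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_exits), nn.Sigmoid())

    def forward(self, per_exit_losses):            # (batch, num_exits)
        return self.mlp(per_exit_losses)           # weights in (0, 1)

backbone, weight_net = TwoExitNet(), WeightNet()
opt = torch.optim.SGD(backbone.parameters(), lr=0.1)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
logits_list = backbone(x)
# Per-sample cross-entropy at every exit, stacked to shape (batch, num_exits).
losses = torch.stack([F.cross_entropy(l, y, reduction="none")
                      for l in logits_list], dim=1)
# The weight net sees the (detached) loss pattern of each sample and re-weights
# that sample's contribution to every exit. In the paper's framework the weight
# net itself is trained by a meta objective on held-out data, omitted here.
weights = weight_net(losses.detach())
loss = (weights * losses).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Detaching the losses before feeding them to the weight predictor keeps the backbone update from back-propagating through the weighting input; in a full meta-learning setup, the predictor would instead be updated with a separate (second-order) objective, as the abstract describes.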
Pages: 362-378
Page count: 17