Learning to Weight Samples for Dynamic Early-Exiting Networks

Cited by: 22
Authors
Han, Yizeng [1 ]
Pu, Yifan [1 ]
Lai, Zihang [1 ,2 ]
Wang, Chaofei [1 ]
Song, Shiji [1 ]
Cao, Junfeng [3 ]
Huang, Wenhui [3 ]
Deng, Chao [3 ]
Huang, Gao [1 ]
Affiliations
[1] Tsinghua Univ, Beijing 100084, Peoples R China
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] China Mobile Res Inst, Beijing 100084, Peoples R China
Source
COMPUTER VISION, ECCV 2022, PT XI | 2022 / Vol. 13671
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Sample weighting; Dynamic early exiting; Meta-learning;
DOI
10.1007/978-3-031-20083-0_22
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Early exiting is an effective paradigm for improving the inference efficiency of deep networks. By attaching classifiers with varying resource demands (the exits), such networks allow easy samples to be output at early exits, removing the need to execute deeper layers. While existing works mainly focus on the architectural design of multi-exit networks, the training strategies for such models remain largely unexplored. Current state-of-the-art models treat all samples identically during training and ignore the early-exiting behavior at test time, leading to a gap between training and testing. In this paper, we propose to bridge this gap through sample weighting. Intuitively, easy samples, which generally exit early during inference, should contribute more to training the early classifiers, whereas hard samples, which mostly exit from deeper layers, should be emphasized by the late classifiers. We adopt a weight prediction network to weight the loss of each training sample at each exit; this network and the backbone model are jointly optimized under a meta-learning framework with a novel optimization objective. By bringing the adaptive inference behavior into the training phase, the proposed weighting mechanism consistently improves the trade-off between classification accuracy and inference efficiency. Code is available at https://github.com/LeapLabTHU/L2W-DEN.
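To make the weighting idea in the abstract concrete, below is a minimal PyTorch sketch of per-sample, per-exit loss weighting. It is not the authors' implementation: the names ToyMultiExitNet and WeightNet, the choice of feeding per-exit losses into the weight network, and all sizes are illustrative assumptions, and the bi-level meta-learning update of the weight network described in the paper is only indicated by a comment.
```python
# Minimal sketch (assumptions, not the L2W-DEN code): a multi-exit backbone,
# a hypothetical weight network, and a per-sample, per-exit weighted loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXITS, NUM_CLASSES = 3, 10

class ToyMultiExitNet(nn.Module):
    """Tiny backbone with one classifier ("exit") after each stage."""
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([nn.Linear(32, 32) for _ in range(NUM_EXITS)])
        self.exits = nn.ModuleList([nn.Linear(32, NUM_CLASSES) for _ in range(NUM_EXITS)])

    def forward(self, x):
        logits = []
        for stage, exit_head in zip(self.stages, self.exits):
            x = torch.relu(stage(x))
            logits.append(exit_head(x))
        return logits  # list of [batch, NUM_CLASSES], one entry per exit

class WeightNet(nn.Module):
    """Maps each sample's per-exit losses to per-exit weights (hypothetical design)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(NUM_EXITS, 64), nn.ReLU(),
                                 nn.Linear(64, NUM_EXITS), nn.Sigmoid())

    def forward(self, per_exit_losses):   # [batch, NUM_EXITS]
        return self.mlp(per_exit_losses)  # weights in (0, 1), same shape

backbone, weight_net = ToyMultiExitNet(), WeightNet()
x, y = torch.randn(8, 32), torch.randint(0, NUM_CLASSES, (8,))

logits = backbone(x)
# Per-sample, per-exit cross-entropy, shape [batch, NUM_EXITS].
losses = torch.stack([F.cross_entropy(l, y, reduction="none") for l in logits], dim=1)
# The weight network decides how much each sample contributes to each exit's loss.
weights = weight_net(losses.detach())
weighted_loss = (weights * losses).mean()
weighted_loss.backward()
# In the paper, weight_net itself is additionally optimized via a meta-learning
# (bi-level) objective on held-out data; that update is omitted in this sketch.
```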
Pages: 362-378
Number of pages: 17