MEAL: Multi-Model Ensemble via Adversarial Learning

Cited by: 101
Authors
Shen, Zhiqiang [1 ,2 ]
He, Zhankui [3 ,4 ]
Xue, Xiangyang [1 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[2] Univ Illinois, Beckman Inst, Champaign, IL 61820 USA
[3] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
[4] Univ Illinois, Champaign, IL USA
Source
THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2019
Funding
National Key Research and Development Program of China;
Keywords
DOI
10.1609/aaai.v33i01.33014886
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Often the best performing deep neural models are ensembles of multiple base-level networks. Unfortunately, the space required to store so many networks, and the time required to execute them at test time, prohibit their use in applications where test sets are large (e.g., ImageNet). In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN. In order to distill diverse knowledge from different trained (teacher) models, we propose an adversarial-based learning strategy in which a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher features from student features. The proposed ensemble method (MEAL) of transferring distilled knowledge with adversarial learning exhibits three important advantages: (1) the student network that learns the distilled knowledge with discriminators is optimized better than the original model; (2) fast inference is realized by a single forward pass, while the performance is even better than that of traditional ensembles of multiple original models; (3) the student network can learn the distilled knowledge from a teacher model with an arbitrary structure. Extensive experiments on the CIFAR-10/100, SVHN and ImageNet datasets demonstrate the effectiveness of our MEAL method. On ImageNet, our ResNet-50 based MEAL achieves 21.79%/5.99% top-1/top-5 validation error, which outperforms the original model by 2.06%/1.14%.
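As a rough illustration of the training scheme the abstract describes (a block-wise similarity loss on intermediate features combined with discriminators that try to tell teacher features from student features), a PyTorch-style sketch might look like the following. This is a minimal sketch under assumed names and choices (FeatureDiscriminator, distill_step, an MSE similarity term, hypothetical optimizers), not the authors' implementation; how the teacher features are produced from the ensemble at each step is left outside the sketch. See the paper (DOI above) for the exact losses and architecture.

```python
# Minimal, hypothetical sketch of block-wise adversarial distillation.
# Names and loss choices are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    """Scores a feature map: trained to output high for teacher, low for student."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),
        )

    def forward(self, feat):
        return self.net(feat)

def distill_step(student_feats, teacher_feats, discriminators,
                 opt_student, opt_disc, alpha=1.0):
    """One training step over lists of per-block intermediate feature maps."""
    # 1) Update the discriminators: teacher features = real, student = fake.
    d_loss = torch.zeros((), device=student_feats[0].device)
    for d, s_f, t_f in zip(discriminators, student_feats, teacher_feats):
        real = d(t_f.detach())
        fake = d(s_f.detach())
        d_loss = d_loss \
            + F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) \
            + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) Update the student: match teacher features block-wise and fool the
    #    discriminators (adversarial term).
    g_loss = torch.zeros((), device=student_feats[0].device)
    for d, s_f, t_f in zip(discriminators, student_feats, teacher_feats):
        sim = F.mse_loss(s_f, t_f.detach())          # block-wise similarity loss
        fake = d(s_f)
        adv = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
        g_loss = g_loss + sim + alpha * adv
    opt_student.zero_grad()
    g_loss.backward()
    opt_student.step()
    return d_loss.item(), g_loss.item()
```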
Pages: 4886 - 4893
Number of pages: 8