Towards Robust Compressed Convolutional Neural Networks

Cited by: 10
Authors
Wijayanto, Arie Wahyu [1 ]
Choong, Jun Jin [1 ]
Madhawa, Kaushalya [1 ]
Murata, Tsuyoshi [1 ]
Affiliations
[1] Tokyo Institute of Technology, Department of Computer Science, Tokyo, Japan
Source
2019 IEEE International Conference on Big Data and Smart Computing (BigComp) | 2019
Keywords
deep learning; compression; robustness;
DOI
10.1109/bigcomp.2019.8679132
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
Recent studies on the robustness of Convolutional Neural Networks (CNNs) show that CNNs are highly vulnerable to adversarial attacks. Meanwhile, smaller CNN models with no significant loss of accuracy are being deployed on mobile devices. However, such work typically reports only accuracy on standard datasets. The wide deployment of smaller models on millions of mobile devices stresses the importance of their robustness. In this research, we study how robust such models are with respect to state-of-the-art compression techniques such as quantization. Our contributions include: (1) insights into achieving models that are both smaller and more robust, and (2) an adversarial-aware compression framework. For the former, we discovered that compressed models are naturally more robust than compact models, which provides an incentive to perform compression rather than to design compact architectures. The latter additionally provides the benefits of increased accuracy and a higher compression rate, up to 90x.
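The paper itself ships no code; the sketch below is a minimal, hypothetical illustration of the kind of evaluation the abstract describes: compress a CNN via quantization, then measure accuracy under an adversarial (FGSM) attack rather than only on clean test data. PyTorch, the `fgsm_attack`/`robustness_report` helpers, and the transfer-attack setup (crafting perturbations on the float model and scoring the quantized copy, since quantized kernels do not support autograd) are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): compare clean vs. adversarial
# accuracy for a float model and a post-training-quantized copy of it.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Craft FGSM examples: x_adv = clip(x + epsilon * sign(grad_x loss))."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def accuracy(model, batches):
    """Top-1 accuracy over an iterable of (inputs, labels) batches."""
    correct = total = 0
    for x, y in batches:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def robustness_report(float_model, loader, epsilon=8 / 255):
    """Report clean and adversarial accuracy for a model and its quantized
    copy. Gradients are taken on the float model and the resulting examples
    are transferred to the quantized one (quantized ops lack autograd)."""
    float_model.eval()
    # Post-training dynamic quantization: a stand-in for the paper's
    # compression pipeline; it only rewrites nn.Linear layers here.
    quant_model = torch.quantization.quantize_dynamic(
        copy.deepcopy(float_model), {nn.Linear}, dtype=torch.qint8)
    clean = [(x, y) for x, y in loader]
    adv = [(fgsm_attack(float_model, x, y, epsilon), y) for x, y in clean]
    for name, model in [("float", float_model), ("quantized", quant_model)]:
        print(f"{name}: clean={accuracy(model, clean):.3f} "
              f"adversarial={accuracy(model, adv):.3f}")
```

Under this kind of setup, a finding such as "compressed models are naturally more robust than compact models" would correspond to the quantized row losing less adversarial accuracy than a separately trained compact architecture of comparable size.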
Pages: 168-175
Page count: 8