Optimized Quantization for Convolutional Deep Neural Networks in Federated Learning

Cited by: 0
Authors
Kim, You Jun [1 ]
Hong, Choong Seon [1 ]
Affiliations
[1] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, Gyeonggi Do, South Korea
Source
APNOMS 2020: 2020 21ST ASIA-PACIFIC NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM (APNOMS) | 2020
Keywords
federated learning; OQFL; FPROPS; quantization;
DOI
10.23919/apnoms50412.2020.9236949
CLC classification
TN [Electronic technology; telecommunication technology];
Discipline classification code
0809;
Abstract
Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when the central server cannot collect data. However, the absence of data on the central server means that data-driven deep network compression is not possible. Deep network compression is important because it enables inference even on devices with limited capacity. In this paper, we propose a new quantization method that significantly reduces FPROPS (floating-point operations per second) in deep networks without leaking user data in federated learning. Quantization parameters are trained with the ordinary learning loss and are updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL learns deep networks and quantization while maintaining security in a distributed network environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks. We show that OQFL works for the most representative convolutional deep neural networks. Surprisingly, OQFL (4 bits) can preserve the accuracy of conventional federated learning (32 bits) on the test dataset.
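The abstract describes quantization parameters that are trained with the ordinary task loss and updated together with the weights, with the server aggregating client updates federated-averaging style. The following Python/PyTorch code is a minimal sketch of that idea, not the authors' implementation: a 4-bit fake-quantized convolution with a learnable step size trained through a straight-through estimator, a client-side local update, and a server-side average over all parameters, including the quantization steps. Names such as LearnedStepQuant, QuantConv2d, local_update, and fed_avg are illustrative assumptions.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnedStepQuant(nn.Module):
    # Fake-quantize a tensor to a symmetric signed range with a learnable step size.
    def __init__(self, bits=4):
        super().__init__()
        self.qmax = 2 ** (bits - 1) - 1              # e.g. +/-7 levels for 4 bits
        self.step = nn.Parameter(torch.tensor(0.1))  # trained by the ordinary task loss

    def forward(self, x):
        q = torch.clamp(x / self.step, -self.qmax, self.qmax)
        # Straight-through estimator: round in the forward pass,
        # but let gradients flow as if rounding were the identity.
        q = q + (torch.round(q) - q).detach()
        return q * self.step


class QuantConv2d(nn.Conv2d):
    # Convolution whose weights are fake-quantized on the fly.
    def __init__(self, *args, bits=4, **kwargs):
        super().__init__(*args, **kwargs)
        self.quant = LearnedStepQuant(bits)

    def forward(self, x):
        return F.conv2d(x, self.quant(self.weight), self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


def local_update(global_model, loader, epochs=1, lr=0.01):
    # One client: weights and quantization step sizes share one optimizer,
    # so both are updated simultaneously by the same learning loss.
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model.state_dict()


def fed_avg(client_states):
    # Server: average every parameter, including the learned quantization steps,
    # without ever seeing the clients' data.
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg

In a full round, each client would build its network from QuantConv2d layers (for example a quantized convolutional network of the kind evaluated in the paper), run local_update on its private data, and the server would load the fed_avg result back into the global model with load_state_dict before the next round.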
Pages: 150-154
Number of pages: 5