Accelerating Federated Learning for IoT in Big Data Analytics With Pruning, Quantization and Selective Updating

Times Cited: 53
Authors
Xu, Wenyuan [1 ]
Fang, Weiwei [1 ,2 ]
Ding, Yi [3 ]
Zou, Meixia [1 ]
Xiong, Naixue [4 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[2] Minist Educ, Key Lab Ind Internet Things & Networked Control, Chongqing 400065, Peoples R China
[3] Beijing Wuzi Univ, Sch Informat, Beijing 101149, Peoples R China
[4] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
Keywords
Training; Computational modeling; Data models; Servers; Quantization (signal); Collaborative work; Big data; Federated learning; Internet of Things; model compression; network pruning
DOI
10.1109/ACCESS.2021.3063291
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
The ever-increasing number of Internet of Things (IoT) devices is continuously generating huge volumes of data, but the current cloud-centric approach to IoT big data analysis has raised public concerns about both data privacy and network cost. Federated learning (FL) has recently emerged as a promising technique to address these concerns: a global model is learned by aggregating local updates from multiple devices, without sharing the privacy-sensitive raw data. However, IoT devices usually have constrained computation resources and poor network connections, making it infeasible or very slow to train deep neural networks (DNNs) in the FL pattern. To address this problem, we propose a new efficient FL framework called FL-PQSU in this paper. It is composed of a three-stage pipeline: structured pruning, weight quantization, and selective updating, which work together to reduce the costs of computation, storage, and communication and thereby accelerate the FL training process. We study FL-PQSU using popular DNN models (AlexNet, VGG16) and publicly available datasets (MNIST, CIFAR10), and demonstrate that it can effectively control the training overhead while still guaranteeing the learning performance.
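The three pipeline stages named in the abstract can be illustrated in isolation. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of the generic techniques the abstract names, under assumed details (L2-norm filter ranking for structured pruning, uniform symmetric int8 quantization, and an update-norm threshold for selective updating).

```python
import numpy as np

def structured_prune(filters, keep_ratio):
    # Structured (filter-level) pruning: rank conv filters by L2 norm
    # and keep only the top fraction, shrinking the model's width.
    norms = np.linalg.norm(filters.reshape(filters.shape[0], -1), axis=1)
    k = max(1, int(len(norms) * keep_ratio))
    keep = np.sort(np.argsort(norms)[-k:])
    return filters[keep]

def quantize(weights, bits=8):
    # Uniform symmetric quantization: map float weights to signed
    # integers, returning the scale needed to dequantize server-side.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax if np.any(weights) else 1.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def selective_update(update, threshold):
    # Selective updating: skip uploading a local update whose norm is
    # below a relevance threshold, saving communication rounds.
    return update if np.linalg.norm(update) >= threshold else None
```

Together the stages attack the three costs the abstract lists: pruning reduces computation, quantization reduces storage and upload size, and selective updating reduces the number of transmissions.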
Pages: 38457-38466
Number of pages: 10
Cited References
25 in total
[11]  
Jiang Y., 2019, arXiv:1909.12326
[12]  
Konečný J., 2016, arXiv:1610.05492
[13]  
Li F., 2016, arXiv
[14]  
Li H., 2017, Proc. Int. Conf. Learning Representations
[15]   Federated Learning in Mobile Edge Networks: A Comprehensive Survey [J].
Lim, Wei Yang Bryan ;
Nguyen Cong Luong ;
Dinh Thai Hoang ;
Jiao, Yutao ;
Liang, Ying-Chang ;
Yang, Qiang ;
Niyato, Dusit ;
Miao, Chunyan .
IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2020, 22 (03) :2031-2063
[16]   Learning Efficient Convolutional Networks through Network Slimming [J].
Liu, Zhuang ;
Li, Jianguo ;
Shen, Zhiqiang ;
Huang, Gao ;
Yan, Shoumeng ;
Zhang, Changshui .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :2755-2763
[17]   PruneTrain: Fast Neural Network Training by Dynamic Sparse Model Reconfiguration [J].
Lym, Sangkug ;
Choukse, Esha ;
Zangeneh, Siavash ;
Wen, Wei ;
Sanghavi, Sujay ;
Erez, Mattan .
PROCEEDINGS OF SC19: THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2019,
[18]  
Ma X., 2019, arXiv:1907.02124
[19]  
McMahan HB, 2017, Proc. Mach. Learn. Res., V54, P1273
[20]   Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT [J].
Mills, Jed ;
Hu, Jia ;
Min, Geyong .
IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07) :5986-5994