A Layer Selection Optimizer for Communication-Efficient Decentralized Federated Deep Learning

Cited by: 15
Authors
Barbieri, Luca [1 ]
Savazzi, Stefano [2 ]
Nicoli, Monica [3 ]
Affiliations
[1] Politecn Milan, Dipartimento Elettron Informaz & Bioingn, I-20133 Milan, Italy
[2] CNR, Inst Elect Informat & Telecommun Engn IEIIT, I-20133 Milan, Italy
[3] Politecn Milan, Dipartimento Ingn Gestionale, I-20156 Milan, Italy
Keywords
Federated learning; Artificial neural networks; Optimization; Deep learning; Computational modeling; Wireless sensor networks; 5G mobile communication; Machine learning over networks; federated learning; consensus; sidelink communications; beyond 5G; 6G wireless systems; decision-making; consensus; challenges; algorithms; networks; vision; agents
DOI
10.1109/ACCESS.2023.3251571
CLC classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Federated Learning (FL) systems orchestrate the cooperative training of a shared Machine Learning (ML) model across connected devices. Recently, decentralized FL architectures driven by consensus have been proposed to let devices share and aggregate the ML model parameters via direct sidelink communications. This approach has the advantage of promoting federation among the agents even in the absence of a server, but it may require more intensive use of communication resources than vanilla FL methods. This paper proposes a communication-efficient design of consensus-driven FL optimized for the training of Deep Neural Networks (DNNs). Devices independently select fragments of the DNN to be shared with neighbors on each training round. Selection is based on a local optimizer that trades off model quality improvement against sidelink communication resource savings. The proposed technique is validated on a vehicular cooperative sensing use case characterized by challenging real-world datasets and complex DNNs typically employed in autonomous driving, with up to 40 trainable layers. The impact of layer selection is analyzed under different distributed coordination configurations. The results show that it is better to prioritize the DNN layers possessing few parameters, while the selection policy should optimally balance gradient sorting and randomization. Latency, accuracy, and communication tradeoffs are analyzed in detail, targeting sustainable federation policies.
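The selection policy described in the abstract (prioritize few-parameter layers, balance gradient sorting with randomization, stay within a communication budget) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function `select_layers`, its scoring rule (gradient norm per parameter), and the `eps` randomization parameter are all hypothetical stand-ins for the local optimizer described in the paper.

```python
import random

def select_layers(layer_params, layer_grad_norms, budget, eps=0.3, seed=None):
    """Pick DNN layers to share this round within a parameter budget.

    layer_params:     dict mapping layer name -> parameter count
    layer_grad_norms: dict mapping layer name -> local gradient-norm estimate
    budget:           max total parameters to transmit over the sidelink
    eps:              fraction of picks made at random (randomization vs.
                      pure gradient sorting)
    """
    rng = random.Random(seed)
    # Score each layer by gradient norm per parameter, so small layers with
    # large gradients rank highest -- few-parameter layers are prioritized.
    ranked = sorted(layer_params,
                    key=lambda n: layer_grad_norms[n] / layer_params[n],
                    reverse=True)
    selected, used = [], 0
    candidates = list(ranked)
    while candidates:
        # With probability eps pick a random candidate, otherwise the
        # top-ranked one, mixing sorted and randomized selection.
        name = rng.choice(candidates) if rng.random() < eps else candidates[0]
        candidates.remove(name)
        if used + layer_params[name] <= budget:
            selected.append(name)
            used += layer_params[name]
    return selected
```

With `eps=0` the policy reduces to pure gradient sorting; with `eps=1` it is fully random. The paper's finding suggests the best operating point lies between these extremes.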
Pages: 22155-22173
Page count: 19