Confederated Learning: Federated Learning With Decentralized Edge Servers

Cited by: 18
Authors
Wang, Bin [1 ,2 ]
Fang, Jun [1 ,2 ]
Li, Hongbin [3 ]
Yuan, Xiaojun [1 ,2 ]
Ling, Qing [4 ,5 ,6 ]
Affiliations
[1] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Huzhou, Huzhou 313001, Peoples R China
[2] Univ Elect Sci & Technol China, Natl Key Lab Sci & Technol Commun, Chengdu 611731, Peoples R China
[3] Stevens Inst Technol, Dept Elect & Comp Engn, Hoboken, NJ 07030 USA
[4] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
[5] Sun Yat Sen Univ, Guangdong Prov Key Lab Computat Sci, Guangzhou 510006, Guangdong, Peoples R China
[6] Pazhou Lab, Guangzhou 510300, Guangdong, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Servers; Training; Peer-to-peer computing; Signal processing algorithms; Scalability; Data models; Computational modeling; Confederated learning; ADMM; random scheduling; CONVERGENCE;
DOI
10.1109/TSP.2023.3241768
CLC classification
TM [Electrical technology]; TN [Electronic technology, communication technology];
Subject classification codes
0808; 0809;
Abstract
Federated learning (FL) is an emerging machine learning paradigm that accomplishes model training without aggregating data at a central server. Most studies on FL consider a centralized framework, in which a single server is endowed with central authority to coordinate a number of devices in performing model training iteratively. Due to stringent communication and bandwidth constraints, such a centralized framework has limited scalability as the number of devices grows. To address this issue, in this paper we propose a ConFederated Learning (CFL) framework. The proposed CFL consists of multiple servers, each connected to an individual set of devices as in the conventional FL framework, and decentralized collaboration is leveraged among servers to make full use of the data dispersed throughout the network. We develop a stochastic alternating direction method of multipliers (ADMM) algorithm for CFL. The proposed algorithm employs a random scheduling policy that selects a random subset of devices to access their respective servers at each iteration, thus alleviating the need to upload a large amount of information from devices to servers. Theoretical analysis is presented to justify the proposed method. Numerical results show that the proposed method converges to a good solution significantly faster than gradient-based FL algorithms, thus offering a substantial advantage in communication efficiency.
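The two structural ingredients described in the abstract — random device scheduling at each server, and decentralized collaboration among servers without a central coordinator — can be illustrated with a deliberately simplified sketch. Note that the paper's actual algorithm is a stochastic ADMM; for brevity, this toy uses plain stochastic gradient steps in place of ADMM updates, and assumes a ring topology among servers. All names, problem sizes, and the topology here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 servers, each connected to 4 devices, all sharing one
# linear model of dimension 5 (sizes are illustrative only).
n_servers, n_devices, dim = 3, 4, 5
ring = {s: [(s - 1) % n_servers, (s + 1) % n_servers] for s in range(n_servers)}

x_true = rng.normal(size=dim)

def make_device(x_star):
    """Each device holds a private least-squares problem (A, b)."""
    A = rng.normal(size=(20, dim))
    return A, A @ x_star + 0.01 * rng.normal(size=20)

data = [[make_device(x_true) for _ in range(n_devices)]
        for _ in range(n_servers)]

x = [np.zeros(dim) for _ in range(n_servers)]  # one model copy per server

def local_grad(A, b, w):
    return A.T @ (A @ w - b) / len(b)

step, frac = 0.1, 0.5  # step size; fraction of devices scheduled per round
for _ in range(300):
    # 1) Random scheduling: each server polls a random subset of its
    #    own devices, so not every device uploads every round.
    grads = []
    for s in range(n_servers):
        picked = rng.choice(n_devices, size=max(1, int(frac * n_devices)),
                            replace=False)
        grads.append(np.mean([local_grad(*data[s][i], x[s])
                              for i in picked], axis=0))
    # 2) Local model update at every server.
    x = [x[s] - step * grads[s] for s in range(n_servers)]
    # 3) Decentralized mixing: each server averages with its ring
    #    neighbours -- peer-to-peer collaboration, no central server.
    x = [np.mean([x[s]] + [x[n] for n in ring[s]], axis=0)
         for s in range(n_servers)]

err = max(np.linalg.norm(xs - x_true) for xs in x)
```

With full neighbour averaging, all server copies stay in consensus while each round only touches half the devices per server; the hypothetical `frac` parameter plays the role of the random scheduling policy's sampling rate.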
Pages: 248-263
Page count: 16