Communication-Efficient Distributed Learning: An Overview

Cited by: 37
Authors
Cao, Xuanyu [1 ]
Basar, Tamer [2 ]
Diggavi, Suhas [3 ]
Eldar, Yonina C. [4 ]
Letaief, Khaled B. [1 ]
Poor, H. Vincent [5 ]
Zhang, Junshan [6 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Kowloon, Clear Water Bay, Hong Kong, Peoples R China
[2] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
[3] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90095 USA
[4] Weizmann Inst Sci, Dept Math & Comp Sci, IS-7610001 Rehovot, Israel
[5] Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ 08544 USA
[6] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Distance learning; Computer aided instruction; Servers; Distributed databases; Sensors; Resource management; Training; Distributed learning; communication efficiency; event-triggering; quantization; compression; sparsification; resource allocation; incentive mechanisms; single-task learning; multitask learning; meta-learning; online learning; ALTERNATING DIRECTION METHOD; ONLINE CONVEX-OPTIMIZATION; LINEAR CONVERGENCE; MULTIAGENT NETWORKS; SUBGRADIENT METHODS; TIME OPTIMIZATION; CONSENSUS; DESIGN; QUANTIZATION; ALGORITHMS;
DOI
10.1109/JSAC.2023.3242710
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Distributed learning is envisioned as the bedrock of next-generation intelligent networks, in which intelligent agents, such as mobile devices, robots, and sensors, exchange information with each other or with a parameter server to train machine learning models collaboratively without uploading raw data to a central entity for centralized processing. By utilizing the computation and communication capabilities of individual agents, the distributed learning paradigm can mitigate the burden on central processors and help preserve the data privacy of users. Despite its promising applications, a downside of distributed learning is its need for iterative information exchange over wireless channels, which may incur communication overhead that is unaffordable in many practical systems with limited radio resources such as energy and bandwidth. To overcome this communication bottleneck, there is an urgent need for communication-efficient distributed learning algorithms that reduce communication cost while simultaneously achieving satisfactory learning/optimization performance. In this paper, we present a comprehensive survey of prevailing methodologies for communication-efficient distributed learning, including reduction of the number of communications, compression and quantization of the exchanged information, radio resource management for efficient learning, and game-theoretic mechanisms that incentivize user participation. We also point out potential directions for future research to further enhance the communication efficiency of distributed learning in various scenarios.
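Two of the compression approaches the survey covers, sparsification and quantization, admit a minimal sketch (assuming NumPy; function names are hypothetical, not from the paper): top-k sparsification transmits only the largest-magnitude gradient entries, while QSGD-style stochastic quantization rounds each entry to a coarse grid in an unbiased way.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of the gradient; zero the rest."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest |entries|
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)

def stochastic_quantize(grad, levels=8, rng=None):
    """QSGD-style unbiased quantization: map each entry to one of `levels`
    uniform magnitude levels, rounding up or down at random so that the
    quantized vector equals the true gradient in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return grad.copy()
    scaled = np.abs(grad) / norm * levels      # magnitudes on the [0, levels] grid
    lower = np.floor(scaled)
    prob = scaled - lower                      # probability of rounding up
    q = lower + (rng.random(grad.shape) < prob)
    return np.sign(grad) * q * norm / levels
```

In a distributed setting, each agent would compress its local gradient this way before transmission, trading a bounded approximation error (at most `norm/levels` per entry for the quantizer) for a much shorter message.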
Pages: 851-873
Page count: 23
References
183 references in total
  • [1] Abad MSH, 2020, INT CONF ACOUST SPEE, P8866, DOI 10.1109/ICASSP40776.2020.9054634
  • [2] Agarwal A., 2009, Adv. Neural Inf. Process. Syst., V22
  • [3] Agarwal N, 2018, Adv. Neural Inf. Process. Syst., V31
  • [4] Aji A. F., 2017, PROC C EMPIRICAL MET
  • [5] Akbari M., Gharesifard B., Linder T., "Distributed Online Convex Optimization on Time-Varying Directed Graphs," IEEE Trans. Control Netw. Syst., 2017, 4(3): 417-428
  • [6] Alistarh D, 2018, Adv. Neural Inf. Process. Syst., V31
  • [7] Alistarh D, 2017, Adv. Neural Inf. Process. Syst., V30
  • [8] Amiri M. M., Gunduz D., "Federated Learning Over Wireless Fading Channels," IEEE Trans. Wireless Commun., 2020, 19(5): 3546-3557
  • [9] Amiri M. M., Gunduz D., "Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air," IEEE Trans. Signal Process., 2020, 68: 2155-2169
  • [10] Arjevani Y, 2015, Adv. Neural Inf. Process. Syst., V28