Joint Optimal Quantization and Aggregation of Federated Learning Scheme in VANETs

Cited by: 19
Authors
Li, Yifei [1]
Guo, Yijia [2]
Alazab, Mamoun [3]
Chen, Shengbo [1]
Shen, Cong [4]
Yu, Keping [5]
Affiliations
[1] Henan Univ, Sch Comp & Informat Engn, Kaifeng 475001, Peoples R China
[2] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100190, Peoples R China
[3] Charles Darwin Univ, Coll Engn IT & Environm, Casuarina, NT 0810, Australia
[4] Univ Virginia, Charles L Brown Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
[5] Waseda Univ, Global Informat & Telecommun Inst, Shinjuku Ku, Tokyo 1698050, Japan
Funding
National Natural Science Foundation of China; Japan Society for the Promotion of Science
Keywords
Quantization (signal); Servers; Collaborative work; Optimization; Data models; Computational modeling; Standards; Artificial intelligence; vehicular ad hoc networks; federated learning; quantization; VEHICLES
DOI
10.1109/TITS.2022.3145823
CLC Classification Code
TU [Architecture Science]
Discipline Code
0813
Abstract
Vehicular ad hoc networks (VANETs) are among the most promising enablers of intelligent transportation systems (ITS). With the rapid growth in the volume of traffic data, deep learning-based algorithms have been used extensively in VANETs. The recently proposed federated learning is an attractive candidate for collaborative machine learning: instead of transferring large volumes of raw data to a centralized server, each client trains its own local model and uploads it to the server for model aggregation. Model quantization is an effective way to address the communication-efficiency issue in federated learning, yet existing studies largely assume homogeneous quantization across all clients. In reality, clients are predominantly heterogeneous and support different quantization precision levels. In this work, we propose FedDO (Federated Learning with Double Optimization). Minimizing the drift term in the convergence analysis, which is a weighted sum of squared quantization errors (SQE) over all clients, leads to a double optimization at both the client and server sides: each client adopts a fully distributed, instantaneous (per learning round), and individualized (per client) quantization scheme that minimizes its own squared quantization error, while the server computes the aggregation weights that minimize the weighted sum of squared quantization errors over all clients. Numerical experiments show that the minimal-SQE quantizer outperforms a widely adopted linear quantizer for federated learning, and that FedDO outperforms vanilla FedAvg with standard equal weights and linear quantization.
Pages: 19852-19863
Page count: 12