RLFL: A Reinforcement Learning Aggregation Approach for Hybrid Federated Learning Systems Using Full and Ternary Precision

Cited by: 1
Authors
Imani, Hamidreza [1 ]
Anderson, Jeff [1 ]
Farid, Samuel [1 ]
Amirany, Abdolah [1 ]
El-Ghazawi, Tarek [1 ]
Affiliations
[1] George Washington University, Department of Electrical and Computer Engineering, Washington, DC 20052, USA
Keywords
Training; Computational modeling; Data models; Accuracy; Decision making; Servers; Training data; Quantization (signal); Adaptation models; Reinforcement learning; Federated learning; ternary model; heterogeneous environment; MEMORY
DOI
10.1109/JETCAS.2024.3483554
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communications Technology]
Discipline Classification Codes
0808; 0809
Abstract
Federated Learning (FL) has emerged as a privacy-preserving and communication-efficient Machine Learning (ML) framework for mobile-edge environments, which are likely to be resource-constrained and heterogeneous. The precision and performance required of each device may therefore vary with circumstances, giving rise to designs containing mixed-precision and quantized models. Among the various quantization schemes, binary and ternary representations are significant because they enable arrangements that strike effective balances between performance and precision. In this paper, we propose RLFL, a hybrid ternary/full-precision FL system with a Reinforcement Learning (RL) aggregation method, with the goal of improving performance compared to a homogeneous ternary environment. The system consists of a mix of full-precision clients and resource-constrained clients with ternary ML models. Aggregating models with ternary and full-precision weights using traditional aggregation approaches is challenging, however, because of the disparity in weight magnitudes. To improve accuracy, we use a deep RL model to explore and optimize the contribution assigned to each client's model during aggregation in each iteration. We evaluate the accuracy and communication overhead of the proposed approach against prior work on classification of the MNIST, FMNIST, and CIFAR10 datasets. Evaluation results show that the proposed RLFL system, together with its aggregation technique, outperforms existing FL approaches in accuracy by 5% to 19% while imposing negligible computation overhead.
Pages: 673-687 (15 pages)
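
The aggregation idea described in the abstract can be illustrated with a short sketch: each client's update is weighted by a learned contribution coefficient rather than the fixed sample-count share of standard FedAvg. The following is a minimal, hypothetical NumPy illustration, not the paper's implementation; the names ternarize and aggregate, the threshold-based ternarization rule, and the softmax stand-in for the RL policy's output are all assumptions made for the example.

# Minimal sketch, assuming threshold ternarization and softmax coefficients.
# Clients hold either full-precision or scaled-ternary ({-s, 0, +s}) weights;
# a policy supplies per-client contribution coefficients for aggregation.
import numpy as np

def ternarize(w, threshold=0.05):
    """Map full-precision weights to scaled ternary values {-s, 0, +s}."""
    mask = np.abs(w) > threshold
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.sign(w) * mask * scale

def aggregate(client_weights, coeffs):
    """Weighted average of client models; coeffs are normalized to sum to 1."""
    coeffs = np.asarray(coeffs, dtype=np.float64)
    coeffs = coeffs / coeffs.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

rng = np.random.default_rng(0)
global_model = rng.normal(size=100)

# Two full-precision clients and two resource-constrained ternary clients.
updates = [global_model + 0.1 * rng.normal(size=100) for _ in range(4)]
updates[2] = ternarize(updates[2])
updates[3] = ternarize(updates[3])

# Stand-in for the RL policy output: logits -> softmax contribution weights.
# In RLFL these coefficients would be chosen by the deep RL model each round.
logits = np.array([1.2, 1.0, 0.3, 0.2])
coeffs = np.exp(logits) / np.exp(logits).sum()

global_model = aggregate(updates, coeffs)
print("aggregated norm:", np.linalg.norm(global_model))

The point of the sketch is the disparity the abstract mentions: ternary updates have much smaller and coarser magnitudes than full-precision ones, so uniform averaging skews the result, which is why per-client coefficients are learned rather than fixed.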