FedTrip: A Resource-Efficient Federated Learning Method with Triplet Regularization

Cited by: 1
Authors
Li, Xujing [1,2]
Liu, Min [1,2,3]
Sun, Sheng [1]
Wang, Yuwei [1]
Jiang, Hui [1,2]
Jiang, Xuefeng [1,2]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Zhongguancun Lab, Beijing, Peoples R China
Source
2023 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM, IPDPS | 2023
Funding
National Natural Science Foundation of China;
Keywords
Federated Learning; Data Heterogeneity; Resource Efficiency;
DOI
10.1109/IPDPS54959.2023.00086
CLC Number
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
In federated learning, geographically distributed clients collaboratively train a global model. Data heterogeneity among clients leads to inconsistent model updates, which significantly slows model convergence. To alleviate this issue, many methods add regularization terms that narrow the discrepancy between client-side local models and the server-side global model. However, these methods restrict the exploration of superior local models and ignore the valuable information in historical local models. Although recent representation-based methods account for both the global model and historical local models, they incur prohibitive computation costs. To accelerate convergence with low resource consumption, we propose FedTrip, a model regularization method that restricts global-local divergence while reducing the correlation between the current and historical local models, thereby mitigating the negative effects of data heterogeneity. FedTrip pulls the current local model toward the global model while pushing it away from historical local models, which keeps local updates consistent across clients and enables efficient exploration of superior local models at negligible additional computation cost. Extensive evaluations demonstrate the superiority of FedTrip: to reach a target accuracy, it significantly reduces the total overhead of client-server communication and local computation compared with state-of-the-art baselines.
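Since this record reproduces only the abstract, the following Python sketch illustrates the triplet-style regularizer it describes, under stated assumptions: the helper `triplet_reg`, its hyperparameters `mu` and `lam`, and the hinge form are hypothetical rather than the paper's exact formulation; the term is meant to be added to a client's ordinary task loss during local training.

```python
# A minimal sketch of the triplet-style regularizer described in the
# abstract; not the paper's exact formulation. `mu` and `lam` are
# hypothetical hyperparameters, and the hinge (clamp at zero) is a
# standard triplet-loss device assumed here.
import torch


def triplet_reg(local_params, global_params, hist_params, mu=0.1, lam=0.01):
    """Pull the current local model toward the global model while
    pushing it away from the historical local model."""
    local_params = list(local_params)  # allow generators, e.g. model.parameters()
    # Squared L2 distance to the global model (attraction term).
    pull = sum((w - g.detach()).pow(2).sum()
               for w, g in zip(local_params, global_params))
    # Squared L2 distance to the historical local model (repulsion term).
    push = sum((w - h.detach()).pow(2).sum()
               for w, h in zip(local_params, hist_params))
    return torch.clamp(mu * pull - lam * push, min=0.0)


# Usage inside a client's local step (model, data, task_loss assumed):
#   loss = task_loss(model(x), y) + triplet_reg(
#       model.parameters(),
#       global_model.parameters(),
#       hist_model.parameters())
#   loss.backward()
```

Detaching the global and historical parameters keeps gradients flowing only through the current local model, which is consistent with the abstract's claim of negligible additional computation.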
Pages: 809-819
Page count: 11