iSample: Intelligent Client Sampling in Federated Learning

Cited by: 11
Authors
Imani, HamidReza [1 ]
Anderson, Jeff [1 ]
El-Ghazawi, Tarek [1 ]
Affiliations
[1] George Washington Univ, Dept Elect & Comp Engn, Washington, DC 20052 USA
Source
6TH IEEE INTERNATIONAL CONFERENCE ON FOG AND EDGE COMPUTING (ICFEC 2022) | 2022
Funding
U.S. National Science Foundation;
Keywords
federated learning; heterogeneous systems; resource constrained devices; edge computing; machine learning;
DOI
10.1109/ICFEC54809.2022.00015
CLC Number
TP3 [Computing technology; computer technology];
Discipline Code
0812;
Abstract
The pervasiveness of AI in society has made machine learning (ML) an invaluable tool for mobile and internet-of-things (IoT) devices. While the aggregate amount of data yielded by those devices is sufficient for training an accurate model, the data available to any one device is limited. Therefore, augmenting the learning at each device with the experience gathered by the other devices becomes necessary. This, however, can dramatically increase bandwidth requirements. Prior work has led to the development of Federated Learning (FL), in which client devices share only model weights, rather than data, to learn from one another. However, heterogeneity in device resource availability and network conditions still imposes limitations on training performance. In order to improve performance while maintaining good levels of accuracy, we introduce iSample, an intelligent sampling technique that selects clients by jointly considering known network performance and model quality parameters, allowing training time to be minimized. We compare iSample with other federated learning approaches and show that iSample improves the performance of the global model, especially in the earlier stages of training, while decreasing the training time for CNN and VGG by 27% and 39%, respectively.
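This record contains only the abstract, not the algorithm itself. As a rough illustration of the kind of joint client scoring the abstract describes (combining an estimate of a client's network/compute latency with a model-quality signal), the following minimal Python sketch ranks clients by a weighted score and picks the top-k. The Client fields, the weighting parameter alpha, and the use of local loss as the quality proxy are assumptions made for illustration; this is not the published iSample criterion.

import random
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    est_round_time: float   # estimated compute + upload time per round, in seconds (assumed profile)
    local_loss: float       # last reported local training loss, used here as a quality proxy (assumption)

def sample_clients(clients, k, alpha=0.5):
    """Rank clients by a joint speed/quality score and return the top-k.

    Lower estimated round time and higher local loss (more left to learn)
    both raise the score; alpha trades off speed against expected quality gain.
    Illustrative scoring only, not the authors' iSample rule.
    """
    max_t = max(c.est_round_time for c in clients)
    max_l = max(c.local_loss for c in clients)
    def score(c):
        speed = 1.0 - c.est_round_time / max_t      # close to 1 for the fastest clients
        quality = c.local_loss / max_l              # close to 1 for clients with the largest loss
        return alpha * speed + (1 - alpha) * quality
    return sorted(clients, key=score, reverse=True)[:k]

if __name__ == "__main__":
    random.seed(0)
    pool = [Client(i, random.uniform(1, 20), random.uniform(0.1, 2.5)) for i in range(20)]
    for c in sample_clients(pool, k=5):
        print(f"client {c.cid}: est_time={c.est_round_time:.1f}s loss={c.local_loss:.2f}")

In a real FL round, the selected clients would then receive the global weights, train locally, and return updates for aggregation; the sketch covers only the selection step.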
Pages: 58-65
Number of pages: 8
Related Papers (50 total)
  • [21] A High-Performance Federated Learning Aggregation Algorithm Based on Learning Rate Adjustment and Client Sampling
    Gao, Yulian
    Lu, Gehao
    Gao, Jimei
    Li, Jinggang
    MATHEMATICS, 2023, 11 (20)
  • [22] EFFICIENT CLIENT CONTRIBUTION EVALUATION FOR HORIZONTAL FEDERATED LEARNING
    Zhao, Jie
    Zhu, Xinghua
    Wang, Jianzong
    Xiao, Jing
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3060 - 3064
  • [23] Federated Learning with Client Availability Budgets
    Bao, Yunkai
    Drew, Steve
    Wang, Xin
    Zhou, Jiayu
    Niu, Xiaoguang
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 1902 - 1907
  • [24] Reuse of Client Models in Federated Learning
    Cao, Bokai
    Wu, Weigang
    Zhan, Congcong
    Zhou, Jieying
    2022 IEEE INTERNATIONAL CONFERENCE ON SMART COMPUTING (SMARTCOMP 2022), 2022, : 356 - 361
  • [25] FedACS: an Efficient Federated Learning Method Among Multiple Medical Institutions with Adaptive Client Sampling
    Gu, Yunchao
    Hu, Quanquan
    Wang, Xinliang
    Zhou, Zhong
    Lu, Sixu
    2021 14TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING, BIOMEDICAL ENGINEERING AND INFORMATICS (CISP-BMEI 2021), 2021
  • [26] FedDCS: Federated Learning Framework based on Dynamic Client Selection
    Zou, Shutong
    Xiao, Mingjun
    Xu, Yin
    An, Baoyi
    Zheng, Jun
    2021 IEEE 18TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2021), 2021, : 627 - 632
  • [27] A review on client-server attacks and defenses in federated learning
    Sharma, Anee
    Marchang, Ningrinla
    COMPUTERS & SECURITY, 2024, 140
  • [28] Towards Federated Learning with Byzantine-Robust Client Weighting
    Portnoy, Amit
    Tirosh, Yoav
    Hendler, Danny
    APPLIED SCIENCES-BASEL, 2022, 12 (17)
  • [29] GA Approach to Optimize Training Client Set in Federated Learning
    Kang, Dongseok
    Ahn, Chang Wook
    IEEE ACCESS, 2023, 11 : 85489 - 85500
  • [30] Trust-Augmented Deep Reinforcement Learning for Federated Learning Client Selection
    Rjoub, Gaith
    Wahab, Omar Abdel
    Bentahar, Jamal
    Cohen, Robin
    Bataineh, Ahmed Saleh
    INFORMATION SYSTEMS FRONTIERS, 2024, 26 (04) : 1261 - 1278