Network-aware federated neural architecture search

Cited by: 0
Authors
Ocal, Goktug [1 ]
Ozgovde, Atay [1 ]
Affiliations
[1] Bogazici Univ, TR-34342 Istanbul, Turkiye
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2025, Vol. 162
Keywords
Neural architecture search; Federated learning; Network pruning; Client selection; Client grouping; Network emulation;
DOI
10.1016/j.future.2024.07.053
CLC classification
TP301 [Theory, Methods];
Subject classification code
081202 ;
Abstract
The cooperation between Deep Learning (DL) and edge devices has further advanced technological developments, allowing smart devices to serve as both data sources and endpoints for DL-powered applications. However, the success of DL relies on optimal Deep Neural Network (DNN) architectures, and manually developing such systems requires extensive expertise and time. Neural Architecture Search (NAS) has emerged to automate the search for the best-performing neural architectures. Meanwhile, Federated Learning (FL) addresses data privacy concerns by enabling collaborative model development without exchanging the private data of clients. In an FL system, network limitations can lead to biased model training, slower convergence, and increased communication overhead. On the other hand, traditional DNN architecture design, emphasizing validation accuracy, often overlooks the computational efficiency and size constraints of edge devices. This research aims to develop a comprehensive framework that effectively balances the trade-offs between model performance, communication efficiency, and the incorporation of FL into an iterative NAS algorithm. This framework aims to overcome these challenges by addressing the specific requirements of FL, optimizing DNNs through NAS, and ensuring computational efficiency while considering the network constraints of edge devices. To address these challenges, we introduce Network-Aware Federated Neural Architecture Search (NAFNAS), an open-source federated neural network pruning framework with network emulation support. Through comprehensive testing, we demonstrate the feasibility of our approach, efficiently reducing DNN size and mitigating communication challenges. Additionally, we propose Network and Distribution Aware Client Grouping (NetDAG), a novel client grouping algorithm tailored for FL with diverse DNN architectures, considerably enhancing the efficiency of communication rounds and update balance.
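The record does not detail how NetDAG forms its client groups. Purely as an illustrative sketch of the general idea of network- and distribution-aware grouping, the fragment below greedily groups clients that share a bandwidth tier and have similar label distributions; every name, threshold, and criterion here is a hypothetical assumption, not the authors' actual algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Client:
    cid: str
    bandwidth_mbps: float   # hypothetical: measured uplink bandwidth
    label_hist: List[float] # hypothetical: normalized label distribution

def l1_distance(p: List[float], q: List[float]) -> float:
    """L1 distance between two normalized label histograms."""
    return sum(abs(a - b) for a, b in zip(p, q))

def group_clients(clients: List[Client],
                  bw_threshold: float = 10.0,
                  dist_threshold: float = 0.5) -> List[List[Client]]:
    """Greedy grouping: a client joins an existing group only if it is in
    the same bandwidth tier as the group's first member and its label
    distribution is within dist_threshold of that member; otherwise it
    starts a new group."""
    groups: List[List[Client]] = []
    for c in clients:
        for g in groups:
            rep = g[0]
            same_tier = (c.bandwidth_mbps >= bw_threshold) == \
                        (rep.bandwidth_mbps >= bw_threshold)
            if same_tier and l1_distance(c.label_hist, rep.label_hist) <= dist_threshold:
                g.append(c)
                break
        else:
            groups.append([c])
    return groups
```

Under these assumptions, a fast client with a skewed label distribution ends up separated from fast clients with balanced data, so each group can run communication rounds at a pace and data mix suited to its members.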
Pages: 15