Network-aware federated neural architecture search

Cited by: 0
Authors
Ocal, Goktug [1 ]
Ozgovde, Atay [1 ]
Affiliations
[1] Bogazici Univ, TR-34342 Istanbul, Turkiye
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2025, Vol. 162
Keywords
Neural architecture search; Federated learning; Network pruning; Client selection; Client grouping; Network emulation;
DOI
10.1016/j.future.2024.07.053
Chinese Library Classification
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
The cooperation between Deep Learning (DL) and edge devices has further advanced technological developments, allowing smart devices to serve as both data sources and endpoints for DL-powered applications. However, the success of DL relies on optimal Deep Neural Network (DNN) architectures, and manually developing such systems requires extensive expertise and time. Neural Architecture Search (NAS) has emerged to automate the search for the best-performing neural architectures. Meanwhile, Federated Learning (FL) addresses data privacy concerns by enabling collaborative model development without exchanging clients' private data. In an FL system, network limitations can lead to biased model training, slower convergence, and increased communication overhead. At the same time, traditional DNN architecture design, which emphasizes validation accuracy, often overlooks the computational efficiency and size constraints of edge devices. This research develops a comprehensive framework that balances the trade-offs between model performance and communication efficiency while incorporating FL into an iterative NAS algorithm. The framework addresses the specific requirements of FL, optimizes DNNs through NAS, and ensures computational efficiency under the network constraints of edge devices. To these ends, we introduce Network-Aware Federated Neural Architecture Search (NAFNAS), an open-source federated neural network pruning framework with network emulation support. Through comprehensive testing, we demonstrate the feasibility of our approach, efficiently reducing DNN size and mitigating communication challenges. Additionally, we propose Network and Distribution Aware Client Grouping (NetDAG), a novel client grouping algorithm tailored for FL with diverse DNN architectures, which considerably enhances the efficiency of communication rounds and update balance.
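To make the two core ingredients concrete, the following is a minimal, hypothetical sketch (not the NAFNAS implementation, whose details are not given in this record) of one federated round that combines FedAvg-style weighted aggregation with unstructured magnitude-based pruning; the function names, the toy weight tensors, and the 50% sparsity target are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def fedavg(client_weights, client_sizes):
    """Average client weight tensors, weighted by local dataset size (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One simulated round: three clients report updated weights (stubbed with random
# tensors), the server aggregates them, then prunes the global model.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(4, 4)) for _ in range(3)]
sizes = [100, 50, 150]
global_w = fedavg(clients, sizes)
global_w = magnitude_prune(global_w, sparsity=0.5)
print(f"sparsity after pruning: {np.mean(global_w == 0.0):.2f}")
```

In an actual network-aware setting, the sparsity target would be driven by each client group's link capacity rather than a fixed constant, since the pruned model is what traverses the emulated network each round.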
Pages: 15