Parallel and Memory-Efficient Distributed Edge Learning in B5G IoT Networks

Cited by: 4
Authors
Zhao, Jianxin [1 ]
Vandenhove, Pierre [2 ,3 ,4 ]
Xu, Peng [1 ]
Tao, Hao [5 ]
Wang, Liang [6 ]
Liu, Chi Harold [1 ]
Crowcroft, Jon [6 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] FRS FNRS, UMONS, B-7000 Mons, Belgium
[3] Univ Paris Saclay, CNRS, LMF, F-91190 Gif-sur-Yvette, France
[4] Univ Cambridge, OCaml Labs, Cambridge, England
[5] China Ship Dev & Design Ctr, Wuhan 430064, Peoples R China
[6] Univ Cambridge, Cambridge CB3 0FD, England
Keywords
Edge learning; beyond 5G; memory efficient; backpropagation
DOI
10.1109/JSTSP.2022.3223759
CLC Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
We are witnessing the rapid development of the Internet of Things (IoT), machine learning, and cellular network technologies, which are key components in promoting wireless networks beyond 5G (B5G). The abundance of data generated by numerous IoT devices, such as smart sensors and mobile devices, can be utilised to train intelligent models. However, efficiently utilising IoT networks and edge resources in B5G for model training remains a challenge. In this paper, we propose a parallel training method that uses operators as the scheduling units during training task assignment. In addition, we discuss a pebble-game-based memory-efficient optimisation for training. Experiments on various real-world network architectures show the flexibility of the proposed method and its good performance compared with the state of the art.
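The pebble-game-based memory optimisation mentioned in the abstract trades recomputation for memory, in the spirit of activation checkpointing. The following is a minimal illustrative sketch, not the paper's implementation: it assumes a chain of identical tanh layers, stores activations only at every k-th layer during the forward pass, and recomputes the missing activations segment by segment during backpropagation. All names (layer_forward, forward_with_checkpoints, etc.) are hypothetical.

import math

def layer_forward(x):
    # hypothetical elementwise layer: y = tanh(x)
    return [math.tanh(v) for v in x]

def layer_backward(x, grad_y):
    # gradient of tanh w.r.t. its input, given the layer input x
    return [g * (1.0 - math.tanh(v) ** 2) for v, g in zip(x, grad_y)]

def forward_with_checkpoints(x, n_layers, every):
    # Run the chain, storing layer inputs only at every `every`-th layer.
    assert n_layers % every == 0, "sketch assumes segments of equal length"
    checkpoints = {0: x}
    h = x
    for i in range(n_layers):
        h = layer_forward(h)
        if (i + 1) % every == 0:
            checkpoints[i + 1] = h
    return h, checkpoints

def backward_with_recompute(checkpoints, n_layers, every, grad_out):
    # Backward pass that recomputes non-checkpointed activations per segment.
    grad = grad_out
    for seg_end in range(n_layers, 0, -every):
        seg_start = seg_end - every
        # recompute the layer inputs h_{seg_start} .. h_{seg_end-1}
        acts = [checkpoints[seg_start]]
        for _ in range(seg_start, seg_end - 1):
            acts.append(layer_forward(acts[-1]))
        # local backward pass through the segment, last layer first
        for x in reversed(acts):
            grad = layer_backward(x, grad)
    return grad

if __name__ == "__main__":
    x = [0.1, -0.2, 0.3]
    out, ckpts = forward_with_checkpoints(x, n_layers=8, every=4)
    grad_in = backward_with_recompute(ckpts, n_layers=8, every=4,
                                      grad_out=[1.0] * len(x))
    print(out, grad_in)

With a checkpoint every k layers, peak activation memory drops from O(n) to roughly O(n/k + k) stored states at the cost of one extra forward recomputation per segment, which is the trade-off a pebble-game scheduler optimises over a general computation graph.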
Pages: 222-233
Number of pages: 12