A Machine Learning Framework for Resource Allocation Assisted by Cloud Computing

Cited by: 60
Authors
Wang, Jun-Bo [1 ]
Wang, Junyuan [2 ,6 ]
Wu, Yongpeng [5 ]
Wang, Jin-Yuan
Zhu, Huiling [3 ]
Lin, Min [7 ]
Wang, Jiangzhou [4 ]
Affiliations
[1] Southeast Univ, Nanjing, Jiangsu, Peoples R China
[2] Edge Hill Univ, Dept Comp Sci, Ormskirk, England
[3] Univ Kent, Sch Engn & Digital Arts, Canterbury, Kent, England
[4] Univ Kent, Sch Engn & Digital Arts, Telecommun, Canterbury, Kent, England
[5] Shanghai Jiao Tong Univ, Dept Elect Engn, Shanghai, Peoples R China
[6] Nanjing Univ Posts & Telecommun, Peter Grunberg Res Ctr, Nanjing, Jiangsu, Peoples R China
[7] Nanjing Univ Posts & Telecommun, Nanjing, Jiangsu, Peoples R China
Source
IEEE NETWORK | 2018, Vol. 32, No. 2
Funding
National Natural Science Foundation of China
DOI
10.1109/MNET.2018.1700293
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Conventionally, resource allocation is formulated as an optimization problem and solved online using instantaneous scenario information. Since most resource allocation problems are non-convex, optimal solutions are very difficult to obtain in real time, so Lagrangian relaxation or greedy methods are often employed instead, at the cost of performance loss. Conventional resource allocation methods therefore face great challenges in meeting the ever-increasing QoS requirements of users with scarce radio resources. Assisted by cloud computing, a huge amount of historical scenario data can be collected, and machine learning can be used to extract similarities among scenarios. Moreover, optimal or near-optimal solutions for historical scenarios can be found offline and stored in advance. When the measured data of a new scenario arrives, it is compared with the historical scenarios to find the most similar one, and the optimal or near-optimal solution of that most similar historical scenario is adopted to allocate the radio resources for the current scenario. To facilitate the application of this new design philosophy, a machine learning framework for resource allocation assisted by cloud computing is proposed. An example of beam allocation in multi-user massive MIMO systems shows that the proposed machine-learning-based resource allocation outperforms conventional methods.
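The online step of this framework is essentially a nearest-neighbour lookup over a cloud-hosted database of solved scenarios. The following minimal Python sketch illustrates that matching step; the class name ScenarioAllocator, the fixed-length feature encoding, the Euclidean similarity metric, and the stand-in data are illustrative assumptions, not the implementation described in the paper.

import numpy as np

# Minimal sketch of the online matching step (assumed design, not the
# authors' code): scenarios are encoded as fixed-length feature vectors,
# and optimal or near-optimal allocations for historical scenarios were
# computed offline in the cloud and stored alongside those vectors.

class ScenarioAllocator:
    def __init__(self, features, allocations):
        # features: (N, d) array, one row per historical scenario
        # allocations: N precomputed optimal or near-optimal solutions
        self.features = np.asarray(features, dtype=float)
        self.allocations = list(allocations)

    def allocate(self, scenario):
        # Find the most similar historical scenario (Euclidean distance
        # is an illustrative choice of similarity metric) and reuse its
        # stored allocation for the current scenario.
        dists = np.linalg.norm(self.features - np.asarray(scenario), axis=1)
        return self.allocations[int(np.argmin(dists))]

# Usage with random stand-in data: 1000 historical scenarios described
# by 8 features each (e.g., quantized user channel gains).
rng = np.random.default_rng(0)
historical_features = rng.standard_normal((1000, 8))
stored_solutions = [f"allocation_{i}" for i in range(1000)]  # placeholders
allocator = ScenarioAllocator(historical_features, stored_solutions)
print(allocator.allocate(rng.standard_normal(8)))

In a real deployment, the stored solutions would come from an offline solver run in the cloud, and the feature encoding and similarity metric would be designed around whatever scenario information the allocator can measure online; both are left open by the framework.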
Pages: 144-151 (8 pages)