Deep Reinforcement Learning-Based Mode Selection and Resource Management for Green Fog Radio Access Networks

Cited by: 175
Authors
Sun, Yaohua [1 ]
Peng, Mugen [1 ]
Mao, Shiwen [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Minist Educ, Key Lab Universal Wireless Commun, Beijing 100876, Peoples R China
[2] Auburn Univ, Dept Elect & Comp Engn, Auburn, AL 36849 USA
Funding
National Natural Science Foundation of China; U.S. National Science Foundation;
Keywords
Artificial intelligence; communication mode selection; deep reinforcement learning (DRL); fog radio access networks (F-RANs); resource management; OPTIMIZATION; ALLOCATION; FRAMEWORK;
DOI
10.1109/JIOT.2018.2871020
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Fog radio access networks (F-RANs) are seen as a promising architecture for supporting Internet of Things services by leveraging edge caching and edge computing. However, existing work on resource management in F-RANs mainly considers a static system with only one communication mode. Given network dynamics, resource diversity, and the coupling of resource management with mode selection, resource management in F-RANs becomes very challenging. Motivated by recent developments in artificial intelligence, a deep reinforcement learning (DRL)-based joint mode selection and resource management approach is proposed. Each user equipment (UE) can operate either in cloud RAN (C-RAN) mode or in device-to-device (D2D) mode, and the managed resources include both radio and computing resources. The core idea is that the network controller makes intelligent decisions on UE communication modes and processors' on-off states, with precoding for UEs in C-RAN mode optimized subsequently, aiming to minimize the long-term system power consumption under the dynamics of edge cache states. Simulations demonstrate the impacts of several parameters, such as the learning rate and edge caching service capability, on system performance, and comparisons with other schemes confirm the effectiveness of the proposal. Moreover, transfer learning is integrated with DRL to accelerate the learning process.
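To make the decision structure concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a deep Q-network whose joint action encodes each UE's communication mode (C-RAN or D2D) and the on-off state of each cloud processor, with the reward defined as negative system power consumption. The numbers of UEs and processors, the toy power model, and the random cache-state dynamics are hypothetical placeholders; the paper's subsequent precoding optimization and transfer-learning acceleration are omitted.

```python
# Illustrative sketch only: a toy DQN for joint mode selection and processor on-off control.
# All dimensions and the power model below are assumptions, not the paper's system model.
import random
import torch
import torch.nn as nn

N_UE, N_PROC = 3, 2                       # hypothetical numbers of UEs and cloud processors
N_ACTIONS = (2 ** N_UE) * (2 ** N_PROC)   # joint mode / on-off decision space
STATE_DIM = N_UE                          # toy state: per-UE edge cache hit indicator

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.9, 0.1

def decode(action):
    """Split the joint action index into UE modes (1 = C-RAN, 0 = D2D) and processor on-off bits."""
    modes = [(action >> i) & 1 for i in range(N_UE)]
    procs = [(action >> (N_UE + j)) & 1 for j in range(N_PROC)]
    return modes, procs

def power(state, action):
    """Toy stand-in for system power (active processors plus per-UE transmit power)."""
    modes, procs = decode(action)
    return 10.0 * sum(procs) + sum(2.0 if m else 1.0 for m in modes)

def step(state, action):
    """One transition: reward is negative power; cache states evolve randomly."""
    reward = -power(state, action)
    next_state = torch.randint(0, 2, (STATE_DIM,)).float()
    return reward, next_state

state = torch.randint(0, 2, (STATE_DIM,)).float()
for t in range(500):
    # epsilon-greedy selection over the joint decision space
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = int(q_net(state).argmax())
    reward, next_state = step(state, action)
    # one-step temporal-difference target and gradient update
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_net(state)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state
```

In a fuller treatment, the state would also carry the channel and cache-state information described in the abstract, and experience replay with a target network would stabilize training; the sketch keeps only the core epsilon-greedy TD update for brevity.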
Pages: 1960-1971
Page count: 12
References
39 items in total
[31]   Fog Assisted-IoT Enabled Patient Health Monitoring in Smart Homes [J].
Verma, Prabal ;
Sood, Sandeep K. .
IEEE INTERNET OF THINGS JOURNAL, 2018, 5 (03) :1789-1796
[32]   RF Sensing in the Internet of Things: A General Deep Learning Framework [J].
Wang, Xuyu ;
Wang, Xiangyu ;
Mao, Shiwen .
IEEE COMMUNICATIONS MAGAZINE, 2018, 56 (09) :62-67
[33]   CSI-Based Fingerprinting for Indoor Localization: A Deep Learning Approach [J].
Wang, Xuyu ;
Gao, Lingjun ;
Mao, Shiwen ;
Pandey, Santosh .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2017, 66 (01) :763-776
[34]   Physical-Social-Aware D2D Content Sharing Networks: A Provider-Demander Matching Game [J].
Wu, Dan ;
Zhou, Liang ;
Cai, Yueming ;
Chao, Han-Chieh ;
Qian, Yi .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2018, 67 (08) :7538-7549
[35]   Joint Mode Selection and Resource Allocation for Downlink Fog Radio Access Networks Supported D2D [J].
Xiang, Hongyu ;
Peng, Mugen ;
Cheng, Yuanyuan ;
Chen, Hsiao-Hwa .
PROCEEDINGS OF THE 11TH EAI INTERNATIONAL CONFERENCE ON HETEROGENEOUS NETWORKING FOR QUALITY, RELIABILITY, SECURITY AND ROBUSTNESS, 2015, :177-182
[36]  
Xu ZY, 2017, IEEE ICC
[37]   An Evolutionary Game for User Access Mode Selection in Fog Radio Access Networks [J].
Yan, Shi ;
Peng, Mugen ;
Abana, Munzali Ahmed ;
Wang, Wenbo .
IEEE ACCESS, 2017, 5 :2200-2210
[38]   On Reducing IoT Service Delay via Fog Offloading [J].
Yousefpour, Ashkan ;
Ishigaki, Genya ;
Gour, Riti ;
Jue, Jason P. .
IEEE INTERNET OF THINGS JOURNAL, 2018, 5 (02) :998-1010
[39]   Joint Mode Selection and Resource Allocation for Device-to-Device Communications [J].
Yu, Guanding ;
Xu, Lukai ;
Feng, Daquan ;
Yin, Rui ;
Li, Geoffrey Ye ;
Jiang, Yuhuan .
IEEE TRANSACTIONS ON COMMUNICATIONS, 2014, 62 (11) :3814-3824