QoE-Based Task Offloading With Deep Reinforcement Learning in Edge-Enabled Internet of Vehicles

Cited by: 92
Authors
He, Xiaoming [1 ]
Lu, Haodong [2 ]
Du, Miao [2 ]
Mao, Yingchi [1 ]
Wang, Kun [3 ]
Affiliations
[1] Hohai Univ, Coll Comp & Informat, Nanjing 210098, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Coll Internet Things, Nanjing 210003, Peoples R China
[3] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90095 USA
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Quality of experience; Servers; Training; Computational modeling; Energy consumption; Convergence; Internet of vehicles (IoV); edge; task offloading; deep deterministic policy gradients (DDPG); QoE; RESOURCE-ALLOCATION;
DOI
10.1109/TITS.2020.3016002
CLC Number
TU [Building Science];
Discipline Code
0813;
Abstract
In the transportation industry, task offloading services in edge-enabled Internet of Vehicles (IoV) are expected to provide vehicles with better Quality of Experience (QoE). However, the varying status of diverse edge servers and vehicles, as well as the different vehicular offloading modes, make task offloading services challenging. Therefore, to enhance QoE satisfaction, we first introduce a novel QoE model. Specifically, the proposed QoE model is constrained by energy consumption and accounts for three factors: 1) intelligent vehicles equipped with caching space and computing units may act as carriers; 2) the various computational and caching capacities of edge servers can empower the offloading; and 3) the unpredictable routes of vehicles and edge servers can lead to diverse information transmission. We then propose an improved deep reinforcement learning (DRL) algorithm, named PS-DDPG, which augments deep deterministic policy gradients (DDPG) with prioritized experience replay (PER) and stochastic weight averaging (SWA) to seek an optimal offloading mode while saving energy. Specifically, the PER scheme enhances the utility of the experience replay buffer, thus accelerating training. Moreover, the SWA scheme averages weights, reducing noise in the training process and thereby stabilizing rewards. Extensive experiments demonstrate the superior stability and convergence of PS-DDPG compared with existing work, and further indicate that the proposed algorithm improves the QoE value.
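The two mechanisms the abstract names, prioritized experience replay (proportional sampling by TD error) and stochastic weight averaging (an elementwise mean over weight snapshots), can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's PS-DDPG implementation; the class and function names below are hypothetical.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch: transitions with larger TD error
    are sampled more often, improving the utility of the replay buffer."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.buffer = []
        self.priorities = []
        self.pos = 0                # ring-buffer write position

    def add(self, transition, td_error=1.0):
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:                        # overwrite oldest entry
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        indices = random.choices(range(len(self.buffer)),
                                 weights=probs, k=batch_size)
        return [self.buffer[i] for i in indices], indices


def swa_average(weight_snapshots):
    """SWA sketch: elementwise mean of periodic weight snapshots,
    which smooths training noise and stabilizes the learned policy."""
    n = len(weight_snapshots)
    return [sum(ws) / n for ws in zip(*weight_snapshots)]
```

In a DDPG-style loop, `sample` would draw each training mini-batch, and `swa_average` would be applied to checkpoints collected late in training to produce the final actor weights.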
Pages: 2252-2261
Page count: 10
Related Papers
28 records in total
[1] Chen, Jienan; Chen, Siyu; Wang, Qi; Cao, Bin; Feng, Gang; Hu, Jianhao. iRAF: A Deep Reinforcement Learning Approach for Collaborative Mobile Edge Computing IoT Networks [J]. IEEE INTERNET OF THINGS JOURNAL, 2019, 6(04): 7011-7024.
[2] Chen, Xu; Li, Wenzhong; Lu, Sanglu; Zhou, Zhi; Fu, Xiaoming. Efficient Resource Allocation for On-Demand Mobile-Edge Cloud Computing [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2018, 67(09): 8769-8780.
[3] Dai, Yueyue; Xu, Du; Zhang, Ke; Maharjan, Sabita; Zhang, Yan. Deep Reinforcement Learning and Permissioned Blockchain for Content Caching in Vehicular Edge Computing and Networks [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69(04): 4312-4324.
[4] Du, Miao; Wang, Kun; Chen, Yuanfang; Wang, Xiaoyan; Sun, Yanfei. Big Data Privacy Preserving in Multi-Access Edge Computing for Heterogeneous Internet of Things [J]. IEEE COMMUNICATIONS MAGAZINE, 2018, 56(08): 62-67.
[5] Gu, Lin; Zeng, Deze; Li, Wei; Guo, Song; Zomaya, Albert Y.; Jin, Hai. Deep Reinforcement Learning based VNF Management in Geo-distributed Edge Computing [J]. 2019 39TH IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2019), 2019: 934-943.
[6] He, Xiaoming; Wang, Kun; Xu, Wenyao. QoE-Driven Content-Centric Caching With Deep Reinforcement Learning in Edge-Enabled IoT [J]. IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2019, 14(04): 12-20.
[7] He, Xiaoming; Wang, Kun; Huang, Huawei; Miyazaki, Toshiaki; Wang, Yixuan; Guo, Song. Green Resource Allocation Based on Deep Reinforcement Learning in Content-Centric IoT [J]. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2020, 8(03): 781-796.
[8] He, Xiaoming; Wang, Kun; Huang, Huawei; Liu, Bo. QoE-Driven Big Data Architecture for Smart City [J]. IEEE COMMUNICATIONS MAGAZINE, 2018, 56(02): 88-93.
[9] Hong, Sung-Tae; Kim, Hyoil. QoE-Aware Computation Offloading to Capture Energy-Latency-Pricing Tradeoff in Mobile Clouds [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2019, 18(09): 2174-2189.
[10] Hou YN. IEEE SYS MAN CYBERN, 2017: 316. DOI 10.1109/SMC.2017.8122622.