Collaborative Computing in Vehicular Networks: A Deep Reinforcement Learning Approach

Cited by: 15
Authors
Li, Mushu [1 ]
Gao, Jie [1 ]
Zhang, Ning [2 ]
Zhao, Lian [3 ]
Shen, Xuemin [1 ]
Affiliations
[1] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON, Canada
[2] Texas A&M Univ Corpus Christi, Dept Comp Sci, Corpus Christi, TX USA
[3] Ryerson Univ, Dept Elect Comp & Biomed Engn, Toronto, ON, Canada
Source
ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC) | 2020
Keywords
mobile edge computing; vehicular network; deep reinforcement learning; computing offloading;
DOI
10.1109/icc40277.2020.9149333
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Mobile edge computing (MEC) has been recognized as a promising technology to support various emerging services in vehicular networks. With MEC, vehicle users can offload their computation-intensive applications (e.g., intelligent path planning and safety applications) to edge computing servers located at roadside units. In this paper, an efficient computing offloading and server collaboration approach is proposed to reduce computing service delay and improve service reliability for vehicle users. Task partition is adopted, whereby the computation load offloaded by a vehicle can be divided and distributed to multiple edge servers. With the proposed approach, computation delay can be reduced through parallel computing, and failures in computing result delivery can be alleviated via cooperation among edge servers. The offloading and computing decision-making is formulated as a long-term planning problem, and a deep reinforcement learning technique, i.e., deep deterministic policy gradient (DDPG), is adopted to solve the resulting complex stochastic nonlinear integer optimization problem. Simulation results show that the proposed collaborative computing approach can adapt to different service environments and outperforms a greedy offloading approach.
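The delay benefit of task partition claimed in the abstract can be illustrated with a minimal sketch (the function names are hypothetical, and transmission and result-delivery delays are ignored for simplicity): if the offloaded computation load is split across edge servers in proportion to their computing rates, all partitions finish at the same time, so the overall delay is governed by the aggregate rate rather than any single server's rate.

```python
def parallel_offload_delay(task_cycles, server_rates):
    """Delay (seconds) when a task of `task_cycles` CPU cycles is
    partitioned across edge servers in proportion to their computing
    rates (cycles/s), so that every partition finishes simultaneously."""
    total_rate = sum(server_rates)
    # Server i receives task_cycles * f_i / total_rate cycles, hence
    # each server's finish time is task_cycles / total_rate.
    return task_cycles / total_rate

def single_server_delay(task_cycles, server_rates):
    """Delay (seconds) when the whole task goes to the fastest server."""
    return task_cycles / max(server_rates)

# Example: a 3-Gcycle task, two servers at 1 GHz and 2 GHz.
# Partitioned across both servers, the task takes 3e9 / 3e9 = 1.0 s,
# versus 3e9 / 2e9 = 1.5 s on the fastest server alone.
print(parallel_offload_delay(3e9, [1e9, 2e9]))  # 1.0
print(single_server_delay(3e9, [1e9, 2e9]))     # 1.5
```

This toy model captures only the parallel-computing gain; the paper's actual formulation additionally accounts for stochastic service environments and result-delivery reliability, which is why a DDPG agent is used rather than a closed-form split.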
Pages: 6