QoS based Deep Reinforcement Learning for V2X Resource Allocation

Cited by: 7
Authors
Bhadauria, Shubhangi [1 ]
Shabbir, Zohaib [2 ]
Roth-Mandutz, Elke [1 ]
Fischer, Georg [3 ]
Affiliations
[1] Fraunhofer IIS, Erlangen, Germany
[2] Altran Technol, Wolfsburg, Germany
[3] Friedrich Alexander Univ FAU, Erlangen, Germany
Source
2020 IEEE INTERNATIONAL BLACK SEA CONFERENCE ON COMMUNICATIONS AND NETWORKING (BLACKSEACOM) | 2020
Keywords
V2X communication; QoS; DRL; resource allocation;
DOI
10.1109/blackseacom48709.2020.9234960
CLC number
TP3 [Computing technology; computer technology];
Discipline code
0812;
Abstract
The 3rd Generation Partnership Project (3GPP) standard has introduced vehicle-to-everything (V2X) communication in Long Term Evolution (LTE) to pave the way for future intelligent transport solutions. V2X communication is envisioned to support a diverse range of use cases, e.g., cooperative collision avoidance and infotainment, with stringent quality of service (QoS) requirements. These requirements range from ultra-reliable low latency to high data rates, depending on the supported application. This paper presents a QoS-aware decentralized resource allocation scheme for V2X communication based on a deep reinforcement learning (DRL) framework. The proposed scheme incorporates an independent QoS parameter, namely the priority associated with each V2X message, which reflects the required latency at both the user equipment (UE) and the base station. The goal of the approach is to maximize the throughput of all vehicle-to-infrastructure (V2I) links while meeting the latency constraints of the vehicle-to-vehicle (V2V) links associated with the respective priority. The algorithm is evaluated through system-level simulations of both urban and highway scenarios. The results show that incorporating the QoS parameter (i.e., priority) pertaining to the type of service supported is crucial for meeting the latency requirements of mission-critical V2X applications.
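The paper does not spell out its reward formulation in the abstract, but the stated objective (maximize V2I sum throughput while meeting priority-dependent V2V latency budgets) can be sketched as a per-step DRL reward. The following is a minimal illustration, not the authors' actual design: the function name `step_reward`, the priority-to-weight table, and all constants are hypothetical choices made for the example.

```python
import numpy as np

# Hypothetical mapping from a 3GPP-style priority class to a latency-penalty
# weight: higher-priority (more latency-critical) V2V traffic is penalized
# more heavily while its payload is still undelivered. Values are illustrative.
PRIORITY_WEIGHT = {1: 4.0, 2: 2.0, 3: 1.0}

def step_reward(v2i_rates_mbps, v2v_remaining_bits, v2v_time_left_ms,
                v2v_priorities, lam=0.1):
    """Per-step reward: V2I sum throughput minus a priority-weighted
    penalty for V2V links that have not yet delivered their payload.

    v2i_rates_mbps     : achieved rate of each V2I link this step
    v2v_remaining_bits : undelivered payload per V2V link (0 = delivered)
    v2v_time_left_ms   : time remaining in each V2V link's latency budget
    v2v_priorities     : priority class per V2V link (key into PRIORITY_WEIGHT)
    lam                : trade-off between throughput and latency penalty
    """
    throughput = float(np.sum(v2i_rates_mbps))
    penalty = 0.0
    for bits, t_left, prio in zip(v2v_remaining_bits,
                                  v2v_time_left_ms, v2v_priorities):
        if bits > 0:  # payload not yet delivered within the budget
            urgency = 1.0 / max(t_left, 1e-3)  # grows as the deadline nears
            penalty += PRIORITY_WEIGHT[prio] * urgency
    return throughput - lam * penalty
```

Under this sketch, a decentralized agent choosing sub-channels and power levels would be pushed toward actions that keep V2I rates high while clearing high-priority V2V payloads well before their deadlines, since the penalty term sharpens as the remaining budget shrinks.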
Pages: 6