Benchmarking Q-Learning Methods for Intelligent Network Orchestration in the Edge

Cited: 0
Authors
Reijonen, Joel [1 ]
Opsenica, Miljenko [1 ]
Kauppinen, Tero [1 ]
Komu, Miika [1 ]
Kjallman, Jimmy [1 ]
Mecklin, Tomas [1 ]
Hiltunen, Eero [1 ]
Arkko, Jan [1 ]
Simanainen, Timo [1 ]
Elmusrati, Mohammed [2 ]
Affiliations
[1] Ericsson Res, Jorvas, Finland
[2] Univ Vaasa, Vaasa, Finland
Source
2020 2ND 6G WIRELESS SUMMIT (6G SUMMIT) | 2020
Keywords
Q-learning; edge; intelligent orchestration;
DOI
10.1109/6gsummit49458.2020.9083745
CLC Classification Number
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
We benchmark Q-learning methods, with various action selection strategies, in intelligent orchestration of the network edge. Q-learning is a reinforcement learning technique that aims to find optimal action policies by exploiting past experiences, without utilizing a model that describes the dynamics of the environment. By experiences, we refer to the observed causality between an action and its corresponding impact on the environment. In this paper, the environment for Q-learning is composed of virtualized networking resources along with their dynamics, which are monitored with Spindump, an in-network latency measurement tool with support for QUIC and TCP. We optimize the orchestration of these networking resources by introducing Q-learning as part of the machine-learning-driven, intelligent orchestration that is applicable in the edge. Based on the benchmarking results, we identify which action selection strategies support network orchestration that provides low latency and low packet loss by considering network resource allocation in the edge.
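To make the abstract's description of model-free Q-learning and action selection concrete, the following is a minimal tabular sketch, not the authors' implementation: the state space, action space, step() dynamics, and reward are hypothetical placeholders for the monitored edge environment, while epsilon-greedy and Boltzmann (softmax) selection stand in for the kinds of strategies such a benchmark compares.

import numpy as np

rng = np.random.default_rng(0)

N_STATES = 5      # hypothetical: discretized latency / packet-loss levels
N_ACTIONS = 3     # hypothetical: scale down / keep / scale up a resource
ALPHA = 0.1       # learning rate
GAMMA = 0.9       # discount factor

Q = np.zeros((N_STATES, N_ACTIONS))

def epsilon_greedy(state, epsilon=0.1):
    # Explore with probability epsilon, otherwise pick the greedy action.
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def boltzmann(state, temperature=1.0):
    # Sample an action from a softmax over Q-values (Boltzmann exploration).
    prefs = Q[state] / temperature
    prefs -= prefs.max()                      # numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(N_ACTIONS, p=probs))

def step(state, action):
    # Toy placeholder dynamics for the monitored network environment.
    next_state = int(rng.integers(N_STATES))
    reward = -float(next_state)               # lower latency level -> higher reward
    return next_state, reward

# Model-free update driven purely by observed (s, a, r, s') experience,
# with no explicit model of the environment's dynamics.
state = int(rng.integers(N_STATES))
for _ in range(10_000):
    action = epsilon_greedy(state)            # or boltzmann(state)
    next_state, reward = step(state, action)
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, action] += ALPHA * (td_target - Q[state, action])
    state = next_state

print(Q)

Swapping epsilon_greedy for boltzmann in the loop is the kind of change whose effect on latency and packet loss the paper's benchmark evaluates.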
Pages: 5