Online QoS/QoE-Driven SFC Orchestration Leveraging a DRL Approach in SDN/NFV Enabled Networks

Cited: 1
Authors
Escheikh, Mohamed [1]
Taktak, Wiem [1]
Affiliations
[1] Univ Tunis El Manar, Syscom Lab, ENIT Tunis, BP 37, Tunis 1002, Tunisia
Keywords
SDN/NFV; QoS/QoE; SFC orchestration; DRL; Double DQN; RESOURCE OPTIMIZATION; FUNCTION PLACEMENT; NFV;
DOI
10.1007/s11277-024-11389-5
Chinese Library Classification
TN [Electronic technology, communication technology];
Discipline Code
0809
Abstract
The proliferation of an ever-increasing number of highly heterogeneous smart devices and the emergence of a wide range of diverse applications in 5G mobile network ecosystems raise a new set of challenges related to agile and automated service orchestration and management. Fully leveraging key enabling technologies such as software-defined networking (SDN), network function virtualization (NFV) and machine learning in such environments is of paramount importance for addressing service function chaining (SFC) orchestration according to user requirements and network constraints. To meet these challenges, we propose in this paper a deep reinforcement learning (DRL) approach to the online quality of experience (QoE)/quality of service (QoS) aware SFC orchestration problem. The objective is to achieve intelligent, elastic and automated virtual network function deployment that optimizes QoE while respecting QoS constraints. We implement the DRL approach through the Double Deep Q-Network (DDQN) algorithm. We conduct experimental simulations to examine agent behavior during a learning phase followed by a testing and evaluation phase, for two physical substrate network scales. The testing phase is defined as the last 100 runs of the learning phase, during which the agent reaches on average a QoE threshold score ($QoE_{Th-Sc}$).
In a first set of experiments, we highlight the impact of tuning the hyper-parameters (learning rate (LR) and batch size (BS)) on better solving the sequential decision problem related to SFC orchestration for a given $QoE_{Th-Sc}$. This investigation leads us to choose the most suitable pair (LR, BS) enabling acceptable learning quality. In a second set of experiments, we examine the DRL agent's capacity to enhance learning quality while meeting a performance-convergence compromise. This is achieved by progressively increasing $QoE_{Th-Sc}$.
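The abstract names Double DQN as the learning algorithm but gives no implementation details here. The core idea, decoupling greedy action selection (online network) from action evaluation (target network) to curb Q-value overestimation, can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the function name, array shapes and values are illustrative, not the authors' code, and the paper's actual SFC state/action encoding is not reproduced.

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Compute Double DQN regression targets for a batch of transitions.

    rewards       : (B,) immediate rewards
    dones         : (B,) 1.0 if the transition ended the episode, else 0.0
    q_online_next : (B, A) online-network Q-values for the next states
    q_target_next : (B, A) target-network Q-values for the next states
    """
    # Online network *selects* the greedy next action ...
    next_actions = np.argmax(q_online_next, axis=1)
    # ... while the frozen target network *evaluates* it.
    next_q = q_target_next[np.arange(len(next_actions)), next_actions]
    # Bootstrapped target, truncated at terminal states.
    return rewards + gamma * (1.0 - dones) * next_q
```

In a training loop these targets would be regressed against the online network's Q-values for the taken actions, with the learning rate and batch size (the LR/BS pair tuned in the paper's first set of experiments) controlling that gradient step.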
Pages: 1511-1538 (28 pages)
Related Papers (31 total)
  • [21] Deep Reinforcement Learning Approach to QoE-Driven Resource Allocation for Spectrum Underlay in Cognitive Radio Networks
    Shah-Mohammadi, Fatemeh
    Kwasinski, Andres
    2018 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2018,
  • [23] A game-theoretic learning approach to QoE-driven resource allocation scheme in 5G-enabled IoT
    Dai, Haibo
    Zhang, Haiyang
    Wu, Wei
    Wang, Baoyun
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2019, 2019 (1)
  • [24] On-Policy Versus Off-Policy Reinforcement Learning for Multi-Domain SFC Embedding in SDN/NFV-Enabled Networks
    Zhao, Donghao
    Shi, Wei
    Lu, Yu
    Li, Xi
    Liu, Yicen
    IEEE ACCESS, 2024, 12 : 123049 - 123070
  • [25] Cost-Aware Dynamic SFC Mapping and Scheduling in SDN/NFV-Enabled Space-Air-Ground-Integrated Networks for Internet of Vehicles
    Li, Junling
    Shi, Weisen
    Wu, Huaqing
    Zhang, Shan
    Shen, Xuemin
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (08): : 5824 - 5838
  • [26] Dynamic Service Function Chain Orchestration for NFV/MEC-Enabled IoT Networks: A Deep Reinforcement Learning Approach
    Liu, Yicen
    Lu, Hao
    Li, Xi
    Zhang, Yang
    Xi, Leiping
    Zhao, Donghao
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (09) : 7450 - 7465
  • [27] A power-efficient and performance-aware online virtual network function placement in SDN/NFV-enabled networks
    Zahedi, Seyed Reza
    Jamali, Shahram
    Bayat, Peyman
    COMPUTER NETWORKS, 2022, 205
  • [28] DeepSelector: A Deep Learning-Based Virtual Network Function Placement Approach in SDN/NFV-Enabled Networks
    Yue, Yi
    Tang, Xiongyan
    Liang, Ying-Chang
    Cao, Chang
    Xu, Lexi
    Yang, Wencong
    Zhang, Zhiyan
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (03) : 1759 - 1773
  • [29] A Buffer-Aware QoS Streaming Approach for SDN-Enabled 5G Vehicular Networks
    Lai, Chin-Feng
    Chang, Yao-Chung
    Chao, Han-Chieh
    Hossain, M. Shamim
    Ghoneim, Ahmed
    IEEE COMMUNICATIONS MAGAZINE, 2017, 55 (08) : 68 - 73
  • [30] Proactive and AoI-Aware Failure Recovery for Stateful NFV-Enabled Zero-Touch 6G Networks: Model-Free DRL Approach
    Shaghaghi, Amirhossein
    Zakeri, Abolfazl
    Mokari, Nader
    Javan, Mohammad Reza
    Behdadfar, Mohammad
    Jorswieck, Eduard A.
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2022, 19 (01): : 437 - 451