Deep Reinforcement Learning-Based Multipath Routing for LEO Megaconstellation Networks

Cited: 2
Authors
Han, Chi [1 ]
Xiong, Wei [1 ,2 ]
Yu, Ronghuan [1 ,2 ]
Affiliations
[1] Space Engn Univ, Natl Key Lab Space Target Awareness, Beijing 101400, Peoples R China
[2] Space Engn Univ, Sch Space Informat, Beijing 101400, Peoples R China
Keywords
satellite network; multipath routing; deep reinforcement learning; traffic scheduling; hop count; GRAPH NEURAL-NETWORKS; TRAFFIC CONTROL; OPTIMIZATION; CHALLENGES;
DOI
10.3390/electronics13153054
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The expansion of megaconstellation networks (MCNs) is a promising approach to achieving global Internet coverage. To meet the growing demand for satellite services, multipath routing establishes multiple transmission paths simultaneously, enabling flows to be transmitted in parallel. Nevertheless, the mobility of satellites and time-varying link states present a challenge for optimal path discovery and traffic scheduling in multipath routing. Given the inflexibility of traditional static deep reinforcement learning (DRL)-based routing algorithms in dealing with time-varying constellation topologies, DRL-based multipath routing (DMR) enabled by a graph neural network (GNN) is proposed to enhance the transmission performance of MCNs. DMR decouples the stochastic optimization problem of multipath routing under traffic and bandwidth constraints into two subproblems: multipath route discovery and multipath traffic scheduling. First, the minimum hop count-based multipath route discovery algorithm (MHMRD) is proposed to compute multiple available paths between all source and destination nodes. Second, the GNN-based multipath traffic scheduling scheme (GMTS) is proposed to dynamically schedule the traffic on each available path for each data stream, based on the state information of inter-satellite links (ISLs) and the traffic demand. Simulation results demonstrate that the proposed scheme can be scaled to constellations with different configurations without repeated training, and that it improves throughput, completion ratio, and delay by 42.64%, 17.39%, and 3.66%, respectively, in comparison with the shortest path first (SPF) algorithm.
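As a hedged illustration of the route discovery step described above, the sketch below computes multiple minimum-hop paths between a source and a destination satellite over an ISL graph. It is not the paper's exact MHMRD procedure: the adjacency-dict topology snapshot, the function names, and the link-disjoint simplification (removing the ISLs of each found path before searching for the next one) are assumptions made for illustration only.

```python
from collections import deque
from typing import Dict, List, Set

# Minimal sketch of minimum-hop multipath discovery over an ISL graph.
# Assumptions (not taken from the paper): the topology snapshot is an
# adjacency dict, and additional paths are made link-disjoint by removing
# the ISLs used by each previously found path.

Graph = Dict[str, Set[str]]

def bfs_shortest_path(graph: Graph, src: str, dst: str) -> List[str]:
    """Return one minimum-hop path from src to dst, or [] if none exists."""
    parents = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Reconstruct the path by walking back through the parents.
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in graph.get(node, ()):  # current ISL neighbours
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    return []

def min_hop_multipaths(graph: Graph, src: str, dst: str, k: int = 3) -> List[List[str]]:
    """Collect up to k link-disjoint minimum-hop paths between src and dst."""
    # Work on a copy so the constellation snapshot is not modified.
    residual = {node: set(nbrs) for node, nbrs in graph.items()}
    paths: List[List[str]] = []
    for _ in range(k):
        path = bfs_shortest_path(residual, src, dst)
        if not path:
            break
        paths.append(path)
        # Remove the used ISLs so the next path is link-disjoint.
        for u, v in zip(path, path[1:]):
            residual[u].discard(v)
            residual[v].discard(u)
    return paths

if __name__ == "__main__":
    # Toy six-satellite topology (hypothetical labels, not a real constellation).
    isl_graph = {
        "S1": {"S2", "S4"},
        "S2": {"S1", "S3", "S5"},
        "S3": {"S2", "S6"},
        "S4": {"S1", "S5"},
        "S5": {"S2", "S4", "S6"},
        "S6": {"S3", "S5"},
    }
    print(min_hop_multipaths(isl_graph, "S1", "S6", k=2))
```

On the toy topology, the call returns two disjoint three-hop paths, S1-S2-S3-S6 and S1-S4-S5-S6. In DMR, paths discovered in this stage would then be handed to the GNN-based traffic scheduler (GMTS), which splits each flow across its available paths according to ISL state and traffic demand.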
Pages: 20
Related Papers (50 in total)
  • [41] Deep Reinforcement Learning-Based Relay Selection in Intelligent Reflecting Surface Assisted Cooperative Networks
    Huang, Chong
    Chen, Gaojie
    Gong, Yu
    Wen, Miaowen
    Chambers, Jonathon A.
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2021, 10 (05) : 1036 - 1040
  • [42] DeepRLB: A deep reinforcement learning-based load balancing in data center networks
    Rikhtegar, Negar
    Bushehrian, Omid
    Keshtgari, Manijeh
    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2021, 34 (15)
  • [43] Distributed Deep Reinforcement Learning-Based Spectrum and Power Allocation for Heterogeneous Networks
    Yang, Helin
    Zhao, Jun
    Lam, Kwok-Yan
    Xiong, Zehui
    Wu, Qingqing
    Xiao, Liang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (09) : 6935 - 6948
  • [44] Deep Reinforcement Learning-based Spectrum Allocation and Power Management for IAB Networks
    Cheng, Qingqing
    Wei, Zhiqiang
    Yuan, Jinhong
    2021 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2021,
  • [45] Deep Reinforcement Learning-Based Radar Network Target Assignment
    Meng, Fanqing
    Tian, Kangsheng
    Wu, Changfei
    IEEE SENSORS JOURNAL, 2021, 21 (14) : 16315 - 16327
  • [46] Deep reinforcement learning-based approach for rumor influence minimization in social networks
    Jiang, Jiajian
    Chen, Xiaoliang
    Huang, Zexia
    Li, Xianyong
    Du, Yajun
    APPLIED INTELLIGENCE, 2023, 53 (17) : 20293 - 20310
  • [47] A Deep Reinforcement Learning-Based Caching Strategy for IoT Networks With Transient Data
    Wu, Hongda
    Nasehzadeh, Ali
    Wang, Ping
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (12) : 13310 - 13319
  • [48] Deep Reinforcement Learning-Based RMSA Policy Distillation for Elastic Optical Networks
    Tang, Bixia
    Huang, Yue-Cai
    Xue, Yun
    Zhou, Weixing
    MATHEMATICS, 2022, 10 (18)
  • [49] DEEP REINFORCEMENT LEARNING-BASED IRRIGATION SCHEDULING
    Yang, Y.
    Hu, J.
    Porter, D.
    Marek, T.
    Heflin, K.
    Kong, H.
    Sun, L.
    TRANSACTIONS OF THE ASABE, 2020, 63 (03) : 549 - 556