Collaborative Computing in Non-Terrestrial Networks: A Multi-Time-Scale Deep Reinforcement Learning Approach

Cited by: 2
Authors
Cao, Yang [1 ]
Lien, Shao-Yu [2 ]
Liang, Ying-Chang [3 ]
Niyato, Dusit [4 ]
Shen, Xuemin [5 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu 611756, Peoples R China
[2] Natl Yang Ming Chiao Tung Univ, Inst Intelligent Syst, Tainan 711, Taiwan
[3] Univ Elect Sci & Technol China, Ctr Intelligent Networking & Commun CINC, Chengdu 611731, Peoples R China
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[5] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON, Canada
Keywords
Low earth orbit satellites; Satellite broadcasting; Satellites; Optimization; Convergence; Resource management; 3GPP; Non-terrestrial networks (NTNs); earth-fixed cell; beam management; resource allocation; deep reinforcement learning (DRL); multi-time-scale Markov decision processes (MMDPs)
DOI
10.1109/TWC.2023.3323554
CLC Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Constructing earth-fixed cells with low-earth orbit (LEO) satellites in non-terrestrial networks (NTNs) has emerged as the most promising paradigm for enabling global coverage. However, the limited computing capabilities of LEO satellites make it a critical challenge to solve resource optimization problems within a short duration. Although the abundant computing capabilities of ground infrastructures can be exploited to assist the LEO satellite, the different control-cycle time scales and the coupled decisions between the space and ground segments still obstruct a joint optimization design for computing agents in the two segments. To address these challenges, this paper develops a multi-time-scale deep reinforcement learning (DRL) scheme for radio resource optimization in NTNs, in which the LEO satellite and the user equipment (UE) collaborate to perform individual decision-making tasks with different control cycles. Specifically, the UE updates its policy toward improving the value functions of both the satellite and the UE, while the LEO satellite only performs a finite-step rollout for its decision-making, based on the reference decision trajectory provided by the UE. Most importantly, a rigorous analysis guaranteeing the performance convergence of the proposed scheme is provided. Comprehensive simulations justify the effectiveness of the proposed scheme in balancing transmission performance and computational complexity.
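The collaboration pattern described in the abstract can be illustrated with a minimal two-time-scale sketch (not from the paper; the toy environment, the tabular Q-learning update, the time-scale ratio, and the rollout depth below are all assumptions made for illustration): a UE agent updates its policy at every fine-grained slot, while a satellite agent re-plans only once per coarse control cycle via a finite-step rollout that follows the UE-provided reference decision trajectory.

```python
# Illustrative sketch only; all names, dynamics, and parameters are assumptions,
# not the paper's implementation.
import random

FINE_STEPS_PER_COARSE_CYCLE = 10   # assumed ratio of UE-to-satellite control cycles
ROLLOUT_HORIZON = 3                # assumed finite rollout depth for the satellite
NUM_STATES, NUM_ACTIONS = 5, 3     # toy problem size


class ToyEnvModel:
    """Stand-in environment model shared by both agents (purely illustrative)."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # Random transition targets and rewards for each (state, action) pair.
        self.next_state = [[rng.randrange(NUM_STATES) for _ in range(NUM_ACTIONS)]
                           for _ in range(NUM_STATES)]
        self.reward = [[rng.random() for _ in range(NUM_ACTIONS)]
                       for _ in range(NUM_STATES)]

    def step(self, state, action):
        return self.next_state[state][action], self.reward[state][action]


def ue_update(q, state, action, reward, next_state, lr=0.1, gamma=0.9):
    """One tabular Q-learning step for the UE (stand-in for its DRL policy update)."""
    q[state][action] += lr * (reward + gamma * max(q[next_state]) - q[state][action])


def satellite_rollout(model, state, reference_traj):
    """Satellite decision via finite-step rollout: try each first action, then
    follow the UE-provided reference trajectory, and keep the best simulated return."""
    best_action, best_return = 0, float("-inf")
    for first in range(NUM_ACTIONS):
        s, total = state, 0.0
        for a in [first] + list(reference_traj[:ROLLOUT_HORIZON - 1]):
            s, r = model.step(s, a)
            total += r
        if total > best_return:
            best_action, best_return = first, total
    return best_action


if __name__ == "__main__":
    model = ToyEnvModel()
    q = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]
    state, sat_action = 0, 0
    for t in range(200):                               # fine-grained (UE) time slots
        if t % FINE_STEPS_PER_COARSE_CYCLE == 0:       # coarse (satellite) control cycle
            greedy = max(range(NUM_ACTIONS), key=lambda a: q[state][a])
            reference = [greedy] * ROLLOUT_HORIZON     # UE's reference decision trajectory
            sat_action = satellite_rollout(model, state, reference)
        ue_action = max(range(NUM_ACTIONS), key=lambda a: q[state][a])
        # The environment reacts to the UE action; the satellite action is held
        # fixed within the coarse cycle (the space-ground coupling is only sketched here).
        next_state, reward = model.step(state, ue_action)
        ue_update(q, state, ue_action, reward, next_state)
        state = next_state
    print("last satellite rollout decision:", sat_action)
    print("sample UE Q-values for state 0:", [round(v, 2) for v in q[0]])
```

In the paper's actual scheme the UE runs a DRL policy update toward the joint value functions of both agents; tabular Q-learning and a toy transition model are used here purely to keep the sketch self-contained and runnable.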
Pages: 4932-4949
Page count: 18