Learning State-Augmented Policies for Information Routing in Communication Networks

Cited by: 0
Authors
Das, Sourajit [1 ]
Naderializadeh, Navid [3 ]
Ribeiro, Alejandro [2 ]
Affiliations
[1] Univ Penn, Philadelphia, PA 19104 USA
[2] Univ Penn, Elect & Syst Engn, Philadelphia, PA 19104 USA
[3] Duke Univ, Dept Biostatist & Bioinformat, Durham, NC 27705 USA
Keywords
Routing; Communication networks; Graph neural networks; Optimization; Resource management; Vectors; Training; Wireless networks; Convergence; Channel capacity; Information routing; communication networks; state augmentation; graph neural networks; unsupervised learning; POWER ALLOCATION; WIRELESS;
DOI
10.1109/TSP.2024.3516556
CLC (Chinese Library Classification) number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline classification code
0808 ; 0809 ;
Abstract
This paper examines the problem of information routing in a large-scale communication network, which can be formulated as a constrained statistical learning problem with access to only local information. We delineate a novel State Augmentation (SA) strategy to maximize the aggregate information at the source nodes using graph neural network (GNN) architectures, deploying graph convolutions over the topological links of the communication network. The proposed technique leverages only the local information available at each node and efficiently routes the desired information to the destination nodes. We use an unsupervised learning procedure to convert the output of the GNN architecture into optimal information routing strategies. In the experiments, we evaluate the proposed algorithms on real-time network topologies. Numerical simulations show that the proposed method, which trains a GNN parameterization, outperforms baseline algorithms.
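As a rough illustration of the approach described in the abstract, the sketch below parameterizes a routing policy with a graph convolutional network whose node inputs are augmented with dual variables, which is one plausible reading of the state-augmentation idea. It is not the paper's implementation: PyTorch, the class names (GraphConvLayer, StateAugmentedRoutingGNN), the toy topology, and the per-node softmax readout are all assumptions made for illustration.

```python
# A minimal, hypothetical sketch of a state-augmented GNN routing policy.
# Assumptions (not from the paper): PyTorch is available, node states and Lagrangian
# dual variables are given as dense tensors, and a normalized adjacency matrix of the
# network topology drives the graph convolutions. All names are illustrative.
import torch
import torch.nn as nn


class GraphConvLayer(nn.Module):
    """Single graph convolution: aggregate neighbor features through the adjacency."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: [num_nodes, in_dim]; adj: [num_nodes, num_nodes] (normalized topology)
        return torch.relu(self.lin(adj @ x))


class StateAugmentedRoutingGNN(nn.Module):
    """GNN policy whose input concatenates local node states with dual variables,
    so the learned routing policy can adapt to the current constraint levels."""

    def __init__(self, state_dim, dual_dim, hidden_dim, num_nodes):
        super().__init__()
        self.gc1 = GraphConvLayer(state_dim + dual_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, num_nodes)

    def forward(self, node_states, dual_vars, adj):
        x = torch.cat([node_states, dual_vars], dim=-1)  # state augmentation
        h = self.gc1(x, adj)                             # local graph convolution
        logits = self.readout(h)
        return torch.softmax(logits, dim=-1)             # per-node routing fractions


# Illustrative usage on a random 8-node topology.
num_nodes, state_dim, dual_dim = 8, 4, 2
adj = torch.rand(num_nodes, num_nodes)
adj = (adj + adj.T) / 2                                  # symmetric toy topology
node_states = torch.randn(num_nodes, state_dim)          # e.g. local queue backlogs
dual_vars = torch.ones(num_nodes, dual_dim)              # Lagrangian multipliers
policy = StateAugmentedRoutingGNN(state_dim, dual_dim, hidden_dim=16, num_nodes=num_nodes)
routing = policy(node_states, dual_vars, adj)            # shape: [num_nodes, num_nodes]
```

In an unsupervised primal-dual training loop of the kind the abstract alludes to, the routing outputs would feed a Lagrangian built from the aggregate-information objective and its constraints, while the dual variables supplied as inputs would be updated from the observed constraint violations.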
Pages: 204-218
Page count: 15