Fairness Analysis of Deep Reinforcement Learning based Multi-Path QUIC Scheduling

Cited by: 2
Authors
Quevedo, Ernesto [1]
Donahoo, Jeff [1]
Cerny, Tomas [1]
Affiliation
[1] Baylor University, Waco, TX 76798, USA
Source
38th Annual ACM Symposium on Applied Computing (SAC 2023), 2023
Funding
U.S. National Science Foundation (NSF)
Keywords
Multipath QUIC; Deep Reinforcement Learning; Fairness; Congestion Control; TCP
DOI
10.1145/3555776.3577658
CLC Classification
TP39 [Applications of Computers]
Discipline Codes
081203; 0835
Abstract
Computing devices with multiple active network interfaces, such as cellular, wired, and WiFi, are becoming increasingly common. Typically, such devices select a single interface for communication, but throughput and availability can increase by utilizing multipath protocols. Multipath TCP (MPTCP) is the predominant protocol in this space; however, Multipath QUIC (MPQUIC) provides several advantages over MPTCP and is increasing in adoption. Multipath protocols use a multipath scheduler to determine which packets are sent over which interface. Legacy schedulers exhibit good performance but often adjust poorly to dynamic changes in the network. Recent research includes several Deep Reinforcement Learning (DRL) based schedulers that outperform legacy schedulers and adapt better to changing network conditions. Evaluation of any packet scheduling approach must include an assessment of fairness to concurrent TCP flows: under congestion, all flows (multipath or unipath) should tend toward an equal share of the bandwidth. Unfortunately, MPQUIC DRL-based scheduler research does not include a rigorous analysis of fairness under various network conditions, risking significant network problems as adoption increases. We present an efficiency and fairness comparison of MPQUIC using DRL-based schedulers built on classic agents such as DQN, Deep SARSA, and Double DQN. Experimental results over a bi-path network show that these schedulers are TCP-friendly in many cases on both paths and converge to link-centric fairness on one path. However, our work also shows that under certain conditions they are not TCP-friendly or can be bullied, degrading TCP or MPQUIC performance.
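As a rough illustration of the equal-share fairness notion above: the abstract does not name a specific metric, so Jain's fairness index is assumed here purely for illustration, and the throughput figures are hypothetical. The short Python sketch below scores how evenly concurrent flows, e.g. an MPQUIC flow competing with TCP flows, split a bottleneck link.

def jain_fairness_index(throughputs):
    """Jain's index in (0, 1]; 1.0 means all flows get an equal share."""
    # Guard against empty input and all-zero throughputs.
    if not throughputs or all(x == 0 for x in throughputs):
        raise ValueError("need at least one flow with nonzero throughput")
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (len(throughputs) * sum_sq)

# Hypothetical bottleneck shares (Mbps): one MPQUIC flow and two TCP flows.
print(jain_fairness_index([4.8, 5.1, 5.0]))   # ~0.999: near-equal shares, TCP-friendly
print(jain_fairness_index([1.0, 9.0, 9.5]))   # ~0.736: one flow is being "bullied"

A value near 1.0 corresponds to the TCP-friendly outcome described in the abstract, while values well below 1.0 indicate that one flow is starving or being starved by the others.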
Pages: 1772-1781
Page count: 10