NetAI-Gym: Customized Environment for Network to Evaluate Agent Algorithm using Reinforcement Learning in Open-AI Gym Platform

Cited by: 0
Authors
Vidyadhar, Varshini [1 ]
Nagaraj, R. [2 ]
Ashoka, D., V [3 ]
Affiliations
[1] Bangalore Inst Technol, Dept Comp Sci & Engn, Bangalore, Karnataka, India
[2] Bangalore Inst Technol, Dept Informat Sci & Engn, Bangalore, Karnataka, India
[3] JSS Acad Tech Educt, Dept Informat Sci & Engn, Bangalore, Karnataka, India
Keywords
Open-AI Gym; network; environment; agent; reinforcement learning; ROUTING PROTOCOL;
DOI
10.14569/IJACSA.2021.0120423
CLC Classification
TP301 [Theory, Methods];
Subject Classification
081202;
Abstract
Growing network size imposes computational overhead during route establishment with conventional routing-protocol approaches. The rule-based method is an alternative to the route-table updating mechanism, but it too offers limited scope in dynamic networks. Reinforcement learning therefore promises a better way of finding routes, but it requires an evaluation platform that synchronizes the routing model with the agent. Unfortunately, the de-facto platform for agent evaluation, Open-AI Gym, does not provide a suitable networking environment. This paper therefore proposes, as a novel contribution, a customized networking environment that integrates with Open-AI Gym. The successful deployment of the proposed environment, NetAI-Gym, yields a functional and practical result that can be used to develop routing mechanisms based on Q-learning. NetAI-Gym is validated with different numbers of nodes in the network by examining Episodes vs. Reward. The experimental outcome confirms that NetAI-Gym is suitable for solving network-related problems.
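The abstract describes a Gym-style environment in which states are network nodes, actions are next-hop choices, and a Q-learning agent is evaluated on Episodes vs. Reward. A minimal, self-contained sketch of that idea follows; the topology, reward values, and names (`NetworkRoutingEnv`, `train`) are illustrative assumptions, not the paper's actual NetAI-Gym code:

```python
import random

class NetworkRoutingEnv:
    """Gym-like environment over a toy 5-node graph (illustrative only)."""

    def __init__(self):
        # adjacency list of an assumed toy topology, not the paper's network
        self.graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        self.destination = 4
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action indexes into the current node's neighbor list
        neighbors = self.graph[self.state]
        self.state = neighbors[action % len(neighbors)]
        done = self.state == self.destination
        reward = 10.0 if done else -1.0  # per-hop cost favors short routes
        return self.state, reward, done, {}

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy."""
    random.seed(seed)
    env = NetworkRoutingEnv()
    q = {s: [0.0] * len(nbrs) for s, nbrs in env.graph.items()}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                a = random.randrange(len(q[s]))
            else:
                a = max(range(len(q[s])), key=lambda i: q[s][i])
            s2, r, done, _ = env.step(a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q
```

Greedily following the learned Q-table from the source node then traces a shortest route to the destination; the Episodes-vs.-Reward validation described in the abstract corresponds to logging the per-episode return of such a training loop.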
Pages: 169 / 176
Page count: 8