Collaborative multi-agents in dynamic industrial internet of things using deep reinforcement learning

Cited by: 3
Authors
Raza, Ali [1 ]
Shah, Munam Ali [1 ]
Khattak, Hasan Ali [2 ]
Maple, Carsten [3 ]
Al-Turjman, Fadi [4 ]
Rauf, Hafiz Tayyab [5 ]
Affiliations
[1] COMSATS Univ Islamabad, Dept Comp Sci, Islamabad 44000, Pakistan
[2] Natl Univ Sci & Technol NUST, Sch Elect Engn & Comp Sci, Islamabad 44500, Pakistan
[3] Univ Warwick, WMG, Secur Cyber Syst Res Grp, Coventry CV4 7AL, W Midlands, England
[4] Near East Univ, Res Ctr AI & IoT, Artificial Intelligence Dept, Mersin 10, Nicosia, Turkey
[5] Univ Bradford, Fac Engn & Informat, Dept Comp Sci, Bradford BD7 1AZ, W Yorkshire, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Deep reinforcement learning; Multi-agents; Behavior cloning; Dynamic environment; Scalability; OBSTACLE AVOIDANCE; ENVIRONMENT; NAVIGATION; SYSTEMS;
DOI
10.1007/s10668-021-01836-9
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Codes
08; 0830;
Abstract
Sustainable cities are envisioned to take economic and industrial steps toward reducing pollution. Many real-world applications, such as autonomous vehicles, transportation, traffic signals, and industrial automation, can now be trained using deep reinforcement learning (DRL) techniques. These applications take advantage of DRL to improve monitoring and measurement in the industrial internet of things for automatic identification systems. The complexity of these environments makes multi-agent systems more appropriate than a single agent. However, in non-stationary environments, multi-agent systems can suffer from an increased number of observations, limiting the scalability of algorithms. This study proposes a model to tackle the scalability problem of DRL algorithms in the transportation domain. The proposed model uses a partition-based approach to reduce the complexity of the environment: each agent stays within its own working area, which reduces both the complexity of the learning environment and the number of observations each agent must process. The model combines generative adversarial imitation learning and behavior cloning with a proximal policy optimization (PPO) algorithm to train multiple agents in a dynamic environment. We compare PPO, soft actor-critic (SAC), and our model in terms of reward gathering. Simulation results show that our model outperforms SAC and PPO in cumulative reward and dramatically improves the training of multiple agents.
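The partition-based idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a 2D grid partitioning of the environment, and all function and variable names are hypothetical.

```python
# Hypothetical sketch of partition-based observation filtering: each agent
# observes only entities inside its assigned grid cell (partition), so the
# per-agent observation size does not grow with the total entity count.

def assign_partition(position, grid_size, cell):
    """Map a 2D position to a grid-cell (partition) index."""
    x, y = position
    col = min(int(x // cell), grid_size - 1)
    row = min(int(y // cell), grid_size - 1)
    return row * grid_size + col

def local_observation(agent_pos, entities, grid_size=4, cell=25.0):
    """Return only the entities sharing the agent's partition."""
    own = assign_partition(agent_pos, grid_size, cell)
    return [e for e in entities
            if assign_partition(e, grid_size, cell) == own]

# Example: a 4x4 grid over a 100x100 area; the agent at (10, 10) sees
# only the two entities in its own cell, not the ones farther away.
entities = [(5.0, 5.0), (12.0, 8.0), (80.0, 90.0), (30.0, 10.0)]
obs = local_observation((10.0, 10.0), entities)
```

Restricting each agent's observation this way is what keeps the joint learning problem tractable as the number of agents and entities grows.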
Pages: 9481-9499
Page count: 19