Collaborative multi-agents in dynamic industrial internet of things using deep reinforcement learning

Cited: 3
Authors
Raza, Ali [1 ]
Shah, Munam Ali [1 ]
Khattak, Hasan Ali [2 ]
Maple, Carsten [3 ]
Al-Turjman, Fadi [4 ]
Rauf, Hafiz Tayyab [5 ]
Affiliations
[1] COMSATS Univ Islamabad, Dept Comp Sci, Islamabad 44000, Pakistan
[2] Natl Univ Sci & Technol NUST, Sch Elect Engn & Comp Sci, Islamabad 44500, Pakistan
[3] Univ Warwick, WMG, Secur Cyber Syst Res Grp, Coventry CV4 7AL, W Midlands, England
[4] Near East Univ, Res Ctr AI & IoT, Artificial Intelligence Dept, Mersin 10, Nicosia, Turkey
[5] Univ Bradford, Fac Engn & Informat, Dept Comp Sci, Bradford BD7 1AZ, W Yorkshire, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Deep reinforcement learning; Multi-agents; Behavior cloning; Dynamic environment; Scalability; OBSTACLE AVOIDANCE; ENVIRONMENT; NAVIGATION; SYSTEMS;
DOI
10.1007/s10668-021-01836-9
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Codes
08 ; 0830 ;
Abstract
Sustainable cities are envisioned to take economic and industrial steps toward reducing pollution. Many real-world applications, such as autonomous vehicles, transportation, traffic signals, and industrial automation, can now be trained using deep reinforcement learning (DRL) techniques. These applications are designed to take advantage of DRL to improve monitoring and measurement in the industrial internet of things for automatic identification systems. The complexity of these environments makes multi-agent systems more appropriate than a single agent. However, in non-stationary environments, multi-agent systems can suffer from an increased number of observations, limiting the scalability of algorithms. This study proposes a model to tackle the scalability problem of DRL algorithms in the transportation domain. The proposed model uses a partition-based approach to reduce the complexity of the environment: each agent stays within its working area, which reduces both the complexity of the learning environment and the number of observations each agent must process. The model combines generative adversarial imitation learning (GAIL) and behavior cloning with the proximal policy optimization (PPO) algorithm to train multiple agents in a dynamic environment. We compare PPO, soft actor-critic (SAC), and our model on reward gathering. Our simulation results show that our model outperforms SAC and PPO in cumulative reward gathering and dramatically improves the training of multiple agents.
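The core scalability idea in the abstract, restricting each agent's observations to its own partition of the environment, can be illustrated with a minimal sketch. The function names, grid-cell partitioning scheme, and data layout below are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of partition-based observation reduction: each agent
# observes only the other agents inside its own grid partition, so its
# observation size stays bounded as the total number of agents grows.

def partition_of(pos, cell_size):
    """Map a 2-D position to a grid-partition id (cell coordinates)."""
    return (int(pos[0] // cell_size), int(pos[1] // cell_size))

def local_observations(agents, cell_size):
    """For each agent, return only the other agents sharing its partition."""
    obs = {}
    for name, pos in agents.items():
        cell = partition_of(pos, cell_size)
        obs[name] = [other for other, p in agents.items()
                     if other != name and partition_of(p, cell_size) == cell]
    return obs

agents = {"a": (1.0, 1.0), "b": (2.0, 1.5), "c": (9.0, 9.0)}
obs = local_observations(agents, cell_size=5.0)
# agents "a" and "b" share partition (0, 0); "c" is alone in (1, 1),
# so each agent's observation list excludes everything outside its cell
```

With a full (unpartitioned) observation space, each agent would observe all other agents, so observations grow linearly with agent count; under the partition scheme, they grow only with local density.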
Pages: 9481-9499 (19 pages)
References
55 references
[1]   Leveraging Traffic Condition using IoT for Improving Smart City Street Lights [J].
Arshad, Syeda Roushan ;
Saeed, Aman ;
Akre, Vishwesh ;
Khattak, Hasan Ali ;
Ahmed, Sheeraz ;
Khan, Zia Ullah ;
Khan, Zahoor Ali ;
Nawaz, Asif .
2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATION, NETWORKS AND SATELLITE (COMNETSAT), 2020, :92-96
[2]   Deep Reinforcement Learning: A Brief Survey [J].
Arulkumaran, Kai ;
Deisenroth, Marc Peter ;
Brundage, Miles ;
Bharath, Anil Anthony .
IEEE SIGNAL PROCESSING MAGAZINE, 2017, 34 (06) :26-38
[3]  
Awan K. A., 2021, EdgeTrust-A Lightweight Data-centric Trust Management Approach for Green Internet of Edge Things, DOI 10.21203/RS.3.RS-453986/V1
[4]  
Bicocchi N., 2019, WOA, P29
[5]   The Vector Field Histogram: Fast Obstacle Avoidance for Mobile Robots [J].
Borenstein, J. ;
Koren, Y. .
IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, 1991, 7 (03) :278-288
[6]   OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields [J].
Cao, Zhe ;
Hidalgo, Gines ;
Simon, Tomas ;
Wei, Shih-En ;
Sheikh, Yaser .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (01) :172-186
[7]   Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control [J].
Chu, Tianshu ;
Wang, Jie ;
Codeca, Lara ;
Li, Zhaojian .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2020, 21 (03) :1086-1095
[8]   Coordinated behavior of cooperative agents using deep reinforcement learning [J].
Diallo, Elhadji Amadou Oury ;
Sugiyama, Ayumi ;
Sugawara, Toshiharu .
NEUROCOMPUTING, 2020, 396 :230-240
[9]   Simultaneous localization and mapping: Part I [J].
Durrant-Whyte, Hugh ;
Bailey, Tim .
IEEE ROBOTICS & AUTOMATION MAGAZINE, 2006, 13 (02) :99-108
[10]  
Foerster Jakob N, 2017, Counterfactual Multi-Agent Policy Gradients