Enabling Sustainable Underwater IoT Networks With Energy Harvesting: A Decentralized Reinforcement Learning Approach

Times Cited: 41
Authors
Han, Mengqi [1 ]
Duan, Jianli [2 ]
Khairy, Sami [1 ]
Cai, Lin X. [1 ]
Affiliations
[1] IIT, Dept Elect & Comp Engn, Chicago, IL 60605 USA
[2] Dalian Maritime Univ, Coll Informat Sci & Technol, Dalian 116026, Peoples R China
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Protocols; Propagation delay; Energy harvesting; Throughput; Internet of Things; Optimization; Uncertainty; Fairness; multiagent reinforcement learning; throughput; tidal energy harvesting; underwater Internet-of-Things (IoT) network; MAC PROTOCOLS;
DOI
10.1109/JIOT.2020.2990733
CLC Number
TP [Automation & Computer Technology];
Discipline Code
0812;
Abstract
In this article, we study an energy-sustainable Internet-of-Underwater Things (IoUT) network with tidal energy harvesting. Specifically, an analytical model is first developed to analyze the performance of the IoUT network, characterizing the stochastic nature of energy harvesting, the traffic demands of IoUT nodes, and the salient features of acoustic communication channels. We find that the spatial uncertainty inherent in underwater acoustic communication can cause a severe fairness issue. As such, an optimization problem is formulated to maximize the network throughput under fairness constraints by tuning the random-access parameter of each node. Given global network information, including the number of nodes, energy harvesting rates, and communication distances, the optimization problem can be solved efficiently with the branch-and-bound (BnB) method. Considering a realistic network in which such global information may not be available at the IoUT nodes, we further propose a multiagent reinforcement learning approach that lets each node autonomously adapt its random-access parameter through interactions with the dynamic network environment. Numerical results show that the proposed learning algorithm greatly improves throughput over existing solutions and approaches the derived theoretical bound.
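The decentralized learning idea summarized in the abstract, where each node tunes its own random-access probability from local feedback rather than from global network information, can be sketched as a simple multiagent bandit over a slotted random-access channel. The sketch below is a hypothetical illustration under simplifying assumptions (epsilon-greedy learning, a discrete candidate set of access probabilities, a shared per-slot success reward), not the paper's actual algorithm or analytical model:

```python
import random

class IoUTNode:
    """Hypothetical IoUT node: learns which random-access probability to use
    via an epsilon-greedy bandit over a discrete candidate set."""

    def __init__(self, candidates, epsilon=0.1, seed=None):
        self.candidates = candidates          # candidate access probabilities
        self.q = [0.0] * len(candidates)      # running reward estimate per arm
        self.n = [0] * len(candidates)        # times each arm was selected
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.arm = 0

    def choose(self):
        # Explore a random arm with prob. epsilon, otherwise exploit the best.
        if self.rng.random() < self.epsilon:
            self.arm = self.rng.randrange(len(self.candidates))
        else:
            self.arm = max(range(len(self.candidates)), key=lambda i: self.q[i])
        return self.candidates[self.arm]

    def update(self, reward):
        # Incremental sample-mean update of the chosen arm's value.
        self.n[self.arm] += 1
        self.q[self.arm] += (reward - self.q[self.arm]) / self.n[self.arm]

def simulate(num_nodes=5, rounds=20000, seed=1):
    """Slotted random access: a slot delivers a packet only when exactly one
    node transmits; every node observes the same network-level success reward,
    so selfishly maximizing reward still steers nodes toward cooperation."""
    rng = random.Random(seed)
    nodes = [IoUTNode([0.05, 0.1, 0.2, 0.4], seed=seed + i)
             for i in range(num_nodes)]
    successes = 0
    for _ in range(rounds):
        probs = [node.choose() for node in nodes]
        tx_count = sum(rng.random() < p for p in probs)
        reward = 1.0 if tx_count == 1 else 0.0
        successes += reward
        for node in nodes:
            node.update(reward)
    return successes / rounds              # empirical network throughput
```

With the shared reward, each node's best response pushes the population toward an access probability near 1/num_nodes, which is also what maximizes slotted-access throughput; this mirrors, in a toy setting, how the paper's decentralized learners can approach a centrally optimized bound.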
Pages: 9953-9964
Number of Pages: 12