Enabling Sustainable Underwater IoT Networks With Energy Harvesting: A Decentralized Reinforcement Learning Approach

Times Cited: 41
Authors
Han, Mengqi [1 ]
Duan, Jianli [2 ]
Khairy, Sami [1 ]
Cai, Lin X. [1 ]
Affiliations
[1] IIT, Dept Elect & Comp Engn, Chicago, IL 60605 USA
[2] Dalian Maritime Univ, Coll Informat Sci & Technol, Dalian 116026, Peoples R China
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Protocols; Propagation delay; Energy harvesting; Throughput; Internet of Things; Optimization; Uncertainty; Fairness; multiagent reinforcement learning; throughput; tidal energy harvesting; underwater Internet-of-Things (IoT) network; MAC PROTOCOLS;
DOI
10.1109/JIOT.2020.2990733
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
In this article, we study an energy-sustainable Internet-of-Underwater Things (IoUT) network with tidal energy harvesting. Specifically, an analytical model is first developed to analyze the performance of the IoUT network, characterizing the stochastic nature of energy harvesting and traffic demands of IoUT nodes, as well as the salient features of acoustic communication channels. It is found that the spatial uncertainty resulting from underwater acoustic communication may cause a severe fairness issue. As such, an optimization problem is formulated to maximize the network throughput under fairness constraints by tuning the random access parameters of each node. Given global network information, including the number of nodes, energy harvesting rates, communication distances, etc., the optimization problem can be efficiently solved with the Branch and Bound (BnB) method. Considering a realistic network where such information may not be available at the IoUT nodes, we further propose a multiagent reinforcement learning approach in which each node autonomously adapts its random access parameter based on its interactions with the dynamic network environment. The numerical results show that the proposed learning algorithm greatly improves throughput performance compared with existing solutions and approaches the derived theoretical bound.
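The decentralized adaptation described in the abstract can be illustrated with a minimal sketch. The paper's actual learning rule, reward signal, and parameters are not given in this record, so everything below is a hypothetical stand-in: each node treats a discrete set of candidate access probabilities as bandit arms, transmits only when it holds harvested energy, and receives reward 1 when its transmission is the sole one in a slot.

```python
import random

class IoUTNode:
    """One node adapting its random-access probability with epsilon-greedy
    bandit learning. Hypothetical sketch, not the paper's algorithm."""
    def __init__(self, probs=(0.05, 0.1, 0.2, 0.4), eps=0.1, seed=0):
        self.probs = probs                  # candidate access probabilities (arms)
        self.eps = eps                      # exploration rate
        self.counts = [0] * len(probs)      # pulls per arm
        self.values = [0.0] * len(probs)    # running mean reward per arm
        self.rng = random.Random(seed)
        self.arm = 0

    def choose(self):
        # Explore a random arm with prob. eps, otherwise exploit the best.
        if self.rng.random() < self.eps:
            self.arm = self.rng.randrange(len(self.probs))
        else:
            self.arm = max(range(len(self.probs)), key=lambda a: self.values[a])
        return self.probs[self.arm]

    def update(self, reward):
        # Incremental mean update for the last-chosen arm.
        a = self.arm
        self.counts[a] += 1
        self.values[a] += (reward - self.values[a]) / self.counts[a]

def simulate(n_nodes=5, slots=20000, harvest_rate=0.3, battery_cap=5):
    """Slotted random access with energy harvesting: a node may transmit
    only if it holds an energy unit; a slot succeeds iff exactly one node
    transmits, granting that node reward 1 (all others get 0)."""
    nodes = [IoUTNode(seed=i) for i in range(n_nodes)]
    energy = [1] * n_nodes
    rng = random.Random(42)
    successes = 0
    for _ in range(slots):
        tx = []
        for i, node in enumerate(nodes):
            if rng.random() < harvest_rate:
                energy[i] = min(energy[i] + 1, battery_cap)
            p = node.choose()
            if energy[i] > 0 and rng.random() < p:
                tx.append(i)
                energy[i] -= 1          # transmission consumes one unit
        for i, node in enumerate(nodes):
            node.update(1.0 if (i in tx and len(tx) == 1) else 0.0)
        if len(tx) == 1:
            successes += 1
    return successes / slots

throughput = simulate()
```

In this toy setting each node learns from local ACK feedback alone, with no global knowledge of node count or harvesting rates, which mirrors the decentralized premise of the paper; the paper itself additionally enforces fairness constraints that this sketch omits.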
Pages: 9953-9964
Page count: 12