Power Control in Internet of Drones by Deep Reinforcement Learning

Cited by: 6
Authors
Yao, Jingjing [1 ]
Ansari, Nirwan [1 ]
Affiliations
[1] New Jersey Institute of Technology, Helen and John C. Hartmann Department of Electrical and Computer Engineering, Advanced Networking Laboratory, Newark, NJ 07102 USA
Source
ICC 2020 - 2020 IEEE International Conference on Communications (ICC) | 2020
Funding
U.S. National Science Foundation
Keywords
Power control; Internet of Drones (IoD); energy harvesting; deep reinforcement learning; actor-critic; quality of service (QoS); energy; allocation
DOI
10.1109/icc40277.2020.9148749
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
The Internet of Drones (IoD) employs drones as Internet of Things (IoT) devices to provision applications such as traffic surveillance and object tracking. Data collection is a typical service in which multiple drones gather information from the ground and send it to the IoT gateway for further processing. Because the performance of IoD networks is constrained by drones' battery capacities, we employ both energy harvesting and power control to address this limitation. Specifically, we optimize each drone's wireless transmission power at every time epoch in energy-harvesting-aided, time-varying IoD networks providing the data collection service, with the objective of minimizing the average system energy cost. We formulate a Markov Decision Process (MDP) model to characterize the power control process in dynamic IoD networks and solve it with our proposed model-free deep actor-critic reinforcement learning algorithm. The performance of the algorithm is demonstrated via extensive simulations.
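
The abstract describes, at a high level, formulating power control as an MDP and solving it with a model-free deep actor-critic algorithm. The sketch below is only an illustrative approximation of that idea, not the paper's method: ToyIoDEnv, its two-dimensional state (battery level, channel gain), its reward (negative energy cost minus a QoS penalty), and all hyperparameters are hypothetical assumptions, and the update is a generic one-step temporal-difference actor-critic written in PyTorch.

```python
# Minimal one-step TD actor-critic sketch for drone transmit-power control.
# ToyIoDEnv, its state/reward definitions, and all hyperparameters are
# illustrative assumptions, not the paper's model.
import numpy as np
import torch
import torch.nn as nn


class ToyIoDEnv:
    """Hypothetical stand-in for the time-varying IoD channel/battery dynamics."""

    def __init__(self, p_max=1.0, seed=0):
        self.p_max = p_max
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.battery, self.gain = 1.0, self.rng.rayleigh(0.5)
        return np.array([self.battery, self.gain], dtype=np.float32)

    def step(self, power):
        power = float(np.clip(power, 0.0, min(self.p_max, self.battery)))
        rate = np.log2(1.0 + 10.0 * self.gain * power)      # toy SNR/rate model
        harvested = self.rng.uniform(0.0, 0.05)             # toy energy harvesting
        self.battery = min(1.0, self.battery - 0.1 * power + harvested)
        self.gain = self.rng.rayleigh(0.5)                  # time-varying channel
        qos_penalty = 1.0 if rate < 1.0 else 0.0            # toy QoS requirement
        reward = -0.1 * power - qos_penalty                 # energy cost plus QoS penalty
        return np.array([self.battery, self.gain], dtype=np.float32), reward


class Actor(nn.Module):
    """Gaussian policy over transmit power; the mean is squashed into [0, P_max]."""

    def __init__(self, state_dim, p_max):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 2))
        self.p_max = p_max

    def forward(self, s):
        h = self.net(s)
        mu = torch.sigmoid(h[..., :1]) * self.p_max
        std = h[..., 1:].clamp(-5.0, 2.0).exp()
        return torch.distributions.Normal(mu, std)


class Critic(nn.Module):
    """State-value estimator used to compute the TD error (advantage)."""

    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, s):
        return self.net(s)


def train(episodes=200, steps=50, gamma=0.95):
    env = ToyIoDEnv()
    actor, critic = Actor(2, env.p_max), Critic(2)
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
    for _ in range(episodes):
        s = torch.as_tensor(env.reset())
        for _ in range(steps):
            dist = actor(s)
            a = dist.sample()                               # raw Gaussian power sample
            s2_np, r = env.step(a.clamp(0.0, env.p_max).item())
            s2 = torch.as_tensor(s2_np)
            # Critic update: regress V(s) toward the one-step TD target.
            with torch.no_grad():
                td_target = r + gamma * critic(s2)
            td_error = td_target - critic(s)
            opt_c.zero_grad(); td_error.pow(2).mean().backward(); opt_c.step()
            # Actor update: policy gradient weighted by the (detached) TD error.
            actor_loss = -(dist.log_prob(a).sum() * td_error.detach().squeeze())
            opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
            s = s2


if __name__ == "__main__":
    train()
```

The TD error plays the role of the advantage that weights the policy-gradient step, which is the defining feature of actor-critic methods; the paper's actual state, action, and cost definitions for the IoD data collection service are given in the full text.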
Pages: 6