Energy-Efficient Power Control for Energy Harvesting Cooperative Relay Sensor Network

Cited by: 0
Authors
Tian, Lin [1 ]
Dai, Tuwang [2 ]
Zhu, Yifeng [1 ]
Lu, Guosheng [1 ]
Affiliations
[1] China Southern Power Grid Extra High Voltage Tran, Guangzhou, Guangdong, Peoples R China
[2] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou, Guangdong, Peoples R China
Source
2019 COMPUTING, COMMUNICATIONS AND IOT APPLICATIONS (COMCOMAP) | 2019
Keywords
energy efficiency; reinforcement learning; energy harvesting; linear function approximation; MANAGEMENT;
DOI
10.1109/comcomap46287.2019.9018791
CLC Number
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
In this paper, we consider the power control problem in a cooperative relay sensor network in which the nodes harvest energy from the environment to transmit data. The goal is to optimize the energy efficiency of the network subject to the harvested energy and the transmitted data. Because the channel state information (CSI) and the energy harvesting process are stochastic, we formulate the energy-efficiency problem as a Markov decision process and propose the model-free reinforcement learning algorithm SARSA to learn the power control policy. To handle the continuous state space, we propose a set of feature functions to approximate the action-value function. Numerical results verify that the proposed algorithm outperforms the two baseline algorithms.
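To make the approach described in the abstract concrete, the sketch below shows SARSA(0) with linear approximation of the action-value function applied to a toy energy-harvesting power-control loop. The environment model, feature map, reward (an energy-efficiency proxy in bits per joule), power levels, and all hyper-parameters are illustrative assumptions for this sketch and are not the paper's exact formulation.

```python
# Minimal sketch: SARSA(0) with linear function approximation for
# energy-harvesting power control. All models and constants below are
# assumptions chosen for illustration, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)

POWER_LEVELS = np.array([0.0, 0.05, 0.1, 0.2])  # candidate transmit powers in watts (assumed)
CIRCUIT_POWER = 0.01                            # assumed fixed circuit power in watts
N_FEATURES = 4                                  # size of the assumed feature vector

def phi(state, action):
    """Assumed feature map over (battery level, channel gain) and the chosen power level."""
    battery, gain = state
    p = POWER_LEVELS[action]
    return np.array([1.0, battery, gain * p, -p])

def step(state, action):
    """Toy environment: spend energy, harvest random energy, draw a new fading gain."""
    battery, gain = state
    p = min(POWER_LEVELS[action], battery)       # cannot spend more energy than is stored
    rate = np.log2(1.0 + gain * p / 0.01)        # throughput with an assumed noise power
    reward = rate / (p + CIRCUIT_POWER)          # energy-efficiency proxy (bits per joule)
    harvested = rng.exponential(0.05)            # assumed stochastic harvesting process
    battery = min(battery - p + harvested, 1.0)  # finite battery capacity
    gain = rng.exponential(1.0)                  # i.i.d. fading gain, assumed
    return (battery, gain), reward

def epsilon_greedy(w, state, eps=0.1):
    """Pick a power level from the linearly approximated Q-values."""
    if rng.random() < eps:
        return int(rng.integers(len(POWER_LEVELS)))
    q = [w @ phi(state, a) for a in range(len(POWER_LEVELS))]
    return int(np.argmax(q))

# SARSA(0) update rule with Q(s, a) approximated as w . phi(s, a)
w = np.zeros(N_FEATURES)
alpha, gamma = 0.01, 0.95
state = (0.5, 1.0)
action = epsilon_greedy(w, state)
for t in range(20000):
    next_state, reward = step(state, action)
    next_action = epsilon_greedy(w, next_state)
    td_error = reward + gamma * (w @ phi(next_state, next_action)) - w @ phi(state, action)
    w += alpha * td_error * phi(state, action)
    state, action = next_state, next_action

print("learned weights:", w)
```

The linear parameterization is what lets the learner cope with the continuous battery and channel states mentioned in the abstract: only the weight vector w is stored, rather than a Q-table over a discretized state space.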
Pages: 305-309
Page count: 5
References
12 in total
[1] Bhardwaj M, 2002, IEEE INFOCOM SER, P1587, DOI 10.1109/INFCOM.2002.1019410
[2] Kansal, Aman; Hsu, Jason; Srivastava, Mani; Raghunathan, Vijay. Harvesting aware power management for sensor networks. 43rd Design Automation Conference, Proceedings 2006, 2006: 651+.
[3] Orhan, Oner; Erkip, Elza. Energy Harvesting Two-Hop Communication Networks. IEEE Journal on Selected Areas in Communications, 2015, 33(12): 2658-2670.
[4] Ortiz, Andrea; Al-Shatri, Hussein; Li, Xiang; Weber, Tobias; Klein, Anja. Reinforcement Learning for Energy Harvesting Point-to-Point Communications. 2016 IEEE International Conference on Communications (ICC), 2016.
[5] Ortiz, Andrea; Al-Shatri, Hussein; Li, Xiang; Weber, Tobias; Klein, Anja. Reinforcement Learning for Energy Harvesting Decode-and-Forward Two-Hop Communications. IEEE Transactions on Green Communications and Networking, 2017, 1(3): 309-319.
[6] Ozel, Omur; Tutuncuoglu, Kaya; Yang, Jing; Ulukus, Sennur; Yener, Aylin. Transmission with Energy Harvesting Nodes in Fading Wireless Channels: Optimal Policies. IEEE Journal on Selected Areas in Communications, 2011, 29(8): 1732-1743.
[7] Paradiso, JA; Starner, T. Energy scavenging for mobile and wireless electronics. IEEE Pervasive Computing, 2005, 4(1): 18-27.
[8] Prabuchandran, K. J.; Meena, Sunil Kumar; Bhatnagar, Shalabh. Q-Learning Based Energy Management Policies for a Single Sensor Node with Finite Buffer. IEEE Wireless Communications Letters, 2013, 2(1): 82-85.
[9] Qureshi S., 2013, 2013 IEEE 9 INT C EM, P1.
[10] Sudevalayam, Sujesha; Kulkarni, Purushottam. Energy Harvesting Sensor Nodes: Survey and Implications. IEEE Communications Surveys and Tutorials, 2011, 13(3): 443-461.