Optimization vs. Reinforcement Learning for Wirelessly Powered Sensor Networks

Cited by: 0
Authors
Ozcelikkale, Ayca [1 ]
Koseoglu, Mehmet [2 ]
Srivastava, Mani [2 ]
Affiliations
[1] Uppsala Univ, Signals & Syst, Uppsala, Sweden
[2] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90024 USA
Source
2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC) | 2018
Funding
Swedish Research Council;
Keywords
WAVE-FORM DESIGN; RESOURCE-ALLOCATION; INFORMATION;
DOI
Not available
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
We consider a sensing application where the sensor nodes are wirelessly powered by an energy beacon. We focus on the problem of jointly optimizing the energy allocation of the energy beacon to the different sensors and the data transmission powers of the sensors in order to minimize the field reconstruction error at the sink. In contrast to the standard ideal linear energy harvesting (EH) model, we consider practical non-linear EH models. We investigate this problem under two different frameworks: i) an optimization approach, where the energy beacon knows the utility function of the nodes, the channel state information and the energy harvesting characteristics of the devices, so that optimal power allocation strategies can be designed by solving an optimization problem; and ii) a learning approach, where the energy beacon decides on its strategies adaptively using battery level information and feedback on the utility function. Our results illustrate that the deep reinforcement learning approach can achieve the same error levels as the optimization approach and provides a promising alternative to the optimization framework.
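The abstract contrasts the ideal linear EH model with practical non-linear EH models. As an illustration only, the short Python sketch below implements one commonly used sigmoidal non-linear EH model alongside a linear baseline; the paper does not specify its exact parameterization, so the constants eta, p_max, a and b are hypothetical placeholders chosen to show the sensitivity and saturation behavior.

import math

def linear_eh(p_rx, eta=0.5):
    # Ideal linear EH model: harvested power is a fixed fraction of the received RF power.
    return eta * p_rx

def nonlinear_eh(p_rx, p_max=0.02, a=150.0, b=0.014):
    # Sigmoidal non-linear EH model capturing rectifier sensitivity and saturation.
    # p_max, a and b are illustrative placeholders, not values taken from the paper.
    logistic = 1.0 / (1.0 + math.exp(-a * (p_rx - b)))
    logistic_zero = 1.0 / (1.0 + math.exp(a * b))  # model output at zero received power
    return p_max * (logistic - logistic_zero) / (1.0 - logistic_zero)

if __name__ == "__main__":
    for p_rx in (0.001, 0.01, 0.05):  # received RF power in watts
        print(f"P_rx={p_rx:.3f} W  linear={linear_eh(p_rx):.5f} W  non-linear={nonlinear_eh(p_rx):.5f} W")

Running the sketch shows why the distinction matters: at low received power the non-linear model harvests less than the linear model, and at high received power it saturates near p_max, so power allocation strategies derived under the linear model can be far from optimal in practice.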
Pages: 286-290
Number of pages: 5
Related Papers
50 items in total
  • [21] Sum-Rate Maximization Methods for Wirelessly Powered Communication Networks in Interference Channels
    Kim, Hanjin
    Lee, Hoon
    Duan, Lingjie
    Lee, Inkyu
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2018, 17 (10) : 6464 - 6474
  • [22] Energy Aware Trajectory Optimization of Solar Powered AUVs for Optical Underwater Sensor Networks
    Mahmoodi, Khadijeh Ali
    Uysal, Murat
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70 (12) : 8258 - 8269
  • [23] Deep Reinforcement Learning for Energy-Efficient Federated Learning in UAV-Enabled Wireless Powered Networks
    Quang Vinh Do
    Quoc-Viet Pham
    Hwang, Won-Joo
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (01) : 99 - 103
  • [24] Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer
    Zeng, Qunsong
    Du, Yuqing
    Huang, Kaibin
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (01) : 680 - 695
  • [25] Throughput Maximization for RF Powered Cognitive NOMA Networks With Backscatter Communication by Deep Reinforcement Learning
    Guo, Shaoai
    Zhao, Xiaohui
    Zhang, Wei
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (07) : 7111 - 7126
  • [26] Multi-Objective Optimization for UAV-Enabled Wireless Powered IoT Networks: An LSTM-Based Deep Reinforcement Learning Approach
    Zhang, Shanxin
    Cao, Runyu
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (12) : 3019 - 3023
  • [27] Smart Energy Borrowing and Relaying in Wireless-Powered Networks: A Deep Reinforcement Learning Approach
    Mondal, Abhishek
    Alam, Md. Sarfraz
    Mishra, Deepak
    Prasad, Ganesh
    ENERGIES, 2023, 16 (21)
  • [28] Improving Reinforcement Learning Algorithms for Dynamic Spectrum Allocation in Cognitive Sensor Networks
    Faganello, Leonardo Roveda
    Kunst, Rafael
    Both, Cristiano Bonato
    Granville, Lisandro Zambenedetti
    Rochol, Juergen
    2013 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2013, : 35 - 40
  • [29] Status update control based on reinforcement learning in energy harvesting sensor networks
    Han, Zhihui
    Gong, Jie
    FRONTIERS IN COMMUNICATIONS AND NETWORKS, 2022, 3
  • [30] A Reinforcement Learning QoI/QoS-Aware Approach in Acoustic Sensor Networks
    Afifi, Haitham
    Ramaswamy, Arunselvan
    Karl, Holger
    2021 IEEE 18TH ANNUAL CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE (CCNC), 2021