Optimization vs. Reinforcement Learning for Wirelessly Powered Sensor Networks

Times Cited: 0
Authors
Ozcelikkale, Ayca [1 ]
Koseoglu, Mehmet [2 ]
Srivastava, Mani [2 ]
Affiliations
[1] Uppsala Univ, Signals & Syst, Uppsala, Sweden
[2] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90024 USA
Source
2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC) | 2018
Funding
Swedish Research Council;
Keywords
WAVE-FORM DESIGN; RESOURCE-ALLOCATION; INFORMATION;
DOI
Not available
Chinese Library Classification
TP39 [Computer Applications];
Subject Classification Code
081203; 0835;
Abstract
We consider a sensing application where the sensor nodes are wirelessly powered by an energy beacon. We focus on the problem of jointly optimizing the energy allocation of the energy beacon to the different sensors and the data transmission powers of the sensors in order to minimize the field reconstruction error at the sink. In contrast to the standard ideal linear energy harvesting (EH) model, we consider practical non-linear EH models. We investigate this problem under two different frameworks: i) an optimization approach, where the energy beacon knows the utility function of the nodes, the channel state information, and the energy harvesting characteristics of the devices, so that optimal power allocation strategies can be designed by solving an optimization problem; and ii) a learning approach, where the energy beacon decides on its strategies adaptively, using battery level information and feedback on the utility function. Our results illustrate that the deep reinforcement learning approach can achieve the same error levels as the optimization approach and provides a promising alternative to the optimization framework.
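
To make the contrast between the ideal linear EH model and a practical non-linear one concrete, the minimal sketch below compares the two. The logistic saturation form and its parameters (p_sat, a, b) are illustrative assumptions drawn from the general EH literature, not the specific model used in this paper.

# Illustrative sketch only: the record does not specify the paper's exact
# non-linear EH model, so a commonly assumed logistic (saturation) model
# is used here for comparison with the ideal linear model.
import numpy as np

def linear_eh(p_in, eta=0.5):
    """Ideal linear model: harvested power grows without bound with input power."""
    return eta * p_in

def nonlinear_eh(p_in, p_sat=0.02, a=150.0, b=0.014):
    """Assumed logistic-type saturation model (hypothetical parameters):
    harvested power saturates at p_sat as the received RF power grows."""
    logistic = 1.0 / (1.0 + np.exp(-a * (p_in - b)))
    offset = 1.0 / (1.0 + np.exp(a * b))  # logistic value at p_in = 0
    return p_sat * (logistic - offset) / (1.0 - offset)

if __name__ == "__main__":
    p_in = np.linspace(0.0, 0.05, 6)  # received RF power (W)
    print("input   linear   nonlinear")
    for p, pl, pn in zip(p_in, linear_eh(p_in), nonlinear_eh(p_in)):
        print(f"{p:.3f}   {pl:.4f}   {pn:.4f}")

Under such a saturating model, allocating additional beacon power to a sensor whose harvester is already near saturation yields little benefit, which is what makes the joint allocation of beacon energy and sensor transmission powers non-trivial compared to the linear case.
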
Pages: 286-290
Page count: 5