Controlling Action Space of Reinforcement-Learning-Based Energy Management in Batteryless Applications

Cited by: 4
Authors
Ahn, Junick [1 ]
Kim, Daeyong [1 ]
Ha, Rhan [1 ]
Cha, Hojung [1 ]
Affiliations
[1] Yonsei Univ, Dept Comp Sci, Seoul 03722, South Korea
Keywords
Task analysis; Reinforcement learning; Energy storage; Control systems; Aerospace electronics; Energy management; Sensors; Embedded software; energy harvesting; energy management; wireless sensor networks
DOI
10.1109/JIOT.2023.3234905
CLC Classification Code
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Duty cycle management is critical for the energy-neutral operation of batteryless devices. Many efforts have been made to develop effective duty-cycling methods, including machine-learning-based approaches, but existing methods can barely handle the dynamic harvesting environments of batteryless devices. Specifically, most machine-learning-based methods require harvesting patterns to be collected in advance, as well as manual configuration of the duty-cycle boundaries. In this article, we propose a configuration-free duty-cycling scheme for batteryless devices, called CTRL, with which energy-harvesting nodes tune their duty cycle themselves, adapting to the surrounding environment without user intervention. The approach combines reinforcement learning (RL) with a control system, allowing the learning algorithm to explore the entire search space automatically. The learning algorithm sets the target state of charge (SoC) of the energy storage, instead of explicitly setting the target task frequency at a given time; the control system then satisfies the target SoC by controlling the duty cycle. An evaluation based on a real implementation of the system, using publicly available trace data, shows that CTRL outperforms state-of-the-art approaches, with 40% fewer power failures in energy-scarce environments and more than ten times the task frequency in energy-rich environments.
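As a rough illustration of the scheme described in the abstract, the following Python sketch separates the two roles: a stand-in RL policy that picks a target SoC (the constrained action space) and a simple feedback controller that steers the duty cycle toward that target. All names, the discrete SoC levels, and the proportional controller are illustrative assumptions, not the authors' implementation.

import random

# Assumed discrete action space: candidate target SoC levels for the learner.
SOC_TARGETS = [0.2, 0.4, 0.6, 0.8]

def choose_target_soc(soc, harvest_rate):
    # Stand-in for the RL policy. A real agent would condition on state
    # such as the current SoC and recent harvesting history; here we
    # simply pick a target at random.
    return random.choice(SOC_TARGETS)

def control_duty_cycle(duty_cycle, soc, target_soc, gain=0.5):
    # Proportional controller: raise the duty cycle when the SoC is above
    # target (spend surplus energy on tasks), lower it when below
    # (conserve energy to avoid power failure).
    duty_cycle += gain * (soc - target_soc)
    return min(max(duty_cycle, 0.01), 1.0)  # clamp to a valid duty cycle

# One decision step: the learner sets the set point, the controller tracks it.
duty_cycle, soc, harvest_rate = 0.10, 0.50, 0.02
target = choose_target_soc(soc, harvest_rate)
duty_cycle = control_duty_cycle(duty_cycle, soc, target)
print(f"target SoC = {target:.1f}, new duty cycle = {duty_cycle:.2f}")

Restricting the learner's actions to SoC set points, rather than raw duty cycles, lets the controller absorb short-term harvesting dynamics while the learner adapts to slower environmental trends.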
Pages: 9928-9941
Number of pages: 14
相关论文
共 37 条
  • [1] State-of-Charge Estimation of Supercapacitors in Transiently-Powered Sensor Nodes
    Ahn, JunIck
    Kim, Daeyong
    Ha, Rhan
    Cha, Hojung
    [J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (02) : 225 - 237
  • [2] [Anonymous], 2017, PROX POL OPT
  • [3] [Anonymous], DATA SHEET MSP430FR5
  • [4] [Anonymous], TSL2560
  • [5] RLMan: An Energy Manager Based on Reinforcement Learning for Energy Harvesting Wireless Sensor Networks
    Aoudia, Faycal Ait
    Gautier, Matthieu
    Berder, Olivier
    [J]. IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING, 2018, 2 (02): : 408 - 417
  • [6] Fuzzy Power Management for Energy Harvesting Wireless Sensor Nodes
    Aoudia, Faycal Ait
    Gautier, Matthieu
    Berder, Olivier
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2016, : 657 - 662
  • [7] Hibernus: Sustaining Computation During Intermittent Supply for Energy-Harvesting Systems
    Balsamo, Domenico
    Weddell, Alex S.
    Merrett, Geoff V.
    Al-Hashimi, Bashir M.
    Brunelli, Davide
    Benini, Luca
    [J]. IEEE EMBEDDED SYSTEMS LETTERS, 2015, 7 (01) : 15 - 18
  • [8] Buchli Bernhard, 2014, Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems, P31, DOI [DOI 10.1145/2668332.2668333, 10.1145/2668332.2668333]
  • [9] Colin A, 2018, ACM SIGPLAN NOTICES, V53, P767, DOI [10.1145/3296957.3173210, 10.1145/3173162.3173210]
  • [10] Chain: Tasks and Channels for Reliable Intermittent Programs
    Colin, Alexei
    Lucia, Brandon
    [J]. ACM SIGPLAN NOTICES, 2016, 51 (10) : 514 - 530