Hierarchical Dynamic Power Management Using Model-Free Reinforcement Learning

Cited by: 0
Authors
Wang, Yanzhi [1 ]
Triki, Maryam
Lin, Xue [1 ]
Ammari, Ahmed C.
Pedram, Massoud [1 ]
Affiliations
[1] Univ So Calif, Dept Elect Engn, Los Angeles, CA 90089 USA
Source
PROCEEDINGS OF THE FOURTEENTH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED 2013) | 2013
Keywords
Dynamic power management; reinforcement learning; Bayesian classification;
DOI
Not available
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties emanating from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components). The goal is to facilitate savings in system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on a semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby enabling the LPM to perform even more component power optimization. In this hierarchical DPM framework, the power-latency tradeoff of each type of application can be precisely controlled based on a user-defined parameter. Experiments show average power savings of up to 31.1% compared to existing approaches.
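The component-level learning the abstract describes, i.e. temporal-difference learning on an SMDP with a user-defined power/latency weight, can be sketched roughly as follows. This is an illustrative toy, not the paper's algorithm: the state and action names, energy rates, wake-up latencies, and the exponential sojourn-time model are all hypothetical.

```python
import random

# Sketch of SMDP-style Q-learning for a component-level local power
# manager (LPM). States are component modes; actions pick a low-power
# mode on idle. The reward trades off energy against wake-up latency
# via a user-defined weight w (all numbers below are hypothetical).

STATES = ["active", "idle"]
ACTIONS = ["stay_active", "sleep", "deep_sleep"]

ENERGY = {"stay_active": 1.0, "sleep": 0.3, "deep_sleep": 0.05}   # power rate
LATENCY = {"stay_active": 0.0, "sleep": 0.5, "deep_sleep": 2.0}   # wake-up cost

def smdp_q_learning(episodes=5000, alpha=0.1, gamma=0.95,
                    epsilon=0.1, w=0.5, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "active"
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        # SMDP: the sojourn time in the chosen mode is a random duration
        tau = rng.expovariate(1.0)
        # reward: weighted (negative) energy spent plus wake-up latency
        reward = -(w * ENERGY[action] * tau + (1 - w) * LATENCY[action])
        next_state = rng.choice(STATES)
        # temporal-difference update with duration-discounted target
        target = reward + (gamma ** tau) * max(Q[(next_state, a)]
                                               for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state
    return Q
```

With the weight `w` close to 1 the learned policy favors low-energy modes regardless of latency; close to 0 it avoids wake-up latency. This mirrors the abstract's claim that a single user-defined parameter controls the per-application power-latency tradeoff.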
Pages: 170-177
Page count: 8
Related Papers
50 records total
[31] Liu, Fa-Gui; Lin, Jin-Biao; Xing, Xiao-Yong; Wang, Bin; Lin, Jun. Reinforcement learning integration in dynamic power management. Journal of Applied Sciences, 2013, 13(14): 2682-2687.
[32] Triki, M.; Wang, Y.; Ammari, A. C.; Pedram, M. Hierarchical power management of a system with autonomously power-managed components using reinforcement learning. Integration, the VLSI Journal, 2015, 48: 10-20.
[33] Massi, Elisa; Barthelemy, Jeanne; Mailly, Juliane; Dromnelle, Remi; Canitrot, Julien; Poniatowski, Esther; Girard, Benoit; Khamassi, Mehdi. Model-based and model-free replay mechanisms for reinforcement learning in neurorobotics. Frontiers in Neurorobotics, 2022, 16.
[34] Swazinna, Phillip; Udluft, Steffen; Hein, Daniel; Runkler, Thomas. Comparing model-free and model-based algorithms for offline reinforcement learning. IFAC-PapersOnLine, 2022, 55(15): 19-26.
[35] Pinosky, Allison; Abraham, Ian; Broad, Alexander; Argall, Brenna; Murphey, Todd D. Hybrid control for combining model-based and model-free reinforcement learning. International Journal of Robotics Research, 2023, 42(6): 337-355.
[36] Zhang, Yahui; Li, Ganxin; Tian, Yang; Wang, Zhong; Liu, Jinfa; Gao, Jinwu; Jiao, Xiaohong; Wen, Guilin. Model-free reinforcement learning-based transient power control of vehicle fuel cell systems. Applied Energy, 2025, 388.
[37] Jing, Gangshan; Bai, He; George, Jemin; Chakrabortty, Aranya. Model-free reinforcement learning of minimal-cost variance control. IEEE Control Systems Letters, 2020, 4(4): 916-921.
[38] Alhazmi, Khalid; Sarathy, S. Mani. Adaptive phase shift control of thermoacoustic combustion instabilities using model-free reinforcement learning. Combustion and Flame, 2023, 257.
[39] Mukherjee, Sayak; Bai, He; Chakrabortty, Aranya. Model-free decentralized reinforcement learning control of distributed energy resources. 2020 IEEE Power & Energy Society General Meeting (PESGM), 2020.
[40] Beck, Edgar; Bockelmann, Carsten; Dekorsy, Armin. Model-free reinforcement learning of semantic communication by stochastic policy gradient. 2024 IEEE International Conference on Machine Learning for Communication and Networking (ICMLCN), 2024: 367-373.