Dynamic Pricing and Energy Consumption Scheduling With Reinforcement Learning

Cited by: 203

Authors
Kim, Byung-Gook [1 ]
Zhang, Yu [2 ]
van der Schaar, Mihaela [3 ]
Lee, Jang-Won [4 ]
Affiliations
[1] Samsung Elect, Networks Business Div, Suwon 433742, South Korea
[2] Microsoft, Online Serv Div, Sunnyvale, CA 94085 USA
[3] Univ Calif Los Angeles, Dept Elect Engn, Los Angeles, CA 90095 USA
[4] Yonsei Univ, Dept Elect & Elect Engn, Seoul 03722, South Korea
Funding
US National Science Foundation; National Research Foundation of Singapore
Keywords
Smart grid; microgrid; dynamic pricing; load scheduling; demand response; electricity market; Markov decision process; reinforcement learning; DEMAND RESPONSE MANAGEMENT; ELECTRIC VEHICLES; SIDE MANAGEMENT; SMART DEVICES; UTILITY; GRIDS; DISPATCH; MARKETS;
DOI
10.1109/TSG.2015.2495145
Chinese Library Classification (CLC)
TM [electrical engineering]; TN [electronics and communications technology]
Discipline codes
0808; 0809
Abstract
In this paper, we study a dynamic pricing and energy consumption scheduling problem in a microgrid, where the service provider acts as a broker between the utility company and customers by purchasing electric energy from the utility company and selling it to the customers. For the service provider, even though dynamic pricing is an efficient tool to manage the microgrid, its implementation is highly challenging due to the lack of customer-side information and the various types of uncertainty in the microgrid. Similarly, the customers face challenges in scheduling their energy consumption due to the uncertainty of the retail electricity price. To overcome these challenges, we develop reinforcement learning algorithms that allow both the service provider and the customers to learn their strategies without a priori information about the microgrid. Through numerical results, we show that the proposed reinforcement learning-based dynamic pricing algorithm works effectively without a priori information about the system dynamics, and that the proposed energy consumption scheduling algorithm further reduces the system cost thanks to the learning capability of each customer.
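The abstract describes learning a pricing strategy without a priori knowledge of the system dynamics, via reinforcement learning over a Markov decision process. The sketch below illustrates the general technique with plain tabular Q-learning on a toy pricing MDP; the environment (`step`), the price set, the demand discretization, and all parameters are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import random

# Toy dynamic-pricing MDP (illustrative assumption, not the paper's model):
# the provider posts one of three retail prices; the state is a coarse
# demand level that reacts to the posted price.
PRICES = [0.1, 0.2, 0.3]   # candidate retail prices ($/kWh)
STATES = [0, 1, 2]         # discretized demand level: low / mid / high

def step(state, price_idx):
    """Toy dynamics: a higher price pushes demand down, a lower price up."""
    demand = max(0, min(2, state - price_idx + 1))
    revenue = PRICES[price_idx] * (demand + 1) * 10   # energy sold
    cost = 0.08 * (demand + 1) * 10                   # wholesale purchase cost
    return demand, revenue - cost                     # next state, profit

def q_learning(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in range(len(PRICES))}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(24):  # one simulated day, hourly decisions
            # eps-greedy exploration over the price actions
            if rng.random() < eps:
                a = rng.randrange(len(PRICES))
            else:
                a = max(range(len(PRICES)), key=lambda x: Q[(s, x)])
            s2, r = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            target = r + gamma * max(Q[(s2, x)] for x in range(len(PRICES)))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
# Greedy pricing policy recovered from the learned Q-table.
policy = {s: max(range(len(PRICES)), key=lambda a: Q[(s, a)]) for s in STATES}
```

The same update rule applies on the customer side with consumption schedules as actions, which is the structure the paper exploits: neither agent needs the transition model, only observed rewards.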
Pages: 2187-2198
Page count: 12
Related Papers
50 records in total
  • [41] Sellers' Pricing By Bayesian Reinforcement Learning
    Han, Wei
    2009 INTERNATIONAL CONFERENCE ON E-BUSINESS AND INFORMATION SYSTEM SECURITY, VOLS 1 AND 2, 2009, : 1276 - 1280
  • [42] A Reinforcement Learning Algorithm for Dynamic Job Shop Scheduling
    Alcamo, Laura
    Bruno, Giulia
    Giovenali, Niccolo
    INNOVATIVE INTELLIGENT INDUSTRIAL PRODUCTION AND LOGISTICS, IN4PL 2024, PT I, 2025, 2372 : 350 - 366
  • [43] Geometric deep reinforcement learning for dynamic DAG scheduling
    Grinsztajn, Nathan
    Beaumont, Olivier
    Jeannot, Emmanuel
    Preux, Philippe
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 258 - 265
  • [44] Offline Deep Reinforcement Learning for Dynamic Pricing of Consumer Credit
    Khraishi, Raad
    Okhrati, Ramin
    3RD ACM INTERNATIONAL CONFERENCE ON AI IN FINANCE, ICAIF 2022, 2022, : 325 - 333
  • [45] Reinforcement learning in deregulated energy market: A comprehensive review
    Zhu, Ziqing
    Hu, Ze
    Chan, Ka Wing
    Bu, Siqi
    Zhou, Bin
    Xia, Shiwei
    APPLIED ENERGY, 2023, 329
  • [46] Transferable Adversarial Attack Against Deep Reinforcement Learning-Based Smart Grid Dynamic Pricing System
    Ren, Yan
    Zhang, Heng
    Yang, Wen
    Li, Ming
    Zhang, Jian
    Li, Hongran
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (06) : 9015 - 9025
  • [47] Online scheduling of plug-in vehicles in dynamic pricing schemes
    Arif, A. I.
    Babar, M.
    Ahamed, T. P. Imthias
    Al-Ammar, E. A.
    Nguyen, P. H.
    Kamphuis, I. G. Rene
    Malik, N. H.
    SUSTAINABLE ENERGY GRIDS & NETWORKS, 2016, 7 : 25 - 36
  • [48] Dynamic Pricing Strategy of Electric Vehicle Aggregators Based on DDPG Reinforcement Learning Algorithm
    Liu, Dunnan
    Wang, Weiye
    Wang, Lingxiang
    Jia, Heping
    Shi, Mengshu
    IEEE ACCESS, 2021, 9 : 21556 - 21566
  • [49] Dynamic Reinforcement Learning based Scheduling for Energy-Efficient Edge-Enabled LoRaWAN
    Mhatre, Jui
    Lee, Ahyoung
    2022 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE, IPCCC, 2022,
  • [50] A Comparison of Reinforcement Learning Based Approaches to Appliance Scheduling
    Chauhan, Namit
    Choudhary, Neha
    George, Koshy
    PROCEEDINGS OF THE 2016 2ND INTERNATIONAL CONFERENCE ON CONTEMPORARY COMPUTING AND INFORMATICS (IC3I), 2016, : 253 - 258