Short-Term Electricity Futures Investment Strategies for Power Producers Based on Multi-Agent Deep Reinforcement Learning

Times Cited: 0
Authors
Wang, Yizheng [1 ]
Shi, Enhao [2 ]
Xu, Yang [3 ]
Hu, Jiahua [3 ]
Feng, Changsen [2 ]
Affiliations
[1] Zhejiang Elect Power Co, Econ Res Inst State Grid, Hangzhou 310000, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou 310023, Peoples R China
[3] State Grid Zhejiang Elect Power Co Ltd, Hangzhou 310000, Peoples R China
Keywords
electricity futures; price risk mitigation; power producer; multi-agent deep reinforcement learning; portfolio strategies; markets
DOI
10.3390/en17215350
CLC classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline codes
0807; 0820
Abstract
The global development and enhancement of electricity financial markets aim to mitigate price risk in the electricity spot market. Power producers use financial derivatives for both hedging and speculation, which requires careful selection of portfolio strategies. Current research on investment strategies for power financial derivatives primarily emphasizes risk management and lacks a comprehensive investment framework. This study analyzes six short-term electricity futures contracts: base day, base week, base weekend, peak day, peak week, and peak weekend. A multi-agent deep reinforcement learning algorithm, Dual-Q MADDPG, is employed to learn from interactions with both the spot and futures market environments, capturing the hedging and speculative behaviors of power producers. Once the model is trained, the algorithm allows power producers to derive optimal portfolio strategies. Numerical experiments on the Nordic electricity spot and futures markets indicate that the proposed Dual-Q MADDPG algorithm effectively reduces price risk in the spot market while generating substantial speculative returns. This study helps lower barriers for power producers entering the power finance market, thereby facilitating the widespread adoption of financial instruments and enhancing market liquidity and stability.
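The abstract's dual objective — damping spot-revenue volatility through futures positions while also earning speculative trading profit — can be illustrated with a toy sketch. Everything below is a hypothetical illustration under stated assumptions: the synthetic price series, the variance-penalty hedging reward, the mark-to-market speculative reward, and the random search standing in for the learned policy are all invented for exposition; none of it reproduces the paper's Dual-Q MADDPG algorithm or its market data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic prices for a spot market and six short-term futures products
# (base/peak x day/week/weekend). Futures loosely track spot so that a
# short futures position can offset spot-price swings.
T, N = 250, 6
spot = 40.0 + 8.0 * rng.standard_normal(T)
futures = spot[:, None] + 3.0 * rng.standard_normal((T, N))

gen = 100.0  # MWh the producer sells on the spot market each period

def dual_rewards(w):
    """Return (hedging_reward, speculative_reward) for fixed weights w.

    w[i] >= 0 is the short position in product i as a fraction of
    generation; selling futures hedges a producer's spot exposure."""
    pnl = (futures[:-1] - futures[1:]) @ w * gen   # short-futures P&L
    revenue = gen * spot[1:] + pnl                 # hedged spot revenue
    return -revenue.std(), pnl.mean()              # (stability, profit)

def objective(w, lam=0.5):
    # Weighted combination of the two reward streams; lam trades off
    # hedging (risk reduction) against speculation (expected profit).
    h, s = dual_rewards(w)
    return lam * h + (1.0 - lam) * s

# Stand-in for the learned policy: greedy random search over weights.
best_w = np.zeros(N)
best_obj = objective(best_w)
for _ in range(2000):
    cand = np.clip(best_w + 0.05 * rng.standard_normal(N), 0.0, 1.0)
    if (o := objective(cand)) > best_obj:
        best_w, best_obj = cand, o

unhedged_vol = (gen * spot[1:]).std()
hedged_vol = -dual_rewards(best_w)[0]
```

The point of the sketch is the reward decomposition: a stability term (negative revenue volatility) and a profit term (futures P&L) are optimized jointly, which is the same hedging-versus-speculation trade-off the abstract attributes to the dual-critic design.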
Pages: 23