Optimization of Peer-to-Peer Energy Trading With a Model-Based Deep Reinforcement Learning in a Non-Sharing Information Scenario

Cited by: 1
Authors
Uthayansuthi, Nat [1 ]
Vateekul, Peerapon [1 ]
Affiliations
[1] Chulalongkorn Univ, Fac Engn, Dept Comp Engn, Pathumwan 10330, Bangkok, Thailand
Keywords
Model-based deep reinforcement learning; multi-agent deep reinforcement learning; peer-to-peer energy trading; non-sharing information
DOI
10.1109/ACCESS.2024.3442445
CLC Classification Number
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
In the realm of sustainable energy distribution, peer-to-peer (P2P) trading within microgrids has emerged as a promising solution, fostering decentralization and efficiency. While previous studies have focused on optimizing P2P trading, they often relied on the impractical assumption that prosumers share private information. To overcome this limitation, we aim to optimize P2P energy trading within the microgrid under a realistic non-sharing-information assumption, using our proposed model-based multi-agent deep reinforcement learning model. First, our framework integrates long short-term memory (LSTM) networks into the policy model. Second, the model-based component uses temporal fusion transformers (TFT) to forecast 24-hour-ahead net load consumption. Third, global horizontal irradiance (GHI) is added as an input feature to the model. Finally, a clustering technique segments a large number of households into small household groups. The experiment was conducted on the Ausgrid dataset, consisting of 300 households in Sydney, Australia. Results demonstrate that our model achieved 4.20% and 3.95% lower microgrid electricity costs than MADDPG and A3C3, respectively, both of which rely on information sharing. Moreover, it achieved 12.48% lower costs than trading energy directly with the utility grid.
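The abstract's final step, clustering households into small groups before training agents per group, can be sketched with a minimal k-means over daily net-load profiles. This is an illustration only: the specific algorithm details, the 24-hour feature vectors, the farthest-first initialisation, and the cluster count are assumptions, not the paper's exact configuration.

```python
import numpy as np

def cluster_households(profiles, k, iters=20):
    """Group daily net-load profiles (households x 24 hours) into k clusters
    with Lloyd's k-means, using a deterministic farthest-first initialisation."""
    centers = [profiles[0]]
    for _ in range(k - 1):
        # next centre = the profile farthest from all centres chosen so far
        dists = np.min([((profiles - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(profiles[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(iters):
        # assign every household to its nearest centroid
        labels = np.argmin(
            ((profiles[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # move each centroid to the mean of its assigned profiles
        centers = np.array([
            profiles[labels == j].mean(0) if (labels == j).any() else centers[j]
            for j in range(k)])
    return labels, centers

# illustrative data: 30 households drawn from three consumption regimes
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.2, (10, 24)) for m in (0.5, 1.5, 3.0)])
labels, centers = cluster_households(data, k=3)
```

Segmenting 300 households this way would let each small group be trained as its own multi-agent environment, keeping the joint action space tractable.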
Pages: 111021-111034
Page count: 14