Reinforcement Learning-Based Microgrid Energy Trading With a Reduced Power Plant Schedule

Cited by: 83
Authors
Lu, Xiaozhen [1 ]
Xiao, Xingyu [1 ]
Xiao, Liang [1 ,2 ,3 ]
Dai, Canhuang [1 ]
Peng, Mugen [4 ]
Poor, H. Vincent [5 ]
Affiliations
[1] Xiamen Univ, Dept Informat & Commun Engn, Xiamen 361005, Peoples R China
[2] Xiamen Univ, Dept Cybersecur, Xiamen 361005, Peoples R China
[3] Southeast Univ, Natl Mobile Commun Res Lab, Nanjing 210096, Jiangsu, Peoples R China
[4] Beijing Univ Posts & Telecommun, Minist Educ, Key Lab Universal Wireless Commun, Beijing 100876, Peoples R China
[5] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Energy trading; power plant schedule; reinforcement learning (RL); smart grids; OPTIMIZATION; EXCHANGE;
DOI
10.1109/JIOT.2019.2941498
Chinese Library Classification (CLC) number
TP [Automation and Computer Technology];
Discipline classification code
0812 ;
Abstract
With dynamic renewable energy generation and power demand, microgrids (MGs) exchange energy with each other to reduce their dependence on power plants. In this article, we present a reinforcement learning (RL)-based MG energy trading scheme that chooses the electric energy trading policy according to the predicted future renewable energy generation, the estimated future power demand, and the MG battery level. The scheme applies a deep RL-based energy trading algorithm to address the supply-demand mismatch problem in a smart grid with a large number of MGs, without relying on the renewable energy generation and power demand models of the other MGs. A performance bound on the MG utility and on the dependence on the power plant is provided. Simulation results for a smart grid with three MGs, using wind speed data from the Hong Kong Observatory and electricity prices from ISO New England, show that this scheme significantly reduces the average power plant schedule and thus increases the MG utility in comparison with a benchmark methodology.
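The decision loop the abstract describes — pick a trading action from the predicted renewable generation, the estimated demand, and the battery level, and learn from the resulting utility — can be sketched with plain tabular Q-learning. This is a minimal illustration only: the discretized 3-level state, the toy dynamics in `step`, and the reward scale are all assumptions for demonstration, not the paper's deep RL formulation or its actual utility function.

```python
import random

ACTIONS = [-1, 0, 1]  # -1: schedule the power plant (buy), 0: idle, 1: sell surplus

def step(battery, gen, demand, action):
    """Toy MG dynamics: apply generation/demand, then the trading action.

    Returns (new_battery, reward). Buying from the plant is assumed to cost
    twice the selling price, so relying on the plant is penalized.
    """
    battery = battery + gen - demand
    if action == 1 and battery > 0:          # sell the surplus energy
        reward, battery = battery, 0.0
    elif action == -1 and battery < 0:       # cover the deficit via the plant
        reward, battery = 2 * battery, 0.0   # battery < 0, so reward is negative
    else:                                    # idle: storage loss / unmet demand
        reward = -abs(battery) * 0.1
        battery = max(battery, 0.0)
    return battery, reward

def discretize(battery, gen, demand):
    """State = (battery level, predicted generation, estimated demand)."""
    return (min(int(battery), 3), int(gen), int(demand))

def train(episodes=200, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # state -> list of action values
    for _ in range(episodes):
        battery = 0.0
        gen, demand = rng.randint(0, 2), rng.randint(0, 2)
        for _ in range(24):  # one day of hourly trading decisions
            s = discretize(battery, gen, demand)
            qs = Q.setdefault(s, [0.0] * len(ACTIONS))
            # epsilon-greedy action selection
            a = rng.randrange(len(ACTIONS)) if rng.random() < eps else qs.index(max(qs))
            battery, r = step(battery, gen, demand, ACTIONS[a])
            gen, demand = rng.randint(0, 2), rng.randint(0, 2)  # next-hour estimates
            s2 = discretize(battery, gen, demand)
            q2 = max(Q.setdefault(s2, [0.0] * len(ACTIONS)))
            qs[a] += alpha * (r + gamma * q2 - qs[a])  # Q-learning update
    return Q

Q = train()
```

The paper replaces this lookup table with a deep network precisely because the joint state of generation forecasts, demand estimates, and battery levels grows too large for tabular methods in a grid with many MGs.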
Pages: 10728-10737
Number of pages: 10
Cited references
32 in total
[1] Anonymous, IEEE Internet of Things Journal.
[2] Baeyens E., 2011, Proc. IEEE Conference on Decision and Control, p. 3000.
[3] Dalal G., 2016, Proceedings of Machine Learning Research, vol. 48.
[4] Eksin C., Molavi P., Ribeiro A., Jadbabaie A., "Learning in Network Games with Incomplete Information," IEEE Signal Processing Magazine, 2013, 30(3): 30-42.
[5] Fang X., Misra S., Xue G., Yang D., "Smart Grid - The New and Improved Power Grid: A Survey," IEEE Communications Surveys and Tutorials, 2012, 14(4): 944-980.
[6] Guan C. X., 2015, Proc. IEEE Consumer Communications and Networking Conference (CCNC), p. 637, DOI 10.1109/CCNC.2015.7158054.
[7] He K. M., 2015, Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), p. 5353, DOI 10.1109/CVPR.2015.7299173.
[8] Jin C., 2018, Advances in Neural Information Processing Systems.
[9] Kim B.-G., Zhang Y., van der Schaar M., Lee J.-W., "Dynamic Pricing and Energy Consumption Scheduling With Reinforcement Learning," IEEE Transactions on Smart Grid, 2016, 7(5): 2187-2198.
[10] Lasseter R. H., "Smart Distribution: Coupled Microgrids," Proceedings of the IEEE, 2011, 99(6): 1074-1082.