Reinforcement learning for the optimization of electric vehicle virtual power plants

Cited by: 17
Author
Al-Gabalawy, Mostafa [1]
Affiliation
[1] Future Univ Egypt, Fac Comp & Informat Technol, Dept Digital Media Technol, Cairo, Egypt
Keywords
distributed energy resources; electric vehicle; machine learning; optimization; reinforcement learning; virtual power plants; BIDDING STRATEGY; MARKETS; INTEGRATION; ENERGY; FLEETS; MODEL
DOI
10.1002/2050-7038.12951
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject classification codes
0808; 0809
Abstract
Integrating weather-dependent renewable energy sources into the electricity system imposes challenges on the power grid. Balancing services are needed, which can be provided by virtual power plants (VPP) that aggregate distributed energy resources (DER) to consume or produce electricity on demand. Electric vehicle (EV) fleets can use idle cars' batteries as combined storage to offer balancing services on smart electricity markets. However, this business model extension carries risks. The fleet faces severe imbalance penalties if it cannot charge the offered amount of balancing energy because of the vehicles' unpredictable mobility demand, while ensuring that the fleet can fulfill all market commitments risks turning away profitable customer rentals. We study the design of a decision support system that estimates these risks, dynamically adjusts the composition of a VPP portfolio, and profitably places bids on multiple electricity markets simultaneously. Here we show that a reinforcement learning agent can optimize the VPP portfolio by learning from favorable market conditions and fleet demand uncertainties. Compared with previous research, in which the bidding risks were unknown and fleets could offer only conservative amounts of balancing power to a single market, our proposed approach increases the amount of offered balancing power by 48% to 82% and reduces the fleet's charging costs by 25%. In experiments with real-world carsharing data from 500 EVs, we found that the accuracy of mobility demand forecasting algorithms is crucial for a successful bidding strategy. Moreover, we show that recent advancements in deep reinforcement learning decrease the convergence time and improve the robustness of the results. Our results demonstrate how modern RL algorithms can be successfully used for fleet management, VPP optimization, and demand response in the smart grid.
We anticipate that DER, such as EVs, will play an essential role in providing reliable backup power for the grid and formulate market design recommendations to allow easier access to these resources.
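The abstract describes an agent that learns how much of the fleet's idle battery capacity to offer as balancing power, trading market revenue against imbalance penalties caused by unpredictable rentals. The following tabular Q-learning sketch illustrates that trade-off only; it is not the paper's implementation, and every environment detail (the discretized states, price of 1.0 per unit, penalty factor of 5.0, and the uniform departure model) is an invented assumption for illustration.

```python
import random

# Toy sketch (illustrative assumptions, not the paper's method): a tabular
# Q-learning agent chooses what fraction of an EV fleet's idle capacity to
# bid as balancing power, under uncertain vehicle departures.

ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]  # fraction of idle capacity offered
STATES = range(5)                      # discretized count of idle vehicles

def simulate_step(state, bid_fraction, rng):
    """Reward = market revenue for the offered energy minus a steep
    imbalance penalty when rentals remove vehicles and the commitment
    cannot be fully delivered. All constants are assumed, not sourced."""
    idle_capacity = state * 10.0            # kWh available (toy scaling)
    offered = bid_fraction * idle_capacity
    departures = rng.random() * 0.5         # up to 50% of idle cars rented out
    deliverable = idle_capacity * (1 - departures)
    shortfall = max(0.0, offered - deliverable)
    revenue = 1.0 * offered                 # assumed price per kWh
    penalty = 5.0 * shortfall               # penalty rate >> price
    return revenue - penalty

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        if rng.random() < epsilon:          # epsilon-greedy exploration
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[(s, i)])
        r = simulate_step(s, ACTIONS[a], rng)
        # One-step (bandit-style) update: each bid is an independent episode.
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q

q = train()
# Learned greedy policy per state. With a steep imbalance penalty, the agent
# should bid a conservative fraction rather than all idle capacity.
best = {s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[(s, i)])]
        for s in STATES}
```

This mirrors the qualitative finding summarized above: a learning agent can offer more than a fixed conservative bid while still hedging against demand uncertainty, because the penalty term teaches it how much headroom the rental process requires.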
Pages: 30
References
65 records in total
[41]  
Rummery Gavin A, 1994, On-Line Q-Learning Using Connectionist Systems, V37
[42]   Electric vehicles in imperfect electricity markets: The case of Germany [J].
Schill, Wolf-Peter.
ENERGY POLICY, 2011, 39 (10) :6178-6189
[43]  
Shi W, 2011, 2011 IEEE 29TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD), P267, DOI 10.1109/ICCD.2011.6081407
[44]   Mastering the game of Go with deep neural networks and tree search [J].
Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis.
NATURE, 2016, 529 (7587) :484-489
[45]   OR Forum-Modeling the Impacts of Electricity Tariffs on Plug-In Hybrid Electric Vehicle Charging, Costs, and Emissions [J].
Sioshansi, Ramteen.
OPERATIONS RESEARCH, 2012, 60 (03) :506-516
[46]
Sperling D., 2018, Three Revolutions: Steering Automated, Shared, and Electric Vehicles to a Better Future, DOI 10.5822/978-1-61091-906-7
[47]  
Sutton RS, 2018, ADAPT COMPUT MACH LE, P1
[48]  
Sutton RS, 1996, ADV NEUR IN, V8, P1038
[49]  
Taylor A, 2014, IEEE IJCNN, P2298, DOI 10.1109/IJCNN.2014.6889438
[50]   Using fleets of electric-drive vehicles for grid support [J].
Tomic, Jasna; Kempton, Willett.
JOURNAL OF POWER SOURCES, 2007, 168 (02) :459-468