Deep reinforcement learning control of electric vehicle charging in the presence of photovoltaic generation

Cited by: 75
Authors
Dorokhova, Marina [1 ]
Martinson, Yann [1 ]
Ballif, Christophe [1 ]
Wyrsch, Nicolas [1 ]
Affiliations
[1] Ecole Polytechnique Federale de Lausanne (EPFL), Photovoltaics and Thin-Film Electronics Laboratory (PV-Lab), Institute of Microengineering (IMT), Rue Maladiere 71b, CH-2000 Neuchatel, Switzerland
Keywords
Electric vehicles; EV charging; Model-free control; PV self-consumption; Reinforcement learning; State-of-charge
DOI
10.1016/j.apenergy.2021.117504
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Subject classification codes
0807; 0820
Abstract
In recent years, the importance of electric mobility has increased in response to climate change. The fast-growing deployment of electric vehicles (EVs) worldwide is expected to decrease transportation-related CO2 emissions, facilitate the integration of renewables, and support the grid through demand-response services. At the same time, inadequate EV charging patterns can lead to undesirable effects in grid operation, such as high peak loads or low self-consumption of solar electricity, calling for novel methods of control. This work focuses on applying deep reinforcement learning (RL) to the EV charging control problem with the objectives of increasing photovoltaic self-consumption and the EV state of charge at departure. In particular, we propose mathematical formulations of environments with discrete, continuous, and parametrized action spaces, together with the respective deep RL algorithms to solve them. Benchmarking the deep RL control against naive, rule-based, deterministic-optimization, and model-predictive control demonstrates that the suggested methodology can produce consistent and employable EV charging strategies, while its performance holds great promise for real-time implementations.
Pages: 17
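The abstract describes a model-free formulation in which an RL agent schedules EV charging so as to raise PV self-consumption while reaching a high state of charge at departure. As a rough illustration of the discrete-action variant of such a problem, the minimal sketch below implements a toy charging-session environment and evaluates a naive PV-following rule in it. The class name EVChargingEnv, the triangular PV profile, the 15-minute step length, and the reward weights are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a discrete-action EV charging environment in the spirit of
# the paper's problem setting. All names, signal shapes, and reward weights
# below are illustrative assumptions, not the authors' formulation or code.

import random


class EVChargingEnv:
    """Toy episodic environment: one charging session split into time steps.

    State  : (state of charge, normalized PV power, fraction of steps remaining)
    Action : 0 = idle, 1 = charge at fixed power (discrete-action variant)
    Reward : rewards covering the charging power with local PV generation
             (self-consumption), penalizes grid import and a low state of
             charge at departure.
    """

    def __init__(self, horizon=48, charger_kw=11.0, battery_kwh=40.0, dt_h=0.25):
        self.horizon = horizon          # number of 15-minute steps in the session
        self.charger_kw = charger_kw    # fixed charging power when action == 1
        self.battery_kwh = battery_kwh  # usable battery capacity
        self.dt_h = dt_h                # step length in hours
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = random.uniform(0.2, 0.5)  # arrival state of charge
        # Illustrative triangular PV profile peaking mid-session at charger power.
        self.pv_kw = [max(0.0, self.charger_kw * (1 - abs(2 * k / self.horizon - 1)))
                      for k in range(self.horizon)]
        return self._obs()

    def _obs(self):
        return (self.soc,
                self.pv_kw[self.t] / self.charger_kw,
                (self.horizon - self.t) / self.horizon)

    def step(self, action):
        p_charge = self.charger_kw if action == 1 else 0.0
        pv_used = min(p_charge, self.pv_kw[self.t])   # charging power covered by PV
        grid_import = p_charge - pv_used              # remainder drawn from the grid
        self.soc = min(1.0, self.soc + p_charge * self.dt_h / self.battery_kwh)

        # Step reward: favor PV self-consumption, penalize grid import (kWh-scaled).
        reward = 0.1 * pv_used * self.dt_h - 0.05 * grid_import * self.dt_h

        self.t += 1
        done = self.t >= self.horizon
        if done:
            # Terminal penalty for falling short of a full battery at departure.
            reward -= 5.0 * (1.0 - self.soc)
        return (None if done else self._obs()), reward, done


if __name__ == "__main__":
    env = EVChargingEnv()
    obs, total, done = env.reset(), 0.0, False
    while not done:
        soc, pv_norm, _ = obs
        action = 1 if (pv_norm > 0.3 and soc < 1.0) else 0  # naive PV-following rule
        obs, r, done = env.step(action)
        total += r
    print(f"episode return under the naive rule: {total:.2f}")
```

The continuous and parametrized-action variants mentioned in the abstract would replace the binary action with a power setpoint (or a discrete mode plus a continuous parameter), leaving the reward structure of such a sketch unchanged; an RL agent would then be trained against the environment in place of the hand-written rule used here.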
Related Papers
48 items in total
[41] Vazquez-Canteli, J. R.; Nagy, Z. Reinforcement learning for demand response: A review of algorithms and modeling techniques. Applied Energy, 2019, 235: 1072-1089.
[42] Wan, Z. 2018 IEEE International Conference on Communications (ICC), 2018, p. 1. DOI: 10.1109/ICC.2018.8422631.
[43] Wan, Z.; Li, H.; He, H.; Prokhorov, D. Model-Free Real-Time EV Charging Scheduling Based on Deep Reinforcement Learning. IEEE Transactions on Smart Grid, 2019, 10(5): 5246-5257.
[44] Xiong, J. Parametrized Deep Q-Networks Learning: Reinforcement Learning with Discrete-Continuous Hybrid Action Space. arXiv preprint, 2018.
[45] Ye, Y.; Qiu, D.; Wu, X.; Strbac, G.; Ward, J. Model-Free Real-Time Autonomous Control for a Residential Multi-Energy System Using Deep Reinforcement Learning. IEEE Transactions on Smart Grid, 2020, 11(4): 3068-3082.
[46] Yu, L.; Xie, W.; Xie, D.; Zou, Y.; Zhang, D.; Sun, Z.; Zhang, L.; Zhang, Y.; Jiang, T. Deep Reinforcement Learning for Smart Home Energy Management. IEEE Internet of Things Journal, 2020, 7(4): 2751-2762.
[47] Zhang, F.; Yang, Q.; An, D. CDDPG: A Deep-Reinforcement-Learning-Based Approach for Electric Vehicle Charging Control. IEEE Internet of Things Journal, 2021, 8(5): 3075-3087.
[48] Zuo, S. X. 2017 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2017), 2017, p. 2450. DOI: 10.1109/ROBIO.2017.8324787.