Development of improved reinforcement learning smart charging strategy for electric vehicle fleet

Cited by: 26
Authors
Sultanuddin, S. J. [1 ]
Vibin, R. [2 ]
Kumar, A. Rajesh [3 ]
Behera, Nihar Ranjan [4 ]
Pasha, M. Jahir [5 ]
Baseer, K. K. [6 ]
Affiliations
[1] Measi Inst Informat Technol, Chennai, India
[2] CMS Coll Engn & Technol, Elect & Elect Engn, Coimbatore, Tamil Nadu, India
[3] NSN Coll Engn & Technol, Comp Sci & Engn, Karur, Tamil Nadu, India
[4] Swiss Sch Business & Management Geneva, Ave Morgines 12, CH-1213 Petit Lancy, Switzerland
[5] G Pullaiah Coll Engn & Technol, Dept Comp Sci & Engn, Kurnool, Andhra Pradesh, India
[6] Mohan Babu Univ, Erstwhile Sree Vidyanikethan Engn Coll, Sch Comp, Tirupati, Andhra Pradesh, India
Keywords
Electric vehicle; Smart charging; Reinforcement learning; Power grid; Optimization
DOI
10.1016/j.est.2023.106987
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline codes
0807; 0820
Abstract
Owing to their environmental and energy-sustainability benefits, electric vehicles (EVs) have emerged as a preferred option in the current transportation system. Uncontrolled EV charging, however, can raise consumers' charging costs and overwhelm the grid. Smart charging coordination systems are therefore required to prevent the grid overload caused by charging too many electric vehicles at once. Taking into account the baseload already present in the power grid, this research proposes an improved reinforcement learning charging management system. Conventional optimization methods, by contrast, require advance knowledge, such as each vehicle's departure time and the energy it will need on arrival at the charging station. Under realistic operating conditions, the improved reinforcement learning method with a double deep Q-learning approach therefore provides an adjustable, scalable, and flexible strategy for an electric vehicle fleet. By decoupling action selection from action evaluation, the proposed approach mitigates the action-value over-estimation problem of deep Q-learning. A number of different charging strategies are then compared against the reinforcement learning algorithm. The proposed reinforcement learning technique reduces the variance of the overall load by 68 % compared with an uncontrolled charging strategy.
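To illustrate the double deep Q-learning target the abstract alludes to, here is a minimal sketch using tabular stand-ins for the online and target networks. The state/action encoding, the load-flattening reward, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the double deep Q-learning update, assuming a
# discretized EV-charging MDP: states encode e.g. (time slot, battery SoC,
# grid baseload) and actions are charging-power levels. All names, shapes,
# and values below are hypothetical.

rng = np.random.default_rng(0)
n_states, n_actions = 48, 5   # e.g. 48 half-hour slots, 5 power levels
gamma = 0.99                  # discount factor

# Two value estimators, as in double DQN: an online network (updated every
# step) and a target network (periodically synchronized), represented here
# as Q-tables for brevity.
q_online = rng.normal(size=(n_states, n_actions))
q_target = q_online.copy()

def double_q_target(reward, next_state, done):
    """Double DQN target: the ONLINE estimator selects the greedy action,
    the TARGET estimator evaluates it. Decoupling selection from evaluation
    is what curbs the over-estimation bias of vanilla deep Q-learning."""
    if done:
        return reward
    best_action = int(np.argmax(q_online[next_state]))          # selection
    return reward + gamma * q_target[next_state, best_action]   # evaluation

# One illustrative update step. A load-flattening reward could penalize the
# squared deviation of total load (baseload + charging power) from its mean,
# which is one way to pursue the variance reduction the abstract reports.
state, action, next_state = 3, 2, 4
baseload, charge_power, mean_load = 30.0, 7.4, 33.0   # kW, hypothetical
reward = -((baseload + charge_power - mean_load) ** 2)

alpha = 0.1                                           # learning rate
td_target = double_q_target(reward, next_state, done=False)
q_online[state, action] += alpha * (td_target - q_online[state, action])
```

In a deep variant the two tables would be neural networks, with the target network's weights copied from the online network at a fixed interval; the target computation itself is unchanged.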
Pages: 9