Out-of-Distribution-Aware Electric Vehicle Charging

Times Cited: 0
Authors
Li, Tongxin [1]
Sun, Chenxi [2]
Affiliations
[1] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518172, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electric vehicle charging; Heuristic algorithms; Robustness; Costs; Charging stations; Dynamic scheduling; Uncertainty; Electric vehicles (EVs); model predictive control (MPC); scheduling; PREDICTIVE CONTROL; STRATEGY;
DOI
10.1109/TTE.2024.3434750
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
We tackle the challenge of learning to charge electric vehicles (EVs) with out-of-distribution (OOD) data. Traditional scheduling algorithms typically fail to balance near-optimal average performance with worst-case guarantees, particularly on OOD data. Model predictive control (MPC) is often too conservative and data-independent, whereas reinforcement learning (RL) tends to be overly aggressive and fully trusts the data, so neither consistently achieves the best of both worlds. To bridge this gap, we introduce a novel OOD-aware scheduling algorithm, denoted OOD-Charging. This algorithm uses a dynamic "awareness radius," updated in real time based on the temporal difference (TD) error, which reflects the severity of the distribution shift. The OOD-Charging algorithm allows for a more effective balance between consistency and robustness in EV charging schedules, thereby significantly enhancing adaptability and efficiency in real-world charging environments. Our results demonstrate that this approach reliably improves the scheduling reward under real OOD scenarios with substantial shifts in EV charging behavior caused by COVID-19, using the Caltech adaptive charging network (ACN)-Data.
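The abstract does not give the concrete update rule, but the mechanism it describes, an awareness radius that shrinks when the TD error signals OOD data and otherwise relaxes, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual OOD-Charging algorithm: the function names, the exponential radius update, and the parameters `eta`, `r_min`, and `r_max` are all hypothetical.

```python
import numpy as np

def td_error(reward, value, next_value, gamma=0.99):
    """One-step TD error; a large magnitude suggests the learned value
    function no longer matches the data, i.e., the input may be OOD."""
    return reward + gamma * next_value - value

def update_radius(radius, delta, eta=0.1, r_min=0.0, r_max=1.0):
    """Hypothetical rule: shrink the awareness radius when |TD error|
    exceeds a tolerance of 1.0 (trust the RL policy less), grow it when
    predictions are consistent with observations."""
    radius = radius * np.exp(-eta * (abs(delta) - 1.0))
    return float(np.clip(radius, r_min, r_max))

def blended_action(a_mpc, a_rl, radius):
    """Project the RL charging action onto a ball of the given radius
    around the robust MPC action, trading consistency for robustness."""
    diff = a_rl - a_mpc
    norm = np.linalg.norm(diff)
    if norm <= radius:
        return a_rl  # RL action already within the trust region
    return a_mpc + radius * diff / norm  # clip toward the MPC action
```

A zero radius recovers pure MPC (maximal robustness), while a large radius defers entirely to the RL policy (maximal consistency with the data).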
Pages: 3114-3124
Number of Pages: 11