Smart techno-economic operation of electric vehicle charging station in Egypt

Cited: 24
Authors
Makeen, Peter [1 ,2 ]
Ghali, Hani A. [1 ]
Memon, Saim [2 ,3 ]
Duan, Fang [2 ]
Affiliations
[1] British Univ Egypt BUE, Fac Engn, Elect Engn Dept, El Sherouk 11837, Egypt
[2] London South Bank Univ, Sch Engn, Elect & Elect Engn Div, 103 Borough Rd, London SE1 0AA, England
[3] Arden Univ, Sch Engn, Arden House,Middlemarch Pk, Coventry CV3, England
Keywords
Aggregators; Electric vehicles; Electric vehicle charging station; DC fast charging; Lithium-ion battery; LINEAR-PROGRAMMING MODEL; PROFIT MAXIMIZATION; ENERGY SYSTEM; OPTIMIZATION; ALGORITHM; SERVICE
DOI
10.1016/j.energy.2022.126151
CLC Classification Number
O414.1 [Thermodynamics];
Subject Classification Code
Abstract
Stochastic fast charging of electric vehicles (EVs) affects the security and economic operation of the distribution power network. Aggregator awareness in the electric power industry is growing rapidly, in tandem with the growing number of EVs. This paper proposes a novel smart techno-economic operation of an electric vehicle charging station (EVCS) in Egypt, controlled by the aggregator through a hierarchical model. The upper stage of the model performs deterministic charging scheduling of the EVs to balance the generated and consumed power of the station and to flatten the surplus power supplied to the utility grid. Mixed-integer linear programming (MILP) is used to solve this first stage, in which the peak demand is reduced to 48.17% of its original value (4.5 kW) without any additional battery storage system. The second, more challenging stage is to maximize the charging station's profit while minimizing the EV charging tariff, which requires a trade-off. In this stage, MILP and Markov Decision Process Reinforcement Learning (MDP-RL) increased EVCS revenue by 28.88% and 20.10%, respectively, while raising the EV charging tariff by 21.19% and 15.03%, respectively. Hence, MDP-RL is an adequate algorithm for such a complex model. The outcomes show that the proposed hierarchical techno-economic model performs adequately compared with the normal operation reported in the literature.
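The first-stage peak-shaving scheduler described in the abstract can be sketched as a linear program: an auxiliary variable for the peak demand is minimized subject to each EV's energy requirement and availability window. This is a continuous relaxation of the paper's MILP formulation (the binary on/off charging decisions are omitted), and all numbers below are hypothetical illustration data, not values from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 EVs over 8 hourly slots (not from the paper).
T = 8
evs = [  # (arrival slot, departure slot, energy need kWh, max rate kW)
    (0, 4, 6.0, 3.0),
    (1, 6, 8.0, 3.0),
    (3, 8, 5.0, 3.0),
]
base = np.array([1.0, 1.5, 2.0, 2.5, 2.0, 1.5, 1.0, 0.5])  # station base load, kW
n = len(evs)

# Decision vector: x[i*T + t] = charging power of EV i in slot t; last entry = peak P.
nvar = n * T + 1
c = np.zeros(nvar)
c[-1] = 1.0  # objective: minimise the peak demand P

# Energy balance per EV (1-hour slots): sum over its window of x[i,t] = E_i
A_eq = np.zeros((n, nvar))
b_eq = np.zeros(n)
for i, (a, d, e, _) in enumerate(evs):
    A_eq[i, i * T + a : i * T + d] = 1.0
    b_eq[i] = e

# Peak definition per slot: base[t] + sum_i x[i,t] <= P
A_ub = np.zeros((T, nvar))
b_ub = np.zeros(T)
for t in range(T):
    for i in range(n):
        A_ub[t, i * T + t] = 1.0
    A_ub[t, -1] = -1.0
    b_ub[t] = -base[t]

# Charger rate limits and availability windows expressed as variable bounds.
bounds = []
for a, d, _, pmax in evs:
    for t in range(T):
        bounds.append((0.0, pmax if a <= t < d else 0.0))
bounds.append((0.0, None))  # peak P is free above zero

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"optimised peak demand: {res.x[-1]:.2f} kW")
```

Introducing the auxiliary peak variable turns the min-max objective into a linear one; restoring the binary charging-session decisions (e.g. via `scipy.optimize.milp`) would recover a MILP of the kind the paper solves.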
Pages: 13