Q-learning based energy management system on operating reserve and supply distribution

Cited by: 3
Authors
Syu, Jia-Hao [1]
Lin, Jerry Chun-Wei [2]
Fojcik, Marcin [2]
Cupek, Rafal [3]
Affiliations
[1] Natl Taiwan Univ, Taipei, Taiwan
[2] Western Norway Univ Appl Sci, Bergen, Norway
[3] Silesian Tech Univ, Gliwice, Poland
Keywords
Operating reserve; Subsidy; Energy supply distribution; Energy management system; Smart grid; Q-learning; POWER-GENERATION; COST;
DOI
10.1016/j.seta.2023.103264
Chinese Library Classification
X [Environmental Science, Safety Science];
Subject Classification Code
08; 0830;
Abstract
Due to their lower cost and predictable performance, traditional energy sources still outperform renewables. One of the most important concerns of energy management systems in 6G smart grids is therefore to plan the distribution of the energy supply and ensure adequate stability. In this study, we present a two-stage Q-learning based energy management system called 2QEMS. Using Q-tables in its agents, the system calculates subsidies for each type of energy supply and a dynamic multiplier for the operating reserve. To achieve the desired distribution of energy sources and maintain the operating reserve, the agents interact with their environment, which comprises energy suppliers, energy demanders, and an auction system. The 2QEMS operates on clear states and intuitive actions, providing a high degree of interpretability. Experiments show that the 2QEMS reduces the convergence time to 740 days with a mean absolute error (MAE) of the supply distribution of 6.8%, a 57% and 70% reduction, respectively, compared to the traditional system, and a 23% and 20% reduction, respectively, compared to state-of-the-art systems. The experiments demonstrate the effectiveness and resilience of the 2QEMS through excellent and consistent performance across a variety of scenarios.
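The abstract describes agents that maintain Q-tables over grid states and subsidy actions. The following is a minimal sketch of the underlying tabular Q-learning update only; it is not the 2QEMS implementation, and the state names, actions, and reward used here are hypothetical placeholders for illustration.

```python
# Generic tabular Q-learning step:
#   Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(q[next_state].values())       # greedy value of next state
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy setup (hypothetical): two reserve states, two subsidy-adjustment actions.
actions = ["raise_subsidy", "lower_subsidy"]
states = ["low_reserve", "ok_reserve"]
q = {s: {a: 0.0 for a in actions} for s in states}

# One illustrative transition: raising the subsidy restores the reserve.
q_update(q, "low_reserve", "raise_subsidy", reward=1.0, next_state="ok_reserve")
```

In the paper's setting the reward would instead reflect how close the realized supply distribution and operating reserve are to their targets; this sketch only shows the update rule shared by such tabular agents.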
Pages: 8