SFNAS-DDPG: A Biomass-Based Energy Hub Dynamic Scheduling Approach via Connecting Supervised Federated Neural Architecture Search and Deep Deterministic Policy Gradient

Cited by: 1
Authors
Dolatabadi, Amirhossein [1 ]
Abdeltawab, Hussein [2 ]
Mohamed, Yasser Abdel-Rady I. [1 ]
Affiliations
[1] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6G 2V4, Canada
[2] Wake Forest Univ, Dept Engn, Winston Salem, NC 27101 USA
Author Keywords
Actor-critic deep reinforcement learning; biomass energy; energy hub; federated learning (FL); neural architecture search (NAS)
Keywords Plus
POWER; SYSTEM; BIOGAS; SOLAR; OPERATION; MODEL; MANAGEMENT; STRATEGY
DOI
10.1109/ACCESS.2024.3352032
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
The transition to a near-zero-emission power and energy industry in response to global warming is driven by renewable energy resources such as bioenergy and solar energy. Coordinating these resources within an energy hub framework increases the system's flexibility and provides dispatchable energy by enhancing the share of renewable-dominated power. This paper proposes a dynamic scheduling framework for an energy hub with a biomass-solar hybrid renewable system. First, a hybrid forecasting model based on convolutional neural networks (CNNs) and gated recurrent units (GRUs) is developed to capture solar-related uncertainty, giving the learning-based controller the information it needs to determine an effective operation strategy, especially on cloudy days. Then, a supervised federated neural architecture search (SFNAS) technique is presented to eliminate the need for manual engineering of deep neural network models and the unnecessary computational burden associated with it. Finally, the deep deterministic policy gradient (DDPG), an actor-critic deep reinforcement learning (DRL) method, enables the biomass-based energy hub to derive cost-effective dynamic control strategies by treating the decision-making problem as a highly dynamic continuous state-action model. Numerical results demonstrate the effectiveness of the proposed SFNAS-DDPG method, with average operating cost reductions of up to 7.31% compared to the conventional DDPG model.
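The DDPG formulation the abstract refers to pairs a deterministic actor mu(s), which maps a continuous hub state to a continuous dispatch action, with a critic Q(s, a) that scores it, and stabilizes training with slowly tracking target networks. The toy linear models, state layout, and tau value below are illustrative assumptions, not the authors' implementation; the sketch only shows the actor/critic roles and the soft target update.

```python
import numpy as np

# Hedged sketch of the DDPG components named in the abstract (not the
# authors' code). The linear actor/critic and the state layout
# [solar forecast, load, price, storage level] are assumptions.

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 1   # continuous state -> continuous dispatch action
tau = 0.005                    # soft-update rate for target networks (assumed)

# Toy linear parameters for actor mu(s) and critic Q(s, a)
W_actor = rng.normal(size=(action_dim, state_dim))
W_critic = rng.normal(size=(state_dim + action_dim,))
W_actor_target = W_actor.copy()
W_critic_target = W_critic.copy()

def actor(s, W):
    # Deterministic policy: tanh bounds the dispatch action to [-1, 1]
    return np.tanh(W @ s)

def critic(s, a, w):
    # Critic scores the state-action pair with a linear value estimate
    return w @ np.concatenate([s, a])

# Soft target update used by DDPG: theta' <- tau*theta + (1 - tau)*theta'
W_actor_target = tau * W_actor + (1 - tau) * W_actor_target
W_critic_target = tau * W_critic + (1 - tau) * W_critic_target

s = rng.normal(size=state_dim)  # one sampled hub state
a = actor(s, W_actor)           # continuous action, no discretization needed
q = critic(s, a, W_critic)      # critic's value for that action
```

The continuous state-action pairing is the point: unlike Q-learning variants, DDPG needs no discretization of the dispatch levels, which is why the paper frames scheduling as a continuous control problem.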
Pages: 7674-7688 (15 pages)
Related Papers
61 items
[1]   Hedging Strategies for Heat and Electricity Consumers in the Presence of Real-Time Demand Response Programs [J].
Alipour, Manijeh ;
Zare, Kazem ;
Zareipour, Hamidreza ;
Seyedi, Heresh .
IEEE TRANSACTIONS ON SUSTAINABLE ENERGY, 2019, 10 (03) :1262-1270
[2]  
Baker B, 2017, Arxiv, DOI arXiv:1705.10823
[3]  
Baker B, 2017, Arxiv, DOI arXiv:1611.02167
[4]   Combined Heat and Power Economic Dispatch by Using Differential Evolution [J].
Basu, M. .
ELECTRIC POWER COMPONENTS AND SYSTEMS, 2010, 38 (08) :996-1004
[5]  
Bender G, 2018, PR MACH LEARN RES, V80
[6]  
Brock A, 2017, Arxiv, DOI arXiv:1708.05344
[7]   Double Deep Q-Learning-Based Distributed Operation of Battery Energy Storage System Considering Uncertainties [J].
Bui, Van-Hai ;
Hussain, Akhtar ;
Kim, Hak-Man .
IEEE TRANSACTIONS ON SMART GRID, 2020, 11 (01) :457-469
[8]  
Cai H, 2019, Arxiv, DOI arXiv:1812.00332
[9]   Deep Reinforcement Learning-Based Energy Storage Arbitrage With Accurate Lithium-Ion Battery Degradation Model [J].
Cao, Jun ;
Harrold, Dan ;
Fan, Zhong ;
Morstyn, Thomas ;
Healey, David ;
Li, Kang .
IEEE TRANSACTIONS ON SMART GRID, 2020, 11 (05) :4513-4521
[10]   Optimal Design and Operation of a Low Carbon Community Based Multi-Energy Systems Considering EV Integration [J].
Cao, Jun ;
Crozier, Constance ;
McCulloch, Malcolm ;
Fan, Zhong .
IEEE TRANSACTIONS ON SUSTAINABLE ENERGY, 2019, 10 (03) :1217-1226