Deep Neural Network-Based Surrogate Model for Optimal Component Sizing of Power Converters Using Deep Reinforcement Learning

Cited by: 12
Authors
Bui, Van-Hai [1 ]
Chang, Fangyuan [1 ]
Su, Wencong [1 ]
Wang, Mengqi [1 ]
Murphey, Yi Lu [1 ]
Da Silva, Felipe Leno [2 ]
Huang, Can [2 ]
Xue, Lingxiao [3 ]
Glatt, Ruben [2 ]
Affiliations
[1] Univ Michigan, Coll Engn & Comp Sci, Dept Elect & Comp Engn, Dearborn, MI 48128 USA
[2] Lawrence Livermore Natl Lab LLNL, Livermore, CA 94550 USA
[3] Oak Ridge Natl Lab ORNL, Oak Ridge, TN 37830 USA
Keywords
Optimization; Mathematical models; Training; Topology; Computational modeling; Switches; Semiconductor device modeling; Component sizing; deep reinforcement learning; deep neural networks; optimal design parameters; optimization; power converters; surrogate model; OPTIMIZATION; EFFICIENCY; DESIGN; FREQUENCY; ALGORITHM; PFC;
DOI
10.1109/ACCESS.2022.3194267
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline classification code
0812;
Abstract
The optimal design of power converters typically requires a large number of simulations and numerical analyses to determine the optimal parameters, a process that is time-consuming and computationally expensive. This paper therefore proposes a deep reinforcement learning (DRL)-based optimization algorithm that optimizes the design parameters of power converters using a deep neural network (DNN)-based surrogate model. The surrogate model quickly estimates power efficiency from the input design parameters without requiring any simulation. The proposed optimization framework consists of two major steps. In the first step, the surrogate model is trained offline on a large dataset. In the second step, a soft actor-critic (SAC)-based optimization agent interacts with the trained surrogate model to determine the optimal values of the design parameters. Unlike deep Q-learning-based methods, the proposed method can handle large state and action spaces. In addition, by using entropy-regularized reinforcement learning, it accelerates and stabilizes the learning process and prevents the search from becoming trapped in local optima. Finally, to demonstrate the effectiveness of the proposed method, the performance of different optimization algorithms is compared across more than ten power converter topologies.
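The abstract's two-step workflow (offline surrogate training, then surrogate-in-the-loop parameter search) can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the network sizes, parameter ranges, placeholder data, and the random-search stand-in for the paper's soft actor-critic agent are illustrative assumptions, not the authors' implementation.

# Minimal, hypothetical sketch of the two-step workflow from the abstract.
# Assumptions: PyTorch, illustrative layer sizes, normalized design parameters,
# placeholder data, and a random search standing in for the SAC agent.
import torch
import torch.nn as nn

# Step 1: DNN surrogate mapping converter design parameters
# (e.g., switching frequency, inductance, capacitance) to power efficiency.
class EfficiencySurrogate(nn.Module):
    def __init__(self, n_params: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),  # efficiency in (0, 1)
        )

    def forward(self, x):
        return self.net(x)

def train_surrogate(model, designs, efficiencies, epochs=200, lr=1e-3):
    """Offline supervised training on a pre-collected simulation dataset."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(designs), efficiencies)
        loss.backward()
        opt.step()
    return model

# Step 2 (stand-in): search the design space against the trained surrogate.
# The paper uses a soft actor-critic agent here; a simple random search is
# shown only to make the surrogate-in-the-loop interaction concrete.
def optimize_design(model, n_params, n_samples=10_000):
    candidates = torch.rand(n_samples, n_params)       # normalized parameters
    with torch.no_grad():
        predicted_eff = model(candidates).squeeze(-1)  # no simulation needed
    best = torch.argmax(predicted_eff)
    return candidates[best], predicted_eff[best].item()

if __name__ == "__main__":
    n_params = 6                                   # e.g., six design variables
    designs = torch.rand(5_000, n_params)          # placeholder dataset
    efficiencies = torch.rand(5_000, 1)            # placeholder labels
    surrogate = train_surrogate(EfficiencySurrogate(n_params), designs, efficiencies)
    best_design, best_eff = optimize_design(surrogate, n_params)
    print(best_design, best_eff)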
Pages: 78702-78712
Number of pages: 11