Optimal Tap Setting of Voltage Regulation Transformers Using Batch Reinforcement Learning

Cited by: 103
Authors
Xu, Hanchen [1 ]
Dominguez-Garcia, Alejandro D. [1 ]
Sauer, Peter W. [1 ]
Affiliations
[1] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
Keywords
Voltage measurement; voltage control; power distribution; reinforcement learning; Markov processes; reactive power; on-load tap changers; voltage regulation; load tap changer; data-driven; Markov decision process; demand response; systems
DOI
10.1109/TPWRS.2019.2948132
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
In this paper, we address the problem of setting the tap positions of load tap changers (LTCs) for voltage regulation in power distribution systems. The objective is to find a policy that maps measurements of voltage magnitudes and topology information to LTC tap-ratio changes so as to minimize the voltage deviation across the system. We formulate this problem as a Markov decision process (MDP) and propose a data-efficient and computationally efficient batch reinforcement learning (RL) algorithm to solve it. To circumvent the "curse of dimensionality" resulting from the large state and action spaces, we propose a sequential learning algorithm that learns an action-value function for each LTC, from which the optimal tap positions can be directly determined. By taking advantage of a linearized power flow model, we propose an algorithm that estimates the voltage magnitudes under different tap settings, which allows the RL algorithm to explore the state and action spaces freely offline without impacting system operation. The effectiveness of the proposed algorithm is validated via numerical simulations on the IEEE 13-bus and 123-bus distribution test feeders.
Pages: 1990-2001
Page count: 12
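
To make the approach described in the abstract concrete, below is a minimal Python sketch of the two ingredients it names: a linearized power-flow surrogate used to estimate voltage magnitudes under different tap settings offline, and a batch RL loop (fitted Q-iteration, in the spirit of Ernst et al., 2005) that learns an action-value function for a single LTC. This is an illustration under stated assumptions, not the authors' implementation: the network size, the linear sensitivity model v = v_load + s*(tap - 1), the tap-ratio grid, the load-drift process, the linear Q-function features, and the squared-deviation reward are all invented for the example.

# Minimal batch-RL sketch for single-LTC tap setting (assumptions noted in comments).
import numpy as np

rng = np.random.default_rng(0)

N_BUS = 5                                  # number of monitored buses (assumed)
TAPS = np.arange(0.9, 1.1001, 0.0125)      # tap-ratio grid: +/-10% in 5/8% steps (assumed)
N_ACT = len(TAPS)
SENS = rng.uniform(0.5, 1.0, N_BUS)        # assumed dV/d(tap) sensitivity of each bus
GAMMA = 0.9                                # discount factor (assumed)

def voltages(v_load, tap):
    """Linearized surrogate: load-driven voltage plus a linear tap contribution."""
    return v_load + SENS * (tap - 1.0)

def reward(v):
    """Negative sum of squared deviations from 1.0 p.u. (assumed objective)."""
    return -np.sum((v - 1.0) ** 2)

def sample_batch(n):
    """Explore tap settings offline against the surrogate, never the real feeder."""
    S, A, R, S2 = [], [], [], []
    v_load = rng.uniform(0.93, 1.0, N_BUS)
    tap = 1.0
    for _ in range(n):
        s = voltages(v_load, tap)
        a = int(rng.integers(N_ACT))                     # uniform random exploration
        tap = TAPS[a]
        v_load = v_load + rng.normal(0.0, 0.002, N_BUS)  # slow load drift (assumed)
        s2 = voltages(v_load, tap)
        S.append(s); A.append(a); R.append(reward(s2)); S2.append(s2)
    return np.array(S), np.array(A), np.array(R), np.array(S2)

def phi(S):
    """Linear features for the action-value function: Q(s, a) = [s, 1] @ W[:, a]."""
    return np.hstack([S, np.ones((len(S), 1))])

# Fitted Q-iteration on a fixed batch of offline transitions.
S, A, R, S2 = sample_batch(5000)
W = np.zeros((N_BUS + 1, N_ACT))
for _ in range(50):
    y = R + GAMMA * (phi(S2) @ W).max(axis=1)  # Bellman targets under current Q
    for a in range(N_ACT):                     # one regression per discrete tap position
        m = A == a
        W[:, a], *_ = np.linalg.lstsq(phi(S[m]), y[m], rcond=None)

# Greedy tap choice for a newly observed voltage profile.
v_obs = voltages(rng.uniform(0.93, 1.0, N_BUS), 1.0)
best = TAPS[int(np.argmax(phi(v_obs[None, :]) @ W))]
print(f"suggested tap ratio: {best:.4f}")

Fitting one regressor per discrete tap position keeps the maximization over actions trivial, and the linear features are a deliberately crude approximator chosen for brevity. The surrogate model plays the role the abstract assigns to the linearized power flow model, so exploration never touches the real system; extending this to multiple LTCs in the sequential fashion the abstract describes would repeat the loop per LTC while holding the others fixed.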