Stochastic Optimal CPS Relaxed Control Methodology for Interconnected Power Systems Using Q-Learning Method

Cited by: 26
Authors
Yu, Tao [3 ]
Zhou, Bin [1 ]
Chan, Ka Wing [1 ]
Lu, En [2 ]
Affiliations
[1] Hong Kong Polytechnic University, Department of Electrical Engineering, Hong Kong SAR, China
[2] China Southern Power Grid Co., Guangdong Power Dispatching Center, Guangzhou 510600, Guangdong, China
[3] South China University of Technology, College of Electric Engineering, Guangzhou 510641, Guangdong, China
Funding
National Natural Science Foundation of China
Keywords
Q-learning algorithm; reinforcement learning; automatic generation control; control performance standard; Markov decision process; optimal control; China Southern Power Grid; dynamic analysis; performance; (ACE)over-bar(1); NERC
DOI
10.1061/(ASCE)EY.1943-7897.0000017
CLC Classification
TE [petroleum and natural gas industry]; TK [energy and power engineering]
Subject Classification
0807; 0820
Abstract
This paper presents the design and application of a novel stochastic optimal control methodology based on the Q-learning method for solving automatic generation control (AGC) under the new control performance standards (CPS) of the North American Electric Reliability Council (NERC). The aims of the CPS are to relax the control constraints on AGC plant regulation and to enhance the frequency support from interconnected control areas. The NERC CPS-based AGC problem is a dynamic stochastic decision problem that can be modeled as a reinforcement learning (RL) problem based on Markov decision process theory. In this paper, the Q-learning method is adopted as the core RL algorithm, with CPS values regarded as the rewards from the interconnected power systems; the CPS control and relaxed-control objectives are formulated as immediate reward functions by means of a linear weighted aggregation approach. By adjusting a closed-loop CPS control rule to maximize the long-term discounted reward during online learning, the optimal CPS control strategy is gradually obtained. This paper also introduces a practical semisupervisory group prelearning method to improve the stability and convergence of Q-learning controllers during the prelearning process. Tests on the China Southern Power Grid demonstrate that the proposed control strategy can effectively enhance the robustness and relaxation property of AGC systems while CPS compliance is ensured. (C) 2011 American Society of Civil Engineers.
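The control scheme outlined in the abstract — a Q-learning agent whose immediate reward is a linear weighted aggregation of a CPS-compliance term and a relaxed-control term, updated toward the long-term discounted reward — can be sketched in miniature as follows. Note this is an illustrative toy, not the paper's actual AGC design: the state/action discretization, the reward weights `W_CPS`/`W_RELAX`, and the uniform-random transition model are all assumptions standing in for the real power-system dynamics.

```python
import random

# Toy sketch of the abstract's scheme: the controller picks a discrete
# regulation command, receives an immediate reward built as a linear
# weighted sum of two CPS-style objectives, and updates Q(s, a) toward
# the long-term discounted reward via the standard Q-learning rule.

STATES = range(3)    # hypothetical coarse ACE/CPS state bins
ACTIONS = range(3)   # hypothetical generation adjustment commands
ALPHA, GAMMA, EPSILON = 0.1, 0.5, 0.2
W_CPS, W_RELAX = 0.7, 0.3  # weights of the linear aggregation (assumed)

def reward(state, action):
    """Linear weighted aggregation of the two control objectives."""
    cps_score = 1.0 if action == state else 0.0    # CPS-compliance term
    relax_score = 1.0 - abs(action - 1) / 2.0      # relaxed-control term
    return W_CPS * cps_score + W_RELAX * relax_score

def step(state, action):
    """Toy stochastic transition standing in for the power system."""
    return random.choice(list(STATES))

def train(episodes=3000, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = 0
    for _ in range(episodes):
        if random.random() < EPSILON:              # epsilon-greedy exploration
            action = random.choice(list(ACTIONS))
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        r = reward(state, action)
        nxt = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Q-learning update toward the discounted long-term reward
        q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
        state = nxt
    return q

q = train()
# The learned closed-loop control rule: the greedy action per state
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

In this toy the reward is maximized by matching the command to the state bin, so after training the greedy policy maps each state to itself; the paper's actual controller instead learns from CPS1/CPS2 values measured on the interconnected system.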
Pages: 116-129
Page count: 14
Related Papers
28 references in total
  • [1] Ahamed TPI, 2002, Electric Power Systems Research, 63: 9. DOI 10.1016/S0378-7796(02)00088-3
  • [2] [Anonymous], ELECT ENERGY SYSTEM
  • [3] [Anonymous], TR107813 EL POW RES
  • [4] Atic N, 2004, 2004 IEEE Power Engineering Society General Meeting, Vols 1-2: 855
  • [5] Ernst D, Glavic M, Wehenkel L. Power systems stability control: Reinforcement learning framework. IEEE Transactions on Power Systems, 2004, 19(1): 427-435
  • [6] Feliachi A, Rerkpreedapong D. NERC compliant load frequency control design using fuzzy rules. Electric Power Systems Research, 2005, 73(2): 101-106
  • [7] Lewis FL, Syrmos VL, 1995, Optimal Control, 3rd ed.
  • [8] Gao Zong-he, 2005, Automation of Electric Power Systems, 29: 40
  • [9] Gross G, Lee JW. Analysis of load frequency control performance assessment criteria. IEEE Transactions on Power Systems, 2001, 16(3): 520-525
  • [10] Hinderer K, 1970, Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter, p. 33