Secondary control activation analysed and predicted with explainable AI

Cited by: 10
Authors
Kruse, Johannes [1 ,2 ]
Schafer, Benjamin [3 ,4 ,5 ]
Witthaut, Dirk [1 ,2 ]
Affiliations
[1] Forschungszentrum Julich, Inst Energy & Climate Res Syst Anal & Technol Eva, D-52428 Julich, Germany
[2] Univ Cologne, Inst Theoret Phys, D-50937 Cologne, Germany
[3] Queen Mary Univ London, Sch Math Sci, London E1 4NS, England
[4] Norwegian Univ Life Sci, Fac Sci & Technol, N-1432 As, Norway
[5] Karlsruhe Inst Technol, Inst Automat & Appl Informat, D-76344 Eggenstein Leopoldshafen, Germany
Funding
EU Horizon 2020
Keywords
Power grid; Frequency; Control; Data-driven; Explainable AI; Machine learning; ARTIFICIAL-INTELLIGENCE; TRANSPARENCY;
DOI
10.1016/j.epsr.2022.108489
CLC classification
TM [Electrical engineering]; TN [Electronic technology, communication technology];
Subject classification
0808; 0809;
Abstract
The transition to a renewable energy system challenges power grid operation and stability. Secondary control is key in restoring the power system to its reference following a disturbance. Underestimating the necessary control capacity may require emergency measures, such that a solid understanding of its predictability and driving factors is needed. Here, we establish an explainable machine learning model for the analysis of secondary control power in Germany. Training gradient boosted trees, we obtain an accurate ex-post description of control activation. Our explainable model demonstrates the strong impact of external drivers such as forecasting errors and the generation mix, while daily patterns in the reserve activation play a minor role. Training a prototypical forecasting model, we identify forecast error estimates as crucial to improve predictability. Generally, input data and model training have to be carefully adapted to serve the different purposes of either ex-post analysis or forecasting and reserve sizing.
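The abstract's core method — fitting gradient boosted trees to control-activation data and then attributing the model's predictions to external drivers — can be illustrated with a minimal sketch. This is not the authors' code: the feature names (forecast_error, load, solar), the synthetic target, and the model settings are illustrative assumptions, and impurity-based feature importances stand in for the richer explainability attributions (e.g. SHAP values) such papers typically use.

```python
# Sketch: gradient-boosted-tree regression on a synthetic "secondary control
# activation" target, followed by a simple feature-importance read-out.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
features = ["forecast_error", "load", "solar"]  # illustrative driver names
X = rng.normal(size=(n, len(features)))

# Synthetic target: activation dominated by the forecast error, with a
# smaller load contribution and noise (mirroring the abstract's finding
# that forecasting errors are a strong external driver).
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=n)

model = GradientBoostingRegressor(n_estimators=100, max_depth=3, random_state=0)
model.fit(X, y)

# Impurity-based importances sum to 1; here they serve only as a crude
# proxy for per-feature attribution of the fitted model.
for name, imp in zip(features, model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

On synthetic data like this, the dominant driver (here forecast_error) receives the largest importance, which is the qualitative pattern the paper reports for real activation data.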
Pages: 8