Review and Prospect of Explainable Artificial Intelligence and Its Application in Power Systems

Citations: 0
Authors
Wang X. [1 ]
Dou J. [1 ]
Liu Z. [1 ]
Liu C. [1 ]
Pu T. [2 ]
He J. [1 ]
Affiliations
[1] School of Electrical Engineering, Beijing Jiaotong University, Beijing
[2] China Electric Power Research Institute, Beijing
Source
Dianli Xitong Zidonghua/Automation of Electric Power Systems | 2024 / Vol. 48 / No. 4
Funding
National Natural Science Foundation of China
Keywords
artificial intelligence; explainability; machine learning; power system;
DOI
10.7500/AEPS20230509007
Abstract
Explainable artificial intelligence (XAI), as an emerging class of artificial intelligence (AI) technology, can present the logic of AI reasoning, reveal the knowledge hidden inside AI black boxes, and improve the credibility of AI results. Deep coupling between XAI and power systems may accelerate the application of AI technology in power systems and support safe, stable human-machine interaction. Therefore, this paper reviews the historical context, development needs, and key technologies of XAI in power systems; summarizes its applications in source-load forecasting, operation control, fault diagnosis, and the electricity market; and explores the application prospects of XAI in power systems with respect to interpretability, iterative frameworks, and number-matrix fusion. The paper aims to provide theoretical references and practical ideas for promoting the intelligent transformation of power systems and iterative human-machine interaction. © 2024 Automation of Electric Power Systems Press. All rights reserved.
Pages: 169-191
Number of pages: 22