Quantitative comparison of explainable artificial intelligence methods for nuclear power plant accident diagnosis models

Cited: 0
Authors
Kim, Seung Geun [1 ]
Ryu, Seunghyoung [2 ]
Jin, Kyungho [3 ]
Kim, Hyeonmin [3 ]
Affiliations
[1] Korea Atom Energy Res Inst, Appl Artificial Intelligence Sect, Daedeok Daero 989 Beon Gil, Daejeon 34057, South Korea
[2] Sejong Univ, Dept Artificial Intelligence & Robot, Neungdong Ro 209 Beon Gil, Seoul 05006, South Korea
[3] Korea Atom Energy Res Inst, Risk Assessment Res Div, Daedeok Daero 989 Beon Gil, Daejeon 34057, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Artificial intelligence; Deep neural network; Explainable artificial intelligence; Nuclear power plant; Accident diagnosis; OPERATION;
DOI
10.1016/j.pnucene.2025.105605
Chinese Library Classification
TL [Atomic Energy Technology]; O571 [Nuclear Physics];
Subject Classification Codes
0827; 082701;
Abstract
The rapid advancement of artificial intelligence (AI) technology based on deep neural networks (DNNs) has spurred active development of DNN-based models in the nuclear domain. However, the black-box nature and low explainability of these models hinder their practical application in safety-critical settings. To address this, numerous explainable AI (XAI) methods have been proposed. Selecting an appropriate XAI method is crucial, because its performance depends strongly on various factors, yet comparative studies of XAI methods remain limited in the nuclear domain. This study employs perturbation analysis for the quantitative comparison of XAI methods and also proposes a method, based on the concept of information entropy, for selecting an appropriate perturbing value so that the perturbation analysis yields reliable results. For the experiment, a simple nuclear power plant (NPP) accident diagnosis model was developed to reflect the characteristics of the nuclear domain, and four XAI methods were applied for comparative analysis. The results demonstrate that perturbation analysis, together with the proposed method, is effective for quantitatively comparing the performance of XAI methods.
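To make the evaluation procedure concrete, the minimal Python sketch below illustrates how perturbation analysis can score the faithfulness of an XAI attribution and how an information-entropy criterion might guide the choice of perturbing value. It assumes a generic model callable that maps a sensor-feature vector to class probabilities; the function names and the specific entropy criterion shown (maximize output entropy under full perturbation) are illustrative assumptions based on the abstract, not the paper's exact algorithm.

    # Hedged sketch: perturbation-based faithfulness scoring for comparing XAI
    # attribution methods, plus an entropy-guided choice of perturbing value.
    # The names and criteria here are illustrative; the paper's procedure may differ.
    import numpy as np

    def entropy(p, eps=1e-12):
        """Shannon entropy (nats) of a probability vector."""
        p = np.clip(p, eps, 1.0)
        return float(-np.sum(p * np.log(p)))

    def select_perturbing_value(model, x, candidate_values):
        """Pick the candidate value that makes the model maximally uncertain
        (highest output entropy) when every feature is replaced by it.
        One plausible entropy-based criterion, assumed for illustration."""
        scores = [entropy(model(np.full_like(x, v))) for v in candidate_values]
        return candidate_values[int(np.argmax(scores))]

    def perturbation_curve(model, x, attribution, perturb_value, steps=10):
        """Progressively replace the most-important features (per the XAI
        attribution) with perturb_value and record the predicted probability
        of the originally diagnosed class."""
        order = np.argsort(-np.abs(attribution))   # most important first
        target = int(np.argmax(model(x)))          # originally diagnosed class
        curve = [float(model(x)[target])]
        x_pert = x.copy()
        for chunk in np.array_split(order, steps):
            x_pert[chunk] = perturb_value
            curve.append(float(model(x_pert)[target]))
        return np.array(curve)

    def faithfulness_score(curve):
        """Mean drop from the unperturbed prediction; a steeper degradation
        curve (larger score) suggests more faithful attributions."""
        return float(np.mean(curve[0] - curve[1:]))

Under these assumptions, two XAI methods applied to the same model and input can be compared by computing each method's attribution, building its perturbation curve with the same perturbing value, and comparing the resulting faithfulness scores: the method whose attribution causes the faster confidence drop is judged to identify the more diagnosis-relevant features.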
Pages: 13
Related papers
50 records in total
  • [31] Explainable artificial intelligence for photovoltaic fault detection: A comparison of instruments
    Utama, Christian
    Meske, Christian
    Schneider, Johannes
    Schlatmann, Rutger
    Ulbrich, Carolin
    SOLAR ENERGY, 2023, 249 : 139 - 151
  • [33] Radioecology and the accident at the Chernobyl nuclear power plant
    Aleksakhin, R. M.
    Sanzharova, N. I.
    Fesenko, S. V.
    ATOMIC ENERGY, 2006, 100 (04) : 257 - 263
  • [34] Explainable artificial intelligence (XAI) in radiology and nuclear medicine: a literature review
    de Vries, Bart M.
    Zwezerijnen, Gerben J. C.
    Burchell, George L.
    van Velden, Floris H. P.
    van Oordt, Catharina Willemien Menke-van der Houven
    Boellaard, Ronald
    FRONTIERS IN MEDICINE, 2023, 10
  • [35] Quant 4.0: engineering quantitative investment with automated, explainable, and knowledge-driven artificial intelligence
    Guo, Jian
    Wang, Saizhuo
    Ni, Lionel M.
    Shum, Heung-Yeung
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2024, 25 (11) : 1421 - 1445
  • [36] On the Use of Explainable Artificial Intelligence for the Differential Diagnosis of Pigmented Skin Lesions
    Hurtado, Sandro
    Nematzadeh, Hossein
    Garcia-Nieto, Jose
    Berciano-Guerrero, Miguel-Angel
    Navas-Delgado, Ismael
    BIOINFORMATICS AND BIOMEDICAL ENGINEERING, PT I, 2022, : 319 - 329
  • [37] Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models
    Salih, Ahmed
    Galazzo, Ilaria Boscolo
    Gkontra, Polyxeni
    Lee, Aaron Mark
    Lekadir, Karim
    Raisi-Estabragh, Zahra
    Petersen, Steffen E.
    CIRCULATION-CARDIOVASCULAR IMAGING, 2023, 16 (04) : E014519
  • [38] Explainable Artificial Intelligence Models for Predicting Depression Based on Polysomnographic Phenotypes
    Enkhbayar, Doljinsuren
    Ko, Jaehoon
    Oh, Somin
    Ferdushi, Rumana
    Kim, Jaesoo
    Key, Jaehong
    Urtnasan, Erdenebayar
    BIOENGINEERING-BASEL, 2025, 12 (02):
  • [39] Applying Explainable Artificial Intelligence Models for Understanding Depression Among IT Workers
    Adarsh, V.
    Gangadharan, G. R.
    IT PROFESSIONAL, 2022, 24 (05) : 25 - 29
  • [40] Reactive power control in photovoltaic systems through (explainable) artificial intelligence
    Utama, Christian
    Meske, Christian
    Schneider, Johannes
    Ulbrich, Carolin
    APPLIED ENERGY, 2022, 328