A Local Explainability Technique for Graph Neural Topic Models

Cited by: 0
Authors
Bharathwajan Rajendran
Chandran G. Vidya
J. Sanil
S. Asharaf
Affiliations
[1] School of Computer Science and Engineering, Kerala University of Digital Sciences, Innovation and Technology
Source
Human-Centric Intelligent Systems | 2024, Volume 4, Issue 1
Keywords
Explainable neural network; Graph neural topic model; Local explainable; Natural language processing; Topic modelling;
DOI
10.1007/s44230-023-00058-8
Abstract
Topic modelling is a Natural Language Processing (NLP) technique that has gained popularity in recent years. It identifies word co-occurrence patterns in a document corpus to reveal hidden topics. The Graph Neural Topic Model (GNTM) is a topic modelling technique that uses Graph Neural Networks (GNNs) to learn document representations effectively, providing high-precision document-topic and topic-word probability distributions. Such models find immense application in many sectors, including healthcare, financial services, and safety-critical systems such as autonomous cars. However, the model is not explainable: users cannot comprehend its underlying decision-making process. This paper introduces a technique to explain the document-topic probability distribution output of GNTM by building a local explainable model, such as a probabilistic Naïve Bayes classifier. Experimental results on various benchmark NLP datasets show a fidelity of 88.39% between the predictions of GNTM and the local explainable model, implying that the proposed technique can effectively explain GNTM's document-topic probability distribution output.
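The abstract's idea — explaining a black-box topic model's document-topic output by fitting a local, interpretable Naïve Bayes surrogate and measuring fidelity (agreement between the two models' predictions) — can be sketched as follows. This is a minimal illustration only, not the authors' implementation: `black_box_topic_proba` is a hypothetical stand-in for GNTM inference, the perturbation scheme and Gaussian Naïve Bayes surrogate are assumptions, and the numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 20, 3                      # vocabulary size, number of topics (illustrative)
W = rng.standard_normal((V, K))   # fixed weights for the stand-in model

def black_box_topic_proba(X):
    """Stand-in for GNTM's document-topic inference: a softmax over a
    fixed linear map. The real model is a graph neural network."""
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def explain_locally(x, n_samples=500, noise=0.3):
    """Fit a Gaussian Naive Bayes surrogate in the neighbourhood of
    document vector `x` and report its fidelity to the black box."""
    # 1. Perturb the document around its neighbourhood.
    X = x + noise * rng.standard_normal((n_samples, x.size))
    # 2. Label each perturbation with the black box's dominant topic.
    y = black_box_topic_proba(X).argmax(axis=1)
    # 3. Fit per-class Gaussian parameters (the Naive Bayes surrogate).
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])

    def surrogate_predict(Z):
        # log P(c) + sum_j log N(z_j; mu_cj, var_cj) under the
        # feature-independence assumption of Naive Bayes.
        ll = (np.log(priors)[None, :]
              - 0.5 * (((Z[:, None, :] - means[None]) ** 2 / vars_[None])
                       + np.log(2 * np.pi * vars_)[None]).sum(axis=2))
        return classes[ll.argmax(axis=1)]

    # 4. Fidelity: fraction of perturbations where surrogate and
    #    black box agree on the dominant topic.
    fidelity = (surrogate_predict(X) == y).mean()
    return means, fidelity

doc = rng.random(V)               # a synthetic bag-of-words document
means, fidelity = explain_locally(doc)
print(f"surrogate fidelity: {fidelity:.2%}")
```

The per-class means of the surrogate indicate which word features drive each topic assignment locally, while the fidelity score (88.39% in the paper's experiments) quantifies how faithfully the simple model mimics the black box near the explained document.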
Pages: 53–76 (23 pages)