Interpretability Latent Space Method: Exploiting Shapley Representation to Explain Latent Space

Cited: 0
Authors
Liu, Zitu [1 ]
Li, Jiawang [1 ]
Liu, Yue [1 ]
Liu, Qun [2 ]
Wang, Guoyin [2 ]
Guo, Yike [3 ]
Affiliations
[1] Shanghai Univ, Sch Comp Engn & Sci, Shanghai, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Computat Intelligence, Chongqing, Peoples R China
[3] Imperial Coll, Dept Comp, London, England
Source
2021 7TH INTERNATIONAL CONFERENCE ON BIG DATA AND INFORMATION ANALYTICS, BIGDIA | 2021
Funding
National Natural Science Foundation of China;
Keywords
Shapley value; Interpretability; Latent space;
DOI
10.1109/BIGDIA53151.2021.9619687
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Shapley values have become one of the most popular methods for feature attribution: they attribute a machine learning model's prediction on an input to its individual features. However, because computing exact Shapley values has exponential time complexity, most previous work applies Shapley-based interpretation only after training is complete, restricting it to post-hoc explanations. We instead propose using Shapley values themselves as a latent representation inside a deep model to guide its training. First, we extract the latent space of a generative model as the input to a Shapley network. Then the Shapley network applies a layer-wise transformation in a single forward pass to produce a Shapley representation. Further, we feed this Shapley representation into the downstream model's computation, which makes the model's training process itself explainable. Finally, experiments on the MNIST and Fashion-MNIST datasets show that the proposed method offers a degree of interpretability.
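The exponential cost mentioned in the abstract comes from the coalition enumeration in the Shapley value definition, which the paper's Shapley network is designed to avoid by approximating the values in one forward pass. As an illustration of what such a network would approximate, the sketch below computes exact Shapley values by brute force for a tiny latent vector and a toy linear decoder head; the variable names and the linear model are illustrative assumptions, not the paper's architecture.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values attributing f(x) - f(baseline) to each feature.

    Features outside a coalition are replaced by their baseline value.
    Cost is exponential in len(x), which is why a learned network is
    needed for realistic latent dimensions.
    """
    n = len(x)
    values = [0.0] * n
    players = range(n)
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                # Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in coalition or j == i) else baseline[j]
                          for j in players]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in players]
                values[i] += weight * (f(with_i) - f(without_i))
    return values

# Toy "latent code" and a linear head standing in for the downstream model.
latent = [0.5, -1.0, 2.0]
weights = [1.0, 2.0, -0.5]
f = lambda z: sum(w * v for w, v in zip(weights, z))
phi = shapley_values(f, latent, baseline=[0.0, 0.0, 0.0])
```

For a linear model with a zero baseline, each attribution reduces to `weights[i] * latent[i]`, and the attributions sum to `f(latent) - f(baseline)` (the efficiency axiom), which is a convenient sanity check for any learned approximation.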
Pages: 87-92
Page count: 6