Feature Interpretation Using Generative Adversarial Networks (FIGAN): A Framework for Visualizing a CNN's Learned Features

Cited by: 7
Authors
Hasenstab, Kyle A. [1 ,2 ]
Huynh, Justin [2 ]
Masoudi, Samira [3 ]
Cunha, Guilherme M. [4 ]
Pazzani, Michael [3 ]
Hsiao, Albert [2 ,3 ]
Affiliations
[1] San Diego State Univ, Dept Math & Stat, San Diego, CA 92182 USA
[2] Univ Calif San Diego, Dept Radiol, San Diego, CA 92093 USA
[3] Univ Calif San Diego, Halicioglu Data Sci Inst, San Diego, CA 92093 USA
[4] Univ Washington, Dept Radiol, Seattle, WA 98195 USA
Funding
National Science Foundation (USA);
Keywords
Generative adversarial networks; convolutional neural networks; feature extraction; visualization; training data; artificial intelligence; explainable AI; feature interpretation
DOI
10.1109/ACCESS.2023.3236575
CLC Classification
TP [Automation & Computer Technology];
Discipline Code
0812;
Abstract
Convolutional neural networks (CNNs) are increasingly being explored and used for a variety of classification tasks in medical imaging, but current methods for post hoc explainability are limited. Most commonly used methods highlight portions of the input image that contribute to classification. While this provides a form of spatial localization relevant for focal disease processes, it may not be sufficient for co-localized or diffuse disease processes such as pulmonary edema or fibrosis. For the latter, new methods are required to isolate diffuse texture features employed by the CNN where localization alone is ambiguous. We therefore propose a novel strategy for eliciting explainability, called Feature Interpretation using Generative Adversarial Networks (FIGAN), which provides visualization of features used by a CNN for classification or regression. FIGAN uses a conditional generative adversarial network to synthesize images that span the range of a CNN's principal embedded features. We apply FIGAN to two previously developed CNNs and show that the resulting feature interpretations can clarify ambiguities within attention areas highlighted by existing explainability methods. In addition, we perform a series of experiments to study the effect of auxiliary segmentations, training sample size, and image resolution on FIGAN's ability to provide consistent and interpretable synthetic images.
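The abstract's core idea, conditioning a GAN on codes that sweep a CNN's principal embedded features, can be illustrated with a minimal sketch. The CNN embeddings are stand-in random data, and only the conditioning-code construction is shown (the cGAN generator itself, and any variable names such as `codes` or `pc1`, are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hedged sketch of FIGAN's conditioning step as described in the abstract:
# embeddings from a trained CNN are reduced to principal axes, and a
# conditional GAN is fed codes that sweep each principal feature over its
# observed range. The embeddings below are random placeholders.

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))  # stand-in for CNN penultimate-layer features

# PCA via SVD on mean-centered embeddings.
mean = embeddings.mean(axis=0)
centered = embeddings - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]                                  # first principal axis
std1 = s[0] / np.sqrt(len(embeddings) - 1)   # std of scores along pc1

# Conditioning codes sweeping the first principal feature from -2 to +2 std;
# each row would be passed to the generator to synthesize one image.
steps = np.linspace(-2.0, 2.0, 7)
codes = mean + np.outer(steps * std1, pc1)   # shape (7, 64)

print(codes.shape)  # (7, 64)
```

Synthesizing one image per row of `codes` would yield a sequence that visualizes how the CNN's first principal embedded feature manifests in image space.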
Pages: 5144-5160
Page count: 17