From Explanations to Segmentation: Using Explainable AI for Image Segmentation

Citations: 0
Authors
Seibold, Clemens [1 ]
Kuenzel, Johannes [1 ]
Hilsmann, Anna [1 ]
Eisert, Peter [1 ,2 ]
Affiliations
[1] Heinrich Hertz Inst Nachrichtentech Berlin GmbH, Fraunhofer Inst Telecommun, HHI, Einsteinufer 37, D-10587 Berlin, Germany
[2] Humboldt Univ, Visual Comp Grp, Unter Linden 6, D-10099 Berlin, Germany
Source
PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 4 | 2022
Keywords
Segmentation; Classification; LRP; Relevance; Annotation;
DOI
10.5220/0010893600003124
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The new era of image segmentation leveraging the power of Deep Neural Networks (DNNs) comes with a price tag: to train a neural network for pixel-wise segmentation, a large number of training samples must be manually labeled at pixel precision. In this work, we address this problem with an indirect solution. Building on advances from the Explainable AI (XAI) community, we extract a pixel-wise binary segmentation from the output of Layer-wise Relevance Propagation (LRP), which explains the decision of a classification network. We show that we achieve results similar to those of an established U-Net segmentation architecture, while the generation of the training data is significantly simplified. The proposed method can be trained in a weakly supervised fashion, as the training samples need only be labeled at the image level, while still producing a segmentation mask as output. This makes it especially applicable to a wide range of real-world applications where tedious pixel-level labeling is often infeasible.
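The core idea of the abstract, turning a per-pixel LRP relevance map into a binary segmentation mask, can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual extraction procedure: the helper `relevance_to_mask` and the fixed threshold are hypothetical, and the real method may involve additional post-processing.

```python
import numpy as np

def relevance_to_mask(relevance: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert an LRP relevance map into a binary segmentation mask.

    A minimal sketch: keep only positive relevance (evidence *for* the
    class), normalize it to [0, 1], and threshold it per pixel.
    """
    r = np.maximum(relevance, 0.0)   # discard negative (contradicting) relevance
    if r.max() > 0:
        r = r / r.max()              # scale so the strongest pixel is 1.0
    return (r >= threshold).astype(np.uint8)

# Toy 2x2 relevance map: the right column carries strong positive relevance.
relevance = np.array([[0.1, 0.9],
                      [-0.2, 0.8]])
mask = relevance_to_mask(relevance)
print(mask)  # [[0 1]
             #  [0 1]]
```

In a real pipeline the relevance map would come from running LRP on a trained classification network; thresholding is only the simplest way to binarize it.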
Pages: 616-626
Page count: 11