Explainability for deep learning in mammography image quality assessment

Cited: 6
Authors
Amanova, N. [1 ]
Martin, J. [1 ]
Elster, C. [1 ]
Affiliations
[1] Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, D-10587 Berlin, Germany
Source
MACHINE LEARNING: SCIENCE AND TECHNOLOGY | 2022, Vol. 3, No. 2
Keywords
deep learning; explainability; mammography image quality assessment; CONTRAST-DETAIL CURVES; NEURAL-NETWORKS; PHANTOM;
DOI
10.1088/2632-2153/ac7a03
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The application of deep learning has recently been proposed for the assessment of image quality in mammography. A proof-of-principle study demonstrated that the proposed approach can be more efficient than the automated conventional methods currently applied. However, in contrast to conventional methods, the deep learning approach has a black-box nature and, before it can be recommended for routine use, it must be understood more thoroughly. For this purpose, we propose and apply a new explainability method: the oriented, modified integrated gradients (OMIG) method. The design of this method is inspired by the integrated gradients method but adapted considerably to the use case at hand. To further enhance this method, an upsampling technique is developed that produces high-resolution explainability maps for the downsampled data used by the deep learning approach. Comparison with established explainability methods demonstrates that the proposed approach yields substantially more expressive and informative results for our specific use case. Application of the proposed explainability approach generally confirms the validity of the considered deep learning-based mammography image quality assessment (IQA) method. Specifically, it is demonstrated that the predicted image quality is based on a meaningful mapping that makes successful use of certain geometric structures in the images. In addition, the novel explainability method helps us to identify the parts of the employed phantom that have the largest impact on the predicted image quality, and to shed some light on cases in which the trained neural networks fail to work as expected. While tailored to assess a specific deep learning approach for mammography IQA, the proposed explainability method could also become relevant in other, similar deep learning applications based on high-dimensional images.
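The OMIG method itself is not reproduced here. As a rough illustration of the underlying idea described in the abstract, the sketch below computes plain integrated gradients attributions for a scalar image-quality network and then bilinearly upsamples the low-resolution attribution map to the original image size. The toy network ToyQualityNet, the image resolutions, and the number of integration steps are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: plain integrated gradients for a scalar-output quality
# network, plus bilinear upsampling of the attribution map back to the
# original (pre-downsampling) resolution. All model and size choices here
# are hypothetical and only serve to illustrate the general technique.
import torch
import torch.nn as nn
import torch.nn.functional as F

def integrated_gradients(model, x, baseline=None, steps=50):
    """Approximate integrated gradients for a model mapping (N, C, H, W) to one score per image."""
    if baseline is None:
        baseline = torch.zeros_like(x)          # all-zero reference image

    # Sample points along the straight line from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)   # shape: (steps, C, H, W)
    path.requires_grad_(True)

    # Gradients of the predicted quality score with respect to each path point.
    scores = model(path).sum()
    grads = torch.autograd.grad(scores, path)[0]

    # Riemann approximation of the path integral, scaled by (x - baseline).
    avg_grads = grads.mean(dim=0, keepdim=True)
    return (x - baseline) * avg_grads           # shape: (1, C, H, W)

# Illustrative usage with a toy quality-scoring CNN (hypothetical architecture).
class ToyQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(-1)

model = ToyQualityNet().eval()
downsampled = torch.rand(1, 1, 64, 64)          # assumed network input resolution
attribution = integrated_gradients(model, downsampled)

# Upsample the low-resolution attribution map to an assumed original size of
# 256 x 256 to obtain a high-resolution explainability map.
high_res_map = F.interpolate(attribution, size=(256, 256),
                             mode="bilinear", align_corners=False)
print(high_res_map.shape)                       # torch.Size([1, 1, 256, 256])
```

The paper's OMIG method modifies this basic scheme considerably for the mammography phantom use case; the sketch only conveys the attribution-plus-upsampling idea in its simplest form.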
Pages: 17