Explainability for deep learning in mammography image quality assessment

Cited: 6
Authors
Amanova, N. [1 ]
Martin, J. [1 ]
Elster, C. [1 ]
Affiliations
[1] Phys Tech Bundesanstalt, Abbestr 2-12, D-10587 Berlin, Germany
Source
MACHINE LEARNING-SCIENCE AND TECHNOLOGY | 2022, Vol. 3, No. 2
Keywords
deep learning; explainability; mammography image quality assessment; CONTRAST-DETAIL CURVES; NEURAL-NETWORKS; PHANTOM;
DOI
10.1088/2632-2153/ac7a03
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The application of deep learning has recently been proposed for the assessment of image quality in mammography. It was demonstrated in a proof-of-principle study that the proposed approach can be more efficient than currently applied automated conventional methods. However, in contrast to conventional methods, the deep learning approach has a black-box nature and, before it can be recommended for routine use, it must be understood more thoroughly. For this purpose, we propose and apply a new explainability method: the oriented, modified integrated gradients (OMIG) method. The design of this method is inspired by the integrated gradients method but adapted considerably to the use case at hand. To further enhance this method, an upsampling technique is developed that produces high-resolution explainability maps for the downsampled data used by the deep learning approach. Comparison with established explainability methods demonstrates that the proposed approach yields substantially more expressive and informative results for our specific use case. Application of the proposed explainability approach generally confirms the validity of the considered deep learning-based mammography image quality assessment (IQA) method. Specifically, it is demonstrated that the predicted image quality is based on a meaningful mapping that makes successful use of certain geometric structures of the images. In addition, the novel explainability method helps us to identify the parts of the employed phantom that have the largest impact on the predicted image quality, and to shed some light on cases in which the trained neural networks fail to work as expected. While tailored to assess a specific approach from deep learning for mammography IQA, the proposed explainability method could also become relevant in other, similar deep learning applications based on high-dimensional images.
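The abstract describes OMIG as an adaptation of the integrated gradients method. As context for readers unfamiliar with that baseline, the following is a minimal sketch of plain integrated gradients (not the OMIG variant, whose modifications are described in the paper itself). A toy quadratic function stands in for the trained IQA network so the gradient is available in closed form; all names are illustrative.

```python
import numpy as np

def model(x, w):
    """Toy scalar model F(x) = sum_i w_i * x_i**2, standing in for a network."""
    return np.sum(w * x**2)

def model_grad(x, w):
    """Closed-form gradient of the toy model."""
    return 2.0 * w * x

def integrated_gradients(x, baseline, w, steps=50):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 dF/dx_i(b + a(x - b)) da
    with a midpoint Riemann sum over `steps` interpolation points."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    grads = np.mean(
        [model_grad(baseline + a * (x - baseline), w) for a in alphas],
        axis=0,
    )
    return (x - baseline) * grads

x = np.array([1.0, -2.0, 3.0])
w = np.array([0.5, 1.0, 0.25])
baseline = np.zeros_like(x)

attr = integrated_gradients(x, baseline, w)
# Completeness axiom: attributions sum to F(x) - F(baseline).
print(attr, attr.sum(), model(x, w) - model(baseline, w))
```

The completeness property (attributions summing to the difference in model output between input and baseline) is the key axiom of integrated gradients; for this quadratic toy model the midpoint rule recovers it exactly.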
Pages: 17