Explainability of an AI-Based Breast Cancer Risk Prediction Tool

Cited: 0
Authors
Ellis, Sam [1]
Gomes, Sandra [1]
Trumble, Matthew [1]
Halling-Brown, Mark D. [1,2]
Young, Kenneth C. [3,4]
Warren, Lucy M. [1]
Affiliations
[1] Royal Surrey NHS Fdn Trust, Dept Sci Comp, Guildford, Surrey, England
[2] Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford, Surrey, England
[3] Royal Surrey NHS Fdn Trust, Natl Coordinating Ctr Phys Mammog, Guildford, Surrey, England
[4] Univ Surrey, Dept Phys, Guildford, Surrey, England
Source
17th International Workshop on Breast Imaging (IWBI 2024) | 2024, Vol. 13174
Keywords
breast cancer; risk prediction; screening; artificial intelligence; explainability;
DOI
10.1117/12.3026843
CLC Number
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Background: Recent AI breast cancer risk prediction models are difficult to interpret, limiting their clinical utility. In this work, we explore the explainability of an AI-based risk prediction model by examining its performance with respect to different characteristics of the future cancer. In particular, saliency maps were used to examine how often the model focused on regions coinciding with future lesions and to assess which characteristics of future lesions were most likely to coincide with AI-assigned high-risk regions.
Methods: An AI model for breast cancer risk prediction was previously trained on the UK OPTIMAM dataset, achieving an AUROC of 0.70 for the task of 3-year risk prediction. Revisiting the test set used to evaluate this model (n=31351 examinations), we obtained additional information about the future cancer cases (n=1053), including future cancer type (invasive/in-situ) and grade, and the visual characteristics of the future lesions. Patient-level risk was compared across cancer types and grades, and saliency maps were generated to perform a localisation study.
Results: The AI tool performed similarly for future invasive and in-situ disease, with no significant difference in risk score observed. Similarly, risk scores did not vary significantly with future cancer grade. Saliency map analysis showed that AI-indicated high-risk regions coincided more often with the locations of future lesions that were visually obvious or contained calcifications.
Conclusions: These results provide insight into the decision-making process of the AI risk prediction tool. Further work is required to explore additional lesion characteristics and to validate these findings.
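The risk-score comparison and saliency-map localisation analysis described in the abstract can be outlined roughly as follows. This is an illustrative sketch only, not the authors' code: the array names (risk_scores, is_invasive, saliency_maps, lesion_boxes), the Mann-Whitney U test, and the peak-in-box localisation criterion are all assumptions rather than details taken from the paper.

```python
# Illustrative sketch of the two analyses summarised in the abstract.
# Assumes per-examination risk scores, future-cancer labels, per-image saliency
# maps (e.g. Grad-CAM heat maps upsampled to image resolution), and future-lesion
# bounding boxes are already available in memory; all names are hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

def compare_risk_by_cancer_type(risk_scores, is_invasive):
    """Compare patient-level risk scores for future invasive vs. in-situ disease."""
    invasive = risk_scores[is_invasive]
    in_situ = risk_scores[~is_invasive]
    stat, p = mannwhitneyu(invasive, in_situ, alternative="two-sided")
    return {"median_invasive": float(np.median(invasive)),
            "median_in_situ": float(np.median(in_situ)),
            "p_value": float(p)}

def saliency_hit(saliency_map, lesion_box):
    """Return True if the saliency peak falls inside the future lesion's box.

    saliency_map : 2-D array of per-pixel saliency values.
    lesion_box   : (x_min, y_min, x_max, y_max) of the future lesion location.
    """
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    x_min, y_min, x_max, y_max = lesion_box
    return x_min <= x <= x_max and y_min <= y <= y_max

def localisation_rate(saliency_maps, lesion_boxes):
    """Fraction of future cancers whose lesion coincides with the saliency peak."""
    hits = [saliency_hit(s, b) for s, b in zip(saliency_maps, lesion_boxes)]
    return float(np.mean(hits))
```

The paper's localisation criterion may well differ (for example, thresholded high-saliency regions rather than a single peak); the peak-in-box rule above is simply the most compact variant for illustration.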
Pages: 6
References (7)
[1] Ellis S. Radiology: Artificial Intelligence.
[2] Eriksson M, Roman M, Graewingholt A, Castells X, Nitrosi A, Pattacini P, Heywang-Koebrunner S, Rossi PG. European validation of an image-derived AI-based short-term risk model for individualized breast cancer screening - a nested case-control study. Lancet Regional Health - Europe, 2024, 37.
[3] Halling-Brown MD, Warren LM, Ward D, Lewis E, Mackenzie A, Wallis MG, Wilkinson LS, Given-Wilson RM, McAvinchey R, Young KC. OPTIMAM Mammography Image Database: A Large-Scale Resource of Mammography Images and Clinical Data. Radiology: Artificial Intelligence, 2021, 3(1).
[4] Ma N, Zhang X, Zheng HT, Sun J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. Computer Vision - ECCV 2018, Part XIV, 2018, 11218: 122-138.
[5] Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 618-626.
[6] Simplesize. About Us.
[7] Yala A, Mikhael PG, Lehman C, Lin GG, Strand F, Wan YL, Hughes K, Satuluru S, Kim T, Banerjee I, Gichoya J, Trivedi H, Barzilay R. Optimizing risk-based breast cancer screening policies with reinforcement learning. Nature Medicine, 2022, 28(1): 136+.