Uncertainty-aware prototypical learning for anomaly detection in medical images

Cited by: 5
Authors
Huang, Chao [1 ,2 ]
Shi, Yushu [2 ]
Zhang, Bob [1 ]
Lyu, Ke [3 ,4 ]
Affiliations
[1] Univ Macau, Dept Comp & Informat Sci, PAMI Res Grp, Taipa 519000, Peoples R China
[2] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen 518107, Peoples R China
[3] Univ Chinese Acad Sci, Sch Engn Sci, Beijing 100049, Peoples R China
[4] Pengcheng Lab, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Anomalous object detection; Medical image analysis; Prototypical learning;
DOI
10.1016/j.neunet.2024.106284
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Anomalous object detection (AOD) in medical images aims to recognize anomalous lesions and is crucial for the early clinical diagnosis of various cancers. It is a difficult task for two reasons: (1) the diversity of anomalous lesions and (2) the ambiguity of the boundary between anomalous lesions and their normal surroundings. Unlike existing single-modality AOD models based on deterministic mapping, we construct a jointly probabilistic and deterministic AOD model. Specifically, we design an uncertainty-aware prototype learning framework that accounts for the diversity and ambiguity of anomalous lesions. A prototypical learning transformer (Pformer) is established to extract and store the prototype features of different anomalous lesions. Moreover, a Bayesian neural uncertainty quantizer, a probabilistic model, is designed to model distributions over the model's outputs and thereby measure the uncertainty of the detection result for each pixel. Essentially, the uncertainty of the model's anomaly detection result for a pixel reflects the anomalous ambiguity of that pixel. Furthermore, an uncertainty-guided reasoning transformer (Uformer) is devised to exploit this ambiguity, encouraging the model to focus on pixels with high uncertainty. Notably, the prototypical representations stored in Pformer are also used in anomaly reasoning, enabling the model to perceive the diversity of anomalous objects. Extensive experiments on five benchmark datasets demonstrate the superiority of the proposed method. The source code will be available at github.com/umchaohuang/UPformer.
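The per-pixel uncertainty idea in the abstract can be illustrated with a minimal Monte Carlo dropout sketch: run several stochastic forward passes, then treat the per-pixel variance of the predicted anomaly scores as an ambiguity map. This is not the paper's exact Bayesian neural uncertainty quantizer; the toy feature map, weights, and function names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, w, drop_p=0.5):
    # One stochastic pass: a random dropout mask is applied to the
    # per-pixel weights; sigmoid turns logits into anomaly scores.
    mask = rng.random(w.shape) > drop_p
    logits = x * (w * mask) / (1.0 - drop_p)   # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-logits))

def mc_uncertainty(x, w, T=50):
    # T Monte Carlo passes -> per-pixel mean score and variance.
    probs = np.stack([stochastic_forward(x, w) for _ in range(T)])
    return probs.mean(axis=0), probs.var(axis=0)

x = rng.normal(size=(8, 8))   # toy feature map standing in for an image
w = rng.normal(size=(8, 8))   # toy per-pixel detection weights
mean_score, variance = mc_uncertainty(x, w)

# High-variance pixels are the ambiguous ones an uncertainty-guided
# reasoning stage would attend to more strongly.
focus = variance / variance.max()
```

In the paper's framing, `focus` plays the role of the ambiguity signal that steers Uformer toward boundary pixels whose anomaly status the deterministic branch cannot decide confidently.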
Pages: 10