Transformer Based Models for Unsupervised Anomaly Segmentation in Brain MR Images

Cited by: 0
Authors
Ghorbel, Ahmed [1 ]
Aldahdooh, Ahmed [1 ]
Albarqouni, Shadi [2 ,3 ,4 ]
Hamidouche, Wassim [1 ]
Affiliations
[1] Univ Rennes, INSA Rennes, CNRS, IETR UMR 6164, 20 Av Buttes Coesmes, F-35700 Rennes, France
[2] Univ Hosp Bonn, Venusberg Campus 1, D-53127 Bonn, Germany
[3] Helmholtz Munich, Ingolstadter Landstr 1, D-85764 Neuherberg, Germany
[4] Tech Univ Munich, Boltzmannstr 3, D-85748 Garching, Germany
Source
BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES, BRAINLES 2022 | 2023 / Vol. 13769
Keywords
Unsupervised Learning; Anomaly Segmentation; Deep-Learning; Transformers; Neuroimaging; ATTENTION;
DOI
10.1007/978-3-031-33842-7_3
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The quality of patient care in diagnostic radiology is proportional to a physician's workload. Segmentation is a fundamental, rate-limiting precursor to both diagnostic and therapeutic procedures. Advances in machine learning (ML) aim to increase diagnostic efficiency by replacing single-purpose applications with generalized algorithms. The goal of unsupervised anomaly detection (UAD) is to identify potentially anomalous regions unseen during training, where convolutional neural network (CNN) based autoencoders (AEs) and variational autoencoders (VAEs) are considered the de facto approach for reconstruction-based anomaly segmentation. The restricted receptive field of CNNs limits their ability to model global context: if anomalous regions cover large parts of the image, CNN-based AEs cannot form a semantic understanding of the image. Meanwhile, vision transformers (ViTs) have emerged as a competitive alternative to CNNs; they rely on a self-attention mechanism that can relate image patches to each other. In this paper, we investigate the Transformer's capability to build AEs for the reconstruction-based UAD task, reconstructing coherent and more realistic images. We focus on anomaly segmentation for brain magnetic resonance imaging (MRI) and present five Transformer-based models that achieve segmentation performance comparable or superior to state-of-the-art (SOTA) models. The source code is made publicly available on GitHub.
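The two ideas the abstract combines can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's architecture: the projection weights and the "reconstruction" are random stand-ins for a trained model, and the threshold is a simple placeholder. It shows (a) scaled dot-product self-attention giving every patch a global receptive field, and (b) the reconstruction-residual anomaly map used in reconstruction-based UAD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" flattened into 16 patch embeddings of dimension 8.
d = 8
patches = rng.standard_normal((16, d))

# Hypothetical projection weights (learned in a real ViT).
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv

# Scaled dot-product self-attention: every patch attends to every
# other patch, so the encoder sees global context in one layer.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
attended = weights @ V  # (16, 8): globally contextualized features

# Reconstruction-based UAD: an AE trained on healthy anatomy should
# reconstruct healthy tissue well, so the residual highlights anomalies.
reconstruction = patches + 0.1 * rng.standard_normal(patches.shape)  # stand-in for decoder output
anomaly_map = np.abs(patches - reconstruction).mean(axis=-1)  # per-patch residual
segmentation = anomaly_map > anomaly_map.mean()  # placeholder threshold
```

In the actual pipeline the residual is computed per pixel on the MR slice and post-processed (e.g. median filtering, brain-mask erosion) before thresholding; the sketch only conveys the residual-then-threshold structure.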
Pages: 25-44
Page count: 20