FGAM: A pluggable light-weight attention module for medical image segmentation

Cited by: 5
Authors
Qiu, Zhongxi [1 ]
Hu, Yan [1 ]
Zhang, Jiayi [1 ]
Chen, Xiaoshan [1 ]
Liu, Jiang [1 ,2 ,3 ,4 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 51805, Guangdong, Peoples R China
[2] Chinese Acad Sci, Cixi Inst Biomed Engn, Beijing, Peoples R China
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Guangdong Prov Key Lab Brain inspired Intelligent, Shenzhen 51805, Guangdong, Peoples R China
[4] Southern Univ Sci & Technol, Res Inst Trustworthy Autonomous Syst, Shenzhen 51805, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Medical image segmentation; Attention mechanism; Encoder-decoder network; OPTICAL COHERENCE TOMOGRAPHY; U-NET;
DOI
10.1016/j.compbiomed.2022.105628
Chinese Library Classification (CLC) Number
Q [Biological Sciences];
Subject Classification Code
07 ; 0710 ; 09 ;
Abstract
Medical image segmentation is fundamental for computer-aided diagnosis and surgery. Various attention modules have been proposed to improve segmentation results, but they have limitations for medical image segmentation, such as heavy computation and weak applicability across frameworks. To address these problems, we propose a new attention module named FGAM (Feature Guided Attention Module), a simple yet pluggable and effective module for medical image segmentation. FGAM exploits the representation ability of the encoder and decoder features. Specifically, the shallow decoder layers contain abundant information, which FGAM treats as a queryable feature dictionary. The module uses a parameter-free activator and can be removed after training of various encoder-decoder networks. The efficacy of FGAM is demonstrated with various encoder-decoder models on five datasets, including four publicly available datasets and one in-house dataset.
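The abstract does not give the exact formulation of FGAM, so the following is only a minimal, hypothetical sketch of the general idea it describes: using a shallow decoder feature map to re-weight encoder (skip-connection) features through a parameter-free activation, in a residual form that allows the module to be dropped after training. The function name, tensor shapes, and the choice of sigmoid over a channel-averaged guide are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a "feature-guided" attention gate in PyTorch.
# Not the authors' FGAM code; it only illustrates the abstract's idea of a
# parameter-free, removable module that guides encoder features with
# shallow decoder features.
import torch
import torch.nn.functional as F


def feature_guided_attention(encoder_feat: torch.Tensor,
                             decoder_feat: torch.Tensor) -> torch.Tensor:
    """Re-weight encoder features with a parameter-free map from decoder features.

    encoder_feat: (N, C, H, W) skip-connection features from the encoder.
    decoder_feat: (N, C, h, w) shallow decoder features used as the guide.
    """
    # Resize the guide to the encoder feature map's spatial size.
    guide = F.interpolate(decoder_feat, size=encoder_feat.shape[-2:],
                          mode="bilinear", align_corners=False)
    # Parameter-free activator: a sigmoid over the channel-averaged guide
    # gives a spatial attention map in [0, 1].
    attn = torch.sigmoid(guide.mean(dim=1, keepdim=True))
    # Residual re-weighting keeps the original features reachable, so the
    # module can be removed at inference without changing tensor shapes.
    return encoder_feat * attn + encoder_feat


if __name__ == "__main__":
    enc = torch.randn(2, 64, 128, 128)
    dec = torch.randn(2, 64, 64, 64)
    out = feature_guided_attention(enc, dec)
    print(out.shape)  # torch.Size([2, 64, 128, 128])
```

Because the re-weighting has no learnable parameters and is residual, removing it at test time leaves a standard encoder-decoder network, which matches the abstract's claim that the module can be deleted after training.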
Pages: 11