HT-Net: hierarchical context-attention transformer network for medical CT image segmentation

Cited by: 30
Authors
Ma, Mingjun [1]
Xia, Haiying [1]
Tan, Yumei [2]
Li, Haisheng [1]
Song, Shuxiang [1]
Affiliations
[1] Guangxi Normal Univ, Coll Elect Engn, Guilin 541004, Peoples R China
[2] Guangxi Normal Univ, Sch Comp Sci & Engn, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transformer; Medical image segmentation; Context-attention;
DOI
10.1007/s10489-021-03010-0
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) have been the prevailing technique in medical CT image processing. Although encoder-decoder CNNs exploit locality for efficiency, they cannot adequately model long-range relationships between pixels. Recent work shows that stacking self-attention or transformer layers can effectively learn such long-range dependencies, and transformers have been extended to computer vision by splitting images into patches and treating the patches as embeddings. However, transformer-based architectures lack global semantic information interaction and require large-scale datasets for training, making them difficult to train effectively with limited data samples. To address these issues, we propose a hierarchical context-attention transformer network (HT-Net), which integrates multi-scale, transformer, and hierarchical context extraction modules into the skip connections. The multi-scale module captures richer CT semantic information, enabling the transformers to better encode feature maps of tokenized image patches from different CNN stages as input attention sequences. The hierarchical context attention module complements global information and re-weights pixels to capture semantic context. Extensive experiments on three datasets demonstrate that the proposed HT-Net outperforms state-of-the-art approaches.
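The abstract describes the overall design: multi-scale feature fusion, transformer layers over tokenized patch embeddings, and a context-attention re-weighting, all placed in the skip connections of an encoder-decoder CNN. The minimal PyTorch sketch below only illustrates how such modules might be chained inside one skip connection; the module names, layer sizes, and wiring are assumptions made for illustration and do not reproduce the authors' HT-Net implementation.

```python
# Illustrative sketch of a skip connection that combines multi-scale convolution,
# a transformer over tokenized features, and a global-context re-weighting gate.
# All names and hyperparameters here are assumptions, not the published HT-Net code.
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Fuses parallel convolutions with different receptive fields (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class TransformerSkip(nn.Module):
    """Tokenizes an encoder feature map and applies self-attention for long-range context."""
    def __init__(self, channels, num_heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C): one token per spatial location
        tokens = self.encoder(tokens)           # self-attention over all tokens
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class ContextAttention(nn.Module):
    """Re-weights pixels with a global-context gate (assumed form of the attention module)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SkipConnection(nn.Module):
    """Multi-scale -> transformer -> context attention, applied to one encoder stage."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            MultiScaleBlock(channels),
            TransformerSkip(channels),
            ContextAttention(channels),
        )

    def forward(self, encoder_feat):
        return self.block(encoder_feat)


if __name__ == "__main__":
    skip = SkipConnection(channels=64)
    feat = torch.randn(1, 64, 32, 32)            # a single encoder-stage feature map
    print(skip(feat).shape)                      # torch.Size([1, 64, 32, 32])
```

In a full U-shaped network, one such block would be applied per encoder stage before concatenating the result with the corresponding decoder features.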
Pages: 10692-10705
Page count: 14
References
47 records in total
[1] Alom MZ, 2018, Proc NAECON IEEE Nat, p. 228. DOI: 10.1109/NAECON.2018.8556686.
[2] Ben Abdallah M, Azar AT, Guedri H, Malek J, Belmabrouk H. Noise-estimation-based anisotropic diffusion approach for retinal blood vessel segmentation. Neural Computing & Applications, 2018, 29(8): 159-180.
[3] Cao H, 2021, arXiv:2105.05537.
[4] Chen J, 2021, arXiv:2102.04306.
[5] Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848.
[6] Chen L, Zhang H, Xiao J, Nie L, Shao J, Liu W, Chua T-S. SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 6298-6306.
[7] Dai Y, Gao Y, Liu F. TransMed: Transformers Advance Multi-Modal Medical Image Classification. Diagnostics, 2021, 11(8).
[8] Fan D-P, 2020, Medical Image Computing and Computer Assisted Intervention (MICCAI 2020), LNCS 12266: 263. DOI: 10.1007/978-3-030-59725-2_26.
[9] Deniz CM, Xiang S, Hallyburton RS, Welbeck A, Babb JS, Honig S, Cho K, Chang G. Segmentation of the Proximal Femur from MR Images using Deep Convolutional Neural Networks. Scientific Reports, 2018, 8.
[10] Devlin J, 2019, arXiv:2103.05940.