Multiple Self-attention Network for Intracranial Vessel Segmentation

Cited by: 2
Authors
Li, Yang [1 ,2 ]
Ni, Jiajia [1 ,3 ]
Elazab, Ahmed [4 ]
Wu, Jianhuang [1 ]
Affiliations
[1] Chinese Acad Sci, Lab Med Imaging & Digital Surg, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] HoHai Univ, Changzhou, Peoples R China
[4] Misr Higher Inst Commerce & Comp, Comp Sci Dept, Mansoura, Egypt
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Funding
National Natural Science Foundation of China;
Keywords
deep learning; intracranial vessel segmentation; self-attention; sliced mosaic permutation;
DOI
10.1109/IJCNN52387.2021.9534214
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The capture of long-distance dependencies presents an efficient approach to feature learning and extraction. In particular, Transformer models, which explore dependencies within long sequences, have swept the field of natural language processing with their powerful performance. However, Transformers require extremely high computing power due to their huge number of parameters, and they cannot run in parallel at inference since they output tokens one by one. In this work, inspired by Transformers, we propose a self-attention encoder module (SAEM) that focuses on learning the connections between each position and all other positions in the image, preserving the efficiency of Transformers with less computation and faster inference. In our SAEM, different groups of internal feature maps, captured by multiple scaled self-attentions, are cascaded to generate global context information. Based on our SAEM, a lightweight, parallel network is designed for segmentation of intracranial blood vessels. Moreover, we propose a data augmentation method, called sliced mosaic permutation, which cuts the original images at different scales and recombines them randomly, enriching the image features and alleviating the problem of class imbalance. We apply SAEM and sliced mosaic permutation to the task of intracranial blood vessel segmentation, and the results show that our method outperforms competitive methods in both visual and quantitative evaluation.
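The cascaded multi-scale self-attention that the abstract describes can be sketched roughly as follows. This minimal NumPy sketch only illustrates the general idea (attend over all positions of a feature map, run attention at coarser scales to shrink the quadratic cost, and cascade the results); the function names, shapes, strided downsampling, and fusion scheme are illustrative assumptions, not the paper's exact SAEM architecture.

```python
import numpy as np

def self_attention_2d(x):
    """Single-head self-attention over an (H, W, C) feature map:
    every position attends to every other position."""
    h, w, c = x.shape
    tokens = x.reshape(h * w, c)
    scores = tokens @ tokens.T / np.sqrt(c)        # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax
    out = attn @ tokens                            # aggregate global context
    return x + out.reshape(h, w, c)                # residual connection

def saem(x, scales=(1, 2)):
    """Cascade self-attention at several downsampled scales; coarser scales
    shrink the quadratic N x N attention cost."""
    h, w, _ = x.shape
    for s in scales:
        xs = x[::s, ::s]                           # strided downsample
        ys = self_attention_2d(xs)
        # Nearest-neighbour upsample back to full resolution and fuse.
        y = ys.repeat(s, axis=0).repeat(s, axis=1)[:h, :w]
        x = x + y
    return x
```

Running attention on a downsampled copy trades some spatial precision for a large reduction in the N x N attention matrix, which is the usual motivation for multi-scale attention variants.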
Pages: 8
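The sliced mosaic permutation augmentation can likewise be sketched: cut an image (and its label mask, so annotations stay aligned) into a tile grid and recombine the tiles in a random order. The `grid` parameter and the function signature below are assumptions for illustration, not the paper's exact procedure, which also mixes different cutting scales.

```python
import numpy as np

def sliced_mosaic_permutation(image, mask, grid=2, rng=None):
    """Cut image and mask into a grid x grid mosaic of tiles and
    reassemble the tiles in a random order.  Image and mask are
    permuted identically so the segmentation labels stay aligned."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid
    # Slice both arrays into tiles in row-major order.
    tiles_img = [image[i*th:(i+1)*th, j*tw:(j+1)*tw]
                 for i in range(grid) for j in range(grid)]
    tiles_msk = [mask[i*th:(i+1)*th, j*tw:(j+1)*tw]
                 for i in range(grid) for j in range(grid)]
    order = rng.permutation(grid * grid)
    # Rebuild each row from shuffled tiles, then stack the rows.
    rows_img = [np.concatenate([tiles_img[k] for k in order[i*grid:(i+1)*grid]], axis=1)
                for i in range(grid)]
    rows_msk = [np.concatenate([tiles_msk[k] for k in order[i*grid:(i+1)*grid]], axis=1)
                for i in range(grid)]
    return np.concatenate(rows_img, axis=0), np.concatenate(rows_msk, axis=0)
```

Because the permutation only rearranges tiles, the pixel content (and hence the foreground/background ratio within the whole image) is preserved while the spatial layout varies, which is how such augmentations enrich training data without inventing new labels.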