Multiple Self-attention Network for Intracranial Vessel Segmentation

Cited by: 3
Authors
Li, Yang [1 ,2 ]
Ni, Jiajia [1 ,3 ]
Elazab, Ahmed [4 ]
Wu, Jianhuang [1 ]
Affiliations
[1] Chinese Acad Sci, Lab Med Imaging & Digital Surg, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] HoHai Univ, Changzhou, Peoples R China
[4] Misr Higher Inst Commerce & Comp, Comp Sci Dept, Mansoura, Egypt
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Funding
National Natural Science Foundation of China
Keywords
deep learning; intracranial vessel segmentation; self-attention; sliced mosaic permutation;
DOI
10.1109/IJCNN52387.2021.9534214
CLC (Chinese Library Classification) number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Capturing long-distance dependencies is an effective approach to feature learning and extraction. In particular, Transformer models, which capture dependencies across long sequences, have swept the field of natural language processing with their strong performance. However, Transformers demand very high computing power because of their large number of parameters, and they cannot be fully parallelized at inference when output tokens are generated one by one. In this work, inspired by Transformers, we propose a self-attention encoder module (SAEM) that learns the connections between each position and all other positions in an image, retaining the strength of Transformer-style attention while requiring less computation and offering faster inference. In our SAEM, different groups of internal feature maps, captured by multiple scaled self-attention branches, are cascaded to generate global context information. Based on SAEM, we design a lightweight, parallel network for segmenting intracranial blood vessels. Moreover, we propose a data augmentation method, called sliced mosaic permutation, that enriches the original image features and alleviates class imbalance by cutting the original images at different scales and recombining the pieces randomly. We apply SAEM and sliced mosaic permutation to intracranial blood vessel segmentation, and the results show that our method outperforms competitive methods in both visual quality and quantitative evaluation.
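The abstract describes SAEM and sliced mosaic permutation only at a high level, so the sketches below are illustrative guesses rather than the authors' implementation: the channel-reduction factor, the scale sets, the residual connection, the 1x1 fusion convolution, and the tile grids are all assumptions.

```python
# Hypothetical PyTorch sketch of a multi-scale self-attention encoder module
# in the spirit of SAEM: several self-attention branches computed at different
# spatial scales are fused to produce global context. All sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaledSelfAttention(nn.Module):
    """Non-local style self-attention computed on a downsampled feature map."""

    def __init__(self, channels: int, scale: int):
        super().__init__()
        self.scale = scale
        self.query = nn.Conv2d(channels, max(channels // 8, 1), kernel_size=1)
        self.key = nn.Conv2d(channels, max(channels // 8, 1), kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Downsampling keeps the all-pairs attention affordable at large scales.
        xs = F.avg_pool2d(x, self.scale) if self.scale > 1 else x
        hs, ws = xs.shape[-2:]
        q = self.query(xs).flatten(2).transpose(1, 2)   # (B, N, C')
        k = self.key(xs).flatten(2)                     # (B, C', N)
        v = self.value(xs).flatten(2).transpose(1, 2)   # (B, N, C)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, hs, ws)
        out = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
        return x + out  # residual connection (an assumption)


class SAEMSketch(nn.Module):
    """Cascade several scaled self-attention branches and fuse with a 1x1 conv."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(ScaledSelfAttention(channels, s) for s in scales)
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```

Likewise, a minimal reading of sliced mosaic permutation is a tile-shuffling augmentation applied jointly to an image and its vessel mask; the grid sizes and the NumPy interface below are assumptions.

```python
# Hypothetical NumPy sketch of a sliced-mosaic-permutation style augmentation:
# cut the image (and its mask) into tiles at a randomly chosen scale and
# reassemble the tiles under one shared random permutation.
import numpy as np


def sliced_mosaic_permutation(image, mask, grids=(2, 4, 8), rng=None):
    """Shuffle grid x grid tiles of image and mask with the same permutation."""
    rng = rng if rng is not None else np.random.default_rng()
    grid = int(rng.choice(grids))
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid  # assumes h and w are divisible by grid

    tiles_img, tiles_msk = [], []
    for i in range(grid):
        for j in range(grid):
            ys, xs = slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw)
            tiles_img.append(image[ys, xs])
            tiles_msk.append(mask[ys, xs])

    order = rng.permutation(grid * grid)
    out_img, out_msk = image.copy(), mask.copy()
    for dst, src in enumerate(order):
        i, j = divmod(dst, grid)
        ys, xs = slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw)
        out_img[ys, xs] = tiles_img[src]
        out_msk[ys, xs] = tiles_msk[src]
    return out_img, out_msk
```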
Pages: 8