Multiple Self-attention Network for Intracranial Vessel Segmentation

Cited by: 3
Authors
Li, Yang [1 ,2 ]
Ni, Jiajia [1 ,3 ]
Elazab, Ahmed [4 ]
Wu, Jianhuang [1 ]
Affiliations
[1] Chinese Acad Sci, Lab Med Imaging & Digital Surg, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] HoHai Univ, Changzhou, Peoples R China
[4] Misr Higher Inst Commerce & Comp, Comp Sci Dept, Mansoura, Egypt
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Funding
National Natural Science Foundation of China;
Keywords
deep learning; intracranial vessel segmentation; self-attention; sliced mosaic permutation;
DOI
10.1109/IJCNN52387.2021.9534214
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Capturing long-range dependencies is an effective approach to feature learning and extraction. In particular, Transformer models, which explore dependencies across long sequences, have swept the field of natural language processing with their powerful performance. However, Transformers require extremely high computing power due to their huge number of parameters, and they cannot be parallelized at inference time because they output tokens one by one. In this work, inspired by Transformers, we propose a self-attention encoder module (SAEM) that learns the connections between each position in the image and all other positions, preserving the strengths of Transformers with less computation and faster inference. In SAEM, different groups of internal feature maps, captured by self-attention at multiple scales, are cascaded to generate global context information. Based on SAEM, we design a lightweight, parallel network for segmenting intracranial blood vessels. Moreover, we propose a data augmentation method, called sliced mosaic permutation, which enriches the original image features and alleviates class imbalance by cutting the original images at different scales and recombining the pieces randomly. We apply SAEM and sliced mosaic permutation to intracranial blood vessel segmentation; the results show that our method outperforms competitive methods in both visual comparison and quantitative evaluation.
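The abstract does not spell out SAEM's internal design, so the following is only a minimal sketch of a multi-scale self-attention encoder, assuming the standard non-local attention form with keys and values average-pooled at each scale and the per-scale outputs concatenated and fused by a 1x1 convolution. The class names, the scale set, and the fusion layer are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ScaledSelfAttention(nn.Module):
    """Non-local self-attention; keys/values pooled by `scale` (an assumption)."""
    def __init__(self, channels, scale=1):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.pool = nn.AvgPool2d(scale) if scale > 1 else nn.Identity()

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (B, HW, C/8): one query per pixel
        ctx = self.pool(x)                              # downsampled context for keys/values
        k = self.key(ctx).flatten(2)                    # (B, C/8, hw)
        v = self.value(ctx).flatten(2).transpose(1, 2)  # (B, hw, C)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, hw)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                  # residual keeps local features

class SAEM(nn.Module):
    """Assumed fusion: attention at several scales, concatenated, fused by 1x1 conv."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(ScaledSelfAttention(channels, s) for s in scales)
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```

For example, `SAEM(64)` applied to a `(1, 64, 96, 96)` tensor returns a tensor of the same shape. Pooling the keys and values before the attention product shrinks the attention matrix from (HW)^2 to HW x (HW/s^2) entries, which is one plausible route to the "less calculation and faster inference" the abstract claims.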
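Likewise, sliced mosaic permutation is described only as cutting images at different scales and recombining them randomly. A hedged sketch for a 2-D slice, assuming a square tile grid and one shared permutation applied to both the image and its segmentation label, could look like this (the function name and the `grid` parameter are hypothetical):

```python
import numpy as np

def sliced_mosaic_permutation(image, label, grid=2, rng=None):
    """Cut image and label into a grid x grid mosaic of tiles and shuffle the
    tiles with one shared random permutation (assumed reading of the method)."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid  # tile size; remainder borders are dropped

    def tiles(a):
        return [a[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
                for i in range(grid) for j in range(grid)]

    img_tiles, lab_tiles = tiles(image), tiles(label)
    order = rng.permutation(grid * grid)  # same permutation for image and label

    def assemble(ts):
        rows = [np.concatenate([ts[order[i * grid + j]] for j in range(grid)], axis=1)
                for i in range(grid)]
        return np.concatenate(rows, axis=0)

    return assemble(img_tiles), assemble(lab_tiles)

# Usage on a synthetic 256x256 slice and binary vessel mask:
img = np.random.rand(256, 256).astype(np.float32)
msk = (np.random.rand(256, 256) > 0.95).astype(np.uint8)
aug_img, aug_msk = sliced_mosaic_permutation(img, msk, grid=4)
```

Applying the same permutation to the image and the mask keeps every vessel pixel aligned with its label while scrambling the global layout; varying `grid` across samples would give the different cutting scales the abstract mentions.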
Pages: 8