Sparse Coding Inspired LSTM and Self-Attention Integration for Medical Image Segmentation

Cited: 0
Authors
Ji, Zexuan [1 ]
Ye, Shunlong [1 ]
Ma, Xiao [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Funding
US National Science Foundation;
Keywords
Sparse coding; contextual module; LSTM; self-attention; medical image segmentation; NETWORK; 2D; CLASSIFICATION;
DOI
10.1109/TIP.2024.3482189
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Accurate and automatic segmentation of medical images plays an essential role in clinical diagnosis and analysis. It has been established that integrating contextual relationships substantially enhances the representational ability of neural networks. Conventionally, Long Short-Term Memory (LSTM) and Self-Attention (SA) mechanisms have been recognized for their proficiency in capturing global dependencies within data. However, these mechanisms have typically been treated as distinct modules without a direct linkage. This paper presents the integration of LSTM design with SA sparse coding as a key innovation. It uses linear combinations of LSTM states for SA's query, key, and value (QKV) matrices to leverage LSTM's capability for state compression and historical data retention. This approach aims to rectify the shortcoming of conventional sparse coding methods that overlook temporal information, thereby enhancing SA's ability to perform sparse coding and capture global dependencies. Building upon this premise, we introduce two innovative modules that weave the SA matrix into the LSTM state design in distinct manners, enabling LSTM to model global dependencies more adeptly and integrate seamlessly with SA without incurring extra computational cost. Both modules are separately embedded into the U-shaped convolutional neural network architecture for handling both 2D and 3D medical images. Experimental evaluations on downstream medical image segmentation tasks reveal that our proposed modules not only excel on four widely used datasets across various baselines but also enhance prediction accuracy, even on baselines that have already incorporated contextual modules. Code is available at https://github.com/yeshunlong/SALSTM.
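The core idea in the abstract — deriving SA's query, key, and value matrices as linear combinations of LSTM hidden states — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that); all function names, gate layouts, and shapes here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lstm_states(x, Wx, Wh, b):
    """Run a minimal LSTM over x of shape (T, d_in); return all hidden states (T, d)."""
    d = Wh.shape[0]
    h, c = np.zeros(d), np.zeros(d)
    states = []
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b              # (4d,) gate pre-activations
        i, f, g, o = np.split(z, 4)             # input, forget, cell, output gates
        i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))
        c = f * c + i * np.tanh(g)              # state compression over history
        h = o * np.tanh(c)
        states.append(h)
    return np.stack(states)

def sa_from_lstm(H, Wq, Wk, Wv):
    """Self-attention whose Q, K, V are linear combinations of LSTM states H (T, d)."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (T, T) attention over positions
    return A @ V
```

Because Q, K, and V are projected from the recurrent states rather than from raw token embeddings, each attention query already encodes the compressed history up to that position, which is the temporal information the abstract says plain sparse coding overlooks.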
Pages: 6098 - 6113
Page count: 16
Related Papers
50 records total
  • [21] Semantic Segmentation of High-Resolution Remote Sensing Images Based on Sparse Self-Attention
    Sun, Li
    Zou, Huanxin
    Wei, Juan
    Li, Meilin
    Cao, Xu
    He, Shitian
    Liu, Shuo
    2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022), 2022, : 3492 - 3495
  • [22] ω-Net: Dual supervised medical image segmentation with multi-dimensional self-attention and diversely-connected multi-scale convolution
    Xu, Zhenghua
    Liu, Shijie
    Yuan, Di
    Wang, Lei
    Chen, Junyang
    Lukasiewicz, Thomas
    Fu, Zhigang
    Zhang, Rui
    NEUROCOMPUTING, 2022, 500 : 177 - 190
  • [23] Grain protein function prediction based on self-attention mechanism and bidirectional LSTM
    Liu, Jing
    Tang, Xinghua
    Guan, Xiao
    BRIEFINGS IN BIOINFORMATICS, 2023, 24 (01)
  • [24] Prostate MR Image Segmentation With Self-Attention Adversarial Training Based on Wasserstein Distance
    Su, Chengwei
    Huang, Renxiang
    Liu, Chang
    Yin, Tailang
    Du, Bo
    IEEE ACCESS, 2019, 7 : 184276 - 184284
  • [25] An Aerial Target Recognition Algorithm Based on Self-Attention and LSTM
    Liang, Futai
    Chen, Xin
    He, Song
    Song, Zihao
    Lu, Hao
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 81 (01): : 1101 - 1121
  • [26] Image Reconstruction by Sparse Coding and Selective Attention
    Li, Zhiqing
    Shi, Zhiping
    Li, Zhixin
    Shi, Zhongzhi
    PROCEEDINGS OF THE 2009 2ND INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING, VOLS 1-9, 2009, : 691 - 695
  • [27] Improving 3-D Medical Image Segmentation at Boundary Regions Using Local Self-Attention and Global Volume Mixing
    Abdul Kareem, D. N.
    Fiaz, M.
    Novershtern, N.
    Hanna, J.
    Cholakkal, H.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (06): : 3233 - 3244
  • [28] Self-attention random forest for breast cancer image classification
    Li, Jia
    Shi, Jingwen
    Chen, Jianrong
    Du, Ziqi
    Huang, Li
    FRONTIERS IN ONCOLOGY, 2023, 13
  • [29] Self-attention CNN for retinal layer segmentation in OCT
    Cao, Guogang
    Wu, Yan
    Peng, Zeyu
    Zhou, Zhilin
    Dai, Cuixia
    BIOMEDICAL OPTICS EXPRESS, 2024, 15 (03) : 1605 - 1617
  • [30] Multiple Self-attention Network for Intracranial Vessel Segmentation
    Li, Yang
    Ni, Jiajia
    Elazab, Ahmed
    Wu, Jianhuang
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,