DCCAT: Dual-Coordinate Cross-Attention Transformer for thrombus segmentation on coronary OCT

Cited by: 2
Authors
Chu, Miao [1 ,2 ,3 ]
De Maria, Giovanni Luigi [2 ,3 ]
Dai, Ruobing [1 ]
Benenati, Stefano [2 ,3 ,5 ]
Yu, Wei [1 ]
Zhong, Jiaxin [1 ,6 ]
Kotronias, Rafail [2 ,3 ,4 ]
Walsh, Jason [2 ,3 ,4 ]
Andreaggi, Stefano [2 ,7 ]
Zuccarelli, Vittorio [2 ]
Chai, Jason [2 ,3 ]
Channon, Keith [2 ,3 ,4 ]
Banning, Adrian [2 ,3 ,4 ]
Tu, Shengxian [1 ,3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Biomed Instrument Inst, Sch Biomed Engn, Shanghai, Peoples R China
[2] Oxford Univ Hosp NHS Trust, Oxford Heart Ctr, Oxford, England
[3] Univ Oxford, Radcliffe Dept Med, Div Cardiovasc Med, Oxford, England
[4] Oxford Biomed Res Ctr, Natl Inst Hlth Res, Oxford, England
[5] Univ Genoa, Genoa, Italy
[6] Fujian Med Univ, Union Hosp, Dept Cardiol, Fuzhou, Fujian, Peoples R China
[7] Univ Verona, Dept Med, Div Cardiol, Verona, Italy
Funding
National Natural Science Foundation of China
Keywords
Acute coronary syndromes; Optical coherence tomography; Thrombus segmentation; Cross-attention; Plaque erosion; Neural network; Diagnosis
DOI
10.1016/j.media.2024.103265
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Acute coronary syndromes (ACS) are one of the leading causes of mortality worldwide, with atherosclerotic plaque rupture and subsequent thrombus formation as the main underlying substrate. Evaluating thrombus burden is important for tailoring treatment and predicting prognosis. Coronary optical coherence tomography (OCT) enables in-vivo visualization of thrombus that cannot be achieved by other imaging modalities. However, automatic quantification of thrombus on OCT has not been implemented. The main challenges stem from the variation in thrombus location, size, and shape irregularity, compounded by the small available dataset. In this paper, we propose a novel dual-coordinate cross-attention transformer network, termed DCCAT, to overcome these challenges and achieve the first automatic segmentation of thrombus on OCT. Imaging features from both Cartesian and polar coordinates are encoded and fused based on long-range correspondence via a multi-head cross-attention mechanism. The dual-coordinate cross-attention block is hierarchically stacked amid convolutional layers at multiple levels, allowing comprehensive feature enhancement. The model was developed on 5,649 OCT frames from 339 patients and tested on independent external OCT data comprising 548 frames from 52 patients. DCCAT achieved a Dice similarity score (DSC) of 0.706 in segmenting thrombus, significantly higher than that of CNN-based (0.656) and Transformer-based (0.584) models. We show that the additional polar-image input not only leverages discriminative features from another coordinate system but also improves model robustness to geometric transformations. Experimental results show that DCCAT achieves competitive performance with only 10% of the total data, highlighting its data efficiency. The proposed dual-coordinate cross-attention design can be easily integrated into other Transformer models to boost performance.
Pages: 13
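To make the fusion described in the abstract concrete, the following is a minimal, hedged PyTorch sketch of one dual-coordinate cross-attention block: feature maps from a Cartesian branch and a polar branch are flattened into tokens, each branch queries the other via multi-head cross-attention, and the enhanced maps are fused by a 1x1 convolution. The class and variable names (DualCoordinateCrossAttention, feat_cart, feat_polar), the residual-plus-concatenation fusion, and the assumption that both branches share the same spatial size and channel count are illustrative choices, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): dual-coordinate cross-attention in PyTorch,
# assuming Cartesian and polar feature maps of identical shape (B, C, H, W).
import torch
import torch.nn as nn


class DualCoordinateCrossAttention(nn.Module):
    """Fuse Cartesian and polar feature maps with multi-head cross-attention.

    Tokens from one coordinate system act as queries and tokens from the other
    as keys/values, so each branch can borrow long-range context from the other
    representation of the same OCT frame.
    """

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm_cart = nn.LayerNorm(channels)
        self.norm_polar = nn.LayerNorm(channels)
        # Cartesian queries attend to polar keys/values, and vice versa.
        self.cart_from_polar = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.polar_from_cart = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_cart: torch.Tensor, feat_polar: torch.Tensor) -> torch.Tensor:
        # feat_*: (B, C, H, W) feature maps from two convolutional branches.
        b, c, h, w = feat_cart.shape
        cart_tokens = self.norm_cart(feat_cart.flatten(2).transpose(1, 2))    # (B, HW, C)
        polar_tokens = self.norm_polar(feat_polar.flatten(2).transpose(1, 2))

        # Cross-attention in both directions.
        cart_enh, _ = self.cart_from_polar(cart_tokens, polar_tokens, polar_tokens)
        polar_enh, _ = self.polar_from_cart(polar_tokens, cart_tokens, cart_tokens)

        # Residual connections, reshape back to feature maps, then fuse.
        cart_map = (cart_tokens + cart_enh).transpose(1, 2).reshape(b, c, h, w)
        polar_map = (polar_tokens + polar_enh).transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([cart_map, polar_map], dim=1))


if __name__ == "__main__":
    block = DualCoordinateCrossAttention(channels=64)
    cart = torch.randn(2, 64, 32, 32)    # Cartesian-branch features
    polar = torch.randn(2, 64, 32, 32)   # polar-branch features (same size assumed)
    print(block(cart, polar).shape)      # torch.Size([2, 64, 32, 32])
```

In the paper's design such a block is described as being stacked hierarchically amid convolutional layers at multiple levels; the sketch shows only a single level and a simple concatenation-based fusion.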