Breast cancer lesion segmentation based on co-learning feature fusion and Transformer

Citations: 0
Authors
Zhai Y. [1 ]
Chen Z. [1 ]
Shao D. [2 ]
Affiliations
[1] School of Computer Science and Engineering, Shenyang Jianzhu University, Shenyang
[2] Department of Nuclear Medicine, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou
Source
Shengwu Yixue Gongchengxue Zazhi/Journal of Biomedical Engineering | 2024, Vol. 41, No. 2
Keywords
Breast cancer lesion segmentation; Co-learning feature fusion; Dual-path U-Net; Positron emission tomography and computed tomography; Transformer
DOI
10.7507/1001-5515.202306063
Abstract
PET/CT imaging, which combines positron emission tomography (PET) and computed tomography (CT), is one of the most advanced imaging examination methods currently available, and is mainly used for tumor screening, differential diagnosis of benign and malignant tumors, and staging and grading. This paper proposes a breast cancer lesion segmentation method based on bimodal PET/CT images and designs a dual-path U-Net framework consisting of three main modules: an encoder module, a feature fusion module, and a decoder module. The encoder module uses conventional convolutions to extract features from each single-modality image; the feature fusion module adopts a co-learning feature fusion technique and uses a Transformer to extract global features of the fused map; the decoder module mainly uses a multi-layer perceptron to achieve lesion segmentation. The experiments use real clinical PET/CT data to evaluate the effectiveness of the algorithm. The results show that the precision, recall, and accuracy of breast cancer lesion segmentation reach 95.67%, 97.58%, and 96.16%, respectively, all exceeding the baseline algorithms. These results demonstrate the soundness of the proposed single- and bimodal feature extraction scheme combining convolution with Transformer, and provide a reference for feature extraction methods in tasks such as multimodal medical image segmentation and classification.
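The abstract describes the fusion module only at a high level. As a rough illustration of what fusing PET and CT feature maps with jointly learned (co-learned) weights might look like, the following NumPy sketch weights each channel of the two modality feature maps by a softmax over their channel descriptors; the function name and the specific weighting scheme are hypothetical simplifications, not the paper's actual implementation.

```python
import numpy as np

def colearn_fuse(pet_feat, ct_feat):
    """Fuse two modality feature maps of shape (C, H, W) into one.

    Hypothetical simplification of co-learning feature fusion:
    each channel gets a pair of weights (one per modality) computed
    jointly from both modalities' channel descriptors.
    """
    # Global average pooling: one descriptor per channel per modality
    pet_desc = pet_feat.mean(axis=(1, 2))   # shape (C,)
    ct_desc = ct_feat.mean(axis=(1, 2))     # shape (C,)
    # Softmax across the two modalities, per channel -> weights sum to 1
    logits = np.stack([pet_desc, ct_desc])  # shape (2, C)
    w = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    # Channel-wise convex combination of the two feature maps
    fused = w[0][:, None, None] * pet_feat + w[1][:, None, None] * ct_feat
    return fused
```

In the paper's pipeline, a map fused along these lines would then be passed to a Transformer to extract global features before decoding; here only the per-channel weighting step is sketched.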
Pages: 237-245
Page count: 8