RRANet: A Reverse Region-Aware Network with Edge Difference for Accurate Breast Tumor Segmentation in Ultrasound Images

Cited by: 0
Authors
Chen, Zhengyu [1 ]
Song, Xiaoning [1 ,2 ]
Hua, Yang [1 ]
Zhan, Wenjie [1 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi, Jiangsu, Peoples R China
[2] DiTu Suzhou Biotechnol Co Ltd, Suzhou, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XIV | 2025 / Vol. 15044
Funding
National Natural Science Foundation of China
Keywords
Breast tumor; Edge prior information; Region-aware model; Segmentation; Features
DOI
10.1007/978-981-97-8496-7_35
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Breast UltraSound (BUS) image segmentation is crucial for the diagnosis and analysis of breast cancer. However, most existing methods for BUS tend to overlook vital edge information. Meanwhile, noise, similar intensity distributions, and variations in tumor shape and size lead to severe missed and false detections. To address these issues, we propose a Reverse Region-Aware Network with Edge Difference, called RRANet, which learns edge information and region information from low-level and high-level features, respectively. Specifically, we first design Edge Difference Convolution (EDC) to fully mine edge information. EDC aggregates intensity and gradient information to obtain edge details of low-level features in both horizontal and vertical directions. Next, we propose a Multi-Scale Adaptive Module (MSAM) that can effectively extract global information from high-level features. MSAM encodes features in the spatial dimension, which expands the receptive field and captures more local contextual information. In addition, we develop the Reverse Region-Aware Module (RRAM) to gradually refine the global information. This module establishes the relationship between region and edge cues while correcting some erroneous predictions. Finally, the edge information and global information are fused to improve the prediction accuracy of BUS images. Extensive experiments on three challenging public BUS datasets show that our model outperforms several state-of-the-art medical image segmentation methods.
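The abstract describes EDC as aggregating intensity and gradient information along the horizontal and vertical directions. As a rough illustration of that idea only (the paper's actual EDC is a learnable convolution layer whose exact formulation is not given in this record), the sketch below fuses central-difference gradients with raw pixel intensity; the function name and the `alpha` mixing weight are assumptions for the example:

```python
def edge_difference(img, alpha=0.5):
    """Toy edge-difference operator: combines horizontal and vertical
    central-difference gradients with raw intensity.
    NOTE: illustrative only -- not the paper's actual EDC layer."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            grad = abs(gx) + abs(gy)             # L1 gradient magnitude
            # fuse gradient (edge cue) with intensity (appearance cue)
            out[y][x] = alpha * grad + (1 - alpha) * img[y][x]
    return out

# A vertical step edge: the response peaks around the intensity jump.
step = [[0, 0, 1, 1] for _ in range(4)]
response = edge_difference(step)
```

In the paper this mixing would be learned per-channel inside a convolution kernel rather than fixed by a scalar `alpha`; the sketch only shows why combining the two cues highlights tumor boundaries that intensity alone misses.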
Pages: 504-517
Page count: 14