Saliency map-guided hierarchical dense feature aggregation framework for breast lesion classification using ultrasound image
Cited: 11
Authors:
Di, Xiaohui [1,2,3]
Zhong, Shengzhou [1,2,3]
Zhang, Yu [1,2,3]
Affiliations:
[1] Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
[2] Southern Med Univ, Guangdong Prov Key Lab Med Image Proc, Guangzhou 510515, Peoples R China
[3] Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Te, Guangzhou 510515, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Breast lesion classification;
Ultrasound image;
Saliency map;
Hierarchical dense feature aggregation;
Deep learning;
CANCER STATISTICS;
BENIGN;
DOI:
10.1016/j.cmpb.2021.106612
Chinese Library Classification:
TP39 [Computer Applications];
Discipline Classification Codes:
081203 ;
0835 ;
Abstract:
Deep learning methods, especially convolutional neural networks, have advanced breast lesion classification using breast ultrasound (BUS) images. However, constructing a highly accurate classification model remains challenging due to the complex patterns, relatively low contrast, and fuzzy boundaries between lesion regions (i.e., foreground) and the surrounding tissues (i.e., background). Few studies have separated foreground and background to learn domain-specific representations and then fused them to improve model performance. In this paper, we propose a saliency map-guided hierarchical dense feature aggregation framework for breast lesion classification using BUS images. Specifically, we first generate saliency maps for foreground and background via super-pixel clustering and multi-scale region grouping. Then, a triple-branch network, comprising two feature extraction branches and a feature aggregation branch, is constructed to learn and fuse discriminative representations under the guidance of the priors provided by the saliency maps. In particular, the two feature extraction branches take the original image and the corresponding saliency map as input to extract foreground- and background-specific representations. A hierarchical feature aggregation branch then receives and fuses features from different stages of the two extraction branches for lesion classification in a task-oriented manner. The proposed model was evaluated on three datasets using 5-fold cross-validation, and the experimental results demonstrate that it outperforms several state-of-the-art deep learning methods for breast lesion diagnosis using BUS images. (C) 2022 Elsevier B.V. All rights reserved.
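The abstract describes the triple-branch design only at a high level. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the stage widths, the two-channel branch inputs (image plus one saliency map per branch), the concatenation-plus-1x1-convolution fusion, and the classifier head are all illustrative assumptions, and the saliency map generation step (super-pixel clustering and multi-scale region grouping) is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_stage(in_ch, out_ch):
    # One encoder stage: two 3x3 convolutions followed by 2x spatial downsampling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TripleBranchNet(nn.Module):
    # Two feature extraction branches (foreground / background) plus a hierarchical
    # aggregation branch that fuses their stage-wise features for classification.
    def __init__(self, num_classes=2, widths=(32, 64, 128)):
        super().__init__()
        self.fg_stages, self.bg_stages, self.agg_stages = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_ch, in_agg = 2, 0  # each branch input: grayscale image + one saliency map
        for w in widths:
            self.fg_stages.append(conv_stage(in_ch, w))
            self.bg_stages.append(conv_stage(in_ch, w))
            # Fuse the previously aggregated features with both branches' current outputs.
            self.agg_stages.append(nn.Sequential(
                nn.Conv2d(in_agg + 2 * w, w, 1), nn.BatchNorm2d(w), nn.ReLU(inplace=True)))
            in_ch, in_agg = w, w
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(widths[-1], num_classes))

    def forward(self, image, fg_map, bg_map):
        f = torch.cat([image, fg_map], dim=1)  # foreground-specific branch input
        b = torch.cat([image, bg_map], dim=1)  # background-specific branch input
        agg = None
        for fg_stage, bg_stage, agg_stage in zip(self.fg_stages, self.bg_stages, self.agg_stages):
            f, b = fg_stage(f), bg_stage(b)
            if agg is None:
                fused = torch.cat([f, b], dim=1)
            else:
                # Downsample the running aggregate to match the new stage resolution.
                fused = torch.cat([F.max_pool2d(agg, 2), f, b], dim=1)
            agg = agg_stage(fused)
        return self.head(agg)


# Example: a batch of 4 grayscale BUS images with foreground/background saliency maps.
net = TripleBranchNet()
image = torch.randn(4, 1, 224, 224)
fg_map, bg_map = torch.rand(4, 1, 224, 224), torch.rand(4, 1, 224, 224)
logits = net(image, fg_map, bg_map)  # shape (4, 2): benign vs. malignant scores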
Pages: 10