Improved breast ultrasound tumor classification using dual-input CNN with GAP-guided attention loss

Cited by: 2
Authors
Zou, Xiao [1 ]
Zhai, Jintao [1 ]
Qian, Shengyou [1 ]
Li, Ang [1 ]
Tian, Feng [1 ]
Cao, Xiaofei [2 ]
Wang, Runmin [2 ]
Affiliations
[1] Hunan Normal Univ, Sch Phys & Elect, Changsha 410081, Peoples R China
[2] Hunan Normal Univ, Coll Informat Sci & Engn, Changsha 410081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
ultrasound image; breast ultrasound tumor; convolutional neural network; feature fusion; classification; IMAGE CLASSIFICATION; RESIDUAL NETWORK; CANCER; DIAGNOSIS; ALGORITHM; FEATURES;
DOI
10.3934/mbe.2023682
Chinese Library Classification (CLC)
Q [Biological Sciences];
Discipline classification code
07 ; 0710 ; 09 ;
Abstract
Ultrasonography is a widely used medical imaging technique for detecting breast cancer. Manual diagnosis is subject to variability and is time-consuming, whereas computer-aided diagnostic (CAD) methods have proven more efficient. However, current CAD approaches neglect the impact of noise and artifacts on the accuracy of image analysis. To improve the precision with which breast ultrasound images are analyzed for identifying tissues, organs and lesions, we propose a novel approach to tumor classification based on a dual-input model and a global average pooling (GAP)-guided attention loss function. Our approach combines a convolutional neural network with a transformer architecture and extends the single-input model to a dual-input design. A fusion module and the GAP-guided attention loss function are used jointly to supervise the extraction of effective features from the target region and to mitigate misclassification caused by information loss or redundancy. The proposed method has three key features: (i) ResNet and MobileViT are combined to enhance local and global information extraction. In addition, a dual-input channel that accepts both attention images and original breast ultrasound images is designed to mitigate the impact of noise and artifacts in ultrasound images. (ii) A fusion module and a GAP-guided attention loss function are proposed to improve the fusion of dual-channel feature information and to supervise and constrain the weight of the attention mechanism on the fused focus region. (iii) ResNet18 is pre-trained on a collected uterine fibroid ultrasound dataset and the resulting weights are loaded before training; experiments on the BUSI and BUSC public datasets demonstrate that the proposed method outperforms some state-of-the-art methods. The code will be publicly released at https://github.com/425877/Improved-Breast-Ultrasound-Tumor-Classification.
Pages: 15244-15264
Page count: 21
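
For illustration, the sketch below shows one way the dual-input idea and a GAP-guided attention weighting summarized in the abstract could be realized in PyTorch. It is a minimal, hypothetical example: the authors' actual fusion module, MobileViT branch, attention-image generation and published loss formulation are not described in this record, so both branches use an off-the-shelf ResNet18 backbone, the fusion is a plain channel concatenation, and the loss simply adds an L1 penalty on GAP-derived attention weights to the cross-entropy term. The names DualInputClassifier and gap_guided_loss are assumptions, not the released code.

# Hypothetical sketch of a dual-input classifier with a GAP-derived attention
# weighting, loosely following the ideas in the abstract above. The authors'
# actual fusion module, MobileViT branch and GAP-guided attention loss are
# NOT specified in this record; every name below is illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet18  # torchvision >= 0.13


class DualInputClassifier(nn.Module):
    """Two CNN branches: one for the original B-mode image, one for an
    attention (e.g., ROI-enhanced) image. Features are fused by channel
    concatenation and classified after global average pooling (GAP)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Branch backbones (the paper combines ResNet with MobileViT;
        # here both branches use ResNet18 purely for illustration).
        self.branch_orig = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.branch_attn = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.gap = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, x_orig, x_attn):
        f1 = self.branch_orig(x_orig)                 # (B, 512, H', W')
        f2 = self.branch_attn(x_attn)                 # (B, 512, H', W')
        fused = torch.cat([f1, f2], dim=1)            # simple channel fusion
        # GAP gives per-channel weights that re-weight the fused maps,
        # a stand-in for the paper's GAP-guided attention mechanism.
        w = torch.sigmoid(self.gap(fused))            # (B, 1024, 1, 1)
        attended = fused * w
        pooled = self.gap(attended).flatten(1)        # (B, 1024)
        return self.classifier(pooled), w


def gap_guided_loss(logits, labels, attn_weights, lam: float = 0.1):
    """Cross-entropy plus an L1 sparsity term on the GAP-derived attention
    weights -- one plausible way to constrain the attention, not the
    authors' published formulation."""
    ce = nn.functional.cross_entropy(logits, labels)
    return ce + lam * attn_weights.abs().mean()


if __name__ == "__main__":
    model = DualInputClassifier(num_classes=2)
    x1 = torch.randn(4, 3, 224, 224)   # original ultrasound images
    x2 = torch.randn(4, 3, 224, 224)   # attention images (same size)
    y = torch.randint(0, 2, (4,))      # benign / malignant labels
    logits, w = model(x1, x2)
    loss = gap_guided_loss(logits, y, w)
    print(logits.shape, loss.item())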