Dual-attention EfficientNet based on multi-view feature fusion for cervical squamous intraepithelial lesions diagnosis

Cited by: 12
Authors
Guo, Ying [1]
Wang, Yongxiong [1]
Yang, Huimin [2]
Zhang, Jiapeng [1]
Sun, Qing [2]
Affiliations
[1] Univ Shanghai Sci & Technol, Shanghai 200093, Peoples R China
[2] Wan Nan Med Coll, Affiliated Hosp 1, Wuhu 241000, Peoples R China
Funding
Natural Science Foundation of Shanghai;
Keywords
Cervical cancer; Colposcopy; Deep-learning; Multi-view; Dual-attention; Disease classification; IMAGE-ANALYSIS; CLASSIFICATION; ACCURACY;
DOI
10.1016/j.bbe.2022.02.009
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline code
0831;
Abstract
Cervigrams are widely used in cervical cancer screening but exhibit a high misdiagnosis rate; even senior experts achieve only 48% specificity in clinical examinations. Most existing methods use only single-view images, taken after applying acetic acid or Lugol's iodine solution, as their input data, ignoring the fact that non-pathological tissue may show false-positive reactions in these single-view images, which can lead to clinical misdiagnosis. It is therefore essential to extract features from multi-view colposcopy images (including the original, unstained images) as inputs, because the three-view cervigrams provide complementary information. In this work, we propose an improved EfficientNet based on multi-view feature fusion for the automatic diagnosis of cervical squamous intraepithelial lesions. Specifically, EfficientNet-B0 is employed as the backbone network, and the three-view images are taken as inputs by channel cascading to reduce misclassification. Additionally, we propose a dual-attention mechanism that implements feature selection based on the Convolutional Block Attention Module (CBAM) and Coordinate Attention (CA). These two attention mechanisms complement each other to enhance the feature representation of HSIL. On a dataset of 3294 clinical cervigrams, we obtain 90.0% accuracy, with recall, specificity, and F1-score of 87.1%, 93.0%, and 89.7%, respectively. Experimental results show that this method can help clinicians with precise disease classification and diagnosis, and that it outperforms known related works. (C) 2022 Nalecz Institute of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences. Published by Elsevier B.V. All rights reserved.
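The abstract's input-fusion idea can be illustrated with a minimal sketch: three single-view images are stacked along the channel axis ("channel cascading"), and a CBAM-style channel-attention gate reweights the cascaded channels. This is not the authors' implementation; the shapes, the reduction ratio `r`, and the shared-MLP weights `w1`/`w2` are all hypothetical, and the full method additionally uses spatial attention, Coordinate Attention, and an EfficientNet-B0 backbone, none of which are shown here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cascade_views(original, acetic, iodine):
    """Channel-cascade three colposcopy views, each (H, W, 3) -> (H, W, 9)."""
    return np.concatenate([original, acetic, iodine], axis=-1)

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on an (H, W, C) feature map.

    w1 (C, C//r) and w2 (C//r, C) are the shared-MLP weights
    (hypothetical, randomly initialised here)."""
    avg = feat.mean(axis=(0, 1))                 # (C,) global average pool
    mx = feat.max(axis=(0, 1))                   # (C,) global max pool
    mlp = lambda v: np.maximum(v @ w1, 0) @ w2   # shared two-layer MLP (ReLU)
    gates = sigmoid(mlp(avg) + mlp(mx))          # (C,) per-channel gates in (0, 1)
    return feat * gates                          # broadcast channel reweighting

rng = np.random.default_rng(0)
views = [rng.random((64, 64, 3)) for _ in range(3)]  # toy stand-ins for the 3 views
x = cascade_views(*views)                            # (64, 64, 9)
C, r = x.shape[-1], 3
w1 = rng.standard_normal((C, C // r))
w2 = rng.standard_normal((C // r, C))
y = channel_attention(x, w1, w2)
print(x.shape, y.shape)  # (64, 64, 9) (64, 64, 9)
```

Because the gates lie in (0, 1), attention can only suppress channels here; in the paper the dual-attention output feeds the backbone, which learns to compensate through its own convolutions.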
Pages: 529-542
Number of pages: 14
References (59 total)
[51] Woo, Sanghyun; Park, Jongchan; Lee, Joon-Young; Kweon, In So. CBAM: Convolutional Block Attention Module. Computer Vision - ECCV 2018, Pt VII, 2018, 11211: 3-19.
[52] Xu T, 2015, IEEE International Symposium on Biomedical Imaging (ISBI), p. 281, DOI 10.1109/ISBI.2015.7163868.
[53] Xue, Peng; Ng, Man Tat Alexander; Qiao, Youlin. The challenges of colposcopy for cervical cancer screening in LMICs and solutions by artificial intelligence. BMC Medicine, 2020, 18 (01).
[54] Yan, Ling; Li, Shufeng; Guo, Yi; Ren, Peng; Song, Haoxuan; Yang, Jingjing; Shen, Xingfa. Multi-state colposcopy image fusion for cervical precancerous lesion diagnosis using BF-CNN. Biomedical Signal Processing and Control, 2021, 68.
[55] Yang, Bo; Chen, Cheng; Chen, Fangfang; Ma, Cailing; Chen, Chen; Zhang, Huiting; Gao, Rui; Zhang, Shuailei; Lv, Xiaoyi. Feature fusion combined with tissue Raman spectroscopy to screen cervical cancer. Journal of Raman Spectroscopy, 2021, 52 (11): 1830-1837.
[56] Yuan, Chunnv; Yao, Yeli; Cheng, Bei; Cheng, Yifan; Li, Ying; Li, Yang; Liu, Xuechen; Cheng, Xiaodong; Xie, Xing; Wu, Jian; Wang, Xinyu; Lu, Weiguo. The application of deep learning based diagnostic system to cervical squamous intraepithelial lesions recognition in colposcopy images. Scientific Reports, 2020, 10 (01).
[57] Zagoruyko S., 2017, Proceedings of the International Conference on Learning Representations (ICLR).
[58] Zhao, Yuqian; Li, Yucong; Xing, Lu; Lei, Haike; Chen, Duke; Tang, Chao; Li, Xiaosheng. The Performance of Artificial Intelligence in Cervical Colposcopy: A Retrospective Data Analysis. Journal of Oncology, 2022, 2022.
[59] Zhou, Bolei; Khosla, Aditya; Lapedriza, Agata; Oliva, Aude; Torralba, Antonio. Learning Deep Features for Discriminative Localization. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 2921-2929.