A Study on Automatic O-RADS Classification of Sonograms of Ovarian Adnexal Lesions Based on Deep Convolutional Neural Networks

Cited by: 1
Authors
Liu, Tao [1 ]
Miao, Kuo [1 ]
Tan, Gaoqiang [1 ]
Bu, Hanqi [1 ]
Shao, Xiaohui [1 ]
Wang, Siming [1 ]
Dong, Xiaoqiu [1 ]
Affiliations
[1] Harbin Med Univ, Affiliated Hosp 4, Dept Ultrasound Med, Harbin, Heilongjiang, Peoples R China
Keywords
Deep convolutional neural network (DCNN); Ovarian adnexal lesions; O-RADS; Sonograms; Deep learning; MULTICENTER; MANAGEMENT; DIAGNOSIS; MODEL
DOI
10.1016/j.ultrasmedbio.2024.11.009
Chinese Library Classification (CLC) number
O42 [Acoustics]
Discipline classification codes
070206; 082403
Abstract
Objective: This study explored a new method for automatic O-RADS classification of sonograms based on a deep convolutional neural network (DCNN).
Methods: A development dataset (DD) of 2,455 two-dimensional grayscale sonograms of 870 ovarian adnexal lesions and an intertemporal validation dataset (IVD) of 426 sonograms of 280 lesions were collected and classified according to O-RADS v2022 (categories 2-5) by three senior sonographers. Classifications that a two-tailed z-test confirmed to be consistent with the O-RADS v2022 malignancy rates, indicating diagnostic performance comparable to that of a previous study, were used for training; otherwise, the classification was repeated by two different sonographers. The DD was used to develop three DCNN models (ResNet34, DenseNet121, and ConvNeXt-Tiny) that employed transfer learning. Model performance was assessed with accuracy, precision, F1 score, and other metrics. The optimal model was selected, validated over time on the IVD, and used to analyze whether its assistance improved the O-RADS classification efficiency of three sonographers with different years of experience.
Results: The proportion of malignant tumors in each O-RADS-defined risk category of the DD and IVD was verified with a two-tailed z-test. Malignant lesions (O-RADS categories 4 and 5) were diagnosed in the DD and IVD with sensitivities of 0.949 and 0.962 and specificities of 0.892 and 0.842, respectively. ResNet34, DenseNet121, and ConvNeXt-Tiny achieved overall accuracies of 0.737, 0.752, and 0.878, respectively, for sonogram prediction in the DD. The ConvNeXt-Tiny model's accuracy for sonogram prediction in the IVD was 0.859, with no significant difference between test sets. Model assistance significantly reduced the O-RADS classification time of the three sonographers (Cohen's d = 5.75).
Conclusion: ConvNeXt-Tiny showed robust and stable performance in classifying O-RADS categories 2-5 and improved sonographers' classification efficiency.
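The abstract does not give implementation details, but the transfer-learning setup it describes (an ImageNet-pretrained ConvNeXt-Tiny fine-tuned to predict the four O-RADS categories 2-5) can be sketched with torchvision as below. This is a minimal illustration, not the authors' code; the input size, optimizer, learning rate, and the grayscale-to-RGB handling are assumptions.

```python
# Minimal sketch (not the authors' code): fine-tuning an ImageNet-pretrained
# ConvNeXt-Tiny to classify sonograms into the four O-RADS categories 2-5.
# Hyperparameters and preprocessing choices below are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 4  # O-RADS 2, 3, 4, 5

# Load ImageNet weights (transfer learning), then replace the classification head.
weights = models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1
model = models.convnext_tiny(weights=weights)
in_features = model.classifier[2].in_features        # 768 for ConvNeXt-Tiny
model.classifier[2] = nn.Linear(in_features, NUM_CLASSES)

# Grayscale sonograms are commonly replicated to three channels so the pretrained
# RGB stem can be reused (an assumption; the abstract does not state this).
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of preprocessed sonograms."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Likewise, the category-wise check described in the Methods (a two-tailed z-test comparing the observed malignancy proportion in each O-RADS risk category against a reference rate) can be sketched with statsmodels; the counts and reference rate below are placeholders, not values from the study.

```python
# Sketch of a two-tailed one-sample z-test on a category's malignancy proportion.
# All numbers here are illustrative placeholders, not data from the paper.
from statsmodels.stats.proportion import proportions_ztest

n_lesions_in_category = 120   # lesions assigned to, e.g., O-RADS 4 (placeholder)
n_malignant = 30              # of which pathologically malignant (placeholder)
reference_rate = 0.20         # assumed reference malignancy rate for that category

z_stat, p_value = proportions_ztest(
    count=n_malignant,
    nobs=n_lesions_in_category,
    value=reference_rate,
    alternative="two-sided",
)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05 -> consistent with the reference rate
```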
Pages: 387-395
Number of pages: 9