A Study on Automatic O-RADS Classification of Sonograms of Ovarian Adnexal Lesions Based on Deep Convolutional Neural Networks

Cited by: 1
Authors
Liu, Tao [1 ]
Miao, Kuo [1 ]
Tan, Gaoqiang [1 ]
Bu, Hanqi [1 ]
Shao, Xiaohui [1 ]
Wang, Siming [1 ]
Dong, Xiaoqiu [1 ]
Affiliations
[1] Harbin Med Univ, Affiliated Hosp 4, Dept Ultrasound Med, Harbin, Heilongjiang, Peoples R China
Keywords
Deep convolutional neural network (DCNN); Ovarian adnexal lesions; O-RADS; Sonograms; Deep learning; MULTICENTER; MANAGEMENT; DIAGNOSIS; MODEL;
DOI
10.1016/j.ultrasmedbio.2024.11.009
CLC Classification Number
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Objective: This study explored a new method for automatic O-RADS classification of sonograms based on a deep convolutional neural network (DCNN).

Methods: A development dataset (DD) of 2,455 2D grayscale sonograms of 870 ovarian adnexal lesions and an intertemporal validation dataset (IVD) of 426 sonograms of 280 lesions were collected and classified according to O-RADS v2022 (categories 2-5) by three senior sonographers. Classifications whose malignancy rates a two-tailed z-test confirmed to be consistent with O-RADS v2022 (indicating diagnostic performance comparable to that of a previous study) were used for training; otherwise, classification was repeated by two different sonographers. The DD was used to develop three DCNN models (ResNet34, DenseNet121, and ConvNeXt-Tiny) using transfer learning. Model performance was assessed with accuracy, precision, F1 score, and other metrics. The best-performing model was then temporally validated on the IVD, and we analyzed whether model assistance improved the O-RADS classification efficiency of three sonographers with different years of experience.

Results: The proportion of malignant tumors in each O-RADS-defined risk category in the DD and IVD was verified with a two-tailed z-test. Malignant lesions (O-RADS categories 4 and 5) were diagnosed in the DD and IVD with sensitivities of 0.949 and 0.962 and specificities of 0.892 and 0.842, respectively. ResNet34, DenseNet121, and ConvNeXt-Tiny achieved overall accuracies of 0.737, 0.752, and 0.878, respectively, for sonogram prediction in the DD. The ConvNeXt-Tiny model's accuracy for sonogram prediction in the IVD was 0.859, with no significant difference between test sets. Model assistance significantly reduced the O-RADS classification time of all three sonographers (Cohen's d = 5.75).

Conclusion: ConvNeXt-Tiny showed robust, stable performance in classifying O-RADS categories 2-5 and improved sonographers' classification efficiency.
Pages: 387-395 (9 pages)
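The two-tailed z-test mentioned in the abstract for checking observed malignancy rates against reference rates is a standard two-proportion test. A minimal stdlib sketch follows; the counts in the usage line are illustrative, not data from the paper.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-tailed z-test for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-tailed p-value from the standard normal CDF, Phi(z) = (1 + erf(z/sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative only: 12 malignant of 150 lesions vs. 20 of 200 in a reference cohort
z, p = two_proportion_z_test(12, 150, 20, 200)
print(round(z, 3), round(p, 3))
```

A non-significant p-value here corresponds to the paper's criterion that a category's observed malignancy rate is consistent with the O-RADS v2022 reference rate.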
Related Papers (50 total)
  • [21] Zhan, R.-H., Tian, Z.-Z., Hu, J.-M., Zhang, J. SAR Automatic Target Recognition Based on Deep Convolutional Neural Networks. International Conference on Artificial Intelligence: Techniques and Applications (AITA 2016), 2016: 170-178.
  • [22] Giraldo-Roldan, D., dos Santos, G. C., Araujo, A. L. D., Nakamura, T. C. R., Pulido-Diaz, K., Lopes, M. A., Santos-Silva, A. R., Kowalski, L. P., Moraes, M. C., Vargas, P. A. Deep Convolutional Neural Network for Accurate Classification of Myofibroblastic Lesions on Patch-Based Images. Head & Neck Pathology, 2024, 18(1).
  • [23] Fujisawa, A., Matsumoto, K., Ohta, K., Yoshida, M., Kita, K. ASCII Art Category Classification Based on Deep Convolutional Neural Networks. Proceedings of the 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), 2018: 345-349.
  • [24] He, Z., Xiao, Y., Wu, X., Liang, Y., Zhou, Y., An, G. An Automatic Assessment Model of Adenoid Hypertrophy in MRI Images Based on Deep Convolutional Neural Networks. IEEE Access, 2023, 11: 106516-106527.
  • [25] Cintas, C., Lucena, M., Fuertes, J. M., Delrieux, C., Navarro, P., Gonzalez-Jose, R., Molinos, M. Automatic Feature Extraction and Classification of Iberian Ceramics Based on Deep Convolutional Networks. Journal of Cultural Heritage, 2020, 41: 106-112.
  • [26] Wu, M., Wang, Y., Su, M., Wang, R., Sun, X., Zhang, R., Mu, L., Xiao, L., Wen, H., Liu, T., Meng, X., Huang, L., Zhang, X. Integrating Contrast-enhanced US to O-RADS US for Classification of Adnexal Lesions with Solid Components: Time-intensity Curve Analysis versus Visual Assessment. Radiology: Imaging Cancer, 2024, 6(6).
  • [27] Chen, Y., Jiang, H., Li, C., Jia, X., Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(10): 6232-6251.
  • [28] Yu, Y., Liu, F., Mao, S. Fingerprint Extraction and Classification of Wireless Channels Based on Deep Convolutional Neural Networks. Neural Processing Letters, 2018, 48(3): 1767-1775.
  • [29] Hussain, M. A. I., Khan, B., Wang, Z., Ding, S. Woven Fabric Pattern Recognition and Classification Based on Deep Convolutional Neural Networks. Electronics, 2020, 9(6): 1-12.