Feasibility of using AI to auto-catch responsible frames in ultrasound screening for breast cancer diagnosis

Cited by: 4
Authors
Chen, Jing [1 ]
Jiang, Yitao [2 ]
Yang, Keen [1 ]
Ye, Xiuqin [1 ]
Cui, Chen [3 ]
Shi, Siyuan [3 ]
Wu, Huaiyu [1 ]
Tian, Hongtian [1 ]
Song, Di [1 ]
Yao, Jincao [4 ]
Wang, Liping [4 ]
Huang, Sijing [1 ]
Xu, Jinfeng [1 ]
Xu, Dong [4 ]
Dong, Fajin [1 ]
Affiliations
[1] Jinan Univ, Affiliated Hosp 1, Clin Sch Med 2, Shenzhen Peoples Hosp,Dept Ultrasound,Southern Uni, Shenzhen 518020, Guangdong, Peoples R China
[2] Microport Prophecy, Res & Dev Dept, Shanghai 201203, Peoples R China
[3] Illuminate LLC, Res & Dev Dept, Shenzhen 518000, Guangdong, Peoples R China
[4] Univ Chinese Acad Sci, Zhejiang Canc Hosp, Inst Basic Med & Canc IBMC, Chinese Acad Sci,Canc Hosp, Hangzhou 310022, Zhejiang, Peoples R China
Keywords
Liver disease; Ultrasonography; Classification
DOI
10.1016/j.isci.2022.105692
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline codes
07; 0710; 09
Abstract
Research on AI-assisted breast diagnosis has primarily been based on static images, and it is unclear whether a single static image represents the best image for diagnosis. This study explores a method for capturing complementary responsible frames from breast ultrasound screening using artificial intelligence. We used a feature entropy breast network (FEBrNet) to select responsible frames from breast ultrasound screenings and compared the diagnostic performance of AI models based on FEBrNet-recommended frames, physician-selected frames, frames sampled at 5-frame intervals, and all frames of the video, as well as the performance of ultrasound and mammography specialists. The AUROC of the AI model based on FEBrNet-recommended frames exceeded that of the other frame-set-based AI models and of the ultrasound and mammography physicians, indicating that FEBrNet can reach the level of medical specialists in frame selection. The FEBrNet model can extract responsible frames from video for breast nodule diagnosis, with performance equivalent to responsible frames selected by doctors.
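The abstract describes ranking ultrasound video frames by a feature-entropy score and comparing against a 5-frame-interval baseline. This record does not include FEBrNet's implementation, so the following is a minimal, hypothetical sketch under the assumption that per-frame entropy is computed over softmax-normalized feature activations and that the top-ranked frames are kept; FEBrNet's actual scoring function, and whether it prefers high- or low-entropy frames, are not stated here.

```python
import numpy as np

def feature_entropy(features: np.ndarray) -> np.ndarray:
    """Shannon entropy of each frame's softmax-normalized feature vector.

    features: (n_frames, n_features) array of activations (e.g. from a CNN).
    Returns an (n_frames,) array; higher entropy means the frame's
    activations are spread more evenly across features.
    """
    # Softmax over the feature axis turns each frame's activations
    # into a probability distribution (shift by max for stability).
    z = features - features.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def select_responsible_frames(features: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k frames with the highest entropy scores (hypothetical
    selection rule), returned in temporal order."""
    scores = feature_entropy(features)
    top = np.argsort(scores)[-k:]   # k highest-scoring frames
    return np.sort(top)             # restore temporal order

def interval_frames(n_frames: int, step: int = 5) -> np.ndarray:
    """The study's interval baseline: every `step`-th frame of the video."""
    return np.arange(0, n_frames, step)

rng = np.random.default_rng(0)
video_features = rng.random((60, 128))  # 60 synthetic frames, 128-dim features
print(select_responsible_frames(video_features, k=5))
print(interval_frames(60, step=5))
```

In practice the per-frame features would come from the trained network's penultimate layer rather than random data; the point of the sketch is only the entropy-ranking versus fixed-interval comparison the abstract describes.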
Pages: 14
References: 47 in total