Sex estimation from maxillofacial radiographs using a deep learning approach

Cited by: 2
Authors
Hase, Hiroki [1]
Mine, Yuichi [1,2]
Okazaki, Shota [1,2]
Yoshimi, Yuki [3]
Ito, Shota
Peng, Tzu-Yu [4]
Sano, Mizuho [1]
Koizumi, Yuma
Kakimoto, Naoya [4]
Tanimoto, Kotaro [5]
Murayama, Takeshi [1,2]
Affiliations
[1] Hiroshima Univ, Grad Sch Biomed & Hlth Sci, Dept Med Syst Engn, 1-2-3 Kasumi, Minami Ku, Hiroshima 7348553, Japan
[2] Hiroshima Univ, Project Res Ctr Integrating Digital Dent, 1-2-3 Kasumi, Minami Ku, Hiroshima 7348553, Japan
[3] Hiroshima Univ, Grad Sch Biomed & Hlth Sci, Dept Orthodont & Craniofacial Dev Biol, 1-2-3 Kasumi, Minami Ku, Hiroshima 7348553, Japan
[4] Taipei Med Univ, Coll Oral Med, Sch Dent, 250 Wu Hosing St, Taipei 11031, Taiwan
[5] Hiroshima Univ, Grad Sch Biomed & Hlth Sci, Dept Oral & Maxillofacial Radiol, 1-2-3 Kasumi, Minami Ku, Hiroshima 7348553, Japan
Keywords
Artificial intelligence; Deep learning; Sex estimation; Maxillofacial radiograph; Lateral cephalogram; ARTIFICIAL-INTELLIGENCE; CLASSIFICATION
DOI
10.4012/dmj.2023-253
Chinese Library Classification (CLC) number
R78 [Stomatology]
Discipline classification code
1003
Abstract
The purpose of this study was to construct deep learning models for more efficient and reliable sex estimation. Two deep learning models, VGG16 and DenseNet-121, were used in this retrospective study. In total, 600 lateral cephalograms were analyzed. A saliency map was generated by gradient-weighted class activation mapping (Grad-CAM) for each output. Both deep learning models achieved high values on every performance metric: accuracy, sensitivity (recall), precision, F1 score, and area under the receiver operating characteristic curve. For both models, the positions highlighted in the saliency maps differed substantially between male and female images, and the highlighted positions also differed between VGG16 and DenseNet-121 regardless of sex. This analysis of our proposed system suggested that sex estimation from lateral cephalograms can be achieved with high accuracy using deep learning.
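The record reports results only; no implementation details or code are given. Below is a minimal, hypothetical Python/Keras sketch of how ImageNet-pretrained VGG16 or DenseNet-121 backbones could be adapted for binary sex classification from cephalogram images and evaluated with the metrics named in the abstract (accuracy, precision, recall, AUC). The helper name build_classifier, the 224x224 input size, the frozen-backbone strategy, and all hyperparameters are illustrative assumptions, not the authors' settings.

# Hypothetical sketch only: fine-tuning a pretrained CNN for binary sex
# classification from lateral cephalograms. Architecture choices and
# hyperparameters are assumptions, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers

def build_classifier(backbone: str = "densenet121",
                     input_shape=(224, 224, 3)) -> tf.keras.Model:
    """Attach a small binary classification head to an ImageNet-pretrained backbone."""
    if backbone == "vgg16":
        base = tf.keras.applications.VGG16(
            include_top=False, weights="imagenet", input_shape=input_shape)
    else:
        base = tf.keras.applications.DenseNet121(
            include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # freeze first; selected layers can be unfrozen later

    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # one class probability (male vs. female)
    model = tf.keras.Model(inputs, outputs)

    # Metrics mirror those named in the abstract; F1 can be derived offline
    # from the logged precision and recall.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy",
                 tf.keras.metrics.Precision(name="precision"),
                 tf.keras.metrics.Recall(name="recall"),
                 tf.keras.metrics.AUC(name="auc")])
    return model

if __name__ == "__main__":
    model = build_classifier("vgg16")
    model.summary()

In the study itself, Grad-CAM saliency maps were then generated from the trained networks to visualize which image regions drove each prediction; a Grad-CAM step would typically operate on the last convolutional layer of whichever backbone is chosen in a setup like the one sketched above.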
Pages: 394-399
Page count: 6