Two-stage deep learning framework for occlusal crown depth image generation

Cited by: 0
Authors
Roh, Junghyun [1 ]
Kim, Junhwi [2 ]
Lee, Jimin [1 ,3 ]
Affiliations
[1] Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology, 50, UNIST-gil, Ulsan
[2] Steinfeld Co., 75 Clarendon Ave, San Francisco, 94114, CA
[3] Department of Nuclear Engineering, Ulsan National Institute of Science and Technology, 50, UNIST-gil, Ulsan
Funding
National Research Foundation of Singapore
Keywords
Dental image translation; Generative adversarial network; Inpainting; Medical image segmentation; Occlusal depth image
DOI
10.1016/j.compbiomed.2024.109220
Abstract
The generation of depth images of occlusal dental crowns is complicated by the need to customize each case. To reduce the workload of skilled dental technicians, various computer vision models have been used to generate realistic occlusal crown depth images with well-defined crown surface structures, which can then be reconstructed into three-dimensional crowns and used directly in patient treatment. However, generating crown structures at non-fixed (fluid) positions has remained difficult for computer vision models. In this paper, we propose a two-stage model for generating depth images of occlusal crowns in diverse positions. The model is divided into two parts, segmentation and inpainting, to achieve accuracy in both shape and surface structure. The segmentation network focuses on the position and size of the crowns, which allows the model to adapt to diverse targets. The inpainting network, based on a GAN, generates the curved structures of the crown surface from the target jaw image and a binary mask produced by the segmentation network. The performance of the model is evaluated via quantitative metrics for area detection and pixel values. Compared to the baseline model, the proposed method reduced the MSE score from 0.007001 to 0.002618 and increased the DICE score from 0.9333 to 0.9648. This indicates that the addition of the segmentation network improved the binary mask, while the inpainting network improved the internal surface structure. The results also demonstrate that the proposed model restores realistic details better than competing models. © 2024 Elsevier Ltd
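The two evaluation metrics quoted in the abstract can be sketched as follows. This is a minimal illustration, not the authors' evaluation code: DICE is computed on binary crown masks and MSE on the depth images, assuming both are NumPy arrays of matching shape; the function names `dice_score` and `mse` are hypothetical.

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """DICE = 2|A ∩ B| / (|A| + |B|) for binary masks (area-detection metric)."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return float(2.0 * inter / total) if total > 0 else 1.0

def mse(pred_depth: np.ndarray, true_depth: np.ndarray) -> float:
    """Mean squared error over all pixels of the depth image (pixel-value metric)."""
    return float(np.mean((pred_depth - true_depth) ** 2))

# Toy example: 2x2 masks where the prediction covers one extra pixel.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
true = np.array([[1, 0], [0, 0]], dtype=bool)
print(dice_score(pred, true))  # 2*1 / (2+1) ≈ 0.667
```

A higher DICE (toward 1.0) indicates better crown placement and sizing, while a lower MSE indicates a more faithful depth surface, matching the direction of the improvements reported above.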