Evaluating the Accuracy of Artificial Intelligence (AI)-Generated Illustrations for Laser-Assisted In Situ Keratomileusis (LASIK), Photorefractive Keratectomy (PRK), and Small Incision Lenticule Extraction (SMILE)

Cited by: 1
Authors
Petroff, Dallas J. [1 ]
Nasir, Ayesha A. [2 ]
Moin, Kayvon A. [3 ,4 ]
Loveless, Bosten A. [3 ,5 ]
Moshirfar, Omeed A. [6 ]
Hoopes, Phillip C. [3 ]
Moshirfar, Majid [3 ,7 ,8 ]
Affiliations
[1] Idaho Coll Osteopath Med, Ophthalmol, Meridian, ID USA
[2] Univ Louisville, Ophthalmol, Louisville, KY USA
[3] Hoopes Vis, Hoopes Vis Res Ctr, Ophthalmol, Draper, UT 84020 USA
[4] Amer Univ Caribbean, Med, Cupecoy, Sint Maarten
[5] Rocky Vista Univ, Ophthalmol, Coll Osteopath Med, Ivins, UT USA
[6] Washington Univ St Louis, Sam Fox Sch Design & Visual Arts, St Louis, MO USA
[7] Univ Utah, John A Moran Eye Ctr, Ophthalmol, Sch Med, Salt Lake City, UT 84102 USA
[8] Utah Lions Eye Bank, Eye Banking & Corneal Transplantat, Murray, UT 84107 USA
Keywords
klex; astigmatism; generative ai model; eye; cornea; myopia; corneal refractive surgery; medical illustration; artificial intelligence;
DOI
10.7759/cureus.67747
Chinese Library Classification Number
R5 [Internal Medicine];
Subject Classification Code
1002; 100201;
Abstract
Purpose: To utilize artificial intelligence (AI) platforms to generate medical illustrations for refractive surgeries, aiding patients in visualizing and comprehending procedures such as laser-assisted in situ keratomileusis (LASIK), photorefractive keratectomy (PRK), and small incision lenticule extraction (SMILE). This study assesses the current performance of two OpenAI programs in terms of their accuracy in depicting common corneal refractive procedures.

Methods: We selected AI image generators based on their popularity, choosing Decoder-Only Autoregressive Language and Image Synthesis 3 (DALL-E 3) for its leading position and Medical Illustration Master (MiM) for its high engagement. We developed six non-AI-generated prompts targeting specific outcomes related to the LASIK, PRK, and SMILE procedures to assess medical accuracy. We generated images using these prompts (18 total images per AI platform) and used the final images produced after the sixth prompt for this study (three final images per AI platform). Human-created procedural images were also gathered for comparison. Four experts independently graded the images, and their scores were averaged. Each image was evaluated with our grading system on "Legibility," "Detail & Clarity," "Anatomical Realism & Accuracy," "Procedural Step Accuracy," and "Lack of Fictitious Anatomy," with scores ranging from 0 to 3 per category for a maximum of 15 points (a worked scoring example follows this abstract). A score of 15 signifies excellent performance, indicating a highly accurate medical illustration, whereas a low score indicates a poor-quality illustration. Additionally, we submitted the same AI-generated images back into Chat Generative Pre-Trained Transformer-4o (ChatGPT-4o) along with our grading system, allowing ChatGPT-4o to apply the same criteria to both the AI-generated images and the human-created images (HCIs).

Results: In individual category scoring, HCIs significantly outperformed the AI-generated images in legibility, anatomical realism, procedural step accuracy, and lack of fictitious anatomy. There were no significant differences between DALL-E 3 and MiM in these categories (p>0.05). In procedure-specific comparisons, HCIs consistently scored higher than AI-generated images for LASIK, PRK, and SMILE. For LASIK, HCIs scored 14 +/- 0.82 (93.3%), while DALL-E 3 scored 4.5 +/- 0.58 (30%) and MiM scored 4.5 +/- 1.91 (30%) (p<0.001). For PRK, HCIs scored 14.5 +/- 0.58 (96.7%), compared to DALL-E 3's 5.25 +/- 1.26 (35%) and MiM's 7 +/- 3.56 (46.7%) (p<0.001). For SMILE, HCIs scored 14.5 +/- 0.68 (96.7%), while DALL-E 3 scored 5 +/- 0.82 (33.3%) and MiM scored 6 +/- 2.71 (40%) (p<0.001). Overall, HCIs significantly outperformed the AI-generated images from DALL-E 3 and MiM in accuracy for medical illustrations, with scores of 14.33 +/- 0.23 (95.6%), 4.93 +/- 0.69 (32.8%), and 5.83 +/- 0.23 (38.9%), respectively (p<0.001). ChatGPT-4o evaluations were consistent with human evaluations for HCIs (3 +/- 0 vs 2.87 +/- 0.23; p=0.121) but rated the AI-generated images higher than the human evaluators did (2 +/- 0 vs 1.07 +/- 0.73; p<0.001).

Conclusion: This study highlights the inaccuracy of AI-generated images in illustrating corneal refractive procedures such as LASIK, PRK, and SMILE. Although the OpenAI platforms can create images recognizable as eyes, the resulting images lack educational value. AI excels at quickly generating creative, vibrant images, but accurate medical illustration remains a significant challenge. While AI performs well on text-based tasks, its capability to produce precise medical images needs substantial improvement.
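The 15-point rubric described in the Methods reduces to simple arithmetic: average each category's 0-3 score across the four graders, sum the five category means, and divide by 15 to obtain a percentage. The Python sketch below illustrates that tally; the category names are taken from the abstract, but the function name and grader scores are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the 15-point grading tally described in the Methods.
# Category names come from the abstract; the grader scores below are
# illustrative placeholders, not study data.
from statistics import mean

CATEGORIES = [
    "Legibility",
    "Detail & Clarity",
    "Anatomical Realism & Accuracy",
    "Procedural Step Accuracy",
    "Lack of Fictitious Anatomy",
]
MAX_TOTAL = 3 * len(CATEGORIES)  # 0-3 points per category -> 15 points max


def score_image(grader_scores: dict[str, list[int]]) -> tuple[float, float]:
    """Average each category across graders, sum to a 15-point total,
    and convert that total to a percentage."""
    total = sum(mean(grader_scores[cat]) for cat in CATEGORIES)
    return total, 100 * total / MAX_TOTAL


# Hypothetical example: four graders scoring one illustration.
example = {
    "Legibility": [3, 3, 2, 3],
    "Detail & Clarity": [2, 3, 3, 3],
    "Anatomical Realism & Accuracy": [3, 3, 3, 3],
    "Procedural Step Accuracy": [3, 2, 3, 3],
    "Lack of Fictitious Anatomy": [3, 3, 3, 3],
}
total, pct = score_image(example)
print(f"Total: {total:.2f}/15 ({pct:.1f}%)")  # prints "Total: 14.25/15 (95.0%)"
```

The abstract also states that the AI-generated images were re-submitted to ChatGPT-4o together with the grading system, but it does not specify how the images were supplied to the model. The sketch below assumes the OpenAI Python SDK's chat completions endpoint with the gpt-4o model, which accepts base64-encoded image input; the file name and prompt wording are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch: asking GPT-4o to grade one illustration with the rubric.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY set
# in the environment; the image path and prompt text are hypothetical.
import base64

from openai import OpenAI

client = OpenAI()

with open("lasik_illustration.png", "rb") as f:  # hypothetical file name
    b64 = base64.b64encode(f.read()).decode("utf-8")

rubric_prompt = (
    "Grade this medical illustration from 0 to 3 on each of: Legibility; "
    "Detail & Clarity; Anatomical Realism & Accuracy; Procedural Step "
    "Accuracy; Lack of Fictitious Anatomy. Report each score and the "
    "15-point total."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": rubric_prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```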
Pages: 12
Related papers
42 items in total
  • [21] Aberration compensation between anterior and posterior corneal surfaces after Small incision lenticule extraction and Femtosecond laser-assisted laser in-situ keratomileusis
    Li, Xiaojing
    Wang, Yan
    Dou, Rui
    OPHTHALMIC AND PHYSIOLOGICAL OPTICS, 2015, 35 (05) : 540 - 551
  • [23] Comparison of Visual, Refractive and Ocular Surface Outcomes Between Small Incision Lenticule Extraction and Laser-Assisted In Situ Keratomileusis for Myopia and Myopic Astigmatism
    Lau, Yumi Tsz-Ying
    Shih, Kendrick Co
    Tse, Ryan Hin-Kai
    Chan, Tommy Chung-Yan
    Jhanji, Vishal
    OPHTHALMOLOGY AND THERAPY, 2019, 8 (03) : 373 - 386
  • [24] Visual Outcomes after Small Incision Lenticule Extraction and Femtosecond Laser-Assisted LASIK for High Myopia
    Yang, Weiming
    Liu, Shengtao
    Li, Meiyan
    Shen, Yang
    Zhou, Xingtao
    OPHTHALMIC RESEARCH, 2020, 63 (04) : 427 - 433
  • [25] Changes in Ocular Surface and Tear Inflammatory Mediators after Small-incision Lenticule Extraction and Femtosecond Laser-assisted Laser in Situ Keratomileusis
    Zhong, Xingwu
    Gao, Shaohui
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2014, 55 (13)
  • [26] Corneal Densitometry After Small Incision Lenticule Extraction (SMILE) and Femtosecond Laser-Assisted LASIK (FS-LASIK): 5-Year Prospective Comparative Study
    Wei, Ruoyan
    Li, Meiyan
    Yang, Weiming
    Shen, Yang
    Zhao, Yu
    Fu, Dan
    Shang, Jianmin
    Zhang, Jing
    Choi, Joanne
    Zhou, Xingtao
    FRONTIERS IN MEDICINE, 2020, 7
  • [27] Femtosecond laser refractive surgery: small-incision lenticule extraction vs. femtosecond laser-assisted LASIK
    Lee, Jimmy K.
    Chuck, Roy S.
    Park, Choul Yong
    CURRENT OPINION IN OPHTHALMOLOGY, 2015, 26 (04) : 260 - 264
  • [28] Early Changes in Ocular Surface and Tear Inflammatory Mediators after Small-Incision Lenticule Extraction and Femtosecond Laser-Assisted Laser In Situ Keratomileusis
    Gao, Shaohui
    Li, Saiqun
    Liu, Liangping
    Wang, Yong
    Ding, Hui
    Li, Lili
    Zhong, Xingwu
    PLOS ONE, 2014, 9 (09):
  • [29] Six modes of corneal topography for evaluation of ablation zones after small-incision lenticule extraction and femtosecond laser-assisted in situ keratomileusis
    Li, Hua
    Peng, Yusu
    Chen, Min
    Tian, Le
    Li, Dewei
    Zhang, Feifei
    GRAEFES ARCHIVE FOR CLINICAL AND EXPERIMENTAL OPHTHALMOLOGY, 2020, 258 (07) : 1555 - 1563
  • [30] Clinical outcomes of small incision lenticule extraction versus femtosecond laser-assisted LASIK for myopia: a Meta-analysis
    Yan, Huan
    Gong, Li-Yan
    Huang, Wei
    Peng, Yan-Li
    INTERNATIONAL JOURNAL OF OPHTHALMOLOGY, 2017, 10 (09) : 1436 - 1445