DeSmoke-LAP: improved unpaired image-to-image translation for desmoking in laparoscopic surgery

Cited by: 0
Authors
Pan, Yirou [1 ]
Bano, Sophia [1 ]
Vasconcelos, Francisco [1 ]
Park, Hyun [2 ]
Jeong, Taikyeong Ted [3 ]
Stoyanov, Danail [1 ]
Affiliations
[1] UCL, Wellcome EPSRC Ctr Intervent & Surg Sci, Dept Comp Sci, London, England
[2] CHA Univ, CHA Bundang Med Ctr, Dept Obstet & Gynecol, Seongnam, South Korea
[3] Hallym Univ, Sch Artificial Intelligence Convergence, Chunchon, South Korea
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Desmoking; Robotic-assisted laparoscopic hysterectomy; Deep learning; Generative adversarial network;
DOI
10.1007/s11548-022-02595-2
Chinese Library Classification (CLC) number
R318 [Biomedical Engineering];
Subject classification number
0831;
Abstract
Purpose: Robotic-assisted laparoscopic surgery has become increasingly common thanks to its convenience and lower risk of infection compared with traditional open surgery. However, visibility during these procedures can deteriorate severely when electrocauterisation generates smoke in the operating cavity. This reduced visibility prolongs procedural time and degrades surgical performance. Recent deep learning-based techniques have shown potential for smoke and glare removal, but few target laparoscopic videos.
Method: We propose DeSmoke-LAP, a new method for removing smoke from real robotic laparoscopic hysterectomy videos. The method is based on the cycle-consistent generative adversarial network for unpaired image-to-image translation, into which two novel loss functions, inter-channel discrepancy and dark channel prior, are integrated to facilitate smoke removal while preserving the true semantics and illumination of the scene.
Results: DeSmoke-LAP is compared with several state-of-the-art desmoking methods, both qualitatively and quantitatively, using no-reference image quality metrics on 10 laparoscopic hysterectomy videos under 5-fold cross-validation.
Conclusion: DeSmoke-LAP outperformed existing methods and generated smoke-free images without requiring ground truth (paired images) or an atmospheric scattering model. This is a distinctive achievement for dehazing in surgery, even in scenarios with partial, inhomogeneous smoke. Our code and hysterectomy dataset will be made publicly available at https://www.ucl.ac.uk/interventional-surgical-sciences/weiss-open-research/weiss-open-data-server/desmoke-lap.
Pages: 9
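The sketch below (PyTorch) illustrates one way the two auxiliary losses named in the abstract, the dark channel prior (DCP) and the inter-channel discrepancy (ICD), might be implemented alongside a CycleGAN generator objective. This record does not give the paper's exact formulations, so the function names, patch size, and loss weights here are assumptions for illustration only, not DeSmoke-LAP's actual implementation.

import torch
import torch.nn.functional as F

def dark_channel(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    # Dark channel of an RGB batch in [0, 1]: per-pixel channel minimum,
    # then a local minimum over a patch (min-pool via a negated max-pool).
    per_pixel_min = img.min(dim=1, keepdim=True).values  # (B, 1, H, W)
    return -F.max_pool2d(-per_pixel_min, patch, stride=1, padding=patch // 2)

def dark_channel_loss(fake_clear: torch.Tensor) -> torch.Tensor:
    # Haze-free scenes tend to have a near-zero dark channel, so the mean
    # dark-channel value of the generated smoke-free image is penalised.
    return dark_channel(fake_clear).mean()

def inter_channel_discrepancy_loss(fake_clear: torch.Tensor) -> torch.Tensor:
    # Smoke is greyish, so its R, G and B values are nearly equal; rewarding
    # large pairwise channel differences discourages smoky-looking outputs.
    r, g, b = fake_clear[:, 0], fake_clear[:, 1], fake_clear[:, 2]
    icd = (r - g).abs() + (g - b).abs() + (b - r).abs()
    return -icd.mean()  # maximise the discrepancy by minimising its negation

# Illustrative combination with a CycleGAN generator objective
# (the 0.5 weights are placeholders, not the paper's values):
# g_loss = adv_loss + 10.0 * cycle_loss \
#          + 0.5 * dark_channel_loss(fake_clear) \
#          + 0.5 * inter_channel_discrepancy_loss(fake_clear)

Under these assumptions, both terms act only on the generated smoke-free images and leave the adversarial and cycle-consistency terms of the underlying CycleGAN unchanged, which is consistent with the abstract's statement that the method needs neither paired ground truth nor an atmospheric scattering model.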