SSIS-Seg: Simulation-Supervised Image Synthesis for Surgical Instrument Segmentation

Cited by: 14
Authors
Colleoni, Emanuele [1 ]
Psychogyios, Dimitris [1 ]
Van Amsterdam, Beatrice [1 ]
Vasconcelos, Francisco [1 ]
Stoyanov, Danail [1 ]
Affiliations
[1] Univ Coll London UCL, Wellcome EPSRC Ctr Intervent & Surg Sci WEISS, London WC1E 6BT, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); EU Horizon 2020
Keywords
Image segmentation; Instruments; Data models; Training; Generators; Image synthesis; Surgery; Image-to-image translation; domain adaptation; surgical robot simulators; surgical tool segmentation; surgical vision;
DOI
10.1109/TMI.2022.3178549
Chinese Library Classification (CLC)
TP39 [Computer applications];
Discipline codes
081203; 0835
Abstract
Surgical instrument segmentation can support a range of computer-assisted interventions and automation tasks in surgical robotics. While deep learning architectures have rapidly advanced the robustness and performance of segmentation models, most still rely on supervision and large quantities of labelled data. In this paper, we present a novel method for surgical image generation that fuses robotic instrument simulation with recent domain adaptation techniques to synthesize artificial surgical images for training surgical instrument segmentation models. We integrate attention modules into well-established image generation pipelines and propose a novel cost function that supports supervision from simulation frames during model training. We provide an extensive evaluation of our method in terms of segmentation performance, along with a validation study of image quality using standard evaluation metrics. Additionally, we release a novel segmentation dataset from real surgeries that will be shared for research purposes. We consider both binary and semantic segmentation, and show that segmentation models trained on our synthetic images are competitive with the latest methods from the literature.
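The record gives no implementation details of the proposed cost function. Purely as a generic illustration of the idea of "supervision from simulation frames" (not the authors' actual loss), one could imagine a pixel-wise term that penalizes a generated frame only where the simulator's instrument mask is active; every name here (`simulation_supervised_loss`, `tool_mask`, `lam`) is a hypothetical placeholder.

```python
import numpy as np

def simulation_supervised_loss(generated, simulated, tool_mask, lam=10.0):
    """Hypothetical sketch: L1 supervision between a generated frame and
    its simulation counterpart, restricted to instrument pixels given by
    the simulator's binary tool_mask, averaged over those pixels."""
    masked_l1 = np.abs((generated - simulated) * tool_mask).sum() / max(tool_mask.sum(), 1)
    return lam * masked_l1

# Toy 4x4 "frames": generator outputs 1s, the simulation frame is 0s,
# and the instrument occupies the top-left 2x2 region.
gen = np.ones((4, 4))
sim = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
print(simulation_supervised_loss(gen, sim, mask))  # 10.0
```

In a full pipeline such a term would typically be added to an adversarial (GAN) objective, so the generator matches the simulator's instrument geometry while the discriminator drives the surrounding tissue appearance toward the real-image domain.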
Pages: 3074-3086
Page count: 13