Improving Generative Adversarial Networks for Patch-Based Unpaired Image-to-Image Translation

Cited by: 0
Authors
Boehland, Moritz [1 ]
Bruch, Roman [1 ]
Baeuerle, Simon [1 ]
Rettenberger, Luca [1 ]
Reischl, Markus [1 ]
Affiliations
[1] Karlsruhe Inst Technol, Inst Automat & Appl Informat, D-76344 Eggenstein Leopoldshafen, Germany
Keywords
Generative adversarial networks; Training; Training data; Three-dimensional displays; Benchmark testing; Image color analysis; Task analysis; Large scale integration; GAN; unpaired image-to-image translation; 3D image synthesis; stitching; CycleGAN; tiling; large-scale; SEGMENTATION;
DOI
10.1109/ACCESS.2023.3331819
CLC number
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Deep learning models for image segmentation achieve high-quality results but need large amounts of training data. Training data is primarily annotated manually, which is time-consuming and often not feasible for large-scale 2D and 3D images. The manual annotation effort can be reduced using synthetic training data generated by generative adversarial networks that perform unpaired image-to-image translation. As of now, large images need to be processed patch-wise during inference, which leads to local artifacts in the border regions once the individual patches are merged. To reduce these artifacts, we propose a new method that integrates overlapping patches into the training process. We incorporated our method into CycleGAN and tested it on our new 2D tiling-strategy benchmark dataset. The results show that the artifacts are reduced by 85% compared to state-of-the-art weighted tiling. While our method increases training time, inference time decreases. Additionally, we demonstrate transferability to real-world 3D biological image data, obtaining a high-quality synthetic dataset. Increasing the quality of synthetic training datasets can reduce manual annotation, improve model output, and help develop and evaluate deep learning models.
Pages: 127895-127906
Number of pages: 12
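
For context on the baseline mentioned in the abstract: the sketch below is a generic, hypothetical illustration of weighted tiling, i.e. merging patch-wise generator outputs using a weight map that attenuates the patch borders. It is not the authors' proposed method, and all names (weight_map, stitch_weighted, patch_size, coords) are illustrative assumptions.

```python
import numpy as np

def weight_map(patch_size):
    """Pyramid-shaped weights that fall off towards the patch border."""
    ramp = 1.0 - np.abs(np.linspace(-1.0, 1.0, patch_size))
    return np.outer(ramp, ramp) + 1e-8  # keep corner weights non-zero

def stitch_weighted(patches, coords, image_shape, patch_size):
    """Blend translated patches back into a full-size image.

    patches: list of (patch_size, patch_size) arrays, e.g. generator outputs
    coords:  list of (y, x) top-left positions of each patch
    """
    out = np.zeros(image_shape, dtype=np.float64)
    norm = np.zeros(image_shape, dtype=np.float64)
    w = weight_map(patch_size)
    for patch, (y, x) in zip(patches, coords):
        out[y:y + patch_size, x:x + patch_size] += patch * w
        norm[y:y + patch_size, x:x + patch_size] += w
    return out / np.maximum(norm, 1e-8)  # normalize by accumulated weights

# Usage example: four overlapping 64x64 patches covering a 96x96 image.
img = np.random.rand(96, 96)
coords = [(y, x) for y in (0, 32) for x in (0, 32)]
patches = [img[y:y + 64, x:x + 64] for (y, x) in coords]
merged = stitch_weighted(patches, coords, img.shape, 64)
```

Downweighting the patch borders hides hard seams, but, as the abstract notes, it does not remove border artifacts entirely, which is what motivates the paper's alternative of integrating overlapping patches into training.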