iFlowGAN: An Invertible Flow-Based Generative Adversarial Network for Unsupervised Image-to-Image Translation

Times Cited: 17
Authors
Dai, Longquan [1 ]
Tang, Jinhui [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Flow; bijection; unsupervised image-to-image translation; Banach fixed point theorem;
DOI
10.1109/TPAMI.2021.3062849
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose iFlowGAN, which learns an invertible flow (a sequence of invertible mappings) via adversarial learning and exploits it to transform a source distribution into a target distribution for unsupervised image-to-image translation. Existing GAN-based generative models such as CycleGAN [1], StarGAN [2], AGGAN [3] and CyCADA [4] need to learn a highly under-constrained forward mapping F : X -> Y from a source domain X to a target domain Y. Researchers do this by assuming there is a backward mapping B : Y -> X such that x and y are fixed points of the composite functions B ∘ F and F ∘ B. Inspired by zero-order reverse filtering [5], we (1) interpret F via contraction mappings on a metric space; (2) provide a simple yet effective algorithm to express B via the parameters of F in light of the Banach fixed point theorem; (3) provide a Lipschitz-regularized network, which suggests a general approach to constructing the inverse of arbitrary Lipschitz-regularized networks via the Banach fixed point theorem. This network is useful for image-to-image translation tasks because it saves the memory otherwise needed for the weights of B. Although memory can also be saved by directly coupling the weights of the forward and backward mappings, the performance of the image-to-image translation network then degrades significantly. This explains why current GAN-based generative models, including CycleGAN, must use different parameters to compose the forward and backward mappings instead of employing the same weights for both. Taking advantage of the Lipschitz-regularized network, we not only build iFlowGAN to overcome the parameter redundancy of CycleGAN but also assemble the corresponding iFlowGAN versions of StarGAN, AGGAN and CyCADA without breaking their network architectures. Extensive experiments show that the iFlowGAN versions produce results comparable to the original implementations while using only half of the parameters.
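For intuition only (this is a sketch, not the authors' released code): if the forward mapping has the residual form F(x) = x + g(x) with Lip(g) < 1, the Banach fixed point theorem guarantees that iterating x <- y - g(x) converges to x = F^{-1}(y), so the backward mapping can reuse the forward mapping's parameters, as the abstract describes. The function names and the toy linear g below are hypothetical choices for illustration.

import torch

def invert_residual(g, y, num_iters=60):
    """Invert y = x + g(x) via the Banach fixed-point iteration x <- y - g(x).

    The iteration converges to the unique pre-image whenever g is a
    contraction (Lipschitz constant strictly below 1).
    """
    x = y.clone()                 # initial guess x_0 = y
    for _ in range(num_iters):
        x = y - g(x)              # x_{k+1} = y - g(x_k)
    return x

# Toy residual mapping F(x) = x + g(x) whose g is forced to be a contraction
# by rescaling its weight to Frobenius norm 0.5 (the Frobenius norm
# upper-bounds the spectral norm, so Lip(g) <= 0.5 < 1).
g = torch.nn.Linear(8, 8)
with torch.no_grad():
    g.weight.mul_(0.5 / g.weight.norm())

x_true = torch.randn(4, 8)
with torch.no_grad():
    y = x_true + g(x_true)        # forward pass F(x_true)
    x_rec = invert_residual(g, y) # inverse recovered with F's own parameters
print(torch.max((x_rec - x_true).abs()).item())  # close to machine precision

In iFlowGAN this principle is what removes the separate backward network: B never stores its own weights, it is computed on the fly from F.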
Pages: 4151-4162
Number of pages: 12
References
58 in total
[1]  
Arjovsky M, 2017, PR MACH LEARN RES, V70
[2]  
Behrmann J, 2019, PR MACH LEARN RES, V97
[3]  
Benaim S, 2017, ADV NEUR IN, V30
[4]  
Brock A., 2018, arXiv:1809.11096
[5]  
Chang B, 2018, AAAI CONF ARTIF INTE, P2811
[6]  
Chen C., 2018, INT C MACHINE LEARNI, P824
[7]   Photographic Image Synthesis with Cascaded Refinement Networks [J].
Chen, Qifeng ;
Koltun, Vladlen .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :1520-1529
[8]  
Chen R T Q, 2018, ADV NEURAL INFORM PR, V31
[9]   StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation [J].
Choi, Yunjey ;
Choi, Minje ;
Kim, Munyoung ;
Ha, Jung-Woo ;
Kim, Sunghun ;
Choo, Jaegul .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :8789-8797
[10]  
Deco G., 1995, Advances in Neural Information Processing Systems 7, P247