SAC-GAN: Structure-Aware Image Composition

Cited by: 2
Authors
Zhou, Hang [1 ]
Ma, Rui [2 ,3 ]
Zhang, Ling-Xiao [4 ]
Gao, Lin [4 ]
Mahdavi-Amiri, Ali [1 ]
Zhang, Hao [1 ]
Affiliations
[1] Simon Fraser Univ, Sch Comp Sci, Burnaby, BC V5A 1S6, Canada
[2] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[3] Minist Educ, Engn Res Ctr Knowledge Driven Human Machine Intell, Changchun 130012, Peoples R China
[4] Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Layout; Transforms; Semantics; Three-dimensional displays; Image edge detection; Codes; Coherence; Structure-aware image composition; self-supervision; GANs; VISION;
DOI
10.1109/TVCG.2022.3226689
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
We introduce an end-to-end learning framework for image-to-image composition, aiming to plausibly compose an object, represented as a cropped patch from an object image, into a background scene image. As our approach places more emphasis on the semantic and structural coherence of the composed images than on their pixel-level RGB accuracy, we tailor the input and output of our network with structure-aware features and design our network losses accordingly, with ground truth established in a self-supervised setting through the object cropping. Specifically, our network takes as inputs the semantic layout features from the input scene image, features encoded from the edges and silhouette of the input object patch, and a latent code, and generates a 2D spatial affine transform defining the translation and scaling of the object patch. The learned parameters are then fed into a differentiable spatial transformer network to transform the object patch into the target image, and our model is trained adversarially using an affine transform discriminator and a layout discriminator. We evaluate our network, coined SAC-GAN, in various image composition scenarios in terms of the quality, composability, and generalizability of the composite images. Comparisons to state-of-the-art alternatives, including Instance Insertion, ST-GAN, CompGAN, and PlaceNet, confirm the superiority of our method.
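The abstract's pipeline ends with a differentiable spatial transformer that warps the object patch into the scene using the predicted translation and scale. The PyTorch fragment below is a minimal sketch of that composition step only, not the authors' code: the helper name `compose`, the normalized-coordinate convention, and the alpha-mask blending are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def compose(scene, obj, mask, scale, tx, ty):
    """Differentiably place an object patch into a scene.
    scene, obj: (N, 3, H, W); mask: (N, 1, H, W) alpha in [0, 1];
    scale, tx, ty: (N,) affine parameters in normalized [-1, 1] coords.
    Gradients flow back to scale/tx/ty through affine_grid + grid_sample."""
    n = scene.size(0)
    # affine_grid maps OUTPUT coords to INPUT coords, so invert the
    # forward transform x_out = s * x_in + t  ->  x_in = (x_out - t) / s
    theta = torch.zeros(n, 2, 3, device=scene.device)
    theta[:, 0, 0] = 1.0 / scale
    theta[:, 1, 1] = 1.0 / scale
    theta[:, 0, 2] = -tx / scale
    theta[:, 1, 2] = -ty / scale
    grid = F.affine_grid(theta, list(scene.shape), align_corners=False)
    # Regions mapped from outside the patch sample zeros (empty alpha)
    warped_obj = F.grid_sample(obj, grid, align_corners=False)
    warped_mask = F.grid_sample(mask, grid, align_corners=False)
    # Alpha-blend the warped object over the background scene
    return warped_mask * warped_obj + (1.0 - warped_mask) * scene
```

In training, `scale`, `tx`, and `ty` would come from the generator, and the composite would be scored by the layout discriminator; here they are plain tensors for clarity.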
Pages: 3151-3165
Page count: 15
Related Papers
74 in total
  • [1] Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes
    Abu Alhaija, Hassan
    Mustikovela, Siva Karthik
    Mescheder, Lars
    Geiger, Andreas
    Rother, Carsten
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2018, 126 (09) : 961 - 972
  • [2] Antoniou Antreas, 2017, DATA AUGMENTATION GE
  • [3] Compositional GAN: Learning Image-Conditional Binary Composition
    Azadi, Samaneh
    Pathak, Deepak
    Ebrahimi, Sayna
    Darrell, Trevor
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2020, 128 (10-11) : 2570 - 2585
  • [4] Bhattad A., 2020, arXiv:2010.05907
  • [5] Brinkmann R., 2008, ART SCI DIGITAL COMP, DOI 10.1016/B978-0-12-370638-6.X0001-6
  • [6] Chang T.-Y., 2020, P AS C COMP VIS, P509
  • [7] Learning Generative Models of 3D Structures
    Chaudhuri, Siddhartha
    Ritchie, Daniel
    Wu, Jiajun
    Xu, Kai
    Zhang, Hao
    [J]. COMPUTER GRAPHICS FORUM, 2020, 39 (02) : 643 - 666
  • [8] Toward Realistic Image Compositing with Adversarial Learning
    Chen, Bor-Chun
    Kae, Andrew
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 8407 - 8416
  • [9] Chen LB, 2017, IEEE INT SYMP NANO, P1, DOI 10.1109/NANOARCH.2017.8053709
  • [10] GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving
    Chen, Yun
    Rong, Frieda
    Duggal, Shivam
    Wang, Shenlong
    Yan, Xinchen
    Manivasagam, Sivabalan
    Xue, Shangjie
    Yumer, Ersin
    Urtasun, Raquel
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 7226 - 7236