Semantic Image Translation for Repairing the Texture Defects of Building Models

Cited by: 0
Authors
Shang, Qisen [1]
Hu, Han [1]
Yu, Haojia [1]
Xu, Bo [1]
Wang, Libin [1]
Zhu, Qing [1]
Affiliations
[1] Southwest Jiaotong Univ, Fac Geosci & Environm Engn, Chengdu 611756, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2024 / Vol. 62
Funding
National Natural Science Foundation of China;
Keywords
Buildings; Semantics; Three-dimensional displays; Solid modeling; Generative adversarial networks; Training; Image synthesis; 3-D building model; generative adversarial network (GAN); image translation; oblique photogrammetry; texture mapping; GENERATION;
DOI
10.1109/TGRS.2023.3338962
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline classification code
0708; 070902;
Abstract
The accurate representation of 3-D building models in urban environments is significantly hindered by challenges such as texture occlusion, blurring, and missing details, which are difficult to mitigate through standard photogrammetric texture mapping pipelines. Current image completion methods often struggle to produce structured results and to handle the intricate nature of highly structured facade textures with diverse architectural styles. Furthermore, existing image synthesis methods have difficulty preserving the high-frequency details and man-made regular structures that are essential for realistic facade texture synthesis. To address these challenges, we introduce a novel approach for synthesizing facade texture images that authentically reflect the architectural style from a structured label map, guided by a ground-truth facade image. To preserve fine details and regular structures, we propose a regularity-aware multidomain method that capitalizes on frequency information and corner maps. We also incorporate semantic region-adaptive normalization (SEAN) blocks into our generator to enable versatile style transfer. To generate plausible structured images without undesirable regions, we employ image completion techniques to remove occlusions according to semantics before image inference. Our proposed method can also synthesize texture images with specific styles for facades that lack preexisting textures, using manually annotated labels. Experimental results on publicly available facade image and 3-D model datasets demonstrate that our method yields superior results and effectively addresses issues associated with flawed textures.
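As a rough illustration only (this record does not reproduce the paper's actual formulation), the abstract's use of frequency information to preserve high-frequency detail could take the form of a spectral consistency term between the synthesized and reference facade images. The PyTorch sketch below, including the function name frequency_consistency_loss and the log-amplitude L1 form, is an assumption rather than the authors' implementation.

import torch
import torch.nn.functional as F

def frequency_consistency_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """L1 distance between log-amplitude spectra of two image batches (N, C, H, W).

    Hypothetical sketch, not the authors' code: one plausible way to exploit
    frequency information so that the loss penalizes lost high-frequency,
    regular facade structure in the synthesized image.
    """
    fake_spec = torch.fft.fft2(fake, norm="ortho")
    real_spec = torch.fft.fft2(real, norm="ortho")
    # log1p compresses the dynamic range, which raw spectra concentrate in low frequencies.
    fake_amp = torch.log1p(torch.abs(fake_spec))
    real_amp = torch.log1p(torch.abs(real_spec))
    return F.l1_loss(fake_amp, real_amp)

In training, such a term would presumably be added, with a small weight, to the usual adversarial and reconstruction losses of the GAN-based generator described in the abstract.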
Pages: 1-20
Number of pages: 20