MapGen-GAN: A Fast Translator for Remote Sensing Image to Map Via Unsupervised Adversarial Learning

Cited by: 26
Authors
Song, Jieqiong [1 ]
Li, Jun [1 ]
Chen, Hao [1 ]
Wu, Jiangjiang [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 430070, Peoples R China
Keywords
Remote sensing; Generative adversarial networks; Deep learning; Task analysis; Training; Internet; Semantics; Adversarial learning; map generation; remote sensing images; unsupervised domain mapping; ROAD EXTRACTION; SEGMENTATION; NETWORKS;
DOI
10.1109/JSTARS.2021.3049905
CLC Classification
TM (Electrical Engineering); TN (Electronic and Communication Technology);
Subject Classification
0808; 0809;
Abstract
Maps are an essential medium for people to understand our changing planet. Generating and updating maps from remote sensing images has recently become an important and challenging task in geographic information science. Traditional map-generation methods are time-consuming and labor-intensive, and most supervised learning approaches to map generation lack labeled training samples. It is therefore difficult to generate maps quickly and efficiently for emergency rescue operations after disasters such as earthquakes, fires, or tsunamis. In this article, we propose an unsupervised domain mapping model based on adversarial learning, called MapGen-GAN. MapGen-GAN is a generative adversarial network (GAN) that performs fast end-to-end translation from remote sensing images to general maps and is trained without human-annotated data. To improve the fidelity and geometric precision of the generated maps, we incorporate circularity-consistency and geometrical-consistency constraints into the loss function of the proposed model. We also design an improved residual-block U-Net as the generator of MapGen-GAN to capture the geographic structure of buildings, roads, and topography outlines at different resolutions. Experiments on two distinct datasets demonstrate that our model generates maps efficiently and quickly, and that it outperforms state-of-the-art approaches.
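The circularity-consistency constraint mentioned in the abstract is the familiar cycle-consistency idea: translating an image to the map domain and back should reconstruct the original. A minimal sketch is shown below, assuming an L1 reconstruction penalty with weight `lam`; the toy generators `G` and `F` are hypothetical stand-ins for the paper's residual-block U-Nets, and the names `cycle_consistency_loss`, `G`, `F`, and `lam` are illustrative assumptions, not the authors' implementation.

```python
def l1(a, b):
    """Mean absolute error between two equal-length vectors."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """Cycle term lam * (|F(G(x)) - x| + |G(F(y)) - y|).

    G translates image -> map, F translates map -> image; if the two
    translators invert each other, this term is zero.
    """
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# Toy translators that happen to be exact inverses of each other.
G = lambda v: [2.0 * t for t in v]   # "image -> map" stand-in
F = lambda v: [t / 2.0 for t in v]   # "map -> image" stand-in

x = [1.0, 2.0, 3.0]   # a flattened "remote sensing image"
y = [4.0, 6.0]        # a flattened "map"
loss = cycle_consistency_loss(G, F, x, y)  # 0.0: the cycle reconstructs exactly
```

In the actual model this term is added to the adversarial and geometrical-consistency losses; real generators only approximate inversion, so the cycle term stays positive and regularizes training.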
Pages: 2341-2357 (17 pages)