Ground-to-Aerial Image Geo-Localization with Cross-View Image Synthesis

Cited by: 2
Authors
Huang, Jiaqing [1]
Ye, Dengpan [1]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Siamese network; Image geo-localization; Image synthesis;
DOI
10.1007/978-3-030-87361-5_34
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Code
081202; 0835;
Abstract
The task of ground-to-aerial image geo-localization can be accomplished by matching a ground-view query image against aerial images with geographic labels in a reference database. It remains challenging due to the drastic change in viewpoint. In this paper, we propose a new cross-view image synthesis conditional generative adversarial network (cGAN), called Crossview Sequential Fork (CSF), to generate ground images from aerial images. CSF achieves a more detailed synthesis by additionally generating segmentation maps and edge-detection images. The synthesized ground images are then fed into the image-matching framework Cross View Synthesis Net (CVS-Net) to assist geo-localization, where the distance between the descriptors of the source ground image and the synthesized ground image is used to guide network training. CVS-Net builds on a Siamese architecture to perform metric learning for the matching task. Moreover, we introduce the SARE loss into the training procedure and improve it through our data input format, which greatly increases the convergence rate and image-retrieval accuracy compared to the traditional triplet loss. Experimental results demonstrate the effectiveness and superiority of our proposed method over state-of-the-art methods on two benchmark datasets.
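The abstract outlines a two-stage pipeline: a cGAN (CSF) synthesizes a ground-view image from each aerial image, and a Siamese matcher (CVS-Net) compares descriptor distances under a SARE-style loss instead of a plain triplet loss. The following is a minimal, hypothetical sketch of the matching stage only; the ResNet-18 backbone, embedding size, image resolution, and the exact SARE formulation are illustrative assumptions and are not taken from the paper.

    # Hedged sketch: a shared-weight (Siamese) descriptor network trained with a
    # SARE-style softmax loss over pair distances. Architecture details are assumed.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    class EmbeddingNet(nn.Module):
        """Shared Siamese branch: CNN backbone + L2-normalized descriptor."""
        def __init__(self, dim=256):
            super().__init__()
            backbone = models.resnet18(weights=None)            # hypothetical backbone choice
            backbone.fc = nn.Linear(backbone.fc.in_features, dim)
            self.net = backbone

        def forward(self, x):
            return F.normalize(self.net(x), dim=1)              # unit-length descriptors

    def sare_style_loss(anchor, positive, negative):
        # Softmax ratio over squared descriptor distances: the matching pair should
        # receive nearly all of the probability mass. Numerically this equals
        # softplus(d_pos - d_neg), a smooth alternative to the hinge triplet loss.
        d_pos = (anchor - positive).pow(2).sum(dim=1)
        d_neg = (anchor - negative).pow(2).sum(dim=1)
        return F.softplus(d_pos - d_neg).mean()

    if __name__ == "__main__":
        net = EmbeddingNet()                                    # weights shared across all inputs
        query = torch.randn(4, 3, 224, 224)                     # ground-view query images
        syn_pos = torch.randn(4, 3, 224, 224)                   # synthesized ground images (matching aerial tiles)
        syn_neg = torch.randn(4, 3, 224, 224)                   # synthesized ground images (non-matching tiles)
        loss = sare_style_loss(net(query), net(syn_pos), net(syn_neg))
        loss.backward()
        print(float(loss))

Because both the query and the synthesized images are ground-view, a single shared embedding network suffices in this sketch; a two-branch (pseudo-Siamese) variant would be the natural choice if raw aerial images were matched directly.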
Pages: 412-424
Page count: 13
Related Papers
50 records in total
  • [31] AMPLE: Automatic Progressive Learning for Orientation Unknown Ground-to-Aerial Geo-Localization
    Li, Chaoran
    Yan, Chao
    Xiang, Xiaojia
    Lai, Jun
    Zhou, Han
    Tang, Dengqing
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63
  • [32] Lending Orientation to Neural Networks for Cross-view Geo-localization
    Liu, Liu
    Li, Hongdong
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 5607 - 5616
  • [33] Fusing Geometric and Scene Information for Cross-View Geo-Localization
    Guo, Siyuan
    Liu, Tianying
    Li, Wengen
    Guan, Jihong
    Zhou, Shuigeng
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3978 - 3982
  • [34] Perceptual Feature Fusion Network for Cross-View Geo-Localization
    Wang, Jiayi
    Chen, Ziyang
    Yuan, Xiaochen
    Zhao, Genping
Computer Engineering and Applications, 60 (03): 255 - 262
  • [35] Cross-view Geo-localization Based on Cross-domain Matching
    Wu, Xiaokang
    Ma, Qianguang
    Li, Qi
    Yu, Yuanlong
    Liu, Wenxi
    ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153 : 719 - 728
  • [36] Cross-View Visual Geo-Localization for Outdoor Augmented Reality
    Mithun, Niluthpol Chowdhury
    Minhas, Kshitij S.
    Chiu, Han-Pang
    Oskiper, Taragay
    Sizintsev, Mikhail
    Samarasekera, Supun
    Kumar, Rakesh
    2023 IEEE CONFERENCE VIRTUAL REALITY AND 3D USER INTERFACES, VR, 2023, : 493 - 502
  • [37] Cross-view Geo-localization with Layer-to-Layer Transformer
    Yang, Hongji
    Lu, Xiufan
    Zhu, Yingying
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [38] A Practical Cross-View Image Matching Method between UAV and Satellite for UAV-Based Geo-Localization
    Ding, Lirong
    Zhou, Ji
    Meng, Lingxuan
    Long, Zhiyong
    REMOTE SENSING, 2021, 13 (01) : 1 - 22
  • [39] Revisiting Street-to-Aerial View Image Geo-localization and Orientation Estimation
    Zhu, Sijie
    Yang, Taojiannan
    Chen, Chen
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 756 - 765
  • [40] UAV Geo-Localization Dataset and Method Based on Cross-View Matching
    Yao, Yuwen
    Sun, Cheng
    Wang, Tao
    Yang, Jianxing
    Zheng, Enhui
    SENSORS, 2024, 24 (21)