Progressive matching method of aerial-ground remote sensing image via multi-scale context feature coding

Cited by: 2
Authors
Xu, Chuan [1 ]
Xu, Junjie [1 ]
Huang, Tao [1 ]
Zhang, Huan [1 ]
Mei, Liye [1 ,2 ]
Zhang, Xia [3 ]
Duan, Yu [3 ]
Yang, Wei [3 ,4 ]
Affiliations
[1] Hubei Univ Technol, Sch Comp Sci, Wuhan, Peoples R China
[2] Wuhan Univ, Inst Technol Sci, Wuhan, Peoples R China
[3] Wuchang Shouyi Univ, Sch Informat Sci & Engn, Wuhan, Peoples R China
[4] Wuchang Shouyi Univ, Sch Informat Sci & Engn, Wuhan 430064, Peoples R China
Keywords
3D model; aerial-ground remote sensing image; large buildings; feature matching; deep learning; 3D reconstruction
DOI
10.1080/01431161.2023.2255352
Chinese Library Classification
TP7 [Remote sensing technology]
Subject classification codes
081102; 0816; 081602; 083002; 1404
Abstract
A fine-grained 3D model provides essential spatial information for the construction of a smart city. UAV aerial images, with their large-scale scene perception capability, are currently a common data source for 3D modelling of cities. However, in complex urban areas it is difficult for aerial images alone to capture complete 3D scene information: changes in perspective and occlusion lead to problems such as inaccurate edges, holes, and blurred building facade textures. How to handle perspective changes and occlusion in aerial images quickly and efficiently has therefore become an important problem. Ground images can serve as an important supplement, compensating for missing building bottoms and occluded areas in oblique photography modelling. This article therefore proposes a progressive matching method based on a multi-scale context feature coding network to achieve robust matching of aerial-ground remote sensing images, providing better technical support for urban modelling. The main idea consists of three parts: (1) a multi-scale context feature coding network is designed to extract features from aerial-ground images efficiently; (2) a block-based matching strategy is proposed to focus on local features of the aerial-ground images; and (3) a progressive matching method is applied in the block matching stage to obtain more accurate matches. We used eight typical data sets, consisting of aerial images captured by a DJI-MAVIC2 drone and ground images captured by handheld devices, and compared the proposed method with algorithms such as SIFT, D2-net, DFM, and SuperGlue. Experimental results show that the proposed aerial-ground image matching method performs well: the average number of correct matches (NCM) is improved by a factor of 2.1 to 8.2, the average correct-matching rate increases by 26 percentage points, and the average root mean square error is only 1.48 pixels.
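The block-based progressive matching idea summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: OpenCV SIFT stands in for the multi-scale context feature coding network, and the 4x4 block grid, ratio-test threshold, and RANSAC tolerance are assumed placeholder settings. A coarse whole-image match first estimates a global homography, after which each aerial block is re-matched against its projected region in the ground image.

```python
# Minimal sketch of block-based progressive matching (not the authors' implementation).
# OpenCV SIFT stands in for the paper's multi-scale context feature coding network;
# the grid size, ratio-test threshold, and RANSAC tolerance are illustrative choices.
# Inputs are assumed to be 8-bit grayscale images.
import cv2
import numpy as np


def match_pair(detector, matcher, img_a, img_g, ratio=0.8):
    """Detect and match features between one aerial/ground image (or block) pair."""
    kp_a, des_a = detector.detectAndCompute(img_a, None)
    kp_g, des_g = detector.detectAndCompute(img_g, None)
    if des_a is None or des_g is None:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
    good = [m for pair in matcher.knnMatch(des_a, des_g, k=2) if len(pair) == 2
            for m, n in [pair] if m.distance < ratio * n.distance]
    if not good:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_g = np.float32([kp_g[m.trainIdx].pt for m in good])
    return pts_a, pts_g


def progressive_match(aerial, ground, grid=(4, 4)):
    """Coarse-to-fine: match the full images first, then re-match block by block."""
    detector = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    # Stage 1: coarse matching on the whole images to estimate a global homography.
    pts_a, pts_g = match_pair(detector, matcher, aerial, ground)
    if len(pts_a) < 4:
        return pts_a, pts_g
    H, _ = cv2.findHomography(pts_a, pts_g, cv2.RANSAC, 3.0)
    if H is None:
        return pts_a, pts_g

    # Stage 2: split the aerial image into blocks, project each block into the
    # ground image with H, and re-match inside the corresponding local region.
    h, w = aerial.shape[:2]
    bh, bw = h // grid[0], w // grid[1]
    blocks_a, blocks_g = [], []
    for i in range(grid[0]):
        for j in range(grid[1]):
            x, y = j * bw, i * bh
            corners = np.float32([[x, y], [x + bw, y],
                                  [x + bw, y + bh], [x, y + bh]]).reshape(-1, 1, 2)
            proj = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
            x0, y0 = np.maximum(proj.min(axis=0), 0).astype(int)
            x1, y1 = np.minimum(proj.max(axis=0),
                                [ground.shape[1], ground.shape[0]]).astype(int)
            if x1 - x0 < 32 or y1 - y0 < 32:
                continue  # projected region too small or outside the ground image
            pa, pg = match_pair(detector, matcher,
                                aerial[y:y + bh, x:x + bw], ground[y0:y1, x0:x1])
            if len(pa):
                blocks_a.append(pa + [x, y])    # back to full-image coordinates
                blocks_g.append(pg + [x0, y0])

    if not blocks_a:
        return pts_a, pts_g
    return np.vstack(blocks_a), np.vstack(blocks_g)
```

The block-level correspondences could then be filtered with a further geometric check and scored, as in the paper's evaluation, by the number of correct matches (NCM), the correct-matching rate, and the root mean square error of the inliers.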
Pages: 5876 - 5895
Number of pages: 20