StateNet: Deep State Learning for Robust Feature Matching of Remote Sensing Images

Cited by: 13
Authors
Chen, Jiaxuan [1]
Chen, Shuang [1]
Chen, Xiaoxian [1,2]
Yang, Yang [1]
Rao, Yujing [1]
Affiliations
[1] Yunnan Normal Univ, Sch Informat Sci & Technol, Lab Pattern Recognit & Artificial Intelligence, Kunming 650500, Yunnan, Peoples R China
[2] JD Com Inc, Beijing 100000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Pattern matching; Task analysis; Deep learning; Artificial neural networks; Remote sensing; Learning systems; Costs; Adaptive state learning (ASL); deep learning; feature matching; image matching; image registration; MOTION STATISTICS; OPTIMIZATION; LOCALITY; MODEL;
DOI
10.1109/TNNLS.2021.3120768
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Seeking good correspondences between two images is a fundamental and challenging problem in the remote sensing (RS) community, and it is a critical prerequisite for a wide range of feature-based visual tasks. In this article, we propose a flexible and general deep state learning network for both rigid and nonrigid feature matching, which provides a mechanism to transform the state of matches into latent canonical forms, thereby weakening the randomness of matching patterns. Unlike conventional strategies (i.e., imposing a global geometric constraint or designing an additional handcrafted descriptor), the proposed StateNet alternates between two steps: 1) recalibrating matchwise feature responses in the spatial domain and 2) leveraging the spatially local correlation across the two sets of feature points to update the transformation. To this end, our network contains two novel operations: an adaptive dual-aggregation convolution (ADAConv) and a point rendering layer (PRL). Both operations are differentiable, so our network can be inserted into existing classification architectures to reduce the cost of establishing reliable correspondences. To demonstrate the robustness and universality of our approach, we conduct extensive feature-matching experiments on a variety of real image pairs. The experiments show that StateNet significantly outperforms state-of-the-art alternatives.
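The abstract describes StateNet only at the level of its two alternating steps, so the following is a minimal PyTorch-style sketch of that structure, assuming a per-match inlier-classification formulation over putative correspondences. The module names (ADAConvSketch, PRLSketch, StateNetSketch) and every layer choice and shape are illustrative assumptions, not the paper's actual ADAConv/PRL definitions.

# Hypothetical sketch of the alternating two-step pipeline in the abstract.
# All module names are placeholders; this is not the paper's implementation.
import torch
import torch.nn as nn

class ADAConvSketch(nn.Module):
    """Step 1 (assumed form): recalibrate matchwise feature responses.

    Modeled as a pointwise convolution followed by a squeeze-and-excitation
    style gate over the per-match channel responses.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, N) features for N putative matches
        h = self.conv(x)
        w = self.gate(h.mean(dim=2))   # global context -> channel weights
        return h * w.unsqueeze(2)      # recalibrated responses

class PRLSketch(nn.Module):
    """Step 2 (assumed form): exploit spatially local correlation between
    the two point sets to update the latent transformation state."""
    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm1d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local 1-D aggregation stands in for point rendering; the residual
        # connection keeps the block differentiable and stable to train.
        return torch.relu(self.norm(self.local(x))) + x

class StateNetSketch(nn.Module):
    """Alternates the two steps and predicts per-match inlier logits,
    so it can slot into an existing classification architecture."""
    def __init__(self, channels: int = 128, blocks: int = 4):
        super().__init__()
        self.embed = nn.Conv1d(4, channels, kernel_size=1)  # (x1, y1, x2, y2)
        self.blocks = nn.ModuleList(
            [nn.Sequential(ADAConvSketch(channels), PRLSketch(channels))
             for _ in range(blocks)]
        )
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, matches: torch.Tensor) -> torch.Tensor:
        # matches: (B, N, 4) putative correspondences -> (B, N) inlier logits
        x = self.embed(matches.transpose(1, 2))
        for block in self.blocks:
            x = block(x)
        return self.head(x).squeeze(1)

if __name__ == "__main__":
    net = StateNetSketch()
    logits = net(torch.randn(2, 512, 4))  # 2 image pairs, 512 matches each
    print(logits.shape)                   # torch.Size([2, 512])

A threshold on the sigmoid of these logits would yield the retained correspondences; the residual, fully differentiable blocks mirror the abstract's claim that both operations can be inserted into an existing classification network.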
Pages: 3284-3298
Page count: 15