Grid-Guided Sparse Laplacian Consensus for Robust Feature Matching

Cited: 0
Authors
Xia, Yifan [1 ]
Ma, Jiayi [1 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Laplace equations; Estimation; Coherence; Feature extraction; Robustness; Adaptation models; Accuracy; Navigation; Multi-layer neural network; Image registration; Feature matching; correspondence pruning; graph Laplacian; motion coherence
DOI
10.1109/TIP.2025.3539469
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Feature matching is a fundamental problem widely encountered in computer vision applications. This paper introduces a novel and effective method named Grid-guided Sparse Laplacian Consensus, rooted in the concept of smoothness constraints. To address challenging scenes such as severe deformation and independent motions, we devise grid-based adaptive matching guidance that constructs multiple transformations based on motion coherence. Specifically, we obtain a set of precise yet sparse seed correspondences through motion statistics, which facilitates the generation of an adaptive number of candidate correspondence sets. In addition, we propose a novel formulation grounded in the graph Laplacian for correspondence pruning, in which mapping function estimation is cast as a Bayesian model. We solve it with the EM algorithm, using the seed correspondences as initialization for reliable convergence, and leverage sparse approximation to reduce the time and space burden. Comprehensive experiments demonstrate the superiority of our method over other state-of-the-art methods in robustness to severe deformations, generalizability across various descriptors, and generalizability to multiple motions. Additionally, experiments on geometric estimation, image registration, loop closure detection, and visual localization highlight the value of our method across diverse scenes for high-level tasks.
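The abstract's core idea, a motion-coherence prior expressed through a graph Laplacian and optimized by EM with inlier/outlier posteriors, can be sketched in miniature. This is an illustrative sketch, not the paper's actual formulation: the kNN graph construction, the Gaussian-plus-uniform mixture, and all parameter values (`k`, `lam`, `iters`, the outlier-domain area `a`) are assumptions introduced here for demonstration.

```python
import numpy as np

def laplacian_consensus(src, dst, k=5, lam=2.0, iters=30, a=1.0):
    """Prune putative correspondences by fitting a smooth motion field.

    src, dst : (N, 2) arrays of matched keypoint coordinates.
    Returns a boolean inlier mask. Hypothetical sketch, not the paper's model.
    """
    V = dst - src                              # putative motion vectors
    N = len(src)
    # k-nearest-neighbor graph Laplacian over the source keypoints
    D2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]   # column 0 is the point itself
    W = np.zeros((N, N))
    for i in range(N):
        W[i, idx[i]] = np.exp(-D2[i, idx[i]] / (D2[i, idx[i]].mean() + 1e-12))
    W = 0.5 * (W + W.T)                        # symmetrize the affinity
    L = np.diag(W.sum(1)) - W                  # combinatorial graph Laplacian
    # EM over a Gaussian-inlier / uniform-outlier mixture
    P = np.full(N, 0.9)                        # posterior inlier probabilities
    gamma = 0.9                                # mixture weight of inliers
    sigma2 = (V ** 2).sum() / (2 * N)          # initial noise variance
    for _ in range(iters):
        # M-step: Laplacian-smoothed field, (diag(P) + lam*L) F = diag(P) V
        F = np.linalg.solve(np.diag(P) + lam * L, P[:, None] * V)
        # E-step: residuals and updated posteriors
        r2 = ((V - F) ** 2).sum(1)
        p_in = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        p_out = (1.0 - gamma) / a              # uniform outlier density
        P = p_in / (p_in + p_out + 1e-12)
        sigma2 = max((P * r2).sum() / (2 * P.sum() + 1e-12), 1e-8)
        gamma = P.mean()
    return P > 0.5
```

Because the Laplacian penalty vanishes on spatially coherent motion fields, correspondences that move with their neighbors incur small residuals and keep high posteriors, while independently moving outliers are pulled toward the smooth field and rejected.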
Pages: 1367-1381
Page count: 15