Using scale-equivariant CNN to enhance scale robustness in feature matching

Cited: 0
Authors
Liao, Yun [1,2,3]
Liu, Peiyu [1,3]
Wu, Xuning [1,3]
Pan, Zhixuan [1,3]
Zhu, Kaijun [2]
Zhou, Hao [2]
Liu, Junhui [1,3]
Duan, Qing [1,3]
Affiliations
[1] Yunnan Univ, Natl Pilot Sch Software, Kunming 650106, Yunnan, Peoples R China
[2] Yunnan Lanyi Network Technol Co, Kunming 650000, Yunnan, Peoples R China
[3] Yunnan Key Lab Software Engn, Kunming, Yunnan, Peoples R China
Source
VISUAL COMPUTER | 2024, Vol. 40, Issue 10
Keywords
Image matching; Feature matching; Scale-equivariance; Transformer
DOI
10.1007/s00371-024-03389-0
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Image matching is an important task in computer vision. Detector-free dense matching is an important research direction within image matching because of its high accuracy and robustness. Classical detector-free image matching methods use convolutional neural networks (CNNs) to extract features and then match them. Because CNNs lack scale equivariance, these methods often perform poorly when the images to be matched undergo significant scale variations, yet large scale variations are very common in practice. To address this problem, we propose SeLFM, a method that combines scale equivariance with the global modeling capability of the transformer: a scale-equivariant CNN extracts scale-equivariant features, while the transformer contributes global modeling capability. Experiments show that this design improves the matcher's performance on image pairs with large scale variations without degrading its general matching performance. The code will be open-sourced at this link: https://github.com/LiaoYun0x0/SeLFM/tree/main
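The core idea behind scale robustness, producing feature responses that stay consistent as the image is rescaled, can be illustrated with a minimal scale-pooling sketch. This is not the paper's actual architecture (SeLFM uses learned scale-equivariant convolutions plus a transformer); the helpers `resize_nn`, `conv2d_same`, and `scale_pooled_features` below are hypothetical names written for illustration only. The same filter is applied to an image pyramid and the responses are max-pooled over the scale axis, yielding an approximately scale-invariant feature map.

```python
import numpy as np

def resize_nn(img, nh, nw):
    # Nearest-neighbor resize of a 2-D array to (nh, nw).
    h, w = img.shape
    ys = np.minimum(np.arange(nh) * h // nh, h - 1)
    xs = np.minimum(np.arange(nw) * w // nw, w - 1)
    return img[np.ix_(ys, xs)]

def conv2d_same(img, k):
    # Naive 'same'-padded cross-correlation with a single 2-D kernel.
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * k)
    return out

def scale_pooled_features(img, kernel, scales=(0.5, 1.0, 2.0)):
    # Apply the SAME kernel to rescaled copies of the image, resize each
    # response back to the original resolution, then max-pool over scale.
    h, w = img.shape
    responses = []
    for s in scales:
        scaled = resize_nn(img, max(1, int(h * s)), max(1, int(w * s)))
        responses.append(resize_nn(conv2d_same(scaled, kernel), h, w))
    return np.max(np.stack(responses), axis=0)
```

Because the filter weights are shared across all pyramid levels, a pattern that appears at half or double size still triggers the same filter at some level, which is the intuition behind using scale-equivariant features for matching image pairs with large scale differences.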
Pages: 7307-7322 (16 pages)
Related Papers
50 records in total
  • [31] Densely Connected CNN with Multi-scale Feature Attention for Text Classification
    Wang, Shiyao
    Huang, Minlie
    Deng, Zhidong
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 4468 - 4474
  • [32] Gated CNN: Integrating multi-scale feature layers for object detection
    Yuan, Jin
    Xiong, Heng-Chang
    Xiao, Yi
    Guan, Weili
    Wang, Meng
    Hong, Richang
    Li, Zhi-Yong
    PATTERN RECOGNITION, 2020, 105
  • [33] Fast Scene Matching Method Based on Scale Invariant Feature Transform
    Niu Yanxiong
    Chen Mengqi
    Zhang He
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2019, 41 (03) : 626 - 631
  • [34] Fast Scene Matching Method Based on Scale Invariant Feature Transform
    Niu Y.
    Chen M.
    Zhang H.
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2019, 41 (03): : 626 - 631
  • [35] The Scale and Characteristics Strength of SURF Feature Points Adaptive Matching Algorithm
    Hu, Xiaotong
    Ren, Hui
    Liu, Nan
    PROCEEDINGS OF THE ADVANCES IN MATERIALS, MACHINERY, ELECTRICAL ENGINEERING (AMMEE 2017), 2017, 114 : 870 - 876
  • [36] Adaptive patch feature matching and scale estimation for visual object tracking
    Vadamala, Purandhar Reddy
    Aklak, Annis Fathima
    JOURNAL OF ELECTRONIC IMAGING, 2019, 28 (03)
  • [37] Multi-Scale Feature Selective Matching Network for Object Detection
    Pei, Yuanhua
    Dong, Yongsheng
    Zheng, Lintao
    Ma, Jinwen
    MATHEMATICS, 2023, 11 (12)
  • [38] Fast Scale Invariant Feature Detection and Matching on Programmable Graphics Hardware
    Cornelis, Nico
    Van Gool, Luc
    2008 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, VOLS 1-3, 2008, : 1013 - 1020
  • [39] Dynamic link matching between feature columns for different scale and orientation
    Sato, Yasuomi D.
    Wolff, Christian
    Wolfrum, Philipp
    von der Malsburg, Christoph
    NEURAL INFORMATION PROCESSING, PART I, 2008, 4984 : 385 - 394
  • [40] Scale Adaptive Mean Shift Tracking Based on Feature Point Matching
    Song, Yi
    Li, Shuxiao
    Chang, Hongxing
    2013 SECOND IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR 2013), 2013, : 220 - 224