Regional Relation Modeling for Visual Place Recognition

Cited by: 4
Authors
Zhu, Yingying [1 ]
Li, Biao [1 ]
Wang, Jiong [2 ]
Zhao, Zhou [2 ]
Affiliations
[1] Shenzhen Univ, Shenzhen, Peoples R China
[2] Zhejiang Univ, Hangzhou, Peoples R China
Source
PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20) | 2020
Funding
National Natural Science Foundation of China;
Keywords
Visual place recognition; Content-based image retrieval; Convolutional neural network; Relation modeling; IMAGE; FEATURES;
DOI
10.1145/3397271.3401176
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812 ;
Abstract
In the process of visual perception, humans perceive not only the appearance of objects existing in a place but also their relationships (e.g. spatial layout). However, the dominant works on visual place recognition are typically based on the assumption that two images depict the same place if they contain enough similar objects, while the relation information is neglected. In this paper, we propose a regional relation module which models the relationships between regions and converts the convolutional feature maps into relational feature maps. We further design a cascaded pooling method to obtain discriminative relation descriptors by suppressing the influence of confusing relations while preserving as much useful information as possible. Extensive experiments on two place recognition benchmarks demonstrate that training with the proposed regional relation module improves the appearance descriptors and that the relation descriptors are complementary to the appearance descriptors. When these two kinds of descriptors are concatenated, the resulting combined descriptors outperform state-of-the-art methods.
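The core idea of the abstract, mapping convolutional feature maps to relational feature maps by modeling pairwise relations between regions, can be illustrated with a minimal relation-network-style sketch. The paper's actual architecture, weights, and cascaded pooling are not detailed in this record, so the function name, MLP shapes, and mean aggregation below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def regional_relation_maps(feat, w1, w2):
    """Illustrative sketch (not the paper's exact module): treat each
    spatial location of a conv feature map (C, H, W) as a region, embed
    every (region_i, region_j) pair with a small ReLU MLP, and average
    the pair embeddings per region to get relational feature maps."""
    C, H, W = feat.shape
    regions = feat.reshape(C, H * W).T          # (N, C), one row per region
    N = regions.shape[0]
    D = w2.shape[1]                             # relational feature dimension
    rel = np.zeros((N, D))
    for i in range(N):
        # pair region i with every region j -> (N, 2C) pair features
        pairs = np.concatenate(
            [np.repeat(regions[i:i + 1], N, axis=0), regions], axis=1)
        h = np.maximum(pairs @ w1, 0.0)         # hidden layer with ReLU
        rel[i] = (h @ w2).mean(axis=0)          # aggregate region i's relations
    return rel.T.reshape(D, H, W)               # back to map layout (D, H, W)

# Hypothetical shapes: C=4 channels on a 3x3 grid, hidden width 16, D=8.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 3, 3))
w1 = rng.standard_normal((8, 16))               # 2C -> hidden
w2 = rng.standard_normal((16, 8))               # hidden -> D
rel_maps = regional_relation_maps(feat, w1, w2)
print(rel_maps.shape)                           # (8, 3, 3)
```

Per the abstract, such relational maps would then be pooled into a relation descriptor and concatenated with the appearance descriptor; the pooling and concatenation details here are left out because the record does not specify them.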
Pages: 821 - 830
Page count: 10