Explicit feature disentanglement for visual place recognition across appearance changes

Cited by: 2
Authors
Tang, Li [1 ]
Wang, Yue [1 ]
Tan, Qimeng [2 ]
Xiong, Rong [1 ]
Affiliations
[1] Zhejiang Univ, Dept Control Sci & Engn, Hangzhou 30012, Peoples R China
[2] Beijing Inst Spacecraft Syst Engn, Beijing Key Lab Intelligent Space Robot Syst Tech, Beijing, Peoples R China
Keywords
Place recognition; feature disentanglement; adversarial; self-supervised; changing appearance; SIMULTANEOUS LOCALIZATION; NAVIGATION; SLAM;
DOI
10.1177/17298814211037497
CLC number
TP24 [Robotics];
Discipline classification codes
080202; 1405;
Abstract
In the long-term deployment of mobile robots, changing appearance poses challenges for localization. When a robot revisits a place or restarts from an existing map, global localization is needed, and place recognition provides coarse position information. For visual sensors, appearance changes such as the day-to-night transition and seasonal variation can degrade the performance of a visual place recognition system. To address this problem, we propose to learn domain-unrelated features across extreme appearance changes, where a domain denotes a specific appearance condition, such as a season or a kind of weather. We use an adversarial network with two discriminators to disentangle domain-related and domain-unrelated features from images, and the domain-unrelated features serve as descriptors for place recognition. Given images from different domains, the network is trained in a self-supervised manner that does not require correspondences between those domains. Moreover, the feature extractors are shared among all domains, making it possible to cover more appearance conditions without increasing model complexity. Qualitative and quantitative results on two toy cases show that the network can disentangle domain-related and domain-unrelated features from the given data. Experiments on three public datasets and one proposed dataset for visual place recognition compare the performance of our method with several typical algorithms. In addition, an ablation study validates the effectiveness of the introduced discriminators, and a four-domain dataset verifies that a single model extends to multiple domains while achieving similar performance.
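The two-discriminator design described above can be sketched in a few lines. The following is a minimal illustrative PyTorch sketch, not the authors' implementation: all module names, feature dimensions, and the use of pre-extracted feature vectors (rather than raw images) are assumptions. One discriminator is trained to recover the domain label from the domain-related code (so that code retains appearance information), while the encoder is trained adversarially so the second discriminator cannot recover the domain from the domain-unrelated code, which then serves as the place descriptor.

```python
import torch
import torch.nn as nn

class DisentangleNet(nn.Module):
    """Toy sketch: split an image embedding into a domain-related code
    and a domain-unrelated code (the latter used as the place descriptor)."""

    def __init__(self, in_dim=512, related_dim=64, unrelated_dim=128, n_domains=2):
        super().__init__()
        # Shared encoder produces both codes in one vector, then we split it.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, related_dim + unrelated_dim),
        )
        self.related_dim = related_dim
        # Discriminator 1: predicts the domain from the domain-related code;
        # trained to succeed, so this code keeps appearance information.
        self.d_related = nn.Linear(related_dim, n_domains)
        # Discriminator 2: predicts the domain from the domain-unrelated code;
        # the encoder is trained adversarially so that this one fails,
        # purging appearance cues from the descriptor.
        self.d_unrelated = nn.Linear(unrelated_dim, n_domains)

    def forward(self, x):
        z = self.encoder(x)
        z_rel = z[:, :self.related_dim]
        z_unrel = z[:, self.related_dim:]
        return z_rel, z_unrel, self.d_related(z_rel), self.d_unrelated(z_unrel)

net = DisentangleNet()
x = torch.randn(4, 512)  # batch of 4 pre-extracted image features (assumed input)
z_rel, z_unrel, logits_rel, logits_unrel = net(x)
print(z_unrel.shape)     # domain-unrelated descriptor used for place recognition
```

Because the encoder is shared across all domains, adding a new appearance condition only grows the discriminators' output dimension, not the descriptor network itself, which matches the abstract's claim about model complexity.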
Pages: 19