E-VAT: An Asymmetric End-to-End Approach to Visual Active Exploration and Tracking

Cited by: 13
Authors: Dionigi, Alberto [1]; Devo, Alessandro [1]; Guiducci, Leonardo [1]; Costante, Gabriele [1]
Affiliation: [1] Univ Perugia, Dept Engn, I-06125 Perugia, Italy
Keywords: Target tracking; Visualization; Cameras; Robots; Task analysis; Space exploration; Reinforcement learning; Visual tracking; Deep learning for visual perception; Navigation
DOI: 10.1109/LRA.2022.3150866
Chinese Library Classification: TP24 [Robotics]
Subject classification codes: 080202; 1405
Abstract
The development of visual tracking systems is becoming a major goal for the robotics community. Most works on this topic focus exclusively on passive tracking, where the target is assumed to remain within the camera's field of view. Only a minority propose active approaches, capable not only of identifying the object to be tracked but also of producing motion control actions to maintain visual contact with it. However, all methods introduced so far assume that the target is initially in the immediate proximity of the tracker. This is an undesirable constraint on the applicability of these techniques, and to overcome this limitation we propose a novel end-to-end Deep Reinforcement Learning-based system capable of first exploring the surrounding environment to find the target and then tracking it. To do this, we develop a network consisting of two sub-components: i) the Target-Detection Network, which detects the target in the camera's field of view, and ii) the Exploration and Tracking Network, which uses this information to switch between an exploration policy and a tracking policy, with the goal of exploring the environment, finding the target, and finally tracking it. Through several experiments, we demonstrate the effectiveness of our approach and its superior performance with respect to current state-of-the-art (SotA) methods.
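To make the two-branch design concrete, below is a minimal PyTorch sketch of an architecture of this kind: a detection sub-network estimates whether the target is visible, and that probability gates between an exploration policy head and a tracking policy head. All class names, layer sizes, and the soft-gating rule are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn


def make_encoder():
    # Small convolutional encoder mapping an RGB frame to a 32-d feature.
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class TargetDetectionNet(nn.Module):
    """Estimates the probability that the target is in the camera's
    field of view (stand-in for the paper's Target-Detection Network)."""

    def __init__(self):
        super().__init__()
        self.encoder = make_encoder()
        self.head = nn.Linear(32, 1)

    def forward(self, frame):  # frame: (B, 3, H, W)
        return torch.sigmoid(self.head(self.encoder(frame)))  # (B, 1)


class ExplorationTrackingNet(nn.Module):
    """Two policy heads over a shared visual encoding; the detection
    probability gates which head drives the action distribution
    (stand-in for the paper's Exploration and Tracking Network)."""

    def __init__(self, n_actions):
        super().__init__()
        self.encoder = make_encoder()
        self.explore_head = nn.Linear(32, n_actions)
        self.track_head = nn.Linear(32, n_actions)

    def forward(self, frame, p_detect):
        z = self.encoder(frame)
        # Soft switch: track when the target is likely visible,
        # explore otherwise.
        logits = p_detect * self.track_head(z) \
            + (1.0 - p_detect) * self.explore_head(z)
        return torch.distributions.Categorical(logits=logits)


# One control step: detect, then act with the gated policy.
detector = TargetDetectionNet()
agent = ExplorationTrackingNet(n_actions=6)  # discrete actions assumed
frame = torch.rand(1, 3, 84, 84)             # dummy camera frame
action = agent(frame, detector(frame)).sample()
```

A hard switch (e.g., thresholding the detection probability) would match the abstract's description of switching between policies more literally; the soft gate above simply keeps the sketch differentiable end to end.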
Pages: 4259-4266 (8 pages)