Decision Controller for Object Tracking With Deep Reinforcement Learning

Cited: 12
Authors
Zhong, Zhao [1 ,2 ]
Yang, Zichen [3 ]
Feng, Weitao [3 ]
Wu, Wei [3 ]
Hu, Yangyang [3 ]
Liu, Cheng-Lin [1 ,4 ]
Affiliations
[1] Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[3] Sensetime Res Inst, Beijing 100084, Peoples R China
[4] Univ Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computer vision; deep learning; object tracking; reinforcement learning;
DOI
10.1109/ACCESS.2019.2900476
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Many decisions in both single object tracking (SOT) and multiple object tracking (MOT) are usually made heuristically. Existing methods tackle decision-making problems only for specific tracking tasks, without a unified framework. In this paper, we propose a decision controller (DC) that is generally applicable to both SOT and MOT tasks. The controller learns an optimal decision-making policy with a deep reinforcement learning algorithm that maximizes long-term tracking performance without supervision. To demonstrate the generalization ability of the DC, we apply it to the challenging ensemble problem in SOT and the tracker-detector switching problem in MOT. In the tracker ensemble experiment, our ensemble-based tracker achieves leading performance in the VOT2016 challenge, and its lightweight version also obtains a state-of-the-art result at 50 FPS. In the MOT experiment, the tracker-detector switching controller enables real-time online tracking with competitive performance and a 10x speedup.
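The abstract describes the controller as a policy, learned with deep reinforcement learning, that decides at each step which action to take (e.g., which tracker in an ensemble to trust in SOT, or whether to run the fast tracker or the slow detector in MOT). Below is a minimal sketch of such a decision controller, assuming a small DQN-style Q-network and a two-action tracker-vs-detector switching setup; all module names, dimensions, and the reward shaping are illustrative assumptions (PyTorch assumed), not the authors' implementation.

# Minimal sketch (not the paper's code): a decision controller as a small
# Q-network that, given a state feature summarizing tracking quality, chooses
# between two actions, e.g. "keep tracking" vs. "re-run the detector" in MOT.
# All names, dimensions, and the reward shaping are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 16      # assumed size of the state feature fed to the controller
NUM_ACTIONS = 2     # 0: trust the fast tracker, 1: invoke the slow detector

class DecisionController(nn.Module):
    """Small MLP Q-network mapping a state feature to per-action values."""
    def __init__(self, state_dim=STATE_DIM, num_actions=NUM_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state):
        return self.net(state)

def train_step(controller, optimizer, batch, gamma=0.99):
    """One DQN-style update on a batch of (s, a, r, s') transitions."""
    states, actions, rewards, next_states = batch
    q = controller(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * controller(next_states).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    controller = DecisionController()
    optimizer = optim.Adam(controller.parameters(), lr=1e-4)
    replay = deque(maxlen=10_000)

    # Fill the replay buffer with dummy transitions; in the paper's setting the
    # reward would reflect long-term tracking performance (e.g. an accuracy gain
    # minus a cost for invoking the detector) -- an assumption here.
    for _ in range(256):
        s = torch.randn(STATE_DIM)
        a = random.randrange(NUM_ACTIONS)
        r = random.random() - 0.1 * a     # penalize the expensive detector call
        s_next = torch.randn(STATE_DIM)
        replay.append((s, a, r, s_next))

    sample = random.sample(list(replay), 64)
    batch = (
        torch.stack([t[0] for t in sample]),
        torch.tensor([t[1] for t in sample]),
        torch.tensor([t[2] for t in sample], dtype=torch.float32),
        torch.stack([t[3] for t in sample]),
    )
    print("loss:", train_step(controller, optimizer, batch))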
Pages: 28069-28079
Number of pages: 11
Related Papers
50 records in total
  • [41] Deep reinforcement learning based moving object grasping
    Chen, Pengzhan
    Lu, Weiqing
    INFORMATION SCIENCES, 2021, 565 : 62 - 76
  • [42] Selective Spatial Regularization by Reinforcement Learned Decision Making for Object Tracking
    Guo, Qing
    Han, Ruize
    Feng, Wei
    Chen, Zhihao
    Wan, Liang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 2999 - 3013
  • [43] Deep Learning and Preference Learning for Object Tracking: A Combined Approach
    Pang, Shuchao
    Jose del Coz, Juan
    Yu, Zhezhou
    Luaces, Oscar
    Diez, Jorge
    NEURAL PROCESSING LETTERS, 2018, 47 (03) : 859 - 876
  • [45] Deep Reinforcement Learning with Iterative Shift for Visual Tracking
    Ren, Liangliang
    Yuan, Xin
    Lu, Jiwen
    Yang, Ming
    Zhou, Jie
    COMPUTER VISION - ECCV 2018, PT IX, 2018, 11213 : 697 - 713
  • [46] Visual Tracking via Hierarchical Deep Reinforcement Learning
    Zhang, Dawei
    Zheng, Zhonglong
    Jia, Riheng
    Li, Minglu
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 3315 - 3323
  • [47] On the Use of Deep Reinforcement Learning for Visual Tracking: A Survey
    Cruciata, Giorgio
    Lo Presti, Liliana
    La Cascia, Marco
    IEEE ACCESS, 2021, 9 : 120880 - 120900
  • [48] Exploring Deep Reinforcement Learning for Autonomous Powerline Tracking
    Pienroj, Panin
    Schonborn, Sandro
    Birke, Robert
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM 2019 WKSHPS), 2019, : 496 - 501
  • [49] Deep Reinforcement Learning for Data Association in Cell Tracking
    Wang, Junjie
    Su, Xiaohong
    Zhao, Lingling
    Zhang, Jun
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2020, 8
  • [50] Drift-Proof Tracking With Deep Reinforcement Learning
    Chen, Zhongze
    Li, Jing
    Wu, Jia
    Chang, Jun
    Xiao, Yafu
    Wang, Xiaoting
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 609 - 624