Decision Controller for Object Tracking With Deep Reinforcement Learning

Cited by: 12
Authors
Zhong, Zhao [1 ,2 ]
Yang, Zichen [3 ]
Feng, Weitao [3 ]
Wu, Wei [3 ]
Hu, Yangyang [3 ]
Liu, Cheng-Lin [1 ,4 ]
Affiliations
[1] Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[3] Sensetime Res Inst, Beijing 100084, Peoples R China
[4] Univ Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computer vision; deep learning; object tracking; reinforcement learning;
DOI
10.1109/ACCESS.2019.2900476
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Many decisions in both single object tracking (SOT) and multiple object tracking (MOT) are usually made heuristically. Existing methods tackle decision-making problems for specific tasks in tracking without a unified framework. In this paper, we propose a decision controller (DC) that is generally applicable to both SOT and MOT tasks. The controller learns an optimal decision-making policy with a deep reinforcement learning algorithm that maximizes long-term tracking performance without supervision. To demonstrate the generalization ability of the DC, we apply it to the challenging ensemble problem in SOT and the tracker-detector switching problem in MOT. In the tracker ensemble experiment, our ensemble-based tracker achieves leading performance in the VOT2016 challenge, and its light version also reaches state-of-the-art results at 50 FPS. In the MOT experiment, we use the tracker-detector switching controller to enable real-time online tracking with competitive performance and a 10x speed-up.
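The abstract frames tracker-detector switching as a sequential decision-making problem solved with deep reinforcement learning. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch: a small policy network chooses, per frame, between reusing the fast tracker and invoking the slower detector, and is trained with REINFORCE. The state features, the reward shaping (tracking quality minus a detector-cost penalty), the network size, and the assumed env interface are illustrative assumptions, not the paper's exact Decision Controller formulation.

# Hypothetical sketch of a tracker-detector switching controller trained with
# policy-gradient RL (REINFORCE). Interfaces and reward terms are assumptions
# for illustration only, not the paper's exact DC design.
import torch
import torch.nn as nn

class SwitchController(nn.Module):
    """Maps a per-frame state vector to a distribution over two actions:
    0 = keep the fast tracker, 1 = invoke the (slower) detector."""
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

def train_episode(controller, optimizer, env, gamma=0.99, detector_cost=0.05):
    """One REINFORCE update over a simulated tracking episode.
    `env` is assumed to expose reset() -> state and
    step(action) -> (next_state, tracking_quality, done)."""
    log_probs, rewards = [], []
    state, done = env.reset(), False
    while not done:
        dist = controller(torch.as_tensor(state, dtype=torch.float32))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, quality, done = env.step(action.item())
        # Reward long-term tracking quality; penalize expensive detector calls.
        rewards.append(quality - detector_cost * action.item())

    # Discounted returns, normalized for variance reduction.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()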
Pages: 28069 - 28079
Number of pages: 11
Related Papers
50 records total
  • [11] Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning
    Yun, Sangdoo
    Choi, Jongwon
    Yoo, Youngjoon
    Yun, Kimin
    Choi, Jin Young
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1349 - 1358
  • [12] Safe Decision Controller for Autonomous Driving Based on Deep Reinforcement Learning in Nondeterministic Environment
    Chen, Hongyi
    Zhang, Yu
    Bhatti, Uzair Aslam
    Huang, Mengxing
    SENSORS, 2023, 23 (03)
  • [13] Reinforcement Learning inspired Deep Learned Compositional Model for Decision Making in Tracking
    Chakraborty, Anit
    Dutta, Sayandip
    Bhattacharyya, Siddhartha
    Platos, Jan
    Snasel, Vaclav
    2018 FOURTH IEEE INTERNATIONAL CONFERENCE ON RESEARCH IN COMPUTATIONAL INTELLIGENCE AND COMMUNICATION NETWORKS (ICRCICN), 2018, : 158 - 163
  • [14] Tracker-Level Decision by Deep Reinforcement Learning for Robust Visual Tracking
    Huang, Wenju
    Wu, Yuwei
    Jia, Yunde
    IMAGE AND GRAPHICS, ICIG 2019, PT I, 2019, 11901 : 442 - 453
  • [15] Object tracking: Feature selection by reinforcement learning
    Deng, Jiali
    Gong, Haigang
    Liu, Minghui
    Liu, Ming
    INTERNATIONAL CONFERENCE ON COMPUTER VISION, APPLICATION, AND DESIGN (CVAD 2021), 2021, 12155
  • [16] Deep Reinforcement Learning based Dynamic Object Detection and Tracking from a Moving Platform
    Shinde, Chinmay
    Lima, Rolif
    Das, Kaushik
    2019 SIXTH INDIAN CONTROL CONFERENCE (ICC), 2019, : 244 - 249
  • [17] Active object tracking of free floating space manipulators based on deep reinforcement learning
    Lei, Wenxiao
    Fu, Hao
    Sun, Guanghui
    ADVANCES IN SPACE RESEARCH, 2022, 70 (11) : 3506 - 3519
  • [18] Deep learning application on object tracking
    Taglout, Ramdane
    Saoud, Bilal
    PRZEGLAD ELEKTROTECHNICZNY, 2023, 99 (09): : 145 - 149
  • [19] Hand-Object Interaction Controller (HOIC): Deep Reinforcement Learning for Reconstructing Interactions with Physics
    Hu, Haoyu
    Yi, Xinyu
    Cao, Zhe
    Yong, Jun-Hai
    Xu, Feng
    PROCEEDINGS OF SIGGRAPH 2024 CONFERENCE PAPERS, 2024,
  • [20] Multitask Learning for Object Localization With Deep Reinforcement Learning
    Wang, Yan
    Zhang, Lei
    Wang, Lituan
    Wang, Zizhou
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2019, 11 (04) : 573 - 580