Multi-Agent Deep Reinforcement Learning for Multi-Object Tracker

Cited by: 36
Authors
Jiang, Mingxin [1 ]
Hai, Tao [2 ]
Pan, Zhigeng [3 ]
Wang, Haiyan [1 ]
Jia, Yinjie [1 ]
Deng, Chao [4 ]
Affiliations
[1] Huaiyin Inst Technol, Jiangsu Lab Lake Environm Remote Sensing Technol, Huaian 223003, Peoples R China
[2] Baoji Univ Arts & Sci, Comp Sci Dept, Baoji 721031, Peoples R China
[3] Hangzhou Normal Univ, Digital Media & Interact Res Ctr, Hangzhou 310012, Zhejiang, Peoples R China
[4] Henan Polytech Univ, Sch Phys & Elect Informat Engn, Jiaozuo 454000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-object tracking; MADRL; IQL; YOLO V3; multi-target tracking;
DOI
10.1109/ACCESS.2019.2901300
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Multi-object tracking has been a key research subject in many computer vision applications. We propose a novel approach based on multi-agent deep reinforcement learning (MADRL) for multi-object tracking, addressing problems in existing tracking methods such as a varying number of targets, non-causal processing, and non-real-time operation. First, we use YOLO V3 to detect the objects in each frame. Unsuitable candidates are screened out, and the remaining detections are treated as multiple agents forming a multi-agent system. Independent Q-Learning (IQL) is used to learn each agent's policy, in which each agent treats the other agents as part of the environment. We then conduct offline learning during training and online learning during tracking. Our experiments demonstrate that MADRL achieves better precision, accuracy, and robustness than other state-of-the-art methods.
Pages: 32400-32407
Number of pages: 8
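
As a concrete illustration of the independent Q-learning scheme the abstract describes, the sketch below gives a minimal tabular IQL agent in Python. This is an illustrative sketch only, not the authors' implementation: the class name IQLAgent, the discrete state/action spaces, and all hyperparameters are assumptions made here. In the paper's setting, each agent would correspond to one screened YOLO V3 detection and would learn its own policy while treating the other agents as part of the environment.

import random
from collections import defaultdict

class IQLAgent:
    """One independent learner per tracked detection (hypothetical sketch)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        # Each agent keeps its own Q-table: state -> list of action values.
        # States are assumed hashable (e.g. discretized track features).
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration rate

    def act(self, state):
        # Epsilon-greedy over this agent's own Q-values only; the other
        # agents are invisible to it, i.e. folded into the environment.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return max(range(self.n_actions), key=values.__getitem__)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup, applied independently
        # per agent (the defining property of IQL).
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Hypothetical usage: one agent per detection that survives screening.
# `env`, `obs`, `detections`, and `video` stand in for a tracking
# environment that returns a per-agent observation and reward per frame.
#
# agents = [IQLAgent(n_actions=5) for _ in detections]
# for frame in video:
#     actions = [agent.act(obs[i]) for i, agent in enumerate(agents)]
#     obs_next, rewards = env.step(actions)
#     for i, agent in enumerate(agents):
#         agent.update(obs[i], actions[i], rewards[i], obs_next[i])
#     obs = obs_next

A tabular learner keeps the example self-contained; the paper's deep variant would replace the per-agent Q-table with a network over visual features, but the independent per-agent update rule is the same.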