Long Short-Term Memory Networks Based on Particle Filter for Object Tracking

Cited by: 3
Authors
Liu, Yanli [1 ,2 ]
Cheng, Jingjing [1 ]
Zhang, Heng [1 ,2 ]
Zou, Hang [3 ]
Xiong, Naixue [4 ,5 ]
Affiliations
[1] East China Jiaotong Univ, Sch Informat Engn, Nanchang 330013, Jiangxi, Peoples R China
[2] Shanghai Dianji Univ, Sch Elect Informat, Shanghai 201306, Peoples R China
[3] Wuhan Res Inst Posts & Telecommun, Wuhan 430074, Peoples R China
[4] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
[5] Northeastern State Univ, Dept Math & Comp Sci, Tahlequah, OK 74464 USA
Funding
National Natural Science Foundation of China;
Keywords
Object tracking; Uncertainty; Prediction algorithms; Particle filters; Feature extraction; Video sequences; Trajectory; particle filter; deep neural network; long short-term memory;
DOI
10.1109/ACCESS.2020.3041294
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Due to the uncertainty of object motion, object tracking is a particularly difficult state estimation problem. Traditional particle-filter-based tracking methods are widely used, but they suffer from high computational complexity and poor real-time performance. Given sufficient training data, deep-neural-network-based methods can approximate arbitrary mappings well. In this paper, a structured Long Short-Term Memory network based on the particle filter (LSTM-PF) is proposed to learn and model video sequences with high uncertainty. The network draws on the idea of the particle filter: it uses a set of weighted particles to approximate the latent state and updates the latent state distribution through the LSTM gating structure according to Bayes' rule. We conduct comprehensive experiments on two benchmark datasets, OTB100 and VOT2016. The results show that the proposed tracker outperforms competing trackers, effectively reducing computational redundancy and improving tracking accuracy.
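The following is a minimal sketch of the particle-filter-style latent update described in the abstract, assuming a PyTorch implementation; the module name ParticlePredictor and the dimensions obs_dim, hidden_dim, and n_particles are illustrative assumptions, not details taken from the paper.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParticlePredictor(nn.Module):
    """Illustrative only: propagates a set of weighted latent particles with an
    LSTM cell and reweights them against the current frame feature (Bayes-style update)."""
    def __init__(self, obs_dim=128, hidden_dim=64, n_particles=32):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.n_particles = n_particles
        self.cell = nn.LSTMCell(obs_dim, hidden_dim)      # gated transition for each particle
        self.score = nn.Linear(hidden_dim + obs_dim, 1)   # learned particle log-likelihood

    def init_state(self):
        h = torch.zeros(self.n_particles, self.hidden_dim)
        c = torch.zeros(self.n_particles, self.hidden_dim)
        log_w = torch.full((self.n_particles,), -math.log(self.n_particles))  # uniform weights
        return h, c, log_w

    def forward(self, obs, h, c, log_w):
        # obs: (obs_dim,) feature extracted from the current frame, shared by all particles
        x = obs.unsqueeze(0).expand(self.n_particles, -1)
        h, c = self.cell(x, (h, c))                        # propagate each particle
        log_like = self.score(torch.cat([h, x], dim=-1)).squeeze(-1)
        log_w = F.log_softmax(log_w + log_like, dim=0)     # Bayesian reweighting + normalization
        ess = 1.0 / torch.exp(2.0 * log_w).sum()           # effective sample size
        if ess < self.n_particles / 2:                     # resample on weight degeneracy
            idx = torch.multinomial(log_w.exp(), self.n_particles, replacement=True)
            h, c = h[idx], c[idx]
            log_w = torch.full_like(log_w, -math.log(self.n_particles))
        state = (log_w.exp().unsqueeze(-1) * h).sum(dim=0) # weighted posterior mean of the latent state
        return state, h, c, log_w

In use, a per-frame loop would feed each frame's feature vector through forward and decode the returned state into a bounding-box prediction; that feature extractor and decoding head are omitted here.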
Pages: 216245 - 216258
Number of pages: 14