Learning attention for object tracking with adversarial learning network

Cited by: 5
Authors
Cheng, Xu [1 ,2 ,3 ]
Song, Chen [1 ]
Gu, Yongxiang [1 ]
Chen, Beijing [1 ,2 ,3 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Jiangsu Key Lab Big Data Anal Technol, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Surveillance; Deep learning; Object tracking; Generative adversarial learning; Recognition; Filters; Images;
DOI
10.1186/s13640-020-00535-1
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline Codes
0808; 0809;
Abstract
Artificial intelligence has been widely studied for solving intelligent surveillance analysis and security problems in recent years. Although many multimedia security approaches based on deep learning network models have been proposed, their performance still presents challenges that deserve in-depth research. On the one hand, the high computational complexity of current deep learning methods makes them hard to apply in real-time scenarios. On the other hand, fine-tuning the network online with only the object state of the first frame makes it difficult to obtain the specific features of a video, and thus fails to capture rich appearance variations of the object. To solve these two issues, this paper proposes an effective object tracking method with learned attention that achieves object localization and reduces training time within an adversarial learning framework. First, a prediction network is designed to track the object in video sequences. The object positions of the first ten frames are employed to fine-tune the prediction network, which can fully mine the specific features of an object. Second, the prediction network is integrated into a generative adversarial network framework, which randomly generates masks to capture object appearance variations by adaptively dropping out input features. Third, a spatial attention mechanism is presented to improve tracking performance. The proposed network can identify the mask that maintains the most robust features of the object over a long temporal span. Extensive experiments on two large-scale benchmarks demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
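The two ingredients the abstract names, randomly generated dropout masks over input features and a spatial attention map, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names, mask shapes, and the mean-pooled softmax attention are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dropout_masks(h, w, n_masks, drop_prob=0.3):
    # Each binary mask zeroes out a random subset of spatial positions,
    # mimicking the adversarial dropout of input features described
    # in the abstract (hypothetical formulation).
    return (rng.random((n_masks, h, w)) > drop_prob).astype(np.float32)

def spatial_attention(feat):
    # feat: (C, H, W). Channel-pooled energy turned into a softmax
    # weighting over spatial locations (one simple way to realize
    # "spatial attention"; the paper's exact design may differ).
    energy = feat.mean(axis=0)                  # (H, W)
    weights = np.exp(energy - energy.max())     # numerically stable softmax
    return weights / weights.sum()

# Toy feature map and candidate masks.
feat = rng.random((8, 5, 5)).astype(np.float32)
masks = random_dropout_masks(5, 5, n_masks=4)
att = spatial_attention(feat)

# Stand-in selection criterion: keep the mask whose attended response
# stays closest to the unmasked one (a proxy for "most robust features").
ref = (feat.mean(axis=0) * att).sum()
scores = [abs(((feat * m).mean(axis=0) * att).sum() - ref) for m in masks]
best = int(np.argmin(scores))
```

In the actual framework, the mask generator and the prediction network would be trained adversarially end to end; the selection step here only conveys the idea of preferring masks that preserve robust responses.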
Pages: 21