Convolutional neural network-based person tracking using overhead views

Times Cited: 36
Authors
Ahmad, Misbah [1 ]
Ahmed, Imran [1 ]
Khan, Fakhri Alam [1 ]
Qayum, Fawad [2 ]
Aljuaid, Hanan [3 ]
Affiliations
[1] Inst Management Sci, Ctr Excellence Informat Technol, Peshawar 25000, Kpk, Pakistan
[2] Univ Malakand, Dept Comp Sci & Informat Technol, Chakdara, Pakistan
[3] Princess Nourah Bint Abdulrahman Univ PNU, Coll Comp Sci & Informat Sci, Comp Sci Dept, Riyadh, Saudi Arabia
Keywords
Convolutional neural network; person detection; person tracking; overhead views; Faster region convolutional neural network; Generic Object Tracking Using Regression Networks; VISUAL TRACKING; VIDEO;
DOI
10.1177/1550147720934738
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
In video surveillance, person tracking is considered a challenging task. Numerous computer vision, machine learning, and deep learning-based techniques have been developed in recent years, the majority of which are based on frontal-view images or video sequences. The advancement of convolutional neural networks has reformed the way objects are tracked: network layers of convolutional neural network models trained on large numbers of images or video sequences improve the speed and accuracy of object tracking. In this work, the generalization performance of existing pre-trained deep learning models is investigated for overhead-view person detection and tracking under different experimental conditions. Generic Object Tracking Using Regression Networks (GOTURN), an object tracking method that has yielded outstanding tracking results in recent years, is explored for person tracking using overhead views. This work mainly focuses on overhead-view person tracking using a Faster region-based convolutional neural network (Faster-RCNN) in combination with the GOTURN architecture. The person is first detected in overhead-view video sequences and then tracked using the GOTURN tracking algorithm. The Faster-RCNN detection model achieved a true detection rate ranging from 90% to 93% with a minimum false detection rate of up to 0.5%. The GOTURN tracking algorithm achieved similar results, with a success rate ranging from 90% to 94%. Finally, the output results are discussed along with future directions.
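The detect-then-track coupling described in the abstract can be sketched structurally: a detector (Faster-RCNN in the paper) locates the person in the first frame, and a regression-based tracker (GOTURN in the paper) propagates the bounding box through the remaining frames. The `Detector` and `RegressionTracker` classes below are illustrative stand-ins, not the actual models, and the fixed box coordinates are hypothetical.

```python
class Detector:
    """Stand-in for the Faster-RCNN person detector (hypothetical output)."""

    def detect(self, frame):
        # A real detector returns scored (x, y, w, h) boxes per frame;
        # here we pretend the person occupies a fixed region.
        return (10, 10, 40, 80)


class RegressionTracker:
    """Stand-in for GOTURN, which regresses the new box from the previous one."""

    def __init__(self, frame, box):
        self.box = box

    def update(self, frame):
        # GOTURN crops around the previous box in the last frame, compares it
        # with a search region in the current frame, and regresses the new box;
        # here we simply shift the box to mimic motion.
        x, y, w, h = self.box
        self.box = (x + 1, y, w, h)
        return self.box


def track_person(frames):
    """Detect once in the first frame, then track: one box per frame."""
    detector = Detector()
    boxes = [detector.detect(frames[0])]
    tracker = RegressionTracker(frames[0], boxes[0])
    for frame in frames[1:]:
        boxes.append(tracker.update(frame))
    return boxes
```

In practice the detector would also be re-invoked periodically to re-initialize the tracker after drift, a common design choice in detect-then-track pipelines.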
Pages: 12
Related Papers (59 total)
[1]  
Ahmad M, 2019, 2019 IEEE 10TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), P1082, DOI [10.1109/uemcon47517.2019.8993109, 10.1109/UEMCON47517.2019.8993109]
[2]  
Ahmad M, 2019, 2019 IEEE 10TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), P627, DOI [10.1109/uemcon47517.2019.8992980, 10.1109/UEMCON47517.2019.8992980]
[3]  
Ahmad M, 2018, 2018 9TH IEEE ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), P746, DOI 10.1109/UEMCON.2018.8796595
[4]  
Ahmad M, 2019, INT J ADV COMPUT SC, V10, P567
[5]  
Ahmad M, 2019, INT J ADV COMPUT SC, V10, P522
[6]   Spatial-prior generalized fuzziness extreme learning machine autoencoder-based active learning for hyperspectral image classification [J].
Ahmad, Muhammad ;
Shabbir, Sidrah ;
Oliva, Diego ;
Mazzara, Manuel ;
Distefano, Salvatore .
OPTIK, 2020, 206
[7]   Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers [J].
Ahmad, Muhammad ;
Protasov, Stanislav ;
Khan, Adil Mehmood ;
Hussain, Rasheed ;
Khattak, Asad Masood ;
Khan, Wajahat Ali .
PLOS ONE, 2018, 13 (01)
[8]   Exploring Deep Learning Models for Overhead View Multiple Object Detection [J].
Ahmed, Imran ;
Din, Sadia ;
Jeon, Gwanggil ;
Piccialli, Francesco .
IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07) :5737-5744
[9]   Person detector for different overhead views using machine learning [J].
Ahmed, Imran ;
Ahmad, Misbah ;
Adnan, Awais ;
Ahmad, Awais ;
Khan, Murad .
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2019, 10 (10) :2657-2668
[10]   A robust algorithm for detecting people in overhead views [J].
Ahmed, Imran ;
Adnan, Awais .
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2018, 21 (01) :633-654