VaBUS: Edge-Cloud Real-Time Video Analytics via Background Understanding and Subtraction

Cited by: 17
Authors
Wang, Hanling [1 ,2 ]
Li, Qing [2 ]
Sun, Heyang [3 ]
Chen, Zuozhou [2 ]
Hao, Yingqian [4 ]
Peng, Junkun [1 ,2 ]
Yuan, Zhenhui [5 ]
Fu, Junsheng [6 ]
Jiang, Yong [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen 518055, Guangdong, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Guangdong, Peoples R China
[3] Southeast Univ, Sch Software, Nanjing 211189, Jiangsu, Peoples R China
[4] Jilin Univ, Sch Software, Changchun 130012, Jilin, Peoples R China
[5] Northumbria Univ, Dept Comp & Informat Sci, Newcastle Upon Tyne NE1 8ST, Tyne & Wear, England
[6] Zenseact, S-41756 Gothenburg, Sweden
Funding
National Natural Science Foundation of China
Keywords
Edge-cloud collaborative computing; semantic compression; video analytics; task-oriented communication system; neural network; efficient
DOI
10.1109/JSAC.2022.3221995
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Telecommunications]
Discipline Codes
0808; 0809
Abstract
Edge-cloud collaborative video analytics is transforming the way data is handled, processed, and transmitted from the ever-growing number of surveillance cameras around the world. To avoid wasting limited bandwidth on transmitting unrelated content, existing video analytics solutions usually perform temporal or spatial filtering to aggressively compress irrelevant pixels. However, most of them work in a context-agnostic way, oblivious to the circumstances in which the video content occurs and to the context-dependent characteristics under the hood. In this work, we propose VaBUS, a real-time video analytics system that leverages the rich contextual information of surveillance cameras to reduce bandwidth consumption through semantic compression. As a task-oriented communication system, VaBUS dynamically maintains the background image of the video on the edge with minimal system overhead and sends only high-confidence Regions of Interest (RoIs) to the cloud through adaptive weighting and encoding. With a lightweight experience-driven learning module, VaBUS achieves high offline inference accuracy even under network congestion. Experimental results show that VaBUS reduces bandwidth consumption by 25.0%-76.9% while achieving 90.7% accuracy for both the object detection and human keypoint detection tasks.
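To illustrate the background-understanding-and-subtraction idea the abstract describes, below is a minimal sketch: an exponential running-average background model on the edge, with foreground RoIs found by thresholded differencing. This is not VaBUS's actual pipeline; the function names, the `alpha` learning rate, and the difference threshold are hypothetical choices for illustration only.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: slowly absorb the current frame
    into the maintained background image (hypothetical learning rate)."""
    return (1 - alpha) * bg + alpha * frame

def extract_roi_mask(bg, frame, thresh=25.0):
    """Foreground mask: pixels whose absolute difference from the
    background exceeds a (hypothetical) threshold."""
    diff = np.abs(frame.astype(np.float32) - bg)
    return diff > thresh

def roi_bounding_box(mask):
    """Bounding box (x0, y0, x1, y1) of foreground pixels, or None.
    Only this small region would be encoded and sent to the cloud."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

# Toy example: a static dark background and a frame with a bright
# moving object (a 10x10 square) appearing in it.
bg = np.zeros((64, 64), dtype=np.float32)
frame = bg.copy()
frame[10:20, 30:40] = 255.0

mask = extract_roi_mask(bg, frame)
print(roi_bounding_box(mask))  # (30, 10, 40, 20)

# The edge then updates its background model with the new frame.
bg = update_background(bg, frame)
```

Transmitting only the cropped RoI (here a 10x10 patch of a 64x64 frame) rather than the full frame is what yields the bandwidth savings; the paper's system additionally weights and encodes RoIs adaptively by task confidence.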
Pages: 90-106 (17 pages)