Task-Oriented Communication for Edge Video Analytics

Cited by: 10
Authors
Shao, Jiawei [1 ]
Zhang, Xinjie [1 ]
Zhang, Jun [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
Keywords
Task analysis; Visual analytics; Feature extraction; Image edge detection; Servers; Entropy; Analytical models; Task-oriented communication; edge video analytics; deterministic information bottleneck; temporal entropy model
DOI
10.1109/TWC.2023.3314888
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808 ; 0809 ;
Abstract
With the development of artificial intelligence (AI) techniques and the increasing popularity of camera-equipped devices, many edge video analytics applications are emerging, calling for the deployment of computation-intensive AI models at the network edge. Edge inference is a promising solution that moves computation-intensive workloads from low-end devices to a powerful edge server for video analytics, but device-server communication remains a bottleneck due to limited bandwidth. This paper proposes a task-oriented communication framework for edge video analytics, where multiple devices collect visual sensory data and transmit informative features to an edge server for processing. To enable low-latency inference, this framework removes video redundancy in the spatial and temporal domains and transmits only the minimal information essential for the downstream task, rather than reconstructing the videos on the edge server. Specifically, it extracts compact task-relevant features based on the deterministic information bottleneck (IB) principle, which characterizes a tradeoff between the informativeness of the features and the communication cost. As the features of consecutive frames are temporally correlated, we propose a temporal entropy model (TEM) to reduce the bitrate by taking the previous features as side information in feature encoding. To further improve the inference performance, we build a spatial-temporal fusion module on the server to integrate features of the current and previous frames for joint inference. Extensive experiments on video analytics tasks demonstrate that the proposed framework effectively encodes task-relevant information of video data and achieves a better rate-performance tradeoff than existing methods.
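The two core ideas in the abstract can be illustrated with a toy numpy sketch, not taken from the paper: a deterministic-IB-style objective that adds a rate penalty to the task loss, and a temporal entropy model that codes the current frame's features conditioned on the previous frame's features as side information. All names (`feature_bits`, `tem_predict`) and the constant values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(v, mu, sigma):
    """Elementwise Gaussian CDF."""
    z = (v - mu) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + np.vectorize(erf)(z))

def feature_bits(feat, mu, sigma):
    """Bits to code uniformly quantized features under a Gaussian entropy
    model: P(q) = CDF(q + 0.5) - CDF(q - 0.5), bits = -log2 P(q)."""
    q = np.round(feat)
    p = norm_cdf(q + 0.5, mu, sigma) - norm_cdf(q - 0.5, mu, sigma)
    return float(np.sum(-np.log2(np.maximum(p, 1e-9))))

def tem_predict(prev_feat, w=0.9, noise_sigma=0.5):
    """Toy temporal entropy model: predict the current features' Gaussian
    parameters from the previous frame's features (side information)."""
    return w * prev_feat, np.full_like(prev_feat, noise_sigma)

rng = np.random.default_rng(0)
prev = rng.normal(0.0, 2.0, size=256)
curr = 0.9 * prev + rng.normal(0.0, 0.5, size=256)  # temporally correlated

# Without side information, a fixed zero-mean prior must cover the full
# dynamic range of the features; with the TEM, only the small temporal
# residual has to be coded, so the bitrate drops.
bits_independent = feature_bits(curr, 0.0, 2.0)
mu, sigma = tem_predict(prev)
bits_conditional = feature_bits(curr, mu, sigma)
assert bits_conditional < bits_independent

# Deterministic-IB-style training objective: a task loss (here a
# placeholder cross-entropy value) plus a rate penalty, with beta
# trading inference performance against communication cost.
task_loss = 0.31
beta = 0.01
total_loss = task_loss + beta * bits_conditional
```

Sweeping `beta` traces out the rate-performance tradeoff the abstract describes: larger values force more compact features at some cost in task accuracy.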
Pages: 4141-4154
Page count: 14