Towards automatic model specialization for edge video analytics

Cited by: 6
Authors
Rivas, Daniel [1 ,2 ]
Guim, Francesc [3 ]
Polo, Jorda [1 ]
Silva, Pubudu M. [4 ]
Berral, Josep Ll. [1 ]
Carrera, David [1 ]
Affiliations
[1] Barcelona Supercomputing Center (BSC), C/ Jordi Girona 1-3, Barcelona 08034, Spain
[2] Universitat Politècnica de Catalunya (UPC), Campus Nord, Edifici D6, C/ Jordi Girona 1-3, Barcelona 08034, Spain
[3] Intel Corporation Iberia, Barcelona, Spain
[4] Intel Corporation, Hillsboro, OR, USA
Source
Future Generation Computer Systems: The International Journal of eScience | 2022, Vol. 134
Keywords
Model specialization; Computer vision; Edge cloud; COVA framework; Real-time video analytics
DOI
10.1016/j.future.2022.03.039
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
The number of cameras deployed at the edge of the network grows by the day, while emerging use cases, such as smart cities and autonomous driving, increasingly expect images to be analyzed in real time by ever more accurate and complex neural networks. Unfortunately, state-of-the-art accuracy comes at a computational cost rarely available in the edge cloud. At the same time, strict latency constraints and the vast bandwidth that edge cameras generate mean we can no longer rely on offloading the task to a centralized cloud. Consequently, there is a need for a meeting point between the resource-constrained edge cloud and accurate real-time video analytics. If state-of-the-art models are too expensive to run at the edge, and lightweight models are not accurate enough for edge use cases, one solution is to demand less from the lightweight model and specialize it to a narrower scope of the problem, a technique known as model specialization. By specializing a model to the context of a single camera, we can boost its accuracy while keeping its computational cost constant. However, this requires one training run per camera, which quickly becomes infeasible unless the entire process is fully automated. In this paper, we present and evaluate COVA (Contextually Optimized Video Analytics), a framework that assists in the automatic specialization of models for video analytics on edge cloud cameras. COVA aims to automatically improve the accuracy of lightweight models by specializing them to the context in which they will be deployed. Moreover, we discuss and analyze each step of the process to understand the trade-offs each one entails. Using COVA, we demonstrate that the whole pipeline can be effectively automated by leveraging large neural networks as teachers, whose predictions are used to train and specialize lightweight neural networks. Results show that COVA automatically improves pre-trained models by an average of 21% mAP across the different scenes of the VIRAT dataset. © 2022 The Authors. Published by Elsevier B.V.
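The teacher-student step the abstract describes can be sketched in a few lines: a large pre-trained detector pseudo-labels frames from the target camera, and its confident, in-scope detections become the training targets for a lightweight student. The following is a minimal, illustrative sketch assuming PyTorch and torchvision; it is not COVA's actual code, and the model choices, the CONF_THRESHOLD and RELEVANT values, and the random placeholder frames are assumptions for demonstration only.

import torch
from torchvision.models import detection

# Large, accurate teacher: produces pseudo-labels on the target camera's frames.
teacher = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Lightweight student to be specialized to this camera's context.
student = detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
student.train()

optimizer = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)

CONF_THRESHOLD = 0.8             # keep only confident teacher detections (assumed value)
RELEVANT = torch.tensor([1, 3])  # narrow the scope, e.g. COCO person/car (assumed classes)

def pseudo_label(images):
    """Run the teacher; keep confident, in-scope detections as training targets."""
    with torch.no_grad():
        outputs = teacher(images)
    targets = []
    for out in outputs:
        keep = (out["scores"] >= CONF_THRESHOLD) & torch.isin(out["labels"], RELEVANT)
        targets.append({"boxes": out["boxes"][keep], "labels": out["labels"][keep]})
    return targets

# One specialization step over frames from the target camera (placeholder data).
frames = [torch.rand(3, 320, 320) for _ in range(4)]  # stand-in for real camera frames
targets = pseudo_label(frames)
loss_dict = student(frames, targets)  # torchvision detection models return losses in train mode
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()

In COVA, a step like this would run once per camera, on frames sampled from that camera's own feed and with pseudo-labels restricted to the scene's narrow scope, which is precisely why the abstract argues the whole pipeline must be fully automated.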
Pages: 399-413
Page count: 15