Fast Surveillance Video Retrieval Model Based on Tolerant Training and Privacy Protection

Cited by: 0
Authors
Qin H. [1 ,2 ]
Wang P.-H. [1 ,2 ]
Zhang R.-F. [1 ,2 ]
Qin Z.-Y. [3 ]
Affiliations
[1] School of Cyber Science and Engineering, Xi’an Jiaotong University, Xi’an
[2] Ministry of Education Key Laboratory for Intelligent Networks and Network Security, Xi’an Jiaotong University, Xi’an
[3] School of Software Engineering, Xi’an Jiaotong University, Xi’an
Source
Ruan Jian Xue Bao/Journal of Software | 2023, Vol. 34, Issue 03
Keywords
curriculum learning; knowledge distillation; privacy protection; video retrieval
DOI
10.13328/j.cnki.jos.006790
CLC Number
Subject Classification Number
Abstract
Keyframe retrieval and attribute search over surveillance video have many application scenarios in traffic, security, education, and other fields. Applying deep learning models to massive video data alleviates manual effort to a certain extent, but it suffers from privacy disclosure, heavy consumption of computing resources, and long processing times. To address these scenarios, this study proposes a secure and fast video retrieval model for massive surveillance video. Specifically, given the abundant computing power available in the cloud and the limited computing power inside a surveillance camera, a heavyweight model is deployed in the cloud and a customized knowledge distillation is performed with the proposed tolerant training strategy; the distilled lightweight model is then deployed inside the surveillance camera. At the same time, a local encryption algorithm encrypts the sensitive parts of each image, and, combined with cloud TEE technology and a user authorization mechanism, privacy protection is achieved at very low resource cost. By reasonably controlling the "tolerance" of the distillation strategy, the time consumed by the camera-side video ingestion stage and the cloud-side retrieval stage can be balanced, ensuring extremely low retrieval latency on the premise of extremely high accuracy. Compared with traditional retrieval methods, the proposed model is secure, efficient, scalable, and low-latency. Experimental results show that the proposed model achieves a 9×–133× speedup over traditional retrieval methods on multiple open datasets. © 2023 Chinese Academy of Sciences. All rights reserved.
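The distillation step described in the abstract follows the general teacher-student scheme: a heavyweight cloud model supplies temperature-softened targets that a lightweight camera-side model is trained to match. A minimal NumPy sketch of the standard distillation loss (Hinton et al., reference [30]) is shown below; the "tolerant training" variant proposed in the paper is not specified here, so this illustrates only the baseline mechanism, and all function names and the `T`/`alpha` values are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T produces a softer distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of a soft KL term (teacher -> student, at temperature T)
    and a hard cross-entropy term on the ground-truth labels."""
    p_t = softmax(teacher_logits, T)   # soft targets from the heavyweight model
    p_s = softmax(student_logits, T)   # soft predictions of the lightweight model
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    # T**2 rescales the soft-term gradients, as in Hinton et al. (2015).
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * hard)
```

In this scheme the student is rewarded for reproducing the teacher's full output distribution, not just its top-1 prediction; relaxing how closely the match must hold is one natural place where a "tolerance" knob could trade camera-side cost against cloud-side retrieval accuracy.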
Pages: 1292-1309
Number of pages: 17
Related Papers
52 items in total
  • [21] Jiang L, Meng D, Mitamura T, Hauptmann AG., Easy samples first: Self-paced reranking for zero-example multimedia search, Proc. of the ACM Conf. on Multimedia (MM 2014), pp. 547-556, (2014)
  • [22] Platanios EA, Stretcu O, Neubig G, Poczos B, Mitchell TM., Competence-based curriculum learning for neural machine translation, Proc. of the Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2019), 1, pp. 1162-1172, (2019)
  • [23] Tay Y, Wang S, Tuan LA, Fu J, Phan MC, Yuan X, Rao J, Hui SC, Zhang A., Simple and effective curriculum pointer-generator networks for reading comprehension over long narratives, Proc. of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pp. 4922-4931, (2020)
  • [24] El-Bouri R, Eyre D, Watkinson P, Zhu T, Clifton DA., Student-teacher curriculum learning via reinforcement learning: Predicting hospital inpatient admission location, Proc. of the 37th Int’l Conf. on Machine Learning (ICML 2020), pp. 2848-2857, (2020)
  • [25] Florensa C, Held D, Wulfmeier M, Zhang M, Abbeel P., Reverse curriculum generation for reinforcement learning, Proc. of the Conf. on Robot Learning, pp. 482-495, (2017)
  • [26] Narvekar S, Sinapov J, Stone P., Autonomous task sequencing for customized curriculum design in reinforcement learning, Proc. of the Int’l Joint Conf. on Artificial Intelligence, pp. 2536-2542, (2017)
  • [27] Qu M, Tang J, Han J., Curriculum learning for heterogeneous star network embedding via deep reinforcement learning, Proc. of the 11th ACM Int’l Conf. on Web Search and Data Mining (WSDM 2018), pp. 468-476, (2018)
  • [28] Gong C, Yang J, Tao D., Multi-modal curriculum learning over graphs, ACM Trans. on Intelligent Systems and Technology, 10, 4, pp. 1-25, (2019)
  • [29] Guo Y, Chen Y, Zheng Y, Zhao P, Chen J, Huang J, Tan M., Breaking the curse of space explosion: Towards efficient NAS with curriculum search, Proc. of the Int’l Conf. on Machine Learning, pp. 3822-3831, (2020)
  • [30] Hinton G, Vinyals O, Dean J, et al., Distilling the knowledge in a neural network, (2015)