Large-Scale Training Framework for Video Annotation

Cited by: 0
Authors
Hwang, Seong Jae [1 ,2 ]
Lee, Joonseok [2 ]
Varadarajan, Balakrishnan [2 ]
Gordon, Ariel [2 ]
Xu, Zheng [2 ]
Natsev, Apostol [2 ]
Affiliations
[1] Univ Wisconsin, Madison, WI 53706 USA
[2] Google Res, Mountain View, CA USA
Source
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING | 2019
Keywords
Scalability; Distributed framework; Video annotation; MapReduce
DOI
10.1145/3292500.3330653
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Video is one of the richest sources of information available online, but extracting deep insights from video content at internet scale is still an open problem, both in terms of depth and breadth of understanding and in terms of scale. Over the last few years, the field of video understanding has made great strides due to the availability of large-scale video datasets and core advances in image, audio, and video modeling architectures. However, the state-of-the-art architectures on small-scale datasets are frequently impractical to deploy at internet scale, both in terms of the ability to train such deep networks on hundreds of millions of videos and the ability to deploy them for inference on billions of videos. In this paper, we present a MapReduce-based training framework, which exploits both data parallelism and model parallelism to scale training of complex video models. The proposed framework uses alternating optimization and full-batch fine-tuning, and supports large Mixture-of-Experts classifiers with hundreds of thousands of mixtures, which enables a trade-off between model depth and breadth, and the ability to shift model capacity between shared (generalization) layers and per-class (specialization) layers. We demonstrate that the proposed framework is able to reach state-of-the-art performance on the largest public video datasets, YouTube-8M and Sports-1M, and can scale to 100 times larger datasets.
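For readers unfamiliar with the per-class Mixture-of-Experts classifiers mentioned in the abstract, the sketch below shows a minimal NumPy version of the kind of model used in YouTube-8M-style multi-label video annotation: a per-class softmax gate over a small set of logistic experts, plus a "dummy" expert that always predicts zero. This is an illustration only, not the authors' implementation; all function names, shapes, and the dummy-expert convention are assumptions.

# Minimal illustrative sketch (not the paper's code) of a per-class
# mixture-of-experts classifier for multi-label video annotation.
import numpy as np

def moe_predict(features, gate_w, expert_w):
    """
    features : (batch, dim)               pooled video-level features
    gate_w   : (dim, classes, experts+1)  gating weights (last slot = dummy expert)
    expert_w : (dim, classes, experts)    per-expert logistic-regression weights
    returns  : (batch, classes)           per-class probabilities
    """
    # Gate: softmax over experts, computed independently for every class.
    gate_logits = np.einsum('bd,dce->bce', features, gate_w)
    gate = np.exp(gate_logits - gate_logits.max(axis=-1, keepdims=True))
    gate /= gate.sum(axis=-1, keepdims=True)

    # Experts: independent logistic regressors per (class, expert) pair.
    expert_logits = np.einsum('bd,dce->bce', features, expert_w)
    expert_prob = 1.0 / (1.0 + np.exp(-expert_logits))

    # Mixture: gate-weighted sum of expert probabilities;
    # the dummy expert contributes probability 0 and is dropped.
    return (gate[..., :-1] * expert_prob).sum(axis=-1)

# Toy usage: 4 videos, 128-dim features, 10 classes, 2 experts per class.
rng = np.random.default_rng(0)
probs = moe_predict(rng.normal(size=(4, 128)),
                    rng.normal(size=(128, 10, 3)),
                    rng.normal(size=(128, 10, 2)))
print(probs.shape)  # (4, 10)

Because each class owns its gate and experts, capacity can be added per class (specialization) independently of any shared feature layers (generalization), which is the trade-off the abstract refers to.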
Pages: 2394-2402
Page count: 9