Efficient Video Captioning on Heterogeneous System Architectures

Cited by: 3
Authors
Huang, Horng-Ruey [1 ]
Hong, Ding-Yong [1 ]
Wu, Jan-Jan [1 ]
Liu, Pangfeng [2 ]
Hsu, Wei-Chung [2 ]
Affiliations
[1] Acad Sinica, Inst Informat Sci, Taipei, Taiwan
[2] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei, Taiwan
Source
2021 IEEE 35TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS) | 2021
Keywords
Video captioning; heterogeneous system architectures; model scheduling; dynamic programming; pipelining;
DOI
10.1109/IPDPS49936.2021.00112
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
Video captioning is the core technology driving the development of many important multidisciplinary applications, such as AI-assisted medical diagnosis, storytelling through videos, video question answering, and lip-reading, to name a few. Video captioning employs a hybrid CNN+RNN neural network model to translate video scenes into natural language descriptions. For deep learning inference, a typical approach is to run both the CNN and the RNN on a GPU. Such a GPU-only approach often suffers long inference time due to underutilization of the computing power offered by the CPU+GPU heterogeneous system architecture, which is common in modern computers. This work is an early effort to tackle the performance issue of performing deep learning inference with a hybrid CNN+RNN model on a heterogeneous system with a CPU and a GPU. The task is challenging because (1) the CNN and the RNN exhibit very different computing behaviors, which raises the question of how to split the two models into computing tasks and properly assign the tasks to the CPU and the GPU so as to minimize the inference time for a video frame, and (2) data dependencies exist between the CNN and the RNN within a video frame, as well as between the adjacent RNNs across two video frames, which prohibit full parallelization of the hybrid model. To solve these two problems, we propose two optimizations: a fine-grained scheduling scheme that maps computations to devices within a video frame, and a pipeline scheduling scheme that exploits maximum parallelism across the execution of video frames. To facilitate these optimizations, we also develop an accurate regression-based cost model to predict the computation time of CNN/RNN operations and the communication time for moving data between the CPU and the GPU. Experimental results show that our optimizations improve the performance of video captioning by up to 3.24x on the CPU+GPU system, compared with GPU-only execution.
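The abstract describes a regression-based cost model that predicts per-operation runtime, which a scheduler can then use to assign work to the CPU or the GPU. The sketch below illustrates the general idea only: a least-squares fit of measured runtimes against a single workload feature. The feature choice (GFLOPs), the sample values, and the function names are illustrative assumptions, not the authors' actual model.

```python
# Hedged sketch of a regression-based cost model: fit measured op runtimes
# against a simple workload feature, then predict runtimes for unseen ops.
# Feature, samples, and names are assumptions for illustration.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical profiled samples on one device: (GFLOPs of op, measured ms).
samples = [(1.0, 2.1), (2.0, 4.0), (4.0, 8.2), (8.0, 16.1)]
a, b = fit_linear([s[0] for s in samples], [s[1] for s in samples])

def predict_ms(gflops):
    """Predicted runtime in milliseconds for an op of the given size."""
    return a * gflops + b
```

Given one such fitted model per device (plus a similar model for CPU-GPU transfer time as a function of data size), a scheduler can compare predicted costs and place each operation on the device that minimizes total compute plus communication time, which is the role the cost model plays in the paper's scheduling schemes.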
Pages: 1035-1045
Page count: 11
Related papers
50 records
  • [31] Incorporating Textual Similarity in Video Captioning Schemes
    Gkountakos, Konstantinos
    Dimou, Anastasios
    Papadopoulos, Georgios Th.
    Daras, Petros
    2019 IEEE INTERNATIONAL CONFERENCE ON ENGINEERING, TECHNOLOGY AND INNOVATION (ICE/ITMC), 2019,
  • [32] Video captioning with global and local text attention
    Peng, Yuqing
    Wang, Chenxi
    Pei, Yixin
    Li, Yingjun
    VISUAL COMPUTER, 2022, 38 (12): 4267 - 4278
  • [33] EDS: Exploring deeper into semantics for video captioning
    Lou, Yibo
    Zhang, Wenjie
    Song, Xiaoning
    Hua, Yang
    Wu, Xiao-Jun
    PATTERN RECOGNITION LETTERS, 2024, 186 : 133 - 140
  • [34] Global semantic enhancement network for video captioning
    Luo, Xuemei
    Luo, Xiaotong
    Wang, Di
    Liu, Jinhui
    Wan, Bo
    Zhao, Lin
    PATTERN RECOGNITION, 2024, 145
  • [35] Exploiting the local temporal information for video captioning
    Wei, Ran
    Mi, Li
    Hu, Yaosi
    Chen, Zhenzhong
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2020, 67 (67)
  • [36] Ada-SwinBERT: Adaptive Token Selection for Efficient Video Captioning with Online Self-Distillation
    Cao, Qianwen
    Huang, Heyan
    Liao, Minpeng
    Mao, Xianling
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 7 - 12
  • [37] An efficient deep learning-based video captioning framework using multi-modal features
    Varma, Soumya
    James, Dinesh Peter
    EXPERT SYSTEMS, 2021,
  • [38] A NOVEL ATTRIBUTE SELECTION MECHANISM FOR VIDEO CAPTIONING
    Xiao, Huanhou
    Shi, Jinglun
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 619 - 623
  • [40] Convolutional Reconstruction-to-Sequence for Video Captioning
    Wu, Aming
    Han, Yahong
    Yang, Yi
    Hu, Qinghua
    Wu, Fei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (11) : 4299 - 4308