Multimodal Context Fusion Based Dense Video Captioning Algorithm

Cited: 0
Authors
Li, Meiqi [1]
Zhou, Ziwei [1]
Affiliations
[1] Univ Sci & Technol Liaoning, Sch Comp Sci & Software Engn, Anshan 114051, Peoples R China
Keywords
Dense Video Description; Transformer; Multimodal feature fusion; Event context; SCN Decoder
DOI
Not available
Chinese Library Classification (CLC)
T [Industrial Technology]
Subject Classification Code
08
Abstract
The core task of dense video description is to identify all events occurring in an unedited video and to generate a textual description for each of them. The task has applications in areas such as assistance for visually impaired individuals, news headline generation, and human-computer interaction. However, existing dense video description models often overlook the role of textual information in the video (e.g., road signs, subtitles), as well as the contextual relationships between events, both of which are crucial for accurate description generation. To address these issues, this paper proposes a multimodal dense video description approach based on event-context fusion. The model uses a C3D network to extract visual features from the video and integrates OCR to extract textual information, thereby enhancing semantic understanding of the video content. During feature extraction, sliding-window and temporal-alignment techniques are applied to ensure the temporal consistency of the visual, audio, and textual features. A multimodal context fusion encoder captures the temporal and semantic relationships between events and deeply integrates the multimodal features, and an SCN decoder then generates descriptions word by word, improving both semantic consistency and fluency. The model is trained and evaluated on the MSVD and MSR-VTT datasets and compared with several popular models. Experimental results show significant improvements in CIDEr scores, reaching 98.8 and 53.7 on the two datasets, respectively. In addition, ablation studies are conducted to comprehensively assess the effectiveness and stability of each component of the model.
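As a rough illustration of the fusion stage described in the abstract, the following Python (PyTorch) sketch projects visual (C3D-style), audio, and OCR-text features to a shared width, resamples them onto a common temporal grid, and encodes the concatenated sequence with a Transformer encoder. The class name, feature dimensions, and interpolation-based alignment are illustrative assumptions, not the authors' implementation; the paper's actual extractors and sliding-window alignment are replaced here by dummy tensors and simple resampling.

# Minimal sketch (not the authors' code) of a multimodal context fusion step:
# per-modality features are projected to a shared width, aligned to a common
# temporal grid, concatenated, and contextualized with a Transformer encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalContextFusionEncoder(nn.Module):
    def __init__(self, d_visual=4096, d_audio=128, d_text=768, d_model=512, n_layers=2):
        super().__init__()
        # Project each modality into the shared model dimension (sizes assumed).
        self.proj_v = nn.Linear(d_visual, d_model)
        self.proj_a = nn.Linear(d_audio, d_model)
        self.proj_t = nn.Linear(d_text, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    @staticmethod
    def align(x, n_steps):
        # Resample a (B, T, D) sequence to n_steps time steps so that the
        # visual, audio, and OCR-text streams share one temporal grid
        # (a stand-in for the sliding-window alignment described in the paper).
        return F.interpolate(x.transpose(1, 2), size=n_steps, mode="linear",
                             align_corners=False).transpose(1, 2)

    def forward(self, visual, audio, text):
        n = visual.size(1)                    # visual clip count as reference grid
        v = self.proj_v(visual)
        a = self.align(self.proj_a(audio), n)
        t = self.align(self.proj_t(text), n)
        fused = torch.cat([v, a, t], dim=1)   # concatenate along the time axis
        return self.encoder(fused)            # contextualized multimodal features

# Toy usage with random tensors standing in for C3D / audio / OCR embeddings.
if __name__ == "__main__":
    visual = torch.randn(1, 20, 4096)   # 20 C3D clip features
    audio  = torch.randn(1, 50, 128)    # 50 audio frames
    text   = torch.randn(1, 8, 768)     # 8 OCR-text embeddings
    model = MultimodalContextFusionEncoder()
    print(model(visual, audio, text).shape)   # torch.Size([1, 60, 512])

The encoder output would then feed a caption decoder (an SCN decoder in the paper) that emits the description word by word; that stage is omitted here.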
Pages: 1061-1072
Number of pages: 12