CAPTURING LONG-RANGE DEPENDENCIES IN VIDEO CAPTIONING

Cited by: 0
Authors
Lee, Jaeyoung [1 ,3 ]
Lee, Yekang [1 ]
Seong, Sihyeon [1 ,3 ]
Kim, Kyungsu [2 ]
Kim, Sungjin [2 ]
Kim, Junmo [1 ,3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] Samsung Res, Seoul, South Korea
[3] Mofl Inc, Seoul, South Korea
Source
2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | 2019
Keywords
Video captioning; non-local block; long short-term memory; long-range dependency; video representation;
DOI
10.1109/icip.2019.8803143
CLC Number
TB8 [Photography]
Subject Classification Code
0804
Abstract
Most video captioning networks rely on recurrent models such as long short-term memory (LSTM). However, these recurrent models suffer from a long-range dependency problem and are therefore insufficient for video encoding. To overcome this limitation, several studies have modeled the relationships between objects or entities and shown excellent performance in video classification and video captioning. In this study, we analyze a video captioning network with a non-local block in terms of its temporal capacity, and we introduce a video captioning method that captures long-range temporal dependencies with a non-local block. The proposed model uses local and non-local features independently. We evaluate our approach on the Microsoft Video Description Corpus (MSVD, also known as YouTube2Text). The experimental results show that a non-local block applied along the temporal axis can solve the long-range dependency problem of the LSTM on video captioning datasets.
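To illustrate the idea in the abstract, the sketch below shows a non-local block applied along the temporal axis of frame-level video features, kept alongside a separate LSTM stream of local features. This is not the authors' code: it is a minimal PyTorch sketch assuming a scaled dot-product (embedded-Gaussian-style) pairwise function, and the class name TemporalNonLocalBlock, the inner dimension, and the 1024-dimensional frame features are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalNonLocalBlock(nn.Module):
    """Self-attention over time: every frame feature attends to every other frame."""

    def __init__(self, in_dim: int, inner_dim: int = 512):
        super().__init__()
        self.theta = nn.Linear(in_dim, inner_dim)  # query projection
        self.phi = nn.Linear(in_dim, inner_dim)    # key projection
        self.g = nn.Linear(in_dim, inner_dim)      # value projection
        self.out = nn.Linear(inner_dim, in_dim)    # map back to the input size

    def forward(self, x):
        # x: (batch, T, in_dim) frame-level features, e.g. per-frame CNN outputs
        q, k, v = self.theta(x), self.phi(x), self.g(x)
        # (B, T, T) attention over all pairs of time steps, regardless of distance
        attn = F.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        y = self.out(attn @ v)  # non-local response aggregated over the whole clip
        return x + y            # residual connection keeps the original local signal


# Toy usage (illustrative shapes): non-local features next to an LSTM's local features.
frames = torch.randn(2, 30, 1024)                              # 2 clips, 30 frames, 1024-d
non_local_feats = TemporalNonLocalBlock(1024)(frames)          # (2, 30, 1024)
local_feats, _ = nn.LSTM(1024, 512, batch_first=True)(frames)  # (2, 30, 512)
```

Because the attention matrix connects every pair of time steps directly, information from distant frames does not have to survive many recurrent updates, which is the long-range dependency issue the abstract attributes to the LSTM; the residual connection lets the two feature streams be used independently by the captioning decoder.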
Pages: 1880-1884
Number of pages: 5
Related Papers
50 records in total
[41] Lee, Jin Young. Deep multimodal embedding for video captioning. MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78(22): 31793-31805.
[42] Ma, Miao; Wang, Bolong. A Grey Relational Analysis based Evaluation Metric for Image Captioning and Video Captioning. PROCEEDINGS OF 2017 IEEE INTERNATIONAL CONFERENCE ON GREY SYSTEMS AND INTELLIGENT SERVICES (GSIS), 2017: 76-81.
[43] Liu, Chunsheng; Zhang, Xiao; Chang, Faliang; Li, Shuang; Hao, Penghui; Lu, Yansha; Wang, Yinhai. Traffic Scenario Understanding and Video Captioning via Guidance Attention Captioning Network. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25(5): 3615-3627.
[44] Shi, Yaya; Xu, Haiyang; Yuan, Chunfeng; Li, Bing; Hu, Weiming; Zha, Zheng-Jun. Learning Video-Text Aligned Representations for Video Captioning. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19(2).
[45] Velda, Vania; Immanuel, Steve Andreas; Hendria, Willy Fitra; Jeong, Cheol. Improving distinctiveness in video captioning with text-video similarity. IMAGE AND VISION COMPUTING, 2023, 136.
[46] Le, The Van; Lee, Jin Young. Quality Enhancement Based Video Captioning in Video Communication Systems. IEEE ACCESS, 2024, 12: 40989-40999.
[47] Zhang, Chenyang; Tian, Yingli. Automatic Video Captioning via Multi-channel Sequential Encoding. COMPUTER VISION - ECCV 2016 WORKSHOPS, PT II, 2016, 9914: 146-161.
[48] Nitta, Tomoya; Fukuzawa, Takumi; Tamaki, Toru. Fine-Grained Length Controllable Video Captioning With Ordinal Embeddings. IEEE ACCESS, 2024, 12: 189667-189688.
[49] Han, Seung-Ho; Go, Bo-Won; Choi, Ho-Jin. Multiple Videos Captioning Model for Video Storytelling. 2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP), 2019: 355-358.
[50] Huang, Horng-Ruey; Hong, Ding-Yong; Wu, Jan-Jan; Liu, Pangfeng; Hsu, Wei-Chung. Efficient Video Captioning on Heterogeneous System Architectures. 2021 IEEE 35TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS), 2021: 1035-1045.