CAPTURING LONG-RANGE DEPENDENCIES IN VIDEO CAPTIONING

Cited by: 0
Authors
Lee, Jaeyoung [1 ,3 ]
Lee, Yekang [1 ]
Seong, Sihyeon [1 ,3 ]
Kim, Kyungsu [2 ]
Kim, Sungjin [2 ]
Kim, Junmo [1 ,3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] Samsung Res, Seoul, South Korea
[3] Mofl Inc, Seoul, South Korea
Source
2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | 2019
Keywords
Video captioning; non-local block; long short-term memory; long-range dependency; video representation
DOI
10.1109/icip.2019.8803143
Chinese Library Classification (CLC)
TB8 [Photographic technology]
Discipline code
0804
Abstract
Most video captioning networks rely on recurrent models such as long short-term memory (LSTM). However, these recurrent models suffer from a long-range dependency problem and are therefore insufficient for video encoding. To overcome this limitation, several studies have investigated the relationships between objects or entities and have shown excellent performance in video classification and video captioning. In this study, we analyze a video captioning network with a non-local block in terms of temporal capacity. We introduce a video captioning method that captures long-range temporal dependencies with a non-local block. The proposed model uses local and non-local features independently. We evaluate our approach on the Microsoft Video Description Corpus (MSVD, YouTube2Text). The experimental results show that a non-local block applied along the temporal axis can solve the long-range dependency problem of the LSTM on video captioning datasets.
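To make the key idea concrete, the following is a minimal NumPy sketch of a non-local operation applied along the temporal axis of a video feature sequence: every frame attends to every other frame, so dependencies are captured in a single step regardless of temporal distance. This is a simplified illustration, not the authors' implementation — the learned embedding functions (the 1x1 convolutions for theta, phi, and g in the standard non-local block) are replaced by identity mappings, and the `temporal_nonlocal` function name is an assumption.

```python
import numpy as np

def temporal_nonlocal(x, scale=None):
    """Simplified non-local (self-attention) operation over the time axis.

    x: array of shape (T, C), one C-dim feature vector per frame.
    Identity embeddings stand in for the block's learned 1x1 convolutions
    (an assumption made for clarity, not the paper's exact design).
    """
    T, C = x.shape
    if scale is None:
        scale = 1.0 / np.sqrt(C)  # dot-product scaling, as in attention
    # Pairwise affinities between all time steps (T x T): each frame
    # attends to every other frame, so the path length between any two
    # moments in the video is one, unlike an LSTM's step-by-step chain.
    logits = (x @ x.T) * scale
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over time
    # Aggregate features from all frames, then add a residual connection
    # so the local (per-frame) features are preserved alongside the
    # non-local ones.
    return x + attn @ x

# Example: 8 frames with 16-dim features; output keeps the input shape.
feats = np.random.default_rng(0).normal(size=(8, 16))
out = temporal_nonlocal(feats)
print(out.shape)  # (8, 16)
```

The residual form `x + attn @ x` mirrors how non-local blocks are inserted into an existing backbone: the block refines features without discarding the local representation, which matches the paper's point about using local and non-local features independently.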
Pages: 1880-1884
Page count: 5