Cross-Sentence Temporal and Semantic Relations in Video Activity Localisation

Cited by: 48
Authors
Huang, Jiabo [1 ]
Liu, Yang [2 ]
Gong, Shaogang [1 ]
Jin, Hailin [3 ]
Affiliations
[1] Queen Mary Univ London, London, England
[2] Peking Univ, WICT, Beijing, Peoples R China
[3] Adobe Res, San Jose, CA USA
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
Keywords
LANGUAGE;
DOI
10.1109/ICCV48922.2021.00711
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video activity localisation has recently attracted increasing attention due to its practical value in automatically localising the most salient visual segments corresponding to their language descriptions (sentences) from untrimmed and unstructured videos. For supervised model training, a temporal annotation of both the start and end time index of each video segment for a sentence (a video moment) must be given. This is not only very expensive but also sensitive to ambiguity and subjective annotation bias, making it a much harder task than image labelling. In this work, we develop a more accurate weakly-supervised solution by introducing Cross-Sentence Relations Mining (CRM) in video moment proposal generation and matching when only a paragraph description of activities, without per-sentence temporal annotation, is available. Specifically, we explore two cross-sentence relational constraints: (1) temporal ordering and (2) semantic consistency among sentences in a paragraph description of video activities. Existing weakly-supervised techniques only consider within-sentence video segment correlations in training, without considering cross-sentence paragraph context. This can mislead training, because the ambiguous expressions of individual sentences in isolation yield visually indiscriminate video moment proposals. Experiments on two publicly available activity localisation datasets show the advantages of our approach over state-of-the-art weakly-supervised methods, especially when the video activity descriptions become more complex.
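The temporal-ordering constraint described above can be illustrated with a small sketch: if a paragraph's sentences describe activities in the order they occur, the predicted moment for each sentence should not start before the moment of the preceding sentence. The hinge-style penalty below is an illustrative assumption, not the paper's exact formulation; `centers` and `margin` are hypothetical names for normalised moment centres (one per sentence) and a slack threshold.

```python
def temporal_order_loss(centers, margin=0.1):
    """Illustrative cross-sentence temporal ordering penalty.

    centers: list of predicted moment centres (normalised to [0, 1]),
             one per sentence, in paragraph order.
    margin:  minimum gap expected between consecutive moments.

    Each consecutive pair (a, b) should satisfy b - a >= margin;
    violations contribute a hinge penalty max(0, margin - (b - a)).
    """
    return sum(max(0.0, margin - (b - a))
               for a, b in zip(centers, centers[1:]))
```

For ordered predictions such as `[0.1, 0.4, 0.9]` the penalty is zero; a pair predicted out of order, e.g. `[0.5, 0.2]`, incurs a positive loss, nudging a weakly-supervised model towards paragraph-consistent proposals.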
Pages: 7179-7188
Page count: 10