Going Deeper with Semantics: Video Activity Interpretation using Semantic Contextualization

Cited by: 4
Authors
Aakur, Sathyanarayanan [1 ]
de Souza, Fillipe D. M. [1 ]
Sarkar, Sudeep [1 ]
Affiliations
[1] Univ S Florida, Tampa, FL 33620 USA
Source
2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2019
Keywords
RECOGNITION; HISTOGRAMS;
DOI
10.1109/WACV.2019.00026
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
A deeper understanding of video activities extends beyond recognition of underlying concepts such as actions and objects: constructing deep semantic representations requires reasoning about the semantic relationships among these concepts, often beyond what is directly observed in the data. To this end, we propose an energy minimization framework that leverages large-scale commonsense knowledge bases, such as ConceptNet, to provide contextual cues for establishing semantic relationships among entities hypothesized directly from the video signal. We express this mathematically in the language of Grenander's canonical pattern generator theory. We show that the use of prior encoded commonsense knowledge alleviates the need for large annotated training datasets and helps tackle imbalance in the training data. Using three publicly available datasets (Charades, the Microsoft Visual Description Corpus, and Breakfast Actions), we show that the proposed model generates video interpretations whose quality is better than those reported by state-of-the-art approaches with substantial training needs. Through extensive experiments, we show that commonsense knowledge from ConceptNet allows the proposed approach to handle challenges such as training data imbalance, weak features, complex semantic relationships, and complex visual scenes.
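The abstract describes scoring candidate video interpretations by an energy that combines per-concept detector evidence with commonsense context from ConceptNet. The Python sketch below is only an illustration of that general idea under simplifying assumptions, not the authors' implementation: the label confidences, the hard-coded relatedness table (a stand-in for ConceptNet relatedness queries), and the weight LAMBDA are all hypothetical, and the paper's actual formulation uses Grenander's pattern-theory generators and bonds rather than this flat enumeration.

# Minimal sketch: pick the labeling that minimizes a data + commonsense energy.
from itertools import product
from math import log

# Hypothetical detector hypotheses: each entity has candidate labels with
# confidence scores obtained from the video signal.
hypotheses = {
    "action": {"cutting": 0.55, "writing": 0.45},
    "object": {"knife": 0.60, "pen": 0.40},
}

# Toy symmetric relatedness table, standing in for ConceptNet relatedness.
relatedness = {
    frozenset(("cutting", "knife")): 0.8,
    frozenset(("writing", "pen")): 0.7,
    frozenset(("cutting", "pen")): 0.1,
    frozenset(("writing", "knife")): 0.1,
}

LAMBDA = 2.0  # hypothetical weight of the commonsense (context) term

def energy(labeling):
    """Lower energy = better interpretation.
    Data term: negative log confidence of each chosen label.
    Context term: negative relatedness of each chosen label pair."""
    data = -sum(log(hypotheses[e][lbl]) for e, lbl in labeling.items())
    labels = list(labeling.values())
    context = -sum(
        relatedness.get(frozenset((a, b)), 0.0)
        for i, a in enumerate(labels)
        for b in labels[i + 1:]
    )
    return data + LAMBDA * context

# Exhaustive minimization over the (small) labeling space.
entities = list(hypotheses)
best = min(
    (dict(zip(entities, combo))
     for combo in product(*(hypotheses[e] for e in entities))),
    key=energy,
)
print(best, round(energy(best), 3))  # -> {'action': 'cutting', 'object': 'knife'} ...

In this toy example the commonsense term pulls the interpretation toward the mutually compatible pair (cutting, knife) even when the individual detector scores are only moderately confident, which is the effect the abstract attributes to the ConceptNet prior.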
Pages: 190-199
Page count: 10
Related References
40 records in total
[1] Aditya, Somak. 2015. arXiv:1511.03292.
[2] Amer, Mohamed R.; Todorovic, Sinisa; Fern, Alan; Zhu, Song-Chun. Monte Carlo Tree Search for Scheduling Activity Recognition. 2013 IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1353-1360.
[3] Antol, Stanislaw; Agrawal, Aishwarya; Lu, Jiasen; Mitchell, Margaret; Batra, Dhruv; Zitnick, C. Lawrence; Parikh, Devi. VQA: Visual Question Answering. 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2425-2433.
[4] Bin, Yi; Yang, Yang; Shen, Fumin; Xu, Xing; Shen, Heng Tao. Bidirectional Long-Short Term Memory for Video Description. MM'16: Proceedings of the 2016 ACM Multimedia Conference, 2016, pp. 436-440.
[5] Chaudhry, R. 2009. Proc. CVPR IEEE, p. 1932. DOI 10.1109/CVPRW.2009.5206821.
[6] Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, 2005, pp. 886-893.
[7] Das, Pradipto; Xu, Chenliang; Doell, Richard F.; Corso, Jason J. A Thousand Frames in Just a Few Words: Lingual Description of Videos through Latent Topics and Sparse Object Stitching. 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2634-2641.
[8] de Souza, F. D. 2016. International Conference on Pattern Recognition (ICPR).
[9] Dong, Jianfeng; Li, Xirong; Lan, Weiyu; Huo, Yujia; Snoek, Cees G. M. Early Embedding and Late Reranking for Video Captioning. MM'16: Proceedings of the 2016 ACM Multimedia Conference, 2016, pp. 1082-1086.
[10] Grenander, U. 1996. Elements of Pattern Theory.