Going Deeper with Semantics: Video Activity Interpretation using Semantic Contextualization

Cited by: 4
|
Authors
Aakur, Sathyanarayanan [1 ]
de Souza, Fillipe D. M. [1 ]
Sarkar, Sudeep [1 ]
Affiliations
[1] Univ S Florida, Tampa, FL 33620 USA
Source
2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2019
Keywords
RECOGNITION; HISTOGRAMS;
DOI
10.1109/WACV.2019.00026
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
A deeper understanding of video activities extends beyond recognizing underlying concepts such as actions and objects: constructing deep semantic representations requires reasoning about the semantic relationships among these concepts, often beyond what is directly observed in the data. To this end, we propose an energy minimization framework that leverages large-scale commonsense knowledge bases, such as ConceptNet, to provide contextual cues for establishing semantic relationships among entities hypothesized directly from the video signal. We express this mathematically in the language of Grenander's canonical pattern generator theory. We show that prior encoded commonsense knowledge alleviates the need for large annotated training datasets and helps tackle imbalance in the training data. Using three publicly available datasets - Charades, the Microsoft Visual Description Corpus, and Breakfast Actions - we show that the proposed model generates video interpretations of higher quality than those reported by state-of-the-art approaches, which have substantial training needs. Through extensive experiments, we show that commonsense knowledge from ConceptNet allows the proposed approach to handle challenges such as training-data imbalance, weak features, and complex semantic relationships and visual scenes.
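The core idea of the abstract, that commonsense relatedness can override weak visual evidence, can be illustrated with a toy sketch. This is not the paper's actual formulation (which uses Grenander's pattern theory over ConceptNet): the detector scores, the `RELATEDNESS` table standing in for ConceptNet queries, and the energy terms below are all hypothetical, chosen only to show a minimal brute-force energy minimization over candidate (action, object) interpretations.

```python
import itertools

# Toy stand-in for ConceptNet relatedness scores (assumed values, illustration only).
RELATEDNESS = {
    frozenset({"pour", "cup"}): 0.8,
    frozenset({"pour", "hammer"}): 0.1,
    frozenset({"hit", "hammer"}): 0.9,
    frozenset({"hit", "cup"}): 0.2,
}

# Hypothetical detector confidences for entities hypothesized from the video signal.
ACTION_SCORES = {"pour": 0.6, "hit": 0.5}
OBJECT_SCORES = {"cup": 0.7, "hammer": 0.4}

def energy(action, obj):
    """Energy of one interpretation: negative visual support (unary term)
    plus a semantic-incompatibility penalty (pairwise term)."""
    unary = -(ACTION_SCORES[action] + OBJECT_SCORES[obj])
    pairwise = 1.0 - RELATEDNESS[frozenset({action, obj})]
    return unary + pairwise

def best_interpretation():
    """Brute-force minimization over all candidate (action, object) pairs."""
    return min(itertools.product(ACTION_SCORES, OBJECT_SCORES),
               key=lambda pair: energy(*pair))

print(best_interpretation())
```

Here the pairwise term keeps the weakly supported but semantically incoherent pair ("pour", "hammer") from winning; in the paper this contextual cue comes from ConceptNet rather than a hand-written table, and inference operates over full interpretation configurations rather than a single pair.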
Pages: 190-199
Page count: 10