Fine-grained action segmentation using the semi-supervised action GAN

Cited by: 30
Authors
Gammulle, Harshala [1 ]
Denman, Simon [1 ]
Sridharan, Sridha [1 ]
Fookes, Clinton [1 ]
Affiliations
[1] Queensland Univ Technol, SAIVT, Image & Video Res Lab, Brisbane, Qld, Australia
Funding
Australian Research Council;
Keywords
Human action segmentation; Generative adversarial networks; Context modelling; ACTION RECOGNITION; FEATURES;
DOI
10.1016/j.patcog.2019.107039
CLC (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we address the problem of continuous fine-grained action segmentation, in which multiple actions are present in an unsegmented video stream. The challenge of this task lies in the need to represent the hierarchical nature of the actions and to detect the transitions between actions, allowing us to localise the actions within the video effectively. We propose a novel recurrent semi-supervised Generative Adversarial Network (GAN) model for continuous fine-grained human action segmentation. Temporal context information is captured via a novel Gated Context Extractor (GCE) module, composed of gated attention units, that directs the queued context information through the generator model for enhanced action segmentation. The GAN is made to learn features in a semi-supervised manner, enabling the model to perform action classification jointly with the standard, unsupervised, GAN learning procedure. We perform extensive evaluations on different architectural variants to demonstrate the importance of the proposed network architecture, and show that it is capable of outperforming the current state-of-the-art on three challenging datasets: 50 Salads, MERL Shopping, and the Georgia Tech Egocentric Activities dataset. (C) 2019 Elsevier Ltd. All rights reserved.
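The abstract describes gated attention units that weight a queue of past temporal context and gate how much of it flows into the generator. A rough, illustrative sketch of one such unit is below (numpy only; the dot-product scoring and the weight names `W_g`, `W_c` are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_attention_unit(context_queue, query, W_g, W_c):
    """One gated attention step over a queue of past context vectors.

    context_queue: (T, d) features from T previous frames (the "queue")
    query:         (d,)   current frame feature
    W_g, W_c:      (d, d) hypothetical gate / content projections
    """
    # Softmax attention weights over the queued context (dot-product scores).
    scores = context_queue @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    attended = weights @ context_queue            # (d,) context summary

    # Gate in (0, 1) decides how much context is passed to the generator.
    gate = sigmoid(W_g @ query)
    return gate * np.tanh(W_c @ attended)         # gated context output

# Toy usage with random features.
rng = np.random.default_rng(0)
d, T = 8, 5
queue = rng.standard_normal((T, d))
query = rng.standard_normal(d)
W_g = rng.standard_normal((d, d))
W_c = rng.standard_normal((d, d))
out = gated_attention_unit(queue, query, W_g, W_c)
print(out.shape)  # (8,)
```

Because the output is an elementwise product of a sigmoid gate and a tanh activation, each component stays in (-1, 1), which is one common way such gates keep the context contribution bounded.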
Pages: 12