Coupled Generative Adversarial Network for Continuous Fine-grained Action Segmentation

Cited by: 17
Authors
Gammulle, Harshala [1 ]
Fernando, Tharindu [1 ]
Denman, Simon [1 ]
Sridharan, Sridha [1 ]
Fookes, Clinton [1 ]
Affiliations
[1] Queensland Univ Technol, SAIVT, Image & Video Res Lab, Brisbane, Qld, Australia
Source
2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2019
Funding
Australian Research Council;
DOI
10.1109/WACV.2019.00027
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
We propose a novel conditional GAN (cGAN) model for continuous fine-grained human action segmentation that utilises multi-modal data and learned scene context information. The proposed approach uses two GANs, termed the Action GAN and the Auxiliary GAN: the Action GAN is trained to operate over the current RGB frame, while the Auxiliary GAN utilises supplementary information such as depth or optical flow. The goal of both GANs is to generate similar 'action codes', a vector representation of the current action. To facilitate this process, a context extractor that incorporates data and recent outputs from both modes is used to extract context information to aid recognition. The result is a recurrent GAN architecture which learns a task-specific loss function from multiple feature modalities. Extensive evaluations of variants of the proposed model show the importance of utilising different information streams, such as context and auxiliary information, in the proposed network, and show that our model outperforms state-of-the-art methods on three widely used datasets comprising both static and dynamic camera settings: 50 Salads, MERL Shopping and Georgia Tech Egocentric Activities.
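The core idea the abstract describes, two modality-specific generators producing comparable 'action codes' that a context extractor summarises over time, can be illustrated with a toy sketch. This is a minimal NumPy mock-up, not the paper's implementation: the linear maps, dimensions, and the mean-based context extractor are all illustrative stand-ins for the actual GAN generators and learned context module.

```python
import numpy as np

rng = np.random.default_rng(0)

CODE_DIM = 8          # dimensionality of the 'action code' vector (hypothetical)
RGB_DIM, AUX_DIM = 32, 16  # feature sizes for the RGB and auxiliary streams (hypothetical)

# Hypothetical linear 'generators' standing in for the Action GAN and
# Auxiliary GAN generators; each maps its modality's features to an action code.
W_rgb = rng.normal(size=(CODE_DIM, RGB_DIM)) * 0.1
W_aux = rng.normal(size=(CODE_DIM, AUX_DIM)) * 0.1

def action_code(features, weights):
    """Map modality features to an action-code vector (tanh keeps it bounded)."""
    return np.tanh(weights @ features)

def extract_context(recent_codes):
    """Toy context extractor: average the recent codes from both streams."""
    return np.mean(recent_codes, axis=0)

# One timestep: both generators are trained so these codes agree; the
# context vector summarises recent outputs to aid the next prediction.
rgb_features = rng.normal(size=RGB_DIM)
aux_features = rng.normal(size=AUX_DIM)

code_rgb = action_code(rgb_features, W_rgb)
code_aux = action_code(aux_features, W_aux)
context = extract_context(np.stack([code_rgb, code_aux]))
print(context.shape)  # → (8,)
```

In the actual model, the agreement between the two codes is enforced adversarially via the GAN training objective rather than computed directly, and the context feeds back recurrently into both generators.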
Pages: 200-209
Page count: 10