Video spatio-temporal generative adversarial network for local action generation

Cited by: 1
Authors
Liu, Xuejun [1]
Guo, Jiacheng [1]
Cui, Zhongji [1]
Liu, Ling [2]
Yan, Yong [1]
Sha, Yun [1]
Affiliations
[1] Beijing Institute of Petrochemical Technology, College of Information Engineering, Beijing, People's Republic of China
[2] Beijing Institute of Graphic Communication, College of Art & Design, Beijing, People's Republic of China
Keywords
video generation; deep learning; generative adversarial networks; two-stage model
DOI
10.1117/1.JEI.32.5.053003
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Generating future action videos from static images enables computer vision systems to be better applied to video understanding and intelligent decision-making. However, current models attend mainly to the overall motion trends of the generated objects and handle local details poorly: the local features of the generated videos suffer from blurred frames and incoherent motion. This paper proposes a two-stage model, the video spatio-temporal generative adversarial network (VSTGAN), which consists of two GANs: a temporal network (T-net) and a spatial network (S-net). The model combines the strengths of convolutional neural networks (CNNs), recurrent neural networks (RNNs), and GANs to decompose the complex spatio-temporal generation problem into temporal and spatial dimensions, so VSTGAN can focus on local features in each dimension separately. In the temporal dimension, we propose an RNN unit, the convolutional attention unit (ConvAU), which uses a convolutional attention module to dynamically generate the weights that update the hidden state; T-net thus uses the ConvAU to generate local dynamics. In the spatial dimension, S-net applies CNNs and attention modules to perform resolution reconstruction of the generated local dynamics for video generation. We build two small-sample datasets and validate our approach on these two new datasets and on the public KTH dataset. The results show that our approach effectively generates local details in future action videos and that the model's performance on small-sample datasets is competitive with the state of the art in video generation.
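The abstract describes the ConvAU concretely enough to sketch: a recurrent cell whose hidden-state update is re-weighted by a convolutional attention module. Below is a minimal, hypothetical PyTorch sketch under that reading; the ConvGRU-style gating, the simple channel-and-spatial attention block, and all layer shapes are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of the ConvAU described in the abstract: an RNN cell
# whose hidden-state update is re-weighted by a convolutional attention
# module. Gating scheme, attention design, and shapes are assumptions.
import torch
import torch.nn as nn

class ConvAttention(nn.Module):
    """Simple channel-and-spatial attention over a feature map (assumed)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(          # per-channel weights in (0, 1)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(          # per-pixel weights in (0, 1)
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)                # re-weight channels
        return x * self.spatial(x)             # re-weight spatial positions

class ConvAU(nn.Module):
    """ConvGRU-like cell: attention dynamically weights the state update."""
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        p = k // 2
        self.gate = nn.Conv2d(in_ch + hidden_ch, hidden_ch, k, padding=p)
        self.candidate = nn.Conv2d(in_ch + hidden_ch, hidden_ch, k, padding=p)
        self.attn = ConvAttention(hidden_ch)

    def forward(self, x, h):
        xh = torch.cat([x, h], dim=1)
        z = torch.sigmoid(self.gate(xh))                    # update gate
        h_new = self.attn(torch.tanh(self.candidate(xh)))  # attended update
        return (1 - z) * h + z * h_new                      # gated hidden state

# Usage: roll the cell over T feature frames to accumulate local dynamics.
cell = ConvAU(in_ch=32, hidden_ch=64)
h = torch.zeros(2, 64, 16, 16)                     # batch=2 initial state
for frame in torch.randn(8, 2, 32, 16, 16):        # T=8 frames
    h = cell(frame, h)
```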
Pages: 19