Online Video Super-resolution using Information Replenishing Unidirectional Recurrent Model

Cited: 4
Authors
Baniya, Arbind Agrahari [1 ]
Lee, Tsz-Kwan [1 ]
Eklund, Peter W. [1 ]
Aryal, Sunil [1 ]
Robles-Kelly, Antonio [1 ]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Geelong, Vic, Australia
Keywords
Video super-resolution; Recurrent network; Deep learning; Advanced optimisation; Multimedia application; Network
DOI
10.1016/j.neucom.2023.126355
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recurrent Neural Networks (RNNs) are widely used for Video Super-Resolution (VSR) because of their proven ability to learn spatiotemporal inter-dependencies across the temporal dimension. Despite an RNN's ability to propagate memory across longer sequences of frames, vanishing gradients and error accumulation remain major obstacles for unidirectional RNNs in VSR. Several bi-directional recurrent models have been suggested in the literature to alleviate this issue; however, these models are only applicable to offline use cases due to their heavy demands on computational resources and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, namely "Replenished Recurrency with Dual-Duct" (R2D2), that can be used in an online application setting. R2D2 incorporates a recurrent architecture with sliding-window-based local alignment, resulting in a recurrent hybrid architecture. It also uses a dual-duct residual network for the concurrent and mutual refinement of local features along with global memory, allowing full utilisation of the information available at each timestamp. With novel modelling and sophisticated optimisation, R2D2 demonstrates competitive performance and efficiency despite the lack of information available at each timestamp compared to its offline (bidirectional) counterparts. Ablation analysis confirms the additive benefits of the proposed subcomponents of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
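The abstract describes an online, unidirectional recurrent pipeline: each low-resolution frame arrives in sequence, is fused with a forward-propagated hidden memory, and is super-resolved using only past information. The following is a minimal, purely illustrative NumPy sketch of that frame-by-frame recurrence, not the authors' R2D2 implementation; the function name `sr_step`, the fixed mixing weights, and the nearest-neighbour upsampling stand-in are all assumptions made for illustration.

```python
import numpy as np

def sr_step(lr_frame, hidden, scale=4):
    """One hypothetical online VSR step: fuse the current frame with
    recurrent memory, emit an upscaled frame, and propagate the memory."""
    # Replenish local frame information with the global recurrent memory.
    fused = 0.5 * lr_frame + 0.5 * hidden
    # Unidirectional memory update: only past frames influence the state.
    new_hidden = 0.9 * hidden + 0.1 * lr_frame
    # Nearest-neighbour x4 upsampling as a stand-in for a learned decoder.
    sr_frame = fused.repeat(scale, axis=0).repeat(scale, axis=1)
    return sr_frame, new_hidden

# Online setting: frames are processed one at a time as they arrive,
# with no access to future frames (unlike bidirectional offline models).
h, w = 8, 8
hidden = np.zeros((h, w))
video = [np.random.rand(h, w) for _ in range(5)]
outputs = []
for frame in video:
    sr, hidden = sr_step(frame, hidden)
    outputs.append(sr)
print(outputs[0].shape)  # (32, 32)
```

The key property illustrated is causality: because `hidden` only ever accumulates past frames, each output can be produced with constant latency per frame, which is what makes a unidirectional recurrence suitable for online use.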
Pages: 10