Online Video Super-resolution using Information Replenishing Unidirectional Recurrent Model

Cited by: 4
Authors
Baniya, Arbind Agrahari [1 ]
Lee, Tsz-Kwan [1 ]
Eklund, Peter W. [1 ]
Aryal, Sunil [1 ]
Robles-Kelly, Antonio [1 ]
Affiliations
[1] Deakin University, School of Information Technology, Geelong, VIC, Australia
Keywords
Video super-resolution; Recurrent network; Deep learning; Advanced optimisation; Multimedia application
DOI
10.1016/j.neucom.2023.126355
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recurrent Neural Networks (RNNs) are widely used for Video Super-Resolution (VSR) because of their proven ability to learn spatiotemporal inter-dependencies across the temporal dimension. Despite their ability to propagate memory across long frame sequences, vanishing gradients and error accumulation remain major obstacles for unidirectional RNNs in VSR. Several bidirectional recurrent models have been proposed in the literature to alleviate these issues; however, such models are only applicable to offline use cases because of their heavy computational demands and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, named "Replenished Recurrency with Dual-Duct" (R2D2), that can be used in an online application setting. R2D2 combines a recurrent architecture with sliding-window-based local alignment, resulting in a hybrid recurrent architecture. It also uses a dual-duct residual network for the concurrent and mutual refinement of local features and global memory, so that the information available at each timestamp is fully utilised. With novel modelling and sophisticated optimisation, R2D2 demonstrates competitive performance and efficiency despite having less information available at each timestamp than its offline (bidirectional) counterparts. Ablation analysis confirms the additive benefits of the proposed subcomponents of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Pages: 10
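
To make the architecture summarised in the abstract more concrete, the following is a minimal PyTorch sketch of what one online time-step of a unidirectional recurrent VSR model with a sliding-window input and two mutually refining feature branches ("ducts") might look like. It is an illustration under stated assumptions, not the released R2D2 implementation: the class names (DualDuctBlock, RecurrentVSRStep), window size, channel counts, and layer choices are hypothetical, and explicit motion alignment of the window frames (e.g. flow- or deformable-convolution-based) is omitted for brevity.

```python
# Hypothetical sketch only: a single online time-step of a unidirectional
# recurrent VSR model with a sliding-window input and two mutually refining
# residual branches ("ducts"). Names, sizes and layer choices are illustrative
# assumptions, not the released R2D2 implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualDuctBlock(nn.Module):
    """Two parallel residual branches that exchange features at every block."""

    def __init__(self, channels: int):
        super().__init__()
        self.local_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.memory_conv = nn.Conv2d(channels, channels, 3, padding=1)
        # 1x1 mixing layer lets each duct see the other duct's features.
        self.mix = nn.Conv2d(2 * channels, 2 * channels, 1)

    def forward(self, local_feat, memory_feat):
        l = F.relu(self.local_conv(local_feat))
        m = F.relu(self.memory_conv(memory_feat))
        l_mix, m_mix = torch.chunk(self.mix(torch.cat([l, m], dim=1)), 2, dim=1)
        # Residual connections preserve each duct's own information stream.
        return local_feat + l_mix, memory_feat + m_mix


class RecurrentVSRStep(nn.Module):
    """One time-step: past/current LR frames + recurrent memory -> SR frame."""

    def __init__(self, window: int = 3, channels: int = 64, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.embed_window = nn.Conv2d(3 * window, channels, 3, padding=1)
        self.embed_memory = nn.Conv2d(channels + 3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(DualDuctBlock(channels) for _ in range(5))
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_window, lr_center, hidden):
        # lr_window: (B, 3*window, H, W) stacked past/current LR frames (local duct)
        # lr_center: (B, 3, H, W) current LR frame
        # hidden:    (B, C, H, W) propagated global memory (memory duct)
        local = F.relu(self.embed_window(lr_window))
        memory = F.relu(self.embed_memory(torch.cat([hidden, lr_center], dim=1)))
        for block in self.blocks:
            local, memory = block(local, memory)
        fused = self.fuse(torch.cat([local, memory], dim=1))
        # Residual upsampling on top of a bilinearly enlarged current frame.
        sr = self.upsample(fused) + F.interpolate(
            lr_center, scale_factor=self.scale, mode="bilinear", align_corners=False
        )
        return sr, memory  # memory becomes the hidden state for the next frame


if __name__ == "__main__":
    step = RecurrentVSRStep()
    b, h, w = 1, 32, 32
    lr_window = torch.randn(b, 9, h, w)   # three stacked RGB frames
    lr_center = torch.randn(b, 3, h, w)
    hidden = torch.zeros(b, 64, h, w)
    sr, hidden = step(lr_window, lr_center, hidden)
    print(sr.shape)                       # torch.Size([1, 3, 128, 128])
```

In an online deployment such a step would be applied frame by frame, carrying the memory tensor forward; needing only past and current frames is what distinguishes a unidirectional model of this kind from the bidirectional VSR models, which also require future frames and are therefore restricted to offline use.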