Bidirectional scale-aware upsampling network for arbitrary-scale video super-resolution

Times cited: 1
Authors
Luo, Laigan [1 ]
Yi, Benshun [1 ]
Wang, Zhongyuan [2 ]
He, Zheng [2 ]
Zhu, Chao [1 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
[2] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video super-resolution; Arbitrary-scale factor; Bidirectional module; Upsampling module; Image super-resolution;
DOI
10.1016/j.imavis.2024.105116
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The performance of video super-resolution (VSR) has improved significantly. However, current methods focus on a single scale factor, treating the VSR of each scale factor independently and disregarding arbitrary-scale video super-resolution. To address this issue, we propose the Bidirectional Scale-Aware Upsampling Network for Arbitrary-Scale Video Super-Resolution, which eliminates the need to train a separate model for each scale factor. The proposed model contains a Bidirectional Scale-Aware Upsampling module, consisting of a Bidirectional Scale-Aware Module (BSAM) and a Spatial Pyramid Upsampling section. The BSAM extracts features for various scale factors and allows feature information at different scales to interact bidirectionally. Additionally, we propose a Spatial Pyramid Loss that optimizes the network based on the upsampling and maps the results of different scales to a unified spatial set to compute the loss for arbitrary scale factors. Along with this, we introduce an Explicit Feature Pyramid module, which uses Spatial Pyramid Upsampling to learn arbitrary-scale details explicitly. Finally, we demonstrate the extensibility of the model by integrating the Bidirectional Scale-Aware Upsampling into an existing VSR algorithm, enabling high-resolution results at arbitrary scale factors without degrading performance. Comprehensive experiments on public benchmarks show promising results for arbitrary-scale video super-resolution.
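For orientation only, the sketch below illustrates the two ideas the abstract states most concretely: conditioning an upsampler on the scale factor so one network serves arbitrary scales, and computing a loss after mapping multi-scale outputs onto a common spatial grid. This is a minimal PyTorch sketch under stated assumptions; all names (ScaleAwareUpsampler, spatial_pyramid_loss) are hypothetical illustrations, not the authors' implementation, and the paper's actual BSAM and Explicit Feature Pyramid designs are not reconstructed here.

```python
# Hypothetical sketch of scale-aware arbitrary-scale upsampling and a
# pyramid-style loss; module/function names are illustrative, not from
# the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleAwareUpsampler(nn.Module):
    """Upsample features to an arbitrary target size, conditioning the
    reconstruction on the scale factor (assumed design)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # The scale factor is embedded and fused with the features so a
        # single network can serve many scales instead of one model each.
        self.scale_embed = nn.Sequential(nn.Linear(1, channels), nn.ReLU())
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, feat: torch.Tensor, scale: float) -> torch.Tensor:
        b, c, h, w = feat.shape
        out_h, out_w = round(h * scale), round(w * scale)
        # Non-integer scales are handled by resampling the feature maps.
        up = F.interpolate(feat, size=(out_h, out_w), mode="bilinear",
                           align_corners=False)
        s = torch.full((b, 1), scale, device=feat.device)
        mod = self.scale_embed(s).view(b, c, 1, 1)  # per-channel scale code
        return self.to_rgb(self.fuse(up) * mod)


def spatial_pyramid_loss(preds, target):
    """L1 loss after mapping multi-scale outputs onto the target grid,
    loosely following the abstract's 'unified spatial set' idea."""
    loss = 0.0
    for p in preds:
        p = F.interpolate(p, size=target.shape[-2:], mode="bilinear",
                          align_corners=False)
        loss = loss + F.l1_loss(p, target)
    return loss / len(preds)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)               # low-resolution features
    up = ScaleAwareUpsampler()
    outs = [up(feat, s) for s in (1.5, 2.0, 3.3)]   # arbitrary scale factors
    hr = torch.randn(1, 3, 106, 106)                # ground truth at 3.3x
    print(spatial_pyramid_loss(outs, hr).item())
```

The design choice illustrated here, resampling features to the exact target size and injecting a scale embedding, is one common way to decouple a network from fixed integer scale factors; the paper's bidirectional interaction between scales is a further refinement not shown in this sketch.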
Pages: 13