TMP: Temporal Motion Propagation for Online Video Super-Resolution

Cited by: 1
Authors
Zhang, Zhengqiang [1 ]
Li, Ruihuang [1 ]
Guo, Shi [2 ]
Cao, Yang [3 ]
Zhang, Lei [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai 200232, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Keywords
Accuracy; Superresolution; Optical flow; Feature extraction; Streaming media; Image reconstruction; Estimation; Video super-resolution; motion compensation; deep neural networks
DOI
10.1109/TIP.2024.3453048
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Online video super-resolution (online-VSR) relies heavily on an effective alignment module to aggregate temporal information, while the strict latency requirement makes accurate and efficient alignment very challenging. Though much progress has been achieved, most existing online-VSR methods estimate the motion field of each frame separately to perform alignment, which is computationally redundant and ignores the fact that the motion fields of adjacent frames are correlated. In this work, we propose an efficient Temporal Motion Propagation (TMP) method, which leverages the continuity of the motion field to achieve fast pixel-level alignment among consecutive frames. Specifically, we first propagate the offsets from previous frames to the current frame, and then refine them within a local neighborhood, significantly reducing the matching space and speeding up the offset estimation process. Furthermore, to enhance the robustness of alignment, we perform spatial-wise weighting on the warped features, where positions with more precise offsets are assigned higher importance. Experiments on benchmark datasets demonstrate that the proposed TMP method achieves leading online-VSR accuracy as well as inference speed. The source code of TMP can be found at https://github.com/xtudbxk/TMP.
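The abstract's core idea (reuse the previous frame's offsets as the starting guess, then refine each offset in a small neighborhood and weight positions by matching quality) can be illustrated with a minimal sketch. This is a hypothetical, simplified NumPy illustration of the general principle, not the authors' actual implementation; the function name, the absolute-difference matching cost, and the `exp(-cost)` confidence weighting are all assumptions made for clarity.

```python
import numpy as np

def propagate_and_refine(prev_frame, cur_frame, prev_offsets, radius=1):
    """Illustrative sketch of propagate-then-refine alignment (hypothetical).

    prev_offsets: (H, W, 2) integer (dy, dx) offsets estimated for the
    previous frame. Under the motion-continuity assumption we reuse them as
    the initial guess for the current frame, then refine each offset only
    within a (2*radius+1)^2 neighborhood, which keeps the search space small.
    Returns the refined offsets and per-pixel confidence weights, where
    better matches (lower cost) receive higher weight.
    """
    H, W = cur_frame.shape
    offsets = np.zeros_like(prev_offsets)
    weights = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            dy0, dx0 = prev_offsets[y, x]          # propagated initial guess
            best_cost, best = np.inf, (0, 0)
            for dy in range(dy0 - radius, dy0 + radius + 1):
                for dx in range(dx0 - radius, dx0 + radius + 1):
                    py, px = y + dy, x + dx
                    if 0 <= py < H and 0 <= px < W:
                        # simple absolute-intensity matching cost (assumed)
                        cost = abs(float(cur_frame[y, x]) - float(prev_frame[py, px]))
                        if cost < best_cost:
                            best_cost, best = cost, (dy, dx)
            offsets[y, x] = best
            weights[y, x] = np.exp(-best_cost)     # precise offsets -> higher weight
    return offsets, weights
```

For example, if the current frame is the previous frame shifted left by one pixel, starting from zero offsets and refining with `radius=1` recovers the true offset `(0, 1)` at interior pixels with weight 1.0. In the actual method this refinement operates on deep features rather than raw intensities, and the weights modulate the warped features before aggregation.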
Pages: 5014-5028
Page count: 15