Textural Detail Preservation Network for Video Frame Interpolation

Cited by: 0
Authors
Yoon, Kihwan [1 ,2 ]
Huh, Jingang [1 ]
Kim, Yong Han [2 ]
Kim, Sungjei [1 ]
Jeong, Jinwoo [1 ]
Affiliations
[1] Korea Elect Technol Inst KETI, Seongnam Si 13488, Gyeonggi Do, South Korea
[2] Univ Seoul, Sch Elect & Comp Engn, Seoul 02504, South Korea
Keywords
Video frame interpolation; textural detail preservation; perceptual loss; synthesis network; enhancement
DOI
10.1109/ACCESS.2023.3294964
Chinese Library Classification
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
The subjective image quality of a Video Frame Interpolation (VFI) result depends on whether image features such as edges, textures, and blobs are preserved. With the development of deep learning, various algorithms have been proposed and the objective results of VFI have improved significantly. Perceptual loss has also been used to enhance subjective quality by preserving image features. Despite these quality improvements, no analysis has been performed on preserving specific features in the interpolated frames. We therefore analyze how to preserve textural detail, such as film grain noise, which conveys the texture of an image, and weak textures, such as droplets or particles. Based on this analysis, we identify the importance of the synthesis network in textural detail preservation and propose an enhanced synthesis network, the Textural Detail Preservation Network (TDPNet). We further propose a Perceptual Training Method (PTM) that preserves more textural detail while addressing the degradation of Peak Signal-to-Noise Ratio (PSNR) that occurs when perceptual loss is simply applied, and a Multi-scale Resolution Training Method (MRTM) that addresses the poor performance observed when the test dataset resolution differs from that of the training dataset. In experiments, the proposed network outperformed state-of-the-art VFI algorithms in LPIPS and DISTS on the Vimeo90K, HD, SNU-FILM, and UVG datasets, and also produced better subjective results. Moreover, applying PTM improved PSNR by an average of 0.293 dB compared with simply applying perceptual loss.
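Because the abstract describes combining a pixel reconstruction loss with a perceptual loss, and the PSNR drop that a naive combination can cause, the following is a minimal, hypothetical PyTorch sketch of that trade-off. It is not code from the paper: PerceptualLoss, interpolation_loss, the weight lam, and psnr are names introduced here for illustration, and the VGG16 feature distance is a common stand-in for a perceptual term, not the authors' exact PTM formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class PerceptualLoss(nn.Module):
    # L1 distance between VGG16 feature maps, a common stand-in for a
    # "perceptual loss"; input normalization is omitted for brevity.
    def __init__(self, layer_index=16):
        super().__init__()
        vgg_features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.features = nn.Sequential(*list(vgg_features[:layer_index])).eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, pred, target):
        return F.l1_loss(self.features(pred), self.features(target))

def interpolation_loss(pred, target, perceptual, lam=0.01):
    # Total loss = pixel L1 + lam * perceptual term. "lam" is a hypothetical
    # weight: too large and PSNR degrades, too small and fine textural detail
    # (film grain, weak textures) tends to be smoothed away.
    return F.l1_loss(pred, target) + lam * perceptual(pred, target)

def psnr(pred, target, max_val=1.0):
    # Peak Signal-to-Noise Ratio in dB, the objective metric cited in the abstract.
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    pred = torch.rand(4, 3, 256, 448)    # dummy interpolated frames
    target = torch.rand(4, 3, 256, 448)  # dummy ground-truth frames
    ploss = PerceptualLoss()
    print(interpolation_loss(pred, target, ploss).item(), psnr(pred, target).item())

Tuning lam against a held-out PSNR check mirrors the balance the abstract reports: perceptual supervision improves LPIPS/DISTS, while the paper's PTM recovers an average of 0.293 dB of PSNR relative to simply applying perceptual loss.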
Pages: 71994-72006 (13 pages)
Related Papers (50 records in total)
  • [31] Hybrid Warping Fusion for Video Frame Interpolation
    Li, Yu
    Zhu, Ye
    Li, Ruoteng
    Wang, Xintao
    Luo, Yue
    Shan, Ying
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (12) : 2980 - 2993
  • [32] TTVFI: Learning Trajectory-Aware Transformer for Video Frame Interpolation
    Liu, Chengxu
    Yang, Huan
    Fu, Jianlong
    Qian, Xueming
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 4728 - 4741
  • [33] BVI-VFI: A Video Quality Database for Video Frame Interpolation
    Danier, Duolikun
    Zhang, Fan
    Bull, David R.
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 6004 - 6019
  • [35] Spike-EFI: Spiking Neural Network for Event-Based Video Frame Interpolation
    Wu, Dong-Sheng
    Ma, De
    IMAGE AND VIDEO TECHNOLOGY, PSIVT 2023, 2024, 14403 : 312 - 325
  • [36] Video Frame Interpolation Based on Symmetric and Asymmetric Motions
    Choi, Whan
    Koh, Yeong Jun
    Kim, Chang-Su
    IEEE ACCESS, 2023, 11 : 22394 - 22403
  • [37] Robust Video Frame Interpolation With Exceptional Motion Map
    Park, Minho
    Kim, Hak Gu
    Lee, Sangmin
    Ro, Yong Man
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (02) : 754 - 764
  • [38] Multi-Scale Warping for Video Frame Interpolation
    Choi, Whan
    Koh, Yeong Jun
    Kim, Chang-Su
    IEEE ACCESS, 2021, 9 : 150470 - 150479
  • [39] FloLPIPS: A Bespoke Video Quality Metric for Frame Interpolation
    Danier, Duolikun
    Zhang, Fan
    Bull, David
    2022 PICTURE CODING SYMPOSIUM (PCS), 2022, : 283 - 287
  • [40] Video Frame Interpolation for Large Motion with Generative Prior
    Huang, Yuheng
    Jia, Xu
    Su, Xin
    Zhang, Lu
    Li, Xiaomin
    Wang, Qinghe
    Lu, Huchuan
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT X, 2025, 15040 : 402 - 415