Motion-based frame interpolation for film and television effects

Cited by: 9
Authors
Kokaram, Anil [1]
Singh, Davinder [1]
Robinson, Simon [2]
Kelly, Damien [3]
Collis, Bill [4]
Libreri, Kim [5]
Affiliations
[1] Trinity College Dublin, Dublin, Ireland
[2] The Foundry, London, England
[3] Google Research, Mountain View, CA, USA
[4] Disguise One, London, England
[5] Epic Games, San Francisco, CA, USA
Keywords
RATE UP-CONVERSION; COMPENSATION
DOI
10.1049/iet-cvi.2019.0814
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Frame interpolation is the process of synthesising a new frame in-between existing frames in an image sequence. It has emerged as a key algorithmic module in motion picture effects. In the context of this special issue, this study reviews the technology used to create in-between frames and presents a Bayesian framework that generalises frame interpolation algorithms through the concept of motion interpolation. Unlike the existing literature in this area, the authors also compare performance using the top industrial toolkits employed in the post-production industry. They find that all successful techniques employ motion-based interpolation, and that the commercial version of the Bayesian approach performs best. A further goal of this study is to compare recent convolutional neural network (CNN) algorithms against traditional explicit model-based approaches. They find that CNNs do not clearly outperform the explicit motion-based techniques and require significant compute resources, but they do provide complementary improvements for certain types of sequences.
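
To make the idea of motion-based interpolation described in the abstract concrete, the following is a minimal Python/NumPy sketch and not the paper's Bayesian algorithm: it assumes a dense forward optical-flow field between the two existing frames is already available from some external estimator (a hypothetical input here), warps each frame toward the intermediate time t along that motion under a linear-trajectory assumption, and blends the two warped frames. The names warp and interpolate_frame and the toy example are illustrative only.

    import numpy as np

    def warp(frame, flow, scale):
        """Backward-warp: sample frame at p + scale * flow(p), nearest neighbour."""
        h, w = frame.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        src_x = np.clip(np.round(xs + scale * flow[..., 0]).astype(int), 0, w - 1)
        src_y = np.clip(np.round(ys + scale * flow[..., 1]).astype(int), 0, h - 1)
        return frame[src_y, src_x]

    def interpolate_frame(frame0, frame1, flow_fwd, t=0.5):
        """Synthesise the in-between frame at time t in (0, 1).

        frame0, frame1 : (H, W[, C]) arrays, the existing frames.
        flow_fwd       : (H, W, 2) array, per-pixel motion from frame0 to frame1.
        """
        # Fetch each intermediate pixel from frame0 (back along the motion by t)
        # and from frame1 (forward along the motion by 1 - t), then blend.
        warped0 = warp(frame0, flow_fwd, -t)
        warped1 = warp(frame1, flow_fwd, 1.0 - t)
        return (1.0 - t) * warped0 + t * warped1

    if __name__ == "__main__":
        # Toy example: a bright square translating 4 pixels to the right.
        f0 = np.zeros((64, 64)); f0[20:30, 20:30] = 1.0
        f1 = np.zeros((64, 64)); f1[20:30, 24:34] = 1.0
        flow = np.zeros((64, 64, 2)); flow[..., 0] = 4.0   # dx = 4, dy = 0 everywhere
        mid = interpolate_frame(f0, f1, flow, t=0.5)
        print(mid[25, 20:36])                              # square lands 2 pixels to the right

In a production tool the motion field would be estimated by the interpolation engine itself and occlusions handled explicitly; this sketch only illustrates the linear motion-trajectory assumption that underlies the motion-compensated techniques compared in the paper.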
Pages: 323-338
Number of pages: 16