TTA-EVF: Test-Time Adaptation for Event-based Video Frame Interpolation via Reliable Pixel and Sample Estimation

Cited by: 4
Authors
Cho, Hoonhee [1 ]
Kim, Taewoo [1 ]
Jeong, Yuhwan [1 ]
Yoon, Kuk-Jin [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Source
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2024
Funding
National Research Foundation, Singapore;
Keywords
DOI
10.1109/CVPR52733.2024.02428
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video Frame Interpolation (VFI), which aims to generate high-frame-rate videos from low-frame-rate inputs, is a highly challenging task. The emergence of bio-inspired sensors known as event cameras, which offer microsecond-level temporal resolution, has ushered in a transformative era for VFI. Nonetheless, applying event-based VFI techniques in domains whose environments differ from the training data can be problematic, mainly because the event-camera data distribution can vary substantially with camera settings and scene conditions, making effective adaptation difficult. In this paper, we propose a test-time adaptation method for event-based VFI that bridges the gap between the source and target domains. Our approach enables sequential, online learning on the target domain, where only low-frame-rate videos are available. We leverage confident pixels as pseudo ground-truths, enabling stable and accurate online learning from low-frame-rate videos. Furthermore, to prevent overfitting during the continuous online process, where the same scene is encountered repeatedly, we propose blending historical samples with current scenes. Extensive experiments validate the effectiveness of our method in both cross-domain and continuous domain-shifting setups. The code is available at https://github.com/Chohoonhee/TTA-EVF.
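The abstract outlines two algorithmic ingredients: restricting the online loss to confident pixels used as pseudo ground-truths, and blending stored historical samples with the current scene to limit overfitting during continuous adaptation. The sketch below illustrates both ideas in PyTorch under loose assumptions; the function names, the confidence threshold, the mixup-style blending, and the buffer size are illustrative choices, not the authors' released implementation (see the repository linked above for the actual code).

# Minimal, hypothetical sketch of the two ideas described in the abstract:
# (1) an online pseudo-label loss masked to "confident" pixels, and
# (2) a small replay buffer whose past samples are blended with the current
#     scene during continuous online adaptation.
import random
import torch
import torch.nn.functional as F

def confident_pixel_loss(pred, pseudo_gt, confidence, thresh=0.8):
    # L1 loss computed only on pixels whose confidence exceeds the threshold.
    mask = (confidence > thresh).float()
    per_pixel = F.l1_loss(pred, pseudo_gt, reduction="none")
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)

class ReplayBuffer:
    # Keeps a small pool of past (frames, events, pseudo_gt) samples.
    def __init__(self, capacity=32):
        self.capacity, self.data = capacity, []

    def add(self, sample):
        if len(self.data) >= self.capacity:
            self.data.pop(random.randrange(len(self.data)))
        self.data.append(sample)

    def sample(self):
        return random.choice(self.data) if self.data else None

def adaptation_step(model, optimizer, current, buffer, alpha=0.5):
    # One online update on the target stream (low-frame-rate video + events).
    frames, events, pseudo_gt, confidence = current
    past = buffer.sample()
    if past is not None:
        # Blend a historical sample with the current scene (simple linear mix;
        # the actual blending strategy in the paper may differ).
        frames = alpha * frames + (1 - alpha) * past[0]
        events = alpha * events + (1 - alpha) * past[1]
        pseudo_gt = alpha * pseudo_gt + (1 - alpha) * past[2]

    pred = model(frames, events)  # assumed signature: interpolated middle frame
    loss = confident_pixel_loss(pred, pseudo_gt, confidence)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buffer.add((frames.detach(), events.detach(), pseudo_gt.detach()))
    return loss.item()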
Pages: 25701-25711
Page count: 11