Inter-frame feature fusion enhanced spatio-temporal consistent video inpainting with sample-based techniques and adaptive local search

Citations: 0
Authors
Han, Ruyi [1 ]
Liao, Shenghai [2 ]
Fu, Shujun [2 ]
Zhou, Yuanfeng [3 ]
Li, Yuliang [4 ]
Han, Hongbin [5 ,6 ]
Affiliations
[1] Taiyuan Univ Sci & Technol, Sch Appl Sci, Taiyuan 030024, Peoples R China
[2] Shandong Univ, Sch Math, Jinan 250100, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250101, Peoples R China
[4] Shandong Univ, Dept Intervent Med, Hosp 2, Jinan 250033, Peoples R China
[5] Peking Univ Third Hosp, Dept Radiol, Beijing 100191, Peoples R China
[6] Beijing Key Lab Magnet Resonance Imaging Equipment, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video inpainting; Image fusion; Extension frame; Priority scheme; Adaptive local search window; Matching criterion; OBJECT REMOVAL; EDGE-DETECTION; IMAGE;
DOI
10.1016/j.cam.2025.116523
CLC Classification
O29 [Applied Mathematics];
Discipline Code
070104;
Abstract
Video inpainting is an ill-posed inverse problem in image processing that aims to remove unwanted objects and restore damaged or corrupted regions in a video sequence. During video transmission, network bandwidth limitations and transmission errors can cause the loss of crucial information in specific frames. To address this issue, we propose a novel and effective sample-based video inpainting method. First, an extension-frame search strategy is introduced that uses image fusion to integrate information from extension frames, restoring damaged areas more effectively and ensuring temporal consistency during repair. Second, a new priority scheme is proposed to accurately determine the inpainting order of image blocks and to effectively restore the structure and texture of video frames. An adaptive local search window is then designed to find the optimal matching block for each target block, ensuring spatial continuity in the restoration and reducing the algorithm's time cost. Additionally, to improve matching accuracy, a Weighted Sum of Squared Differences (WSSD) matching criterion based on a facet model is presented. Finally, the proposed method is tested on videos with various backgrounds and with damaged blocks of different sizes and locations. Experimental results indicate that our method outperforms six state-of-the-art video inpainting methods both qualitatively and quantitatively.
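The adaptive local search and WSSD matching described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the facet-model weights are not specified in the abstract, so the sketch accepts an arbitrary weight map (defaulting to uniform weights), and `best_match`, `wssd`, and all parameter names are invented for this example.

```python
import numpy as np

def wssd(target, candidate, known, weights):
    # Weighted sum of squared differences, computed only over pixels
    # that are known (undamaged) in the target patch.
    d = (target.astype(float) - candidate.astype(float)) ** 2
    return float(np.sum(weights * known * d))

def best_match(frame, target, known, center, radius, weights=None):
    """Scan a local window of the given radius around `center` and return
    the top-left corner of the candidate patch minimizing the WSSD."""
    p = target.shape[0]                      # square patch size
    H, W = frame.shape
    if weights is None:
        weights = np.ones_like(target, dtype=float)
    cy, cx = center
    best_score, best_pos = np.inf, None
    for y in range(max(0, cy - radius), min(H - p, cy + radius) + 1):
        for x in range(max(0, cx - radius), min(W - p, cx + radius) + 1):
            s = wssd(target, frame[y:y + p, x:x + p], known, weights)
            if s < best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```

Restricting the scan to a local window around the target block (rather than the whole frame) is what preserves spatial continuity and lowers the search cost; the paper additionally adapts the window size, which this sketch omits.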
Pages: 17