MOTION VECTOR REFINEMENT FOR FRUC USING SALIENCY AND SEGMENTATION

Cited by: 2
Authors
Jacobson, Natan [1 ]
Lee, Yen-Lin [1 ]
Mahadevan, Vijay [1 ]
Vasconcelos, Nuno [1 ]
Nguyen, Truong Q. [1 ]
Affiliation
[1] Univ Calif San Diego, ECE Dept, La Jolla, CA 92093 USA
Source
2010 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME 2010), 2010
Keywords
Frame Rate Up-Conversion (FRUC); Discriminant Saliency; Motion Segmentation; Motion Refinement; Motion Compensated Frame Interpolation (MCFI); ALGORITHM
DOI
10.1109/ICME.2010.5582574
CLC classification number
TP31 [Computer Software]
Subject classification codes
081202; 0835
Abstract
Motion-Compensated Frame Interpolation (MCFI) is a technique used extensively for increasing the temporal frequency of a video sequence. In order to obtain a high quality interpolation, the motion field between frames must be well-estimated. However, many current techniques for determining the motion are prone to errors in occlusion regions, as well as regions with repetitive structure. An algorithm is proposed for improving both the objective and subjective quality of MCFI by refining the motion vector field. A Discriminant Saliency classifier is employed to determine regions of the motion field which are most important to a human observer. These regions are refined using a multi-stage motion vector refinement which promotes candidates based on their likelihood given a local neighborhood. For regions which fall below the saliency threshold, frame segmentation is used to locate regions of homogeneous color and texture via Normalized Cuts. Motion vectors are promoted such that each homogeneous region has a consistent motion. Experimental results demonstrate an improvement over previous methods in both objective and subjective picture quality.
Pages: 778-783
Number of pages: 6
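
The abstract describes a two-part refinement: blocks flagged by the saliency classifier are refined by promoting candidate vectors drawn from a local neighborhood, while non-salient blocks are forced to share a consistent motion within each homogeneous segment. The Python/NumPy sketch below illustrates one plausible reading of those two steps; the 8x8 block size, the bidirectional SAD matching cost, the median-based smoothness penalty, and the names BLOCK, LAMBDA, refine_salient_blocks, and enforce_region_consistency are assumptions made for illustration, not the authors' exact formulation. In the paper the segment labels would come from Normalized Cuts on color and texture; here a precomputed label map is taken as given.

# Illustrative sketch only (not the paper's exact algorithm): refine a block-wise
# motion vector field by testing, for each salient block, the candidate vectors
# from its 3x3 neighborhood and keeping the one that best balances bidirectional
# matching error against agreement with the local median vector.
import numpy as np

BLOCK = 8      # block size in pixels (assumed)
LAMBDA = 0.5   # weight of the neighborhood-consistency term (assumed)

def bidirectional_sad(prev, nxt, by, bx, mv):
    """SAD between the block projected backward into `prev` and forward into `nxt`."""
    h, w = prev.shape
    y0, x0 = by * BLOCK, bx * BLOCK
    dy, dx = mv
    yb, xb = y0 - dy, x0 - dx   # backward-projected block position in prev
    yf, xf = y0 + dy, x0 + dx   # forward-projected block position in nxt
    if not (0 <= yb and yb + BLOCK <= h and 0 <= xb and xb + BLOCK <= w and
            0 <= yf and yf + BLOCK <= h and 0 <= xf and xf + BLOCK <= w):
        return np.inf           # candidate falls outside the frame
    a = prev[yb:yb + BLOCK, xb:xb + BLOCK].astype(np.int32)
    b = nxt[yf:yf + BLOCK, xf:xf + BLOCK].astype(np.int32)
    return np.abs(a - b).sum()

def refine_salient_blocks(mv_field, salient, prev, nxt):
    """Refine vectors of salient blocks; mv_field is (Hb, Wb, 2) int, salient is (Hb, Wb) bool."""
    refined = mv_field.copy()
    hb, wb, _ = mv_field.shape
    for by in range(hb):
        for bx in range(wb):
            if not salient[by, bx]:
                continue
            # Candidates: the block's own vector plus its 3x3 neighbors.
            ys = slice(max(by - 1, 0), min(by + 2, hb))
            xs = slice(max(bx - 1, 0), min(bx + 2, wb))
            cands = mv_field[ys, xs].reshape(-1, 2)
            median_mv = np.median(cands, axis=0)
            best_cost, best_mv = np.inf, mv_field[by, bx]
            for mv in np.unique(cands, axis=0):
                cost = (bidirectional_sad(prev, nxt, by, bx, mv)
                        + LAMBDA * np.abs(mv - median_mv).sum())
                if cost < best_cost:
                    best_cost, best_mv = cost, mv
            refined[by, bx] = best_mv
    return refined

def enforce_region_consistency(mv_field, labels):
    """Give every block in a segment the segment's median vector (stand-in for the
    paper's Normalized-Cuts-based step for non-salient regions)."""
    out = mv_field.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = np.median(mv_field[mask], axis=0).astype(mv_field.dtype)
    return out

In this reading, the refined field would be assembled by applying refine_salient_blocks inside the saliency mask and enforce_region_consistency outside it, before motion-compensated interpolation of the intermediate frame.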