StableVQA: A Deep No-Reference Quality Assessment Model for Video Stability

Cited by: 4
Authors
Kou, Tengchuan [1]
Liu, Xiaohong [1]
Sun, Wei [1]
Jia, Jun [1]
Min, Xiongkuo [1]
Zhai, Guangtao [1]
Liu, Ning [1]
Affiliations
[1] Shanghai Jiao Tong University, Shanghai, People's Republic of China
Source
Proceedings of the 31st ACM International Conference on Multimedia (MM 2023) | 2023
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
video database; video quality assessment; deep learning; feature fusion;
DOI
10.1145/3581783.3611860
CLC number
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Video shakiness is an unpleasant distortion of User Generated Content (UGC) videos, usually caused by the unstable hold of cameras. In recent years, many video stabilization algorithms have been proposed, yet no specific and accurate metric exists to comprehensively evaluate the stability of videos. Indeed, most existing quality assessment models evaluate video quality as a whole, without specifically taking the subjective experience of video stability into consideration. Therefore, these models cannot measure video stability explicitly and precisely when severe shakes are present. In addition, there is no large-scale public video database that includes shaky videos of various degrees together with corresponding subjective scores, which hinders the development of Video Quality Assessment for Stability (VQA-S). To this end, we build a new database named StableDB that contains 1,952 diversely shaky UGC videos, where each video has a Mean Opinion Score (MOS) on the degree of video stability rated by 34 subjects. Moreover, we elaborately design a novel VQA-S model named StableVQA, which consists of three feature extractors to acquire the optical flow, semantic, and blur features, respectively, and a regression layer to predict the final stability score. Extensive experiments demonstrate that StableVQA achieves a higher correlation with subjective opinions than existing VQA-S models and generic VQA models. The database and code are available at https://github.com/QMME/StableVQA.
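
The abstract describes a three-branch design: separate extractors for optical-flow, semantic, and blur features whose outputs are fused and passed to a regression layer that outputs one stability score. The sketch below is a minimal, illustrative PyTorch layout of such a fuse-and-regress architecture; every module name, channel count, and input shape here is an assumption made for illustration and does not reproduce the authors' actual StableVQA implementation (see the linked repository for the real code).

# Illustrative sketch only -- not the authors' StableVQA implementation.
import torch
import torch.nn as nn


class Branch(nn.Module):
    """A small 3D-conv encoder standing in for one of the three feature extractors."""
    def __init__(self, in_ch: int, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global spatio-temporal pooling
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.net(x).flatten(1)          # (B, 64)
        return self.proj(feat)                 # (B, out_dim)


class StabilityRegressor(nn.Module):
    """Fuses motion, semantic, and blur features and regresses a MOS-like stability score."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.motion = Branch(in_ch=2, out_dim=dim)    # e.g. per-frame optical flow (u, v)
        self.semantic = Branch(in_ch=3, out_dim=dim)  # RGB frames
        self.blur = Branch(in_ch=1, out_dim=dim)      # e.g. per-frame blur maps
        self.head = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, 1)
        )

    def forward(self, flow, frames, blur_maps):
        fused = torch.cat(
            [self.motion(flow), self.semantic(frames), self.blur(blur_maps)], dim=1
        )
        return self.head(fused).squeeze(-1)            # (B,) stability scores


if __name__ == "__main__":
    model = StabilityRegressor()
    b, t, h, w = 2, 8, 64, 64                          # tiny toy shapes
    score = model(
        torch.randn(b, 2, t, h, w),                    # flow stack
        torch.randn(b, 3, t, h, w),                    # RGB clip
        torch.randn(b, 1, t, h, w),                    # blur maps
    )
    print(score.shape)                                 # torch.Size([2])

The point of the sketch is simply the late-fusion structure implied by the abstract: three modality-specific encoders, concatenation of their embeddings, and a small regression head mapping the fused feature to a single stability score.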
Pages: 1066-1076
Number of pages: 11