No-Reference Video Quality Assessment Metric Using Spatiotemporal Features Through LSTM

Cited: 0
Authors
Kwong, Ngai-Wing [1 ]
Tsang, Sik-Ho [2 ]
Chan, Yui-Lam [1 ]
Lun, Daniel Pak-Kong [1 ,2 ]
Lee, Tsz-Kwan [3 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Elect & Informat Engn, Hong Kong, Peoples R China
[2] Ctr Adv Reliabil & Safety Ltd CAiRS, Hong Kong Sci Pk, Hong Kong, Peoples R China
[3] Deakin Univ, Sch Informat Technol, Deakin, Australia
Keywords
video quality assessment; no reference; long short-term memory; spatiotemporal; pre-padding; masking layer;
DOI
10.1117/12.2590406
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Nowadays, a precise video quality assessment (VQA) model is essential to maintain the quality of service (QoS). However, most existing VQA metrics are designed for specific purposes and ignore the spatiotemporal features of natural videos. This paper proposes a novel general-purpose no-reference (NR) VQA metric, namely VQA-LSTM, which adopts Long Short-Term Memory (LSTM) modules with a masking layer and a pre-padding strategy to address these issues. First, we divide the distorted video into frames and extract significant yet universal spatial and temporal features that effectively reflect frame quality. Second, a data preprocessing stage and the pre-padding strategy are applied to ease training of our VQA-LSTM. Finally, a three-layer LSTM model incorporating a masking layer is designed to learn the sequence of spatial features as spatiotemporal features, and the sequence of temporal features as the gradient of temporal features, to evaluate video quality. Two widely used VQA databases, MCL-V and LIVE, are used to verify the robustness of our VQA-LSTM, and the experimental results show that VQA-LSTM correlates better with human perception than several state-of-the-art approaches.
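The pre-padding strategy mentioned in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration of the general technique (an assumption about how such preprocessing typically works, not the authors' actual implementation): variable-length per-frame feature sequences are zero-padded at the front so that the real frames sit at the end of each sequence, and a boolean mask records which timesteps are genuine so a masking layer can skip the padding.

```python
import numpy as np

def pre_pad(sequences, pad_value=0.0):
    """Pre-pad variable-length feature sequences to a common length.

    Zeros are inserted BEFORE each sequence (pre-padding), so the real
    frames occupy the final timesteps. Returns the padded batch and a
    boolean mask marking which timesteps hold genuine data.
    """
    max_len = max(len(s) for s in sequences)
    feat_dim = len(sequences[0][0])
    batch = np.full((len(sequences), max_len, feat_dim), pad_value)
    mask = np.zeros((len(sequences), max_len), dtype=bool)
    for i, seq in enumerate(sequences):
        batch[i, max_len - len(seq):] = seq   # place data at the tail
        mask[i, max_len - len(seq):] = True   # mark real timesteps
    return batch, mask

# Two hypothetical videos with 3 and 2 frames, 4 features per frame.
videos = [
    [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], [0.9, 1.0, 1.1, 1.2]],
    [[0.2, 0.4, 0.6, 0.8], [1.0, 1.2, 1.4, 1.6]],
]
batch, mask = pre_pad(videos)
print(batch.shape)  # (2, 3, 4)
print(mask[1])      # [False  True  True]
```

In a Keras-style pipeline, the padded batch would typically be fed through a `Masking` layer (with the same `pad_value`) ahead of the stacked LSTM layers, so padded timesteps do not contribute to the learned representation.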
Pages: 6