Saliency-Guided No-Reference Omnidirectional Image Quality Assessment via Scene Content Perceiving

Cited by: 0
Authors
Zhang, Youzhi [1 ]
Wan, Lifei [2 ]
Liu, Deyang [1 ,3 ,4 ]
Zhou, Xiaofei [5 ]
An, Ping [6 ]
Shan, Caifeng [3 ,7 ]
Affiliations
[1] Anqing Normal Univ, Sch Comp & Informat, Anqing 246000, Peoples R China
[2] Ningbo Univ, Fac Informat Sci & Engn, Ningbo 315211, Peoples R China
[3] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Peoples R China
[4] Anhui Normal Univ, Anhui Prov Key Lab Network & Informat Secur, Wuhu 241002, Anhui, Peoples R China
[5] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310061, Peoples R China
[6] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[7] Nanjing Univ, Sch Intelligence Sci & Technol, Nanjing 210023, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Distortion; Feature extraction; Visualization; Measurement; Semantics; Quality assessment; Indexes; Virtual reality; Three-dimensional displays; Human visual system (HVS); hypernetwork; image no-reference quality assessment; omnidirectional images (OIs); saliency map; Transformer; CNN;
DOI
10.1109/TIM.2024.3485447
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline codes
0808; 0809;
Abstract
Due to the widespread application of the virtual reality (VR) technique, the omnidirectional image (OI) has attracted remarkable attention from both academia and industry. In contrast to a natural 2-D image, an OI contains 360° × 180° panoramic content, which presents great challenges for no-reference quality assessment. In this article, we propose a saliency-guided no-reference OI quality assessment (OIQA) method based on scene content understanding. Inspired by the fact that humans use hierarchical representations to grade images, we extract multiscale features from each projected viewport. Then, we integrate texture removal and background detection techniques to obtain the corresponding saliency map of each viewport, which is subsequently utilized to guide the multiscale feature fusion from low-level features to high-level ones. Furthermore, motivated by the human way of understanding content, we leverage a self-attention-based Transformer to build nonlocal mutual dependencies to perceive the variations of distortion and scene in each viewport. Moreover, we also propose a content perception hypernetwork to adaptively generate weights and biases for the quality regressor, which is conducive to understanding the scene content and learning the perception rule for the quality assessment procedure. Comprehensive experiments validate that the proposed method achieves competitive performance on two publicly available databases. The code is publicly available at https://github.com/ldyorchid/SCP-OIQA.
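The hypernetwork idea mentioned in the abstract can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: all layer shapes, the single-hidden-layer regressor, and the fixed random hypernetwork parameters (which would be learned in practice) are assumptions for demonstration. A content feature vector is mapped to the weights and biases of a small regressor, which then scores a quality feature vector, so the regression rule adapts to scene content.

```python
import numpy as np

# Hypothetical sketch of a content-perception hypernetwork:
# the content feature generates the parameters of a tiny
# one-hidden-layer quality regressor. Dimensions are illustrative.
FEAT_DIM, HIDDEN = 8, 4
rng = np.random.default_rng(0)

# Hypernetwork parameters (learned in a real system; random here).
W_w1 = rng.standard_normal((FEAT_DIM, HIDDEN * FEAT_DIM)) * 0.1
W_b1 = rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.1
W_w2 = rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.1
W_b2 = rng.standard_normal((FEAT_DIM, 1)) * 0.1

def predict_quality(content_feat, quality_feat):
    """Generate regressor weights/biases from content_feat,
    then apply the generated regressor to quality_feat."""
    w1 = (content_feat @ W_w1).reshape(HIDDEN, FEAT_DIM)  # hidden-layer weights
    b1 = content_feat @ W_b1                              # hidden-layer bias
    w2 = (content_feat @ W_w2).reshape(1, HIDDEN)         # output weights
    b2 = content_feat @ W_b2                              # output bias
    h = np.maximum(w1 @ quality_feat + b1, 0.0)           # ReLU hidden layer
    return (w2 @ h + b2).item()                           # scalar quality score

feat = rng.standard_normal(FEAT_DIM)
score = predict_quality(feat, feat)  # content-adaptive quality score
```

Because the regressor's parameters are regenerated per image, two images with identical distortion features but different scene content can receive different scores, which is the stated motivation for the hypernetwork branch.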
Pages: 15