Saliency-Guided No-Reference Omnidirectional Image Quality Assessment via Scene Content Perceiving

Cited by: 0
Authors
Zhang, Youzhi [1 ]
Wan, Lifei [2 ]
Liu, Deyang [1 ,3 ,4 ]
Zhou, Xiaofei [5 ]
An, Ping [6 ]
Shan, Caifeng [3 ,7 ]
Affiliations
[1] Anqing Normal Univ, Sch Comp & Informat, Anqing 246000, Peoples R China
[2] Ningbo Univ, Fac Informat Sci & Engn, Ningbo 315211, Peoples R China
[3] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Peoples R China
[4] Anhui Normal Univ, Anhui Prov Key Lab Network & Informat Secur, Wuhu 241002, Anhui, Peoples R China
[5] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310061, Peoples R China
[6] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[7] Nanjing Univ, Sch Intelligence Sci & Technol, Nanjing 210023, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Distortion; Feature extraction; Visualization; Measurement; Semantics; Quality assessment; Indexes; Virtual reality; Three-dimensional displays; Human visual system (HVS); hypernetwork; image no-reference quality assessment; omnidirectional images (OIs); saliency map; Transformer; CNN;
DOI
10.1109/TIM.2024.3485447
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Due to the widespread application of virtual reality (VR) technology, omnidirectional images (OIs) have attracted considerable attention from both academia and industry. In contrast to a natural 2-D image, an OI contains 360° × 180° panoramic content, which poses great challenges for no-reference quality assessment. In this article, we propose a saliency-guided no-reference OI quality assessment (OIQA) method based on scene content understanding. Inspired by the fact that humans use hierarchical representations to grade images, we extract multiscale features from each projected viewport. We then integrate texture removal and background detection techniques to obtain the saliency map of each viewport, which is subsequently used to guide multiscale feature fusion from low-level to high-level features. Furthermore, motivated by the way humans understand content, we leverage a self-attention-based Transformer to build nonlocal mutual dependencies that perceive variations of distortion and scene in each viewport. Moreover, we propose a content-perception hypernetwork that adaptively returns the weights and biases of the quality regressor, which helps the model understand the scene content and learn the perception rule for the quality assessment procedure. Comprehensive experiments validate that the proposed method achieves competitive performance on two publicly available databases. The code is publicly available at https://github.com/ldyorchid/SCP-OIQA.
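To make the hypernetwork idea from the abstract concrete, the following is a minimal, illustrative sketch in plain Python, not the authors' implementation: a small fixed network takes a content feature vector and returns the weights and bias of a linear quality regressor, so the regression rule adapts to scene content. All layer sizes, the feature dimension, and the random initialization here are hypothetical choices for illustration.

```python
# Sketch of a content-perception hypernetwork (hypothetical sizes/init):
# a fixed linear "hypernetwork" maps a content feature vector to the
# parameters (weights + bias) of a content-adaptive quality regressor.
import random

random.seed(0)

FEAT_DIM = 8  # dimension of the viewport feature vector (assumed)

def make_linear(n_in, n_out):
    """Randomly initialized weight matrix (n_out x n_in) and zero bias."""
    w = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def linear(x, w, b):
    """Plain affine map: y_i = w_i . x + b_i."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

# Hypernetwork: its output is reshaped into the FEAT_DIM weights
# plus 1 bias of the quality regressor.
HYPER_W, HYPER_B = make_linear(FEAT_DIM, FEAT_DIM + 1)

def predict_quality(content_feat, quality_feat):
    # 1) Hypernetwork generates regressor parameters from content.
    params = linear(content_feat, HYPER_W, HYPER_B)
    w_reg, b_reg = params[:FEAT_DIM], params[FEAT_DIM]
    # 2) Content-conditioned regressor scores the quality features.
    return sum(w * f for w, f in zip(w_reg, quality_feat)) + b_reg

feat = [0.1 * i for i in range(FEAT_DIM)]
print(predict_quality(feat, feat))
```

In the paper's pipeline the content and quality features would come from the saliency-guided multiscale backbone; here both are stand-in vectors, and the point is only the parameter-generation pattern.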
Pages: 15
Related Papers
50 records
  • [41] CMDC-PCQA: No-Reference Point Cloud Quality Assessment via a Cross-Modal Deep-Coupling Framework
    Wu, Biqi
    Shang, Xiwu
    Zhao, Xiaoli
    Shen, Liquan
    An, Ping
    Ma, Siwei
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74
  • [42] StyleAM: Perception-Oriented Unsupervised Domain Adaption for No-Reference Image Quality Assessment
    Lu, Yiting
    Li, Xin
    Liu, Jianzhao
    Chen, Zhibo
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 2043 - 2058
  • [43] Reduced Reference Stereoscopic Image Quality Assessment Using Sparse Representation and Natural Scene Statistics
    Wan, Zhaolin
    Gu, Ke
    Zhao, Debin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (08) : 2024 - 2037
  • [44] No-reference image quality assessment based on sparse representation
    Yang, Xichen
    Sun, Quansen
    Wang, Tianshu
    NEURAL COMPUTING & APPLICATIONS, 2019, 31 (10) : 6643 - 6658
  • [46] Hybrid NSS features for no-reference image quality assessment
    Qin, Min
    Lv, Xiaoxin
    Chen, Xiaohui
    Wang, Weidong
    IET IMAGE PROCESSING, 2017, 11 (06) : 443 - 449
  • [47] Diffusion Model-Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment
    Wang, Zhaoyang
    Hu, Bo
    Zhang, Mingyang
    Li, Jie
    Li, Leida
    Gong, Maoguo
    Gao, Xinbo
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 263 - 278
  • [48] Feature rectification and enhancement for no-reference image quality assessment
    Wu, Wei
    Huang, Daoquan
    Yao, Yang
    Shen, Zhuonan
    Zhang, Hua
    Yan, Chenggang
    Zheng, Bolun
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 98
  • [49] A No-Reference Image Quality Assessment For Detecting Illumination Alteration
    Tafran, Cerine
    El-Abed, Mohamad
    Elkabani, Islam
    Osman, Ziad
    2019 10TH INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION SYSTEMS (ICICS), 2019, : 94 - 98
  • [50] No-reference image quality assessment by using convolutional neural networks via object detection
    Cao, Jingchao
    Wu, Wenhui
    Wang, Ran
    Kwong, Sam
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (11) : 3543 - 3554