MaD-DLS: Mean and Deviation of Deep and Local Similarity for Image Quality Assessment

Citations: 50
Authors
Sim, Kyohoon [1 ]
Yang, Jiachen [1 ]
Lu, Wen [2 ]
Gao, Xinbo [3 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[3] Xidian Univ, Sch Elect Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Visualization; Distortion; Image quality; Convolution; Standards; Neurons; Image quality assessment; deep feature map; weighted mean pooling; standard deviation pooling; STRUCTURAL SIMILARITY; INFORMATION; INDEX;
DOI
10.1109/TMM.2020.3037482
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
When the human visual system (HVS) looks at a scene, it extracts various features from the image to understand it. The extracted features are compared with stored memories of analogous scenes to judge their similarity [1]. By analyzing this similarity, the HVS understands the scene presented to the eyes. On this neurobiological basis, we propose a 2D full-reference (FR) image quality assessment (IQA) method, named mean and deviation of deep and local similarity (MaD-DLS), which compares the similarity between original and distorted deep feature maps extracted by convolutional neural networks (CNNs). Although MaD-DLS builds on deep learning, it requires no training because it uses only the convolutional layers of a pre-trained model. To pool the local quality scores within a deep similarity map, we employ two important descriptive statistics, the (weighted) mean and the standard deviation, and name the scheme mean and deviation (MaD) pooling. Each statistic has a physical meaning: the weighted mean reflects the effect of visual saliency on quality, whereas the standard deviation reflects the effect of the distortion distribution within the image. Experimental results show that MaD-DLS is superior or competitive to existing methods and that MaD pooling is effective. The MATLAB source code of MaD-DLS will be made available online.
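The pooling idea described in the abstract can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the exact similarity formula, the saliency weighting, and the way mean and deviation are combined (here a hypothetical parameter `alpha`) are assumptions; the paper's released MATLAB code is authoritative.

```python
import numpy as np

def local_similarity(f_ref, f_dist, c=1e-4):
    # Pixel-wise similarity between a reference and a distorted feature map.
    # An SSIM-style ratio is assumed here; the paper's formula may differ.
    return (2.0 * f_ref * f_dist + c) / (f_ref ** 2 + f_dist ** 2 + c)

def mad_pooling(sim_map, weights=None, alpha=1.0):
    # Mean-and-Deviation (MaD) pooling: the weighted mean models the effect
    # of visual saliency, while the standard deviation models how unevenly
    # the distortion is distributed across the image.
    if weights is None:
        weights = np.ones_like(sim_map)  # uniform saliency as a placeholder
    w = weights / weights.sum()
    mean = float((w * sim_map).sum())
    std = float(np.sqrt((w * (sim_map - mean) ** 2).sum()))
    # Combination rule is an assumption: higher mean similarity raises the
    # score, larger deviation (patchy distortion) lowers it.
    return mean - alpha * std
```

For an undistorted image the similarity map is all ones, so the weighted mean is 1, the deviation is 0, and the pooled score reaches its maximum of 1.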
Pages: 4037-4048 (12 pages)
Related Papers
50 records in total
  • [21] Edge Strength Similarity for Image Quality Assessment
    Zhang, Xuande
    Feng, Xiangchu
    Wang, Weiwei
    Xue, Wufeng
    IEEE SIGNAL PROCESSING LETTERS, 2013, 20 (04) : 319 - 322
  • [22] Efficient image structural similarity quality assessment method using image regularised feature
    Li, Yajing
    Huang, Baoxiang
    Yang, Huan
    Hou, Guojia
    Zhang, Pengfei
    Duan, Jinming
    IET IMAGE PROCESSING, 2020, 14 (16) : 4401 - 4411
  • [23] Perceptual image quality assessment based on structural similarity and visual masking
    Fei, Xuan
    Xiao, Liang
    Sun, Yubao
    Wei, Zhihui
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2012, 27 (07) : 772 - 783
  • [24] Perceptual image quality assessment using phase deviation sensitive energy features
    Saha, Ashirbani
    Wu, Q. M. Jonathan
    SIGNAL PROCESSING, 2013, 93 (11) : 3182 - 3191
  • [25] Blind Dehazed Image Quality Assessment: A Deep CNN-Based Approach
    Lv, Xiao
    Xiang, Tao
    Yang, Ying
    Liu, Hantao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 9410 - 9424
  • [26] Color Image Quality Assessment Based on Structural Similarity
    卢芳芳
    赵群飞
    杨根科
    Journal of Donghua University(English Edition), 2010, 27 (04) : 443 - 450
  • [27] Image quality assessment based on the perceived structural similarity index of an image
    Yao, Juncai
    Shen, Jing
    Yao, Congying
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2023, 20 (05) : 9385 - 9409
  • [28] Full-reference image quality assessment by combining global and local distortion measures
    Saha, Ashirbani
    Wu, Q. M. Jonathan
    SIGNAL PROCESSING, 2016, 128 : 186 - 197
  • [29] Deep perceptual similarity and Quality Assessment
    2023 6TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION AND IMAGE ANALYSIS, IPRIA, 2023,
  • [30] Deep Virtual Reality Image Quality Assessment With Human Perception Guider for Omnidirectional Image
    Kim, Hak Gu
    Lim, Heoun-Taek
    Ro, Yong Man
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (04) : 917 - 928