Multiple hallucinated deep network for image quality assessment

Cited: 0
Authors
Javidian, Z. [1 ]
Hashemi, S. [1 ]
Fard, S. M. Hazrati [1 ]
Affiliations
[1] Shiraz Univ, Dept Comp Sci & Engn, Molla Sadra Ave, Shiraz, Iran
Keywords
Image quality assessment; Deep learning; Generative adversarial network; Distribution alignment
DOI
10.24200/sci.2022.59243.6134
Chinese Library Classification
T [Industrial technology]
Discipline code
08
Abstract
Image Quality Assessment (IQA) refers to the quantitative evaluation of human perception of distorted image quality. Blind IQA (BIQA) is a type of IQA that does not use any reference image or information about the distortion. Since the human brain has no information about the distortion type, BIQA is more reliable and better matched to real-world conditions. Traditional methods in this realm used expert knowledge, such as Natural Scene Statistics (NSS), to measure the distance of a distorted image from the distribution of pristine samples. In recent years, many deep learning-based IQA methods have been proposed to exploit the ability of deep models to extract features automatically. However, the main challenge of these models is the need for many annotated training samples. In this paper, inspired by the Human Visual System (HVS), a Generative Adversarial Network (GAN)-based approach was proposed to address this problem. To this end, multiple images were sampled from a submanifold of the pristine data manifold by conditioning the network on the corresponding distorted image. In addition, NSS features were employed to improve network training and keep the training process on the right track. Test results of the proposed method on three datasets confirmed its superiority over other state-of-the-art methods. (c) 2023 Sharif University of Technology. All rights reserved.
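The NSS features mentioned in the abstract are commonly realized as mean-subtracted contrast-normalized (MSCN) coefficients, whose distribution is close to Gaussian for pristine images and deviates from it under distortion. Below is a minimal NumPy sketch of this generic computation; it assumes a grayscale image with values in [0, 1], and it uses a uniform local window as a simplification of the usual Gaussian weighting. The paper's exact feature pipeline is not specified here, so the function names and parameters are illustrative only.

```python
import numpy as np

def local_stats(img, k=3):
    """Local mean and std over a uniform k x k window (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    # Stack all k*k shifted views of the padded image, then reduce.
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(k) for j in range(k)])
    return windows.mean(axis=0), windows.std(axis=0)

def mscn(img, C=1.0 / 255.0, k=3):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image.

    C is a small constant that stabilizes division in flat regions.
    """
    mu, sigma = local_stats(img, k)
    return (img - mu) / (sigma + C)
```

For a perfectly flat image the coefficients are all zero, since each pixel equals its local mean; distortion-discriminative statistics (e.g., fitted generalized-Gaussian shape parameters) are then computed from the empirical distribution of these coefficients.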
Pages: 492 - 505
Number of pages: 14
Related papers
50 total
  • [41] Anatomical Feature-Based Lung Ultrasound Image Quality Assessment Using Deep Convolutional Neural Network
    Ravishankar, Surya M.
    Tsumura, Ryosuke
    Hardin, John W.
    Hoffmann, Beatrice
    Zhang, Ziming
    Zhang, Haichong K.
    INTERNATIONAL ULTRASONICS SYMPOSIUM (IEEE IUS 2021), 2021,
  • [42] Cone Beam Computed Tomography Image Quality Improvement Using a Deep Convolutional Neural Network
    Kida, Satoshi
    Nakamoto, Takahiro
    Nakano, Masahiro
    Nawa, Kanabu
    Haga, Akihiro
    Kotoku, Jun'ichi
    Yamashita, Hideomi
    Nakagawa, Keiichi
    CUREUS, 2018, 10 (04):
  • [43] Image quality assessment based on adaptive multiple Skyline query
    He, Siyuan
    Liu, Zezheng
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2020, 80
  • [44] Image Recognition and Safety Risk Assessment of Traffic Sign Based on Deep Convolution Neural Network
    Chen, Rui
    Hei, Lei
    Lai, Yi
    IEEE ACCESS, 2020, 8 (08): 201799 - 201805
  • [45] A deep learning method for medical image quality assessment based on phase congruency and radiomics features
    Zhang, Xinhong
    Zhao, Jiayin
    Zhang, Fan
    Chen, Xiaopan
    OPTICS AND LASERS IN ENGINEERING, 2025, 186
  • [46] Image quality assessment for advertising applications based on neural network
    Fong, Cher-Min
    Wang, Hui-Wen
    Kuo, Chien-Hung
    Hsieh, Pei-Chun
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 63
  • [47] Assessment of diagnostic image quality of computed tomography (CT) images of the lung using deep learning
    Lee, John H.
    Grant, Byron R.
    Chung, Jonathan H.
    Reiser, Ingrid
    Giger, Maryellen
    MEDICAL IMAGING 2018: PHYSICS OF MEDICAL IMAGING, 2018, 10573
  • [48] A multimodal dense convolution network for blind image quality assessment
    Chockalingam, Nandhini
    Murugan, Brindha
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2023, 24 (11) : 1601 - 1615
  • [49] Water quality assessment for aquaculture using deep neural network
    Arabelli, Rajeshwarrao
    Bernatin, T.
    Veeramsetty, Venkataramana
    DESALINATION AND WATER TREATMENT, 2025, 321
  • [50] Visual Interaction Perceptual Network for Blind Image Quality Assessment
    Wang, Xiaoqi
    Xiong, Jian
    Lin, Weisi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8958 - 8971