Exploring Vulnerabilities of No-Reference Image Quality Assessment Models: A Query-Based Black-Box Method

Cited by: 0
Authors
Yang, Chenxi [1 ]
Liu, Yujia [2 ]
Li, Dingquan [3 ]
Jiang, Tingting [2 ]
Affiliations
[1] Peking Univ, Natl Engn Res Ctr Visual Technol, Sch Math Sci, Sch Comp Sci, State Key Lab Multimedia Informat Proc, Beijing, Peoples R China
[2] Peking Univ, Natl Engn Res Ctr Visual Technol, Sch Comp Sci, State Key Lab Multimedia Informat Proc, Beijing, Peoples R China
[3] Pengcheng Lab, Dept Networked Intelligence, Shenzhen, Peoples R China
Keywords
Closed box; Perturbation methods; Task analysis; Image quality; Robustness; Computational modeling; Glass box; No-reference image quality assessment; black-box attack; query-based attack; robustness; NOTICEABLE-DIFFERENCE; EDGE;
DOI
10.1109/TCSVT.2024.3435865
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Code
0808; 0809
Abstract
No-Reference Image Quality Assessment (NR-IQA) aims to predict image quality scores consistent with human perception without relying on pristine reference images, and it serves as a crucial component in various visual tasks. Ensuring the robustness of NR-IQA methods is vital for reliable comparisons of image processing techniques and for consistent user experiences in recommendation systems. Attack methods provide a powerful instrument for testing this robustness. However, current attacks on NR-IQA rely heavily on the gradient of the NR-IQA model, which limits their applicability when gradient information is unavailable. In this paper, we present a pioneering query-based black-box attack against NR-IQA methods. We propose the concept of the score boundary and leverage an adaptive iterative approach with multiple score boundaries. Moreover, the initial attack directions are designed to exploit characteristics of the Human Visual System (HVS). Experiments show that our method outperforms all compared state-of-the-art attacks and substantially surpasses previous black-box methods. When attacked by our method, the effective NR-IQA model DBCNN suffers a decline of 0.6381 in Spearman's rank-order correlation coefficient (SROCC), revealing the vulnerability of NR-IQA models to black-box attacks. The proposed attack also provides a potent tool for further exploration of NR-IQA robustness.
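The abstract describes the attack only at a high level. As a rough illustration of the query-based (score-only) setting it operates in, the sketch below shows a generic greedy pixel-perturbation loop in the spirit of simple black-box attacks such as SimBA [9], together with the SROCC drop used to report robustness. The scorer nr_iqa_score, the step size eps, and the query budget are illustrative placeholders; this is not the paper's score-boundary method.

# Minimal sketch, assuming a score-only NR-IQA model queryable as a Python
# callable on an image in [0, 1]; not the paper's score-boundary algorithm.
import numpy as np
from scipy.stats import spearmanr

def query_attack(image, nr_iqa_score, queries=1000, eps=2.0 / 255):
    """Greedily perturb single pixels, keeping changes that push the
    predicted quality score furthest from its clean value."""
    x = image.astype(np.float64).copy()
    clean = nr_iqa_score(x)                        # one query for the clean score
    best_dev = 0.0
    for idx in np.random.permutation(x.size)[:queries]:
        for sign in (eps, -eps):
            cand = x.copy()
            cand.flat[idx] = np.clip(cand.flat[idx] + sign, 0.0, 1.0)
            dev = abs(nr_iqa_score(cand) - clean)  # one query per candidate
            if dev > best_dev:                     # keep the more damaging pixel change
                best_dev, x = dev, cand
                break
    return x

def srocc_drop(scores_clean, scores_attacked, mos):
    """SROCC against human opinion scores before vs. after the attack --
    the robustness measure quoted in the abstract."""
    before = spearmanr(mos, scores_clean).correlation
    after = spearmanr(mos, scores_attacked).correlation
    return before - after

In this setting the attacker observes only predicted scores, so each candidate perturbation costs one model query; the paper's score boundaries and HVS-guided initial directions are aimed at making such query budgets far more effective than this naive loop.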
Pages: 12715-12729
Page count: 15
Related References
55 references in total
  • [1] Su S., Et al., Blindly assess image quality in the wild guided by a self-adaptive hyper network, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 3667-3676, (2020)
  • [2] Zhang W., Ma K., Yan J., Deng D., Wang Z., Blind image quality assessment using a deep bilinear convolutional neural network, IEEE Trans. Circuits Syst. Video Technol., 30, 1, pp. 36-47, (2020)
  • [3] Gu J., Cai H., Chen H., Ye X., Ren J., Dong C., Image quality assessment for perceptual image restoration: A new dataset, benchmark and metric, (2020)
  • [4] Deng Y., Chen K., Image quality analysis for searches, (2014)
  • [5] Zhang W., Et al., Perceptual attacks of no-reference image quality models with human-in-the-loop, Proc. 36th Conf. Neural Inf. Process. Syst., pp. 2916-2929, (2022)
  • [6] Shumitskaya E., Antsiferova A., Vatolin D.S., Universal perturbation attack on differentiable no-reference image- and video-quality metrics, Proc. Brit. Mach. Vis. Conf., pp. 1-12, (2022)
  • [7] Korhonen J., You J., Adversarial attacks against blind image quality assessment models, Proc. 2nd Workshop Quality Exper. Vis. Multimedia Appl., pp. 3-11, (2022)
  • [8] Sang Q., Zhang H., Liu L., Wu X., Bovik A.C., On the generation of adversarial examples for image quality assessment, Vis. Comput., 40, 5, pp. 3183-3198, (2024)
  • [9] Guo C., Gardner J., You Y., Wilson A.G., Weinberger K., Simple black-box adversarial attacks, Proc. Int. Conf. Mach. Learn., pp. 2484-2493, (2019)
  • [10] Li X.-C., Zhang X.-Y., Yin F., Liu C.-L., Decision-based adversarial attack with frequency mixup, IEEE Trans. Inf. Forensics Security, 17, pp. 1038-1052, (2022)