Black-box adversarial attacks against image quality assessment models

Cited: 7
Authors
Ran, Yu [1 ]
Zhang, Ao-Xiang [1 ]
Li, Mingjie [1 ]
Tang, Weixuan [2 ]
Wang, Yuan-Gen [2 ]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
[2] Guangzhou Univ, Inst Artificial Intelligence, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image quality assessment; Adversarial attack; Black-box attack;
DOI
10.1016/j.eswa.2024.125415
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation. However, the vulnerability of NR-IQA models to adversarial attacks has not been thoroughly studied for model refinement. This paper investigates potential loopholes of NR-IQA models via black-box adversarial attacks. Specifically, we first formulate the attack problem as maximizing the deviation between the estimated quality scores of the original and perturbed images, while restricting the perturbed image's distortion to preserve visual quality. Under this formulation, we then design a bi-directional loss function to mislead the estimated quality scores of adversarial examples in the opposite direction with maximum deviation. On this basis, we finally develop an efficient and effective black-box attack method for NR-IQA models based on a random-search paradigm. Comprehensive experiments on three benchmark datasets show that all evaluated NR-IQA models are significantly vulnerable to the proposed attack method. After being attacked, the victim models' average change rates in terms of two well-known IQA performance metrics reach 97% and 101%, respectively. In addition, our attack method outperforms a recently introduced black-box attack approach on IQA models. We also observe that the generated perturbations are not transferable, which points to a new research direction for the NR-IQA community. The source code is available at https://github.com/GZHU-DVL/AttackIQA.
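The attack pipeline the abstract describes (query the black-box NR-IQA model, push its score toward the opposite end of the quality scale under a perturbation budget, search by random proposals) can be sketched as a toy implementation. Everything here is an assumption for illustration, not the authors' code: the 0-100 score range, the exact form of the bi-directional loss, the square-patch proposal, and the names `bidirectional_loss` and `random_search_attack` are all hypothetical; the real method lives in the linked repository.

```python
import numpy as np

def bidirectional_loss(score, score_orig, score_range=(0.0, 100.0)):
    """Drive the prediction toward the end of the score range opposite
    to the clean prediction (illustrative reading of the paper's
    bi-directional idea, not the authors' exact loss)."""
    lo, hi = score_range
    # target the far end: low scores are pushed high, high scores pushed low
    target = lo if score_orig - lo > hi - score_orig else hi
    return abs(score - target)  # minimizing moves the score toward target

def random_search_attack(model, image, eps=8 / 255, iters=1000, seed=0):
    """Query-based random search under an L_inf budget of eps.
    `model` maps an HxWxC float image in [0, 1] to a scalar quality score;
    each candidate is evaluated with one black-box query."""
    rng = np.random.default_rng(seed)
    score_orig = model(image)
    # start from a random full-image sign perturbation
    best = np.clip(image + eps * rng.choice([-1.0, 1.0], size=image.shape), 0, 1)
    best_loss = bidirectional_loss(model(best), score_orig)
    h, w, c = image.shape
    s = max(1, int(0.1 * min(h, w)))  # side length of the proposal patch
    for _ in range(iters):
        cand = best.copy()
        # re-randomize one square patch relative to the ORIGINAL image,
        # so the perturbation never exceeds eps anywhere
        y = rng.integers(0, h - s + 1)
        x = rng.integers(0, w - s + 1)
        signs = rng.choice([-1.0, 1.0], size=(1, 1, c))  # one sign per channel
        cand[y:y + s, x:x + s] = np.clip(
            image[y:y + s, x:x + s] + eps * signs, 0, 1)
        loss = bidirectional_loss(model(cand), score_orig)
        if loss < best_loss:  # greedy: keep only improving candidates
            best, best_loss = cand, loss
    return best
```

The greedy accept/reject step needs no gradients, which is what makes the attack black-box: the score deviation is steered purely through queries to the victim model.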
Pages: 11