Black-box adversarial attacks against image quality assessment models

Cited: 7
Authors
Ran, Yu [1 ]
Zhang, Ao-Xiang [1 ]
Li, Mingjie [1 ]
Tang, Weixuan [2 ]
Wang, Yuan-Gen [2 ]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
[2] Guangzhou Univ, Inst Artificial Intelligence, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image quality assessment; Adversarial attack; Black-box attack;
DOI
10.1016/j.eswa.2024.125415
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
The problem of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation. However, the vulnerability of NR-IQA models to adversarial attacks has not been thoroughly studied for model refinement. This paper investigates the potential loopholes of NR-IQA models via black-box adversarial attacks. Specifically, we first formulate the attack problem as maximizing the deviation between the estimated quality scores of the original and perturbed images, while restricting the distortion of the perturbed image to preserve visual quality. Under this formulation, we then design a Bi-directional loss function to mislead the estimated quality scores of adversarial examples in the opposite direction with maximum deviation. On this basis, we finally develop an efficient and effective black-box attack method for NR-IQA models based on a random-search paradigm. Comprehensive experiments on three benchmark datasets show that all evaluated NR-IQA models are significantly vulnerable to the proposed attack. After being attacked, the average change rates of two well-known IQA performance metrics for the victim models reach 97% and 101%, respectively. In addition, our attack method outperforms a recently introduced black-box attack on IQA models. We also observe that the generated perturbations are not transferable, which points to a new research direction for the NR-IQA community. The source code is available at https://github.com/GZHU-DVL/AttackIQA.
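The abstract describes the attack as maximizing the score deviation |f(x') - f(x)| between the original image x and the perturbed image x', subject to a distortion constraint, with a random-search procedure driving the queries and a Bi-directional loss pushing the score toward the opposite end of the quality range. The sketch below illustrates that idea under stated assumptions: an L-infinity budget `eps`, a square-patch proposal scheme, and a score range of [0, 100]. The function names, the `model` interface, and all defaults are illustrative, not the authors' released implementation (see the linked repository for that).

```python
import numpy as np

def bidirectional_target(score, lo=0.0, hi=100.0):
    # Pick the far end of the score range so the prediction is pushed
    # in the opposite direction (an illustrative reading of the paper's
    # Bi-directional loss, not the authors' exact formulation).
    return hi if score < 0.5 * (lo + hi) else lo

def random_search_attack(model, x, eps=8 / 255, iters=1000, patch=16, seed=0):
    # Score-based black-box attack via random search under an L_inf budget.
    # `model` is assumed to map an image in [0, 1] with shape (H, W, C)
    # to a scalar quality score; one model query is spent per proposal.
    rng = np.random.default_rng(seed)
    target = bidirectional_target(model(x))
    best = x.copy()
    best_gap = abs(model(x) - target)
    h, w, c = x.shape
    for _ in range(iters):
        # Propose a random square patch shifted by +/- eps per channel.
        i = rng.integers(0, h - patch + 1)
        j = rng.integers(0, w - patch + 1)
        cand = best.copy()
        cand[i:i + patch, j:j + patch] += rng.choice([-eps, eps], size=(1, 1, c))
        # Project back into the L_inf ball around x and the valid pixel range.
        cand = np.clip(np.clip(cand, x - eps, x + eps), 0.0, 1.0)
        gap = abs(model(cand) - target)
        if gap < best_gap:  # greedy: keep only score-improving proposals
            best, best_gap = cand, gap
    return best

# Usage with a stand-in "model" (mean brightness scaled to [0, 100]):
if __name__ == "__main__":
    fake_model = lambda img: 100.0 * float(img.mean())
    x = np.random.default_rng(1).random((64, 64, 3))
    x_adv = random_search_attack(fake_model, x, iters=200)
    print(fake_model(x), fake_model(x_adv))
```

Because only the model's scalar output is consulted, the procedure needs no gradients, which is what makes it a black-box attack in the paper's sense.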
Pages: 11
Related papers
50 records in total
[21]   AKD: Using Adversarial Knowledge Distillation to Achieve Black-box Attacks [J].
Lian, Xin ;
Huang, Zhiqiu ;
Wang, Chao .
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
[22]   Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence [J].
Hong, Hanbin ;
Zhang, Xinyu ;
Wang, Binghui ;
Ba, Zhongjie ;
Hong, Yuan .
PROCEEDINGS OF THE 2024 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2024, 2024, :600-614
[23]   Data reduction for black-box adversarial attacks against deep neural networks based on side-channel attacks [J].
Zhou, Hanxun ;
Liu, Zhihui ;
Hu, Yufeng ;
Zhang, Shuo ;
Kang, Longyu ;
Feng, Yong ;
Wang, Yan ;
Guo, Wei ;
Zou, Cliff C. .
COMPUTERS & SECURITY, 2025, 153
[24]   Automatic Selection Attacks Framework for Hard Label Black-Box Models [J].
Liu, Xiaolei ;
Li, Xiaoyu ;
Zheng, Desheng ;
Bai, Jiayu ;
Peng, Yu ;
Zhang, Shibin .
IEEE INFOCOM 2022 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2022,
[25]   Black box phase-based adversarial attacks on image classifiers [J].
Hodes, Scott G. ;
Blose, Kory J. ;
Kane, Timothy J. .
JOURNAL OF ELECTRONIC IMAGING, 2025, 34 (01)
[26]   Black box phase-based adversarial attacks on image classifiers [J].
Hodes, Scott G. ;
Blose, Kory J. ;
Kane, Timothy J. .
AUTOMATIC TARGET RECOGNITION XXXIV, 2024, 13039
[27]   Query-Efficient Black-Box Adversarial Attacks on Automatic Speech Recognition [J].
Tong, Chuxuan ;
Zheng, Xi ;
Li, Jianhua ;
Ma, Xingjun ;
Gao, Longxiang ;
Xiang, Yong .
IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 :3981-3992
[28]   Black-box adversarial attacks through speech distortion for speech emotion recognition [J].
Gao, Jinxing ;
Yan, Diqun ;
Dong, Mingyu .
EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2022, 2022 (01)
[29]   Efficient black-box adversarial attacks via alternate query and boundary augmentation [J].
Pi, Jiatian ;
Wen, Fusen ;
Xia, Fen ;
Jiang, Ning ;
Wu, Haiying ;
Liu, Qiao .
KNOWLEDGE-BASED SYSTEMS, 2025, 319
[30]   Black-Box Adversarial Attacks on Spiking Neural Network for Time Series Data [J].
Hutchins, Jack ;
Ferrer, Diego ;
Fillers, James ;
Schuman, Catherine .
2024 INTERNATIONAL CONFERENCE ON NEUROMORPHIC SYSTEMS, ICONS, 2024, :229-233