Black-box adversarial attacks against image quality assessment models

Cited by: 8
Authors
Ran, Yu [1 ]
Zhang, Ao-Xiang [1 ]
Li, Mingjie [1 ]
Tang, Weixuan [2 ]
Wang, Yuan-Gen [2 ]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
[2] Guangzhou Univ, Inst Artificial Intelligence, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image quality assessment; Adversarial attack; Black-box attack;
DOI
10.1016/j.eswa.2024.125415
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image consistently with its subjective evaluation. However, the vulnerability of NR-IQA models to adversarial attacks has not been thoroughly studied as a basis for model refinement. This paper investigates potential loopholes of NR-IQA models via black-box adversarial attacks. Specifically, we first formulate the attack problem as maximizing the deviation between the estimated quality scores of the original and perturbed images, while restricting the distortions of the perturbed image to preserve its visual quality. Under this formulation, we then design a Bi-directional loss function that misleads the estimated quality scores of adversarial examples in the opposite direction with maximum deviation. On this basis, we finally develop an efficient and effective black-box attack method for NR-IQA models based on a random search paradigm. Comprehensive experiments on three benchmark datasets show that all evaluated NR-IQA models are significantly vulnerable to the proposed method: after the attack, the average change rates of two well-known IQA performance metrics on the victim models reach 97% and 101%, respectively. Our attack method also outperforms a newly introduced black-box attack approach for IQA models. In addition, we observe that the generated perturbations are not transferable across models, which points to a new research direction for the NR-IQA community. The source code is available at https://github.com/GZHU-DVL/AttackIQA.
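The abstract describes the method only at a high level. The Python sketch below is a rough illustration (not the authors' released code) of how a Square-Attack-style random search could drive an NR-IQA prediction away from its original value without gradient access; the scoring function score_fn, the [0, 100] score range, the patch-resampling scheme, and this particular reading of the "Bi-directional" loss are all assumptions made for illustration.

import numpy as np

def bidirectional_loss(adv_score, orig_score, score_lo=0.0, score_hi=100.0):
    # Hypothetical reading of the Bi-directional loss: push the predicted
    # score toward whichever end of the scale is farther from the original
    # prediction, so the achievable deviation is maximized.
    direction = 1.0 if (score_hi - orig_score) >= (orig_score - score_lo) else -1.0
    return direction * (adv_score - orig_score)

def random_search_attack(score_fn, x, eps=8 / 255, n_iters=1000, patch=16, seed=0):
    # Square-Attack-style random search: repeatedly resample one random
    # square patch of the perturbation at +/-eps per channel and keep the
    # candidate only if the loss improves. `score_fn` maps an image array
    # in [0, 1] of shape (H, W, C) to a scalar quality score; only such
    # query access is assumed (black-box setting).
    rng = np.random.default_rng(seed)
    h, w, c = x.shape
    orig_score = score_fn(x)
    # Initialize with a dense +/-eps perturbation, clipped to valid pixels.
    x_adv = np.clip(x + rng.choice([-eps, eps], size=x.shape), 0.0, 1.0)
    best = bidirectional_loss(score_fn(x_adv), orig_score)
    for _ in range(n_iters):
        r = rng.integers(0, h - patch + 1)
        s = rng.integers(0, w - patch + 1)
        cand = x_adv.copy()
        # Resample the patch around the *clean* image so that
        # ||cand - x||_inf <= eps always holds after clipping.
        cand[r:r + patch, s:s + patch] = np.clip(
            x[r:r + patch, s:s + patch]
            + rng.choice([-eps, eps], size=(1, 1, c)),
            0.0, 1.0,
        )
        loss = bidirectional_loss(score_fn(cand), orig_score)
        if loss > best:
            best, x_adv = loss, cand
    return x_adv

The accept-only-if-better loop needs nothing beyond score queries, which matches the query-efficient black-box setting the abstract describes; the L-infinity budget eps and the visual-quality constraint used in the paper may differ from this sketch.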
Pages: 11