Attention-guided transformation-invariant attack for black-box adversarial examples

Cited by: 5
Authors
Zhu, Jiaqi [1 ]
Dai, Feng [2 ]
Yu, Lingyun [1 ,3 ]
Xie, Hongtao [1 ]
Wang, Lidong [4 ]
Wu, Bo [5 ]
Zhang, Yongdong [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, 443 Huangshan Rd, Hefei 230027, Peoples R China
[2] Chinese Acad Sci, Key Lab Intelligent Informat Proc, Beijing, Peoples R China
[3] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
[4] Beijing Radio & TV Stn, Beijing, Peoples R China
[5] MIT-IBM Watson AI Lab, Cambridge, MA USA
Funding
National Natural Science Foundation of China;
Keywords
adversarial examples; attention; media convergence; security; transformation-invariant;
DOI
10.1002/int.22808
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the development of media convergence, information acquisition is no longer limited to traditional media, such as newspapers and television, but increasingly comes from digital media on the Internet, where media content should be supervised by platforms. At present, the media content analysis technology of Internet platforms relies on deep neural networks (DNNs). However, DNNs are vulnerable to adversarial examples, which introduces security risks. It is therefore necessary to thoroughly study the internal mechanism of adversarial examples in order to build more effective supervision models. In practical applications, supervision models mostly face black-box attacks, where the cross-model transferability of adversarial examples has attracted increasing attention. In this paper, to improve the transferability of adversarial examples, we propose an attention-guided transformation-invariant adversarial attack method, which incorporates an attention mechanism to disrupt the most distinctive features while ensuring that the adversarial attack remains invariant under different transformations. Specifically, we dynamically weight the latent features according to an attention mechanism and disrupt them accordingly. Meanwhile, considering the lack of semantics in low-level features, high-level semantics are introduced as spatial guidance so that low-level feature perturbations concentrate on the most discriminative regions. Moreover, since attention heatmaps may vary significantly across models, a transformation-invariant aggregated attack strategy is proposed to alleviate overfitting to the proxy model's attention. Comprehensive experimental results show that the proposed method significantly improves the transferability of adversarial examples.
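To illustrate the general idea described in the abstract, the sketch below shows how attention-weighted feature disruption might be aggregated over random input transformations against a PyTorch proxy model. This is not the authors' implementation: the ResNet-50 proxy, the hooked layer, the Grad-CAM-style channel weights, and the resize-and-pad transformation set are all illustrative assumptions.

    # Minimal, hypothetical sketch of attention-guided, transformation-aggregated
    # feature disruption; assumptions: ResNet-50 proxy, layer2 features,
    # Grad-CAM-style channel weights, resize-and-pad transformations.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    device = "cuda" if torch.cuda.is_available() else "cpu"
    proxy = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()

    # Capture an intermediate feature map via a forward hook.
    features = {}
    proxy.layer2.register_forward_hook(lambda m, i, o: features.update(feat=o))

    def random_resize_pad(x, size=224, low=200):
        """Randomly shrink the image and pad back to `size` (stand-in transformation set)."""
        s = int(torch.randint(low, size, (1,)))
        x = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
        pad = size - s
        left = int(torch.randint(0, pad + 1, (1,)))
        top = int(torch.randint(0, pad + 1, (1,)))
        return F.pad(x, (left, pad - left, top, pad - top))

    def channel_attention(feat, logits, label):
        """Grad-CAM-style weights: gradient of the true-class logit w.r.t. the feature
        map, averaged over space; positive weights mark channels the proxy relies on."""
        g = torch.autograd.grad(logits[0, label], feat, retain_graph=True)[0]
        return g.mean(dim=(2, 3), keepdim=True).detach()

    def attack(x, label, eps=16 / 255, alpha=2 / 255, steps=10, n_trans=5):
        """PGD-style attack that suppresses attention-weighted features,
        aggregated over several random transformations of the input."""
        x = x.to(device)
        x_adv = x.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = 0.0
            for _ in range(n_trans):
                logits = proxy(random_resize_pad(x_adv))  # ImageNet normalization omitted for brevity
                feat = features["feat"]
                w = channel_attention(feat, logits, label)
                loss = loss + (w * feat).sum()            # attention-weighted feature response
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv - alpha * grad.sign()       # descend to weaken the salient features
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

For a 1x3x224x224 image tensor scaled to [0, 1] and its true ImageNet label, a call such as x_adv = attack(x, label) would return a perturbed image intended to transfer to unseen target models; proper input normalization and a richer transformation set would be needed in practice.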
Pages: 3142-3165
Page count: 24