TSadv: Black-box adversarial attack on time series with local perturbations

Cited: 0
Authors
Yang, Wenbo [1 ,2 ]
Yuan, Jidong [1 ,2 ]
Wang, Xiaokang [3 ]
Zhao, Peixiang [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[2] Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Econ & Management, Beijing 100876, Peoples R China
Funding
Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
Black-box adversarial attack; Time series classification; Local perturbations; Differential evolution; Shapelet;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep neural networks (DNNs) for time series classification raise security concerns because of their vulnerability to adversarial attacks. Previous work perturbs time series globally and requires gradient information from the target model to generate adversarial examples; such global perturbations are easy to perceive. In this paper, we propose TSadv, a gradient-free black-box method that attacks DNNs with local perturbations. First, we formalize the attack as a constrained optimization problem and solve it with a differential evolution algorithm, without any internal information about the target model. Second, under the assumption that time series shapelets carry more discriminative information between classes, the perturbation range is restricted to shapelet intervals. Experimental results show that our method effectively attacks DNNs on time series datasets with potential security concerns and flexibly generates imperceptible adversarial samples. Moreover, our approach decreases the mean squared error by approximately two orders of magnitude compared with the state-of-the-art method while retaining competitive attack success rates.
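The attack described in the abstract, a gradient-free search for a bounded perturbation confined to one interval of the series, solved by differential evolution, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy `predict_proba` classifier, the fixed `target_interval` (standing in for a discovered shapelet interval), and the perturbation bound `eps` are all assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the target black-box model: a logistic score on the
# series mean. TSadv attacks DNN classifiers; only output probabilities
# are queried here, never gradients.
def predict_proba(x):
    p1 = 1.0 / (1.0 + np.exp(-10.0 * x.mean()))
    return np.array([1.0 - p1, p1])

def attack(x, target_interval, eps=0.5, maxiter=50, seed=0):
    """Search for a local perturbation, restricted to `target_interval`
    (a stand-in for a shapelet interval), that flips the predicted class
    while each point's change stays within [-eps, eps]."""
    orig_label = int(np.argmax(predict_proba(x)))
    lo, hi = target_interval

    def objective(delta):
        x_adv = x.copy()
        x_adv[lo:hi] += delta
        # Minimize the model's confidence in the original class.
        return predict_proba(x_adv)[orig_label]

    bounds = [(-eps, eps)] * (hi - lo)
    result = differential_evolution(objective, bounds,
                                    maxiter=maxiter, seed=seed, tol=1e-6)
    x_adv = x.copy()
    x_adv[lo:hi] += result.x
    return x_adv, orig_label

rng = np.random.default_rng(0)
x = rng.normal(loc=0.05, scale=0.1, size=50)   # series the toy model labels 1
x_adv, orig_label = attack(x, target_interval=(10, 30))
new_label = int(np.argmax(predict_proba(x_adv)))
```

Because the search only queries `predict_proba`, it works on any model exposed as a prediction API, which is the black-box setting the paper targets; restricting `bounds` to the chosen interval is what keeps the perturbation local.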
Pages: 14