A Robust Approach for Securing Audio Classification Against Adversarial Attacks

Cited by: 44
Authors
Esmaeilpour, Mohammad [1 ]
Cardinal, Patrick [1 ]
Koerich, Alessandro [1 ]
Affiliations
[1] Univ Quebec, Ecole Technol Super, Dept Software & IT Engn, Montreal, PQ H3C 1K3, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Support vector machines; Machine learning; Robustness; Perturbation methods; Predictive models; Optimization; Two dimensional displays; Spectrograms; environmental sound classification; adversarial attack; K-means++; support vector machines (SVM); convolutional denoising autoencoder
DOI
10.1109/TIFS.2019.2956591
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Adversarial audio attacks add a small perturbation, imperceptible to human ears, to an audio signal with the intention of causing a machine learning model to make mistakes. This poses a security concern, since such attacks can fool models into wrong predictions. In this paper, we first review several strong adversarial attacks that may affect both audio signals and their 2D representations, and evaluate the resilience of deep learning models and support vector machines (SVMs) trained on 2D audio representations such as the short-time Fourier transform, the discrete wavelet transform (DWT), and the cross recurrence plot against several state-of-the-art adversarial attacks. Next, we propose a novel approach based on a pre-processed DWT representation of audio signals and an SVM to secure audio systems against adversarial attacks. The proposed architecture includes several preprocessing modules for generating and enhancing spectrograms, including dimension reduction and smoothing. We extract features from small patches of the spectrograms using the speeded-up robust features (SURF) algorithm, and cluster the resulting descriptors with the K-Means++ algorithm to build a codebook. The SURF-generated vectors are then encoded with this codebook, and the resulting codewords are used to train an SVM. These steps yield a novel approach to audio classification that offers a good trade-off between accuracy and resilience. Experimental results on three environmental sound datasets show that the proposed approach is competitive with deep neural networks, both in accuracy and in robustness against strong adversarial attacks.
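The abstract outlines a concrete pipeline: a DWT-based 2D representation, preprocessing (dimension reduction and smoothing), SURF descriptors on small spectrogram patches, a K-Means++ codebook, codeword encoding, and an SVM classifier. The following is a minimal Python sketch of that pipeline for orientation only, not the authors' implementation. It uses PyWavelets for the DWT, scikit-learn's KMeans (k-means++ initialization) and SVC, and OpenCV's ORB descriptor as a freely available stand-in for SURF (which requires opencv-contrib); all parameter values, patch sizes, and helper names are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the pipeline described above:
# DWT-based 2D representation -> dense patch descriptors -> K-Means++ codebook ->
# codeword histogram per clip -> SVM classifier.
import numpy as np
import pywt                      # PyWavelets: discrete wavelet transform
import cv2                       # OpenCV: ORB used as a stand-in for SURF (SURF needs opencv-contrib)
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def dwt_image(signal, wavelet="db4", levels=6, width=256, row_height=16):
    """Stack |detail coefficients| per DWT level into a 2D array (rough spectrogram stand-in)."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    rows = [np.abs(np.resize(c, width)) for c in coeffs[1:]]   # crude fixed-width resampling
    img = np.repeat(np.vstack(rows), row_height, axis=0)       # enlarge so patch descriptors fit
    img = 255.0 * (img - img.min()) / (img.max() - img.min() + 1e-9)
    return img.astype(np.uint8)


def patch_descriptors(img, extractor, step=8, size=16.0):
    """Descriptors on a dense grid of small patches (the paper uses SURF keypoints instead)."""
    h, w = img.shape
    kps = [cv2.KeyPoint(float(x), float(y), size)
           for y in range(int(size), h - int(size), step)
           for x in range(int(size), w - int(size), step)]
    _, desc = extractor.compute(img, kps)
    return desc if desc is not None else np.zeros((0, 32), dtype=np.uint8)


def encode(desc, codebook):
    """Normalized histogram of nearest codewords as the clip-level feature vector."""
    hist = np.zeros(codebook.n_clusters)
    if len(desc):
        for label in codebook.predict(desc.astype(np.float64)):
            hist[label] += 1
        hist /= hist.sum()
    return hist


# Toy random signals stand in for real audio clips and class labels.
rng = np.random.default_rng(0)
signals = [rng.standard_normal(16000) for _ in range(20)]
labels = rng.integers(0, 3, size=20)

extractor = cv2.ORB_create(nfeatures=200)
images = [dwt_image(s) for s in signals]
all_desc = np.vstack([patch_descriptors(im, extractor) for im in images]).astype(np.float64)

# K-Means++ codebook over all local descriptors, then one codeword histogram per clip.
codebook = KMeans(n_clusters=32, init="k-means++", n_init=10, random_state=0).fit(all_desc)
X = np.array([encode(patch_descriptors(im, extractor), codebook) for im in images])

clf = SVC(kernel="rbf", C=10.0).fit(X, labels)    # final SVM on the encoded features
print(clf.predict(X[:3]))
```

With real data, SURF descriptors (via opencv-contrib) would replace the ORB stand-in, and the dimension reduction and smoothing steps mentioned in the abstract would be applied to the 2D representation before feature extraction.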
Pages: 2147-2159
Page count: 13
Related Papers
50 records in total
[31]   Deep Reinforcement Adversarial Learning Against Botnet Evasion Attacks [J].
Apruzzese, Giovanni ;
Andreolini, Mauro ;
Marchetti, Mirco ;
Venturi, Andrea ;
Colajanni, Michele .
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2020, 17 (04) :1975-1987
[32]   Parameter Interpolation Adversarial Training for Robust Image Classification [J].
Liu, Xin ;
Yang, Yichen ;
He, Kun ;
Hopcroft, John E. .
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 :1613-1623
[33]   Evaluating Resilience of Encrypted Traffic Classification against Adversarial Evasion Attacks [J].
Maarouf, Ramy ;
Sattar, Danish ;
Matrawy, Ashraf .
26TH IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS (IEEE ISCC 2021), 2021,
[34]   Encoding Generative Adversarial Networks for Defense Against Image Classification Attacks [J].
Perez-Bravo, Jose M. ;
Rodriguez-Rodriguez, Jose A. ;
Garcia-Gonzalez, Jorge ;
Molina-Cabello, Miguel A. ;
Thurnhofer-Hemsi, Karl ;
Lopez-Rubio, Ezequiel .
BIO-INSPIRED SYSTEMS AND APPLICATIONS: FROM ROBOTICS TO AMBIENT INTELLIGENCE, PT II, 2022, 13259 :163-172
[35]   Stealthy Adversarial Attacks Against Automated Modulation Classification in Cognitive Radio [J].
Fernando, Praveen ;
Wei-Kocsis, Jin .
2023 IEEE COGNITIVE COMMUNICATIONS FOR AEROSPACE APPLICATIONS WORKSHOP, CCAAW, 2023,
[36]   Feature-aware transferable adversarial attacks against image classification [J].
Cheng, Shuyan ;
Li, Peng ;
Han, Keji ;
Xu, He .
APPLIED SOFT COMPUTING, 2024, 161
[37]   AudioGuard: Speech Recognition System Robust against Optimized Audio Adversarial Examples [J].
Kwon, Hyun .
MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (20) :57943-57962
[38]   Online Alternate Generator Against Adversarial Attacks [J].
Li, Haofeng ;
Zeng, Yirui ;
Li, Guanbin ;
Lin, Liang ;
Yu, Yizhou .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 :9305-9315
[39]   Adversarial Attacks Against Binary Similarity Systems [J].
Capozzi, Gianluca ;
D'elia, Daniele Cono ;
Di Luna, Giuseppe Antonio ;
Querzoni, Leonardo .
IEEE ACCESS, 2024, 12 :161247-161269
[40]   Adversarial Learning Targeting Deep Neural Network Classification: A Comprehensive Review of Defenses Against Attacks [J].
Miller, David J. ;
Xiang, Zhen ;
Kesidis, George .
PROCEEDINGS OF THE IEEE, 2020, 108 (03) :402-433