Adversarial Attack on Hyperdimensional Computing-based NLP Applications

Times Cited: 0
Authors
Zhang, Sizhe [1]
Wang, Zhao [2]
Jiao, Xun [1]
Affiliations
[1] Villanova Univ, Villanova, PA 19085 USA
[2] Univ Chicago, Chicago, IL 60637 USA
Source
2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE | 2023
DOI
10.23919/DATE56975.2023.10137289
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
The security and robustness of machine learning algorithms have become increasingly important as they are used in critical applications such as natural language processing (NLP), e.g., text-based spam detection. Recently, the emerging brain-inspired hyperdimensional computing (HDC) has shown advantages over deep learning methods, such as compact model size, energy efficiency, and few-shot learning capability, in various NLP applications. While HDC has been demonstrated to be vulnerable to adversarial attacks on image and audio input, there is currently very limited study of its adversarial security in NLP tasks, which is arguably one of the most suitable application domains for HDC. In this paper, we present a novel study on adversarial attacks against HDC-based NLP applications. By leveraging a unique property of HDC, similarity-based inference, we propose similarity-guided approaches to automatically generate adversarial text samples for HDC. Our approach achieves an attack success rate of up to 89%. More importantly, compared with an unguided brute-force approach, the similarity-guided attack achieves a 2.4X speedup in generating adversarial samples. Our work opens up new directions and challenges for future adversarially-robust HDC model design and optimization.
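The two ideas the abstract hinges on, similarity-based inference and using that similarity signal to guide word substitutions, can be sketched in a few lines. This is a minimal illustrative sketch only, not the authors' implementation: the bipolar random encoding, the toy spam/ham data, and names such as `token_hv`, `classify`, and `attack_step` are all invented here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality (typical HDC scale)
_item_memory = {}   # item memory: one random hypervector per token

def token_hv(token):
    # Assumption: random bipolar (+1/-1) hypervectors for tokens.
    if token not in _item_memory:
        _item_memory[token] = rng.choice([-1, 1], size=D)
    return _item_memory[token]

def encode(text):
    # Bundle (element-wise sum) token hypervectors into one text hypervector.
    return np.sum([token_hv(t) for t in text.split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Class prototypes: bundle the encoded training samples of each class.
train = {
    "spam": ["win free prize now", "free cash offer"],
    "ham":  ["meeting at noon", "see you at lunch"],
}
prototypes = {c: np.sum([encode(s) for s in texts], axis=0)
              for c, texts in train.items()}

def classify(text):
    # Similarity-based inference: pick the most similar class prototype.
    sims = {c: cosine(encode(text), p) for c, p in prototypes.items()}
    return max(sims, key=sims.get)

def attack_step(text, true_class, candidates):
    # Similarity-guided perturbation: among candidate single-word
    # substitutions, keep the one that most lowers the similarity to the
    # true class prototype, instead of trying variants blindly.
    words = text.split()
    best, best_sim = text, cosine(encode(text), prototypes[true_class])
    for i, w in enumerate(words):
        for sub in candidates.get(w, []):
            variant = " ".join(words[:i] + [sub] + words[i + 1:])
            s = cosine(encode(variant), prototypes[true_class])
            if s < best_sim:
                best_sim, best = s, variant
    return best
```

Because inference is just a nearest-prototype lookup, the cosine similarity is an immediately available attack signal; ranking substitutions by how much they reduce it is what saves the brute-force search time reported in the abstract.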
Pages: 6