Code Difference Guided Adversarial Example Generation for Deep Code Models

Cited by: 12
Authors
Tian, Zhao [1 ]
Chen, Junjie [1 ]
Jin, Zhi [2 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin, Peoples R China
[2] Peking Univ, Key Lab High Confidence Software Technol, Beijing, Peoples R China
Source
2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE | 2023
Funding
National Natural Science Foundation of China;
Keywords
Adversarial Example; Code Model; Guided Testing; Code Transformation;
DOI
10.1109/ASE56229.2023.00149
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Adversarial examples are important for testing and enhancing the robustness of deep code models. Because source code is discrete and must strictly conform to complex grammar and semantics constraints, adversarial example generation techniques from other domains are hardly applicable. Moreover, existing adversarial example generation techniques specific to deep code models still suffer from unsatisfactory effectiveness due to the enormous ingredient search space. In this work, we propose a novel adversarial example generation technique (i.e., CODA) for testing deep code models. Its key idea is to use the code differences between the target input (i.e., a given code snippet as the model input) and reference inputs (i.e., inputs that have small code differences from, but different prediction results than, the target input) to guide the generation of adversarial examples. It considers both structure differences and identifier differences to preserve the original semantics. Hence, the ingredient search space can be largely reduced to the one constituted by these two kinds of code differences, and the testing process can be improved by designing and guiding the corresponding equivalent structure transformations and identifier renaming transformations. Our experiments on 15 deep code models demonstrate the effectiveness and efficiency of CODA, the naturalness of its generated examples, and its capability to enhance model robustness after adversarial fine-tuning. For example, CODA reveals 88.05% and 72.51% more faults in models than the state-of-the-art techniques (i.e., CARROT and ALERT) on average, respectively.
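The abstract describes the guidance mechanism only at a high level. The following Python sketch illustrates one way the identifier-difference part of the idea could look in practice: pick a reference snippet that is similar to the target but predicted differently, take the identifiers that appear only in the reference as the reduced "ingredient" set, and try semantics-preserving renamings until the model's prediction flips. It is a hypothetical illustration under simplifying assumptions (a regex-based identifier extractor, a generic model_predict callback, and a reference_pool of candidate snippets), not the authors' CODA implementation.

    # Minimal sketch of code-difference-guided identifier renaming (illustrative only).
    import re
    from difflib import SequenceMatcher

    KEYWORDS = {"int", "for", "if", "else", "return", "while", "void"}  # assumed C/Java-like code

    def identifier_set(code: str) -> set:
        # Crude identifier extraction via a word regex, filtering language keywords.
        return {tok for tok in re.findall(r"[A-Za-z_]\w*", code) if tok not in KEYWORDS}

    def pick_reference(target, target_label, reference_pool, model_predict):
        # Reference input: small textual difference from the target, different prediction.
        candidates = [c for c in reference_pool if model_predict(c) != target_label]
        return max(candidates,
                   key=lambda c: SequenceMatcher(None, target, c).ratio(),
                   default=None)

    def rename_guided(target, target_label, reference_pool, model_predict):
        reference = pick_reference(target, target_label, reference_pool, model_predict)
        if reference is None:
            return None
        # Reduced ingredient space: identifiers in the reference but not in the target.
        ingredients = identifier_set(reference) - identifier_set(target)
        for old in identifier_set(target):
            for new in ingredients:
                mutated = re.sub(rf"\b{re.escape(old)}\b", new, target)
                if model_predict(mutated) != target_label:
                    return mutated  # semantics-preserving rename that flips the prediction
        return None

The full technique in the paper additionally uses structure differences to guide equivalent structure transformations and iterates over multiple reference inputs; the sketch above only conveys how restricting ingredients to the code difference shrinks the search space.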
Pages: 850-862
Page count: 13