A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

Cited by: 273
Authors
Huang, Xiaowei [1 ]
Kroening, Daniel [2 ]
Ruan, Wenjie [3 ]
Sharp, James [4 ]
Sun, Youcheng [5 ]
Thamo, Emese [1 ]
Wu, Min [2 ]
Yi, Xinping [1 ]
Affiliations
[1] Univ Liverpool, Liverpool, Merseyside, England
[2] Univ Oxford, Oxford, England
[3] Univ Lancaster, Lancaster, England
[4] Def Sci & Technol Lab (Dstl), Porton Down, Salisbury, England
[5] Queens Univ Belfast, Belfast, Antrim, Northern Ireland
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
ABSTRACTION-REFINEMENT; ROBUSTNESS; EXTRACTION
DOI
10.1016/j.cosrev.2020.100270
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline classification code
0812
Abstract
In the past few years, significant progress has been made with deep neural networks (DNNs), which now achieve human-level performance on several long-standing tasks. With the broader deployment of DNNs in various applications, public concerns over their safety and trustworthiness have grown, especially after the widely reported fatal incidents involving self-driving cars. Research to address these concerns is particularly active, with a significant number of papers released in the past few years. This survey reviews the current research effort into making DNNs safe and trustworthy, focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability. In total, we survey 202 papers, most of which were published after 2017. (c) 2020 Elsevier Inc. All rights reserved.
Pages: 35
Related papers (199 entries in total)
[31] Carlini N., 2017. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec 2017), p. 3. DOI: 10.1145/3128572.3140444.
[32] Chen Jianbo, 2018. Proceedings of Machine Learning Research, Vol. 80.
[33] Cheng C.-H., 2018. International Symposium on Automated Technology for Verification and Analysis (ATVA).
[34] Cheng C.-H., 2018. Proceedings of the 16th ACM/IEEE International Conference.
[35] Cheng C.-H., 2018. arXiv:1811.06746.
[36] Cheng C.-H., 2018. arXiv:1809.06573.
[37] Cheng C.-H.; Nuehrenberg G.; Ruess H., 2017. Maximum Resilience of Artificial Neural Networks. Automated Technology for Verification and Analysis (ATVA 2017), LNCS 10482, pp. 251-268.
[38] Cheng D.; Cao C.; Xu C.; Ma X., 2018. Manifesting Bugs in Machine Learning Code: An Explorative Study with Mutation Testing. 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS 2018), pp. 313-324.
[39] Chu L.; Hu X.; Hu J.; Wang L.; Pei J., 2018. Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution. KDD'18: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1244-1253.
[40] Clarke E.; Grumberg O.; Jha S.; Lu Y.; Veith H., 2003. Counterexample-guided abstraction refinement for symbolic model checking. Journal of the ACM, 50(5): 752-794.