Interpretable Deep Learning under Fire

Cited by: 0
Authors
Zhang, Xinyang [1]
Wang, Ningfei [2]
Shen, Hua [1]
Ji, Shouling [3,4]
Luo, Xiapu [5]
Wang, Ting [1]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Univ Calif Irvine, Irvine, CA USA
[3] Zhejiang Univ, Hangzhou, Peoples R China
[4] Alibaba, ZJU Joint Inst Frontier Technol, Hangzhou, Peoples R China
[5] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Source
PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, 2020
Funding
US National Science Foundation
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Providing explanations for deep neural network (DNN) models is crucial for their use in security-sensitive domains. A plethora of interpretation models have been proposed to help users understand the inner workings of DNNs: how does a DNN arrive at a specific decision for a given input? The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process. Yet, due to its data-driven nature, the interpretability itself is potentially susceptible to malicious manipulations, about which little is known thus far. Here we bridge this gap by conducting the first systematic study on the security of interpretable deep learning systems (IDLSes). We show that existing IDLSes are highly vulnerable to adversarial manipulations. Specifically, we present ADV2, a new class of attacks that generate adversarial inputs not only misleading target DNNs but also deceiving their coupled interpretation models. Through empirical evaluation against four major types of IDLSes on benchmark datasets and in security-critical applications (e.g., skin cancer diagnosis), we demonstrate that with ADV2 the adversary is able to arbitrarily designate an input's prediction and interpretation. Further, with both analytical and empirical evidence, we identify the prediction-interpretation gap as one root cause of this vulnerability: a DNN and its interpretation model are often misaligned, resulting in the possibility of exploiting both models simultaneously. Finally, we explore potential countermeasures against ADV2, including leveraging its low transferability and incorporating it in an adversarial training framework. Our findings shed light on designing and operating IDLSes in a more secure and informative fashion, leading to several promising research directions.
Pages: 1659-1676
Page count: 18
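
The abstract describes ADV2 only at a high level: adversarial inputs are optimized against a joint objective so that the target DNN predicts an adversary-chosen label while the coupled interpreter produces an adversary-chosen attribution map. The following minimal PyTorch sketch illustrates that dual-objective idea under stated assumptions; it is not the authors' exact formulation. The toy model, the simple gradient-saliency interpreter, the MSE interpretation loss, the weighting factor lam, and all hyperparameters (eps, alpha, steps) are illustrative assumptions.

```python
# Hedged sketch (assumption): a PGD-style attack with a joint
# prediction + interpretation objective, in the spirit of ADV2.
# The toy model, the gradient-saliency interpreter, and all
# hyperparameters are illustrative, not the paper's formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def saliency_map(model, x, label):
    # Simple gradient-saliency interpreter: |d logit_label / d x|,
    # summed over channels. create_graph=True keeps the map
    # differentiable so it can be attacked jointly with the prediction.
    logit = model(x)[:, label].sum()
    grad, = torch.autograd.grad(logit, x, create_graph=True)
    return grad.abs().sum(dim=1)


def adv2_sketch(model, x, target_label, target_map,
                eps=0.03, alpha=0.005, steps=200, lam=1.0):
    # Search within an L_inf ball of radius eps for an input that
    # (a) is classified as target_label and (b) whose saliency map
    # matches the adversary-chosen target_map.
    x_adv = x.clone()
    target = torch.tensor([target_label])
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        pred_loss = F.cross_entropy(model(x_adv), target)
        intp_loss = F.mse_loss(saliency_map(model, x_adv, target_label),
                               target_map)
        loss = pred_loss + lam * intp_loss  # joint objective
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()          # PGD step
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to ball
            x_adv = x_adv.clamp(0.0, 1.0)                # valid pixel range
    return x_adv.detach()


if __name__ == "__main__":
    # Tiny CNN with smooth activations so the second-order gradients
    # needed to attack the saliency map are non-trivial.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.Softplus(),
        nn.Conv2d(8, 8, 3, padding=1), nn.Softplus(),
        nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
    )
    x = torch.rand(1, 3, 32, 32)
    target_map = torch.zeros(1, 32, 32)
    target_map[:, 8:16, 8:16] = 1.0  # adversary-chosen attribution region
    x_adv = adv2_sketch(model, x, target_label=3, target_map=target_map)
    print("prediction on x_adv:", model(x_adv).argmax(dim=1).item())
```

The weight lam trades off fooling the classifier against shaping the attribution map; the paper's actual attack uses interpreter-specific loss terms for each of the four interpretation families it evaluates, so this sketch only conveys the prediction-interpretation co-optimization described in the abstract.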