Do Backdoors Assist Membership Inference Attacks?

Cited by: 0
Authors
Goto, Yumeki [1 ]
Ashizawa, Nami [2 ]
Shibahara, Toshiki [2 ]
Yanai, Naoto [1 ]
Affiliations
[1] Osaka Univ, I-5 Yamadaoka,Suita Shi, Osaka 5650871, Japan
[2] NTT Social Informat Labs, 3-9-11 Midori Cho,Musashino Shi, Tokyo 1808585, Japan
Source
SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, PT II, SECURECOMM 2023 | 2025 / Vol. 568
Keywords
Backdoor-assisted membership inference attack; backdoor attack; poisoning attack; membership inference attack;
DOI
10.1007/978-3-031-64954-7_13
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812 ;
Abstract
When an adversary injects poison samples into a machine learning model's training data, privacy attacks such as membership inference, which infers whether a given sample was used to train the model, become more effective because poisoning pushes the target sample toward being an outlier. However, such attacks can be detected because poison samples degrade the model's inference accuracy. In this paper, we study a backdoor-assisted membership inference attack, a novel membership inference attack based on backdoors that return the adversary's expected output for any triggered sample. Through experiments on an academic benchmark dataset, we obtain three key insights. First, we demonstrate that the backdoor-assisted membership inference attack fails when backdoors are used naively. Second, by analyzing latent representations to understand these failures, we find that backdoor attacks turn clean samples into inliers, in contrast to poisoning attacks, which turn them into outliers. Finally, our preliminary results suggest that backdoor-assisted membership inference attacks may still succeed when backdoors with imperceptible triggers are used in certain specific settings.
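To make the membership inference setting in the abstract concrete, the following is a minimal sketch of a classic confidence-threshold membership inference test. It is an illustration of the general attack idea, not the paper's method: the threshold value and the toy probability vectors are assumptions for the example, and it relies only on the well-known observation that models tend to be overconfident on samples they were trained on.

```python
import numpy as np

def confidence_mia(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Guess membership from a model's predicted class probabilities.

    A sample whose top-class confidence exceeds the threshold is
    guessed to be a training member, since trained-on (memorized)
    samples typically receive more confident predictions.
    """
    return probs.max(axis=1) > threshold

# Toy prediction vectors: the first looks memorized (overconfident),
# the second looks like an unseen sample.
probs = np.array([
    [0.98, 0.01, 0.01],  # high confidence -> guessed member
    [0.40, 0.35, 0.25],  # low confidence  -> guessed non-member
])
guesses = confidence_mia(probs)
```

Poisoning-based variants of this attack work by amplifying the confidence gap between members and non-members for a chosen target sample; the paper examines whether backdoor triggers can serve the same amplifying role without the accuracy drop that makes poisoning detectable.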
Pages: 251-265 (15 pages)