Do Backdoors Assist Membership Inference Attacks?

Cited by: 0
Authors
Goto, Yumeki [1 ]
Ashizawa, Nami [2 ]
Shibahara, Toshiki [2 ]
Yanai, Naoto [1 ]
Affiliations
[1] Osaka Univ, I-5 Yamadaoka,Suita Shi, Osaka 5650871, Japan
[2] NTT Social Informat Labs, 3-9-11 Midori Cho,Musashino Shi, Tokyo 1808585, Japan
Source
SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, PT II, SECURECOMM 2023 | 2025, Vol. 568
Keywords
Backdoor-assisted membership inference attack; backdoor attack; poisoning attack; membership inference attack;
DOI
10.1007/978-3-031-64954-7_13
CLC classification
TP [automation technology, computer technology];
Discipline code
0812 ;
Abstract
When an adversary injects poison samples into a machine learning model's training data, privacy attacks such as membership inference, which infers whether a given sample was included in the model's training set, become more effective because poisoning pushes the target sample toward being an outlier. However, such attacks can be detected, since poison samples degrade the model's inference accuracy. In this paper, we study the backdoor-assisted membership inference attack, a novel membership inference attack based on backdoors, which return the adversary's expected output for any triggered sample. Through experiments on an academic benchmark dataset, we obtain three key insights. First, we demonstrate that the backdoor-assisted membership inference attack fails when backdoors are used naively. Second, by analyzing latent representations to understand this failure, we find that backdoor attacks turn any clean sample into an inlier, in contrast to poisoning attacks, which turn it into an outlier. Finally, our promising results show that backdoor-assisted membership inference may still be feasible, but only in specific settings where backdoors with imperceptible triggers are used.
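The membership inference attack described above exploits the gap between a model's behavior on training ("member") and unseen ("non-member") samples. A minimal illustration of the classic confidence-threshold variant is sketched below; the confidence distributions are simulated rather than drawn from a real model, and the threshold value is an arbitrary choice for this toy example, not a figure from the paper.

```python
import random

random.seed(0)

# Toy illustration: models tend to assign higher confidence to samples seen
# during training, so an adversary predicts "member" whenever the model's
# confidence on a sample exceeds a threshold. Here the member/non-member
# confidence scores are simulated with Gaussians (clamped to <= 1.0).
members = [min(1.0, random.gauss(0.9, 0.05)) for _ in range(1000)]
non_members = [min(1.0, random.gauss(0.7, 0.15)) for _ in range(1000)]

THRESHOLD = 0.8  # hypothetical cutoff, tuned per model in a real attack

def infer_membership(confidence: float) -> bool:
    """Predict membership when the model's confidence exceeds the threshold."""
    return confidence > THRESHOLD

true_positives = sum(infer_membership(c) for c in members)
true_negatives = sum(not infer_membership(c) for c in non_members)
accuracy = (true_positives + true_negatives) / (len(members) + len(non_members))
print(f"attack accuracy: {accuracy:.2f}")
```

With these simulated distributions the attack accuracy lands well above the 0.5 random-guess baseline, which is what makes the member/non-member confidence gap exploitable; the paper's point is that backdoors reshape latent representations in a way that closes rather than widens this gap.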
Pages: 251-265
Page count: 15
Related Papers
50 records
  • [41] Balancing Privacy and Attack Utility: Calibrating Sample Difficulty for Membership Inference Attacks in Transfer Learning
    Liu, Shuwen
    Qian, Yongfeng
    Hao, Yixue
    [J]. 2024 54TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS-SUPPLEMENTAL VOLUME, DSN-S 2024, 2024, : 159 - 160
  • [42] Membership inference attacks via spatial projection-based relative information loss in MLaaS
    Ding, Zehua
    Tian, Youliang
    Wang, Guorong
    Xiong, Jinbo
    Tang, Jinchuan
    Ma, Jianfeng
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2025, 62 (01)
  • [43] TEAR: Exploring Temporal Evolution of Adversarial Robustness for Membership Inference Attacks Against Federated Learning
    Liu, Gaoyang
    Tian, Zehao
    Chen, Jian
    Wang, Chen
    Liu, Jiangchuan
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 4996 - 5010
  • [44] LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks
    Ma, Mengyao
    Zhang, Yanjun
    Chamikara, M. A. P.
    Zhang, Leo Yu
    Chhetri, Mohan Baruwal
    Bai, Guangdong
    [J]. PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 122 - 135
  • [45] Re-ID-leak: Membership Inference Attacks Against Person Re-identification
    Gao, Junyao
    Jiang, Xinyang
    Dou, Shuguang
    Li, Dongsheng
    Miao, Duoqian
    Zhao, Cairong
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (10) : 4673 - 4687
  • [46] GradDiff: Gradient-based membership inference attacks against federated distillation with differential comparison
    Wang, Xiaodong
    Wu, Longfei
    Guan, Zhitao
    [J]. INFORMATION SCIENCES, 2024, 658
  • [47] PAR-GAN: Improving the Generalization of Generative Adversarial Networks Against Membership Inference Attacks
    Chen, Junjie
    Wang, Wendy Hui
    Gao, Hongchang
    Shi, Xinghua
    [J]. KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 127 - 137
  • [48] DASTAN-CNN: RF Fingerprinting for the Mitigation of Membership Inference Attacks in 5G
    Khowaja, Sunder Ali
    Khuwaja, Parus
    Dev, Kapal
    Antonopoulos, Angelos
    Magarini, Maurizio
    [J]. IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 5524 - 5529
  • [49] Securing Deep Neural Networks on Edge from Membership Inference Attacks Using Trusted Execution Environments
    Yang, Cheng-Yun
    Ramshankar, Gowri
    Eliopoulos, Nicholas
    Jajal, Purvish
    Nambiar, Sudarshan
    Miller, Evan
    Zhang, Xun
    Tian, Dave
    Chen, Shuo-Han
    Perng, Chiy-Ferng
    Lu, Yung-Hsiang
    [J]. PROCEEDINGS OF THE 29TH ACM/IEEE INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, ISLPED 2024, 2024,
  • [50] Demystifying the Membership Inference Attack
    Irolla, Paul
    Chatel, Gregory
    [J]. 2019 12TH CMI CONFERENCE ON CYBERSECURITY AND PRIVACY (CMI), 2019, : 1 - 7