TEAR: Exploring Temporal Evolution of Adversarial Robustness for Membership Inference Attacks Against Federated Learning

Cited by: 9
|
Authors
Liu, Gaoyang [1 ,2 ]
Tian, Zehao [1 ]
Chen, Jian [1 ]
Wang, Chen [1 ]
Liu, Jiangchuan [2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Hubei Key Lab Smart Internet Technol, Wuhan 430074, Peoples R China
[2] Simon Fraser Univ, Sch Comp Sci, Burnaby, BC V5A 1S6, Canada
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; membership inference attack; adversarial robustness; temporal evolution;
DOI
10.1109/TIFS.2023.3303718
CLC number (Chinese Library Classification)
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Federated learning (FL) is a privacy-preserving machine learning paradigm that enables multiple clients to train a unified model without disclosing their private data. However, FL models naturally tend to overfit their training data during the training process, which makes them susceptible to membership inference attacks (MIAs): an attacker can exploit the subtle differences in the model's parameters, activations, or predictions between training and non-training data to infer membership information. It is worth noting that most if not all existing MIAs against FL require access to the model's internal information or modification of the training process, rendering them impractical to mount in real deployments. In this paper, we present TEAR, the first evidence that an honest-but-curious federated client can perform an MIA against an FL system by exploring the Temporal Evolution of Adversarial Robustness between training and non-training data. We design a novel adversarial example generation method to quantify a target sample's adversarial robustness, from which membership features are derived to train the inference model in a supervised manner. Extensive experimental results on five realistic datasets demonstrate that TEAR achieves strong inference performance compared with two existing MIAs and is able to evade two representative defenses.
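The abstract's core idea can be illustrated with a minimal, hypothetical sketch (not the paper's actual method): quantify each sample's adversarial robustness at successive FL-round snapshots as the smallest sign-gradient perturbation that flips a linear model's prediction, and use the per-round robustness trace as a membership feature. The model, data points, and round schedule below are toy assumptions chosen only to make the member/non-member gap visible.

```python
import numpy as np

def adv_robustness(w, b, x, y, eps_grid):
    """Smallest sign-gradient perturbation budget that flips a linear
    model's prediction on (x, y); a larger value means the sample is
    more adversarially robust."""
    direction = -np.sign(w) if y == 1 else np.sign(w)  # push toward the boundary
    for eps in eps_grid:
        if int(w @ (x + eps * direction) + b > 0) != y:
            return float(eps)
    return float(eps_grid[-1])

# Toy stand-in for FL round snapshots: a member point sits deep inside the
# decision region the model fit, a non-member point sits near the boundary.
w0, b = np.array([1.0, 1.0]), 0.0
member, nonmember = np.array([2.0, 2.0]), np.array([0.3, 0.1])
eps_grid = np.linspace(0.01, 5.0, 500)

member_trace, nonmember_trace = [], []
for t in range(1, 6):          # snapshots of 5 training rounds
    w = w0 * t                 # hypothetical round-t global parameters
    member_trace.append(adv_robustness(w, b, member, 1, eps_grid))
    nonmember_trace.append(adv_robustness(w, b, nonmember, 1, eps_grid))

# The per-round robustness trace is the membership feature; a supervised
# inference model would be trained on such traces. Here the gap is direct.
print(np.mean(member_trace) > np.mean(nonmember_trace))  # True
```

In the actual attack setting, the honest-but-curious client would collect such traces from samples with known membership to train the supervised inference model; the linear model here only stands in for the global FL model.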
Pages: 4996-5010
Page count: 15
Related Papers
50 records in total
  • [41] GAN Enhanced Membership Inference: A Passive Local Attack in Federated Learning
    Zhang, Jingwen
    Zhang, Jiale
    Chen, Junjun
    Yu, Shui
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [42] Novel Evasion Attacks Against Adversarial Training Defense for Smart Grid Federated Learning
    Bondok, Atef H.
    Mahmoud, Mohamed
    Badr, Mahmoud M.
    Fouda, Mostafa M.
    Abdallah, Mohamed
    Alsabaan, Maazen
    IEEE ACCESS, 2023, 11 : 112953 - 112972
  • [43] Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
    Xie, Chulin
    Long, Yunhui
    Chen, Pin-Yu
    Li, Qinbin
    Koyejo, Sanmi
    Li, Bo
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 1511 - 1525
  • [44] SARS: A Personalized Federated Learning Framework Towards Fairness and Robustness against Backdoor Attacks
    Zhang, Webin
    Li, Youpeng
    An, Lingling
    Wan, Bo
    Wang, Xuyu
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2024, 8 (04):
  • [45] Membership Inference Attacks Against Semantic Segmentation Models
    Chobola, Tomas
    Usynin, Dmitrii
    Kaissis, Georgios
    PROCEEDINGS OF THE 16TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, AISEC 2023, 2023, : 43 - 53
  • [46] Secure Aggregation Is Not Private Against Membership Inference Attacks
    Ngo, Khac-Hoang
    Ostman, Johan
    Durisi, Giuseppe
    Graell i Amat, Alexandre
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES-RESEARCH TRACK, PT VI, ECML PKDD 2024, 2024, 14946 : 180 - 198
  • [47] MiDA: Membership inference attacks against domain adaptation
    Zhang, Yuanjie
    Zhao, Lingchen
    Wang, Qian
    ISA TRANSACTIONS, 2023, 141 : 103 - 112
  • [48] A robust analysis of adversarial attacks on federated learning environments
    Nair, Akarsh K.
    Raj, Ebin Deni
    Sahoo, Jayakrushna
    COMPUTER STANDARDS & INTERFACES, 2023, 86
  • [49] Efficient Federated Matrix Factorization Against Inference Attacks
    Chai, Di
    Wang, Leye
    Chen, Kai
    Yang, Qiang
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (04)
  • [50] FROM GRADIENT LEAKAGE TO ADVERSARIAL ATTACKS IN FEDERATED LEARNING
    Lim, Jia Qi
    Chan, Chee Seng
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3602 - 3606