- [33] AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems. 32nd Annual Computer Security Applications Conference (ACSAC 2016), 2016: 508-519.
- [34] Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach. Proceedings of the 33rd USENIX Security Symposium (USENIX Security 2024), 2024: 325-342.
- [35] Defending against Membership Inference Attacks in Federated Learning via Adversarial Example. 2021 17th International Conference on Mobility, Sensing and Networking (MSN 2021), 2021: 153-160.
- [36] LDP-Purifier: Defending against Poisoning Attacks in Local Differential Privacy. Database Systems for Advanced Applications (DASFAA 2024), Pt. IV, 2024, 14853: 221-231.
- [37] Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. Proceedings of the 29th USENIX Security Symposium, 2020: 1623-1640.
- [38] Perception Poisoning Attacks in Federated Learning. 2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA 2021), 2021: 146-155.
- [40] Mitigating Poisoning Attacks in Federated Learning. Innovative Data Communication Technologies and Application (ICIDCA 2021), 2022, 96: 687-699.